
A fundamental question about Ray Tracing.


So I started reading an article about ray tracing, and it got me all interested in that technique of rendering a scene. I understand, so far, that you only draw the pixels you will actually see on the screen, which is one of the reasons ray tracing is so neat. But let's say I am trying to render a landscape (or any scene, for that matter) using a ray-tracing engine. When the engine casts a ray (vector) for each and every pixel on the screen, what does it actually check against? Do the rays test against virtual objects represented by algorithms and/or formulas? Or do they test against actual geometry (which does not go through the rasterizer of the graphics pipeline)? If it is the latter, I'd be happy, since it makes my life simple. If it is the former, then I don't think it's too bad. If it is a combination of both, then I guess I can live with that too. Come to think of it, I guess my first two question scenarios are really one and the same. Am I correct on this? Thanks.

I don't think you understand it quite right... the raytracer IS the rasterizer, in the sense that "rasterizing" just means converting a pixel coordinate into a color value. 3D graphics hardware is one way of doing this: it specializes in transforming 3D triangle vertices into screen coordinates and filling each triangle with colors depending on textures, vertex colors, and lighting.

Raytracing is an alternative where you are not bound to drawing triangles: you can represent any geometry you can intersect a ray with. This opens up awesome possibilities, but you can no longer render in real time (if you consider 20 fps the minimum for 'real-time').

So the question is: do you want to render in real time, or do you want to render virtual objects represented by formulas ("implicit surfaces", which are a specialty of raytracing)?

But couldn't you combine the best of both worlds?

Instead of having those implicit functions and such to draw pixels... why not work with the traditional geometry data?

In other words, you'd translate and rotate the geometry, but not texture or light it.

Then you simply blast it with rays and draw the pixels.

And are you sure that real-time ray tracing can't be done? Because I read somewhere that a real-time terrain engine has been done with a raytracer, and it's faster than a quadtree-based terrain engine.

Hmm, I don't quite follow you either... Raytracing and rasterizing are two different rendering methods. A raytracer shoots a lot of rays from the 'eye' into the scene. For each pixel, at least one ray is traced (barring some exotic optimizations), and the nearest intersection with the geometry is computed. Then the engine shades this point and puts it on the screen. (Read the articles written by Jacco Bikker on www.flipcode.org.)

Yes, real-time ray tracing (RTRT) is possible, but, at least for now, it cannot compete with the hardware-accelerated rasterization used by today's games. This is due to the huge number of rays needed and the cost of ray-triangle intersection.
Some say that raytracing beats rasterization in asymptotic complexity, so there is a chance that RTRT will replace it eventually.

If you use the search tool here on GameDev for "realtime ray tracing", "RTRT" or something like that, you will find a lot of information.

You can actually write a simple real-time raytracer in about 300 lines of code.

To raytrace a single triangle in real time, set up a loop like this:

for (i = 0; i < screen_width; i++)
{
    for (j = 0; j < screen_height; j++)
    {
        /* compute the ray direction for pixel (i, j) from the camera
           direction and the field of view relative to the screen size */

        hit = intersect_triangle(ray_origin, ray_dir,
                                 tripos[0], tripos[1], tripos[2]);

        if (hit)
            draw_pixel(i, j);
    }
}
So you can see it's just like a normal engine, except you cast rays instead. The geometry stays the same: just triangles.

You still need octrees, kd-trees, etc. (spatial subdivision), or it goes dog slow. It's pretty much the same, except you get certain advantages (like shadows and refraction through water). It's quite intriguing; I suggest you write a little one yourself, like I showed you, so you know what you're up against.
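For the intersect_triangle call in the loop above, one standard choice is the Möller–Trumbore test. Here is a minimal sketch; the vec3 type, the helpers, and the exact signature are my own, chosen to roughly match the loop, not taken from the post:

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3 cross(vec3 a, vec3 b) {
    return (vec3){ a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
}

/* Möller–Trumbore ray/triangle test: returns 1 and writes the hit
   distance to *t if the ray orig + t*dir crosses triangle v0 v1 v2. */
static int intersect_triangle(vec3 orig, vec3 dir,
                              vec3 v0, vec3 v1, vec3 v2, double *t)
{
    const double EPS = 1e-9;
    vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (fabs(det) < EPS) return 0;          /* ray parallel to triangle plane */
    double inv = 1.0 / det;
    vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;             /* first barycentric coordinate */
    if (u < 0.0 || u > 1.0) return 0;
    vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;           /* second barycentric coordinate */
    if (v < 0.0 || u + v > 1.0) return 0;
    *t = dot(e2, q) * inv;
    return *t > EPS;                        /* only count hits in front of the ray */
}
```

A typical call from the pixel loop would pass the camera position as orig, the per-pixel direction as dir, and the three vertices of tripos, then draw the pixel when the function returns 1.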

Quote:
Original post by GekkoCube
If it is the latter, I'd be happy since it makes my life simple.
If it is the former, then I don't think it's too bad.
If it is a combination of both, then I guess I can live too.

Come to think of it, I guess my first two question scenarios are really one and the same. Am I correct on this?

Thanks.

Both are possible, but they are not the same.

Raytracing can deal with both implicit geometry, for example a sphere defined by x*x + y*y + z*z = r*r, AND explicit geometry, for example good old triangles.

Implicit geometry isn't really useful in any real-world sense, though. It's unintuitive to model with, and it's generally a pain to deal with in anything except raytracing. It's more of an oddity, really.

However, raytracing is also more flexible when it comes to explicit geometry. For example, higher-order surfaces such as Béziers or NURBS can be raytraced directly in full detail, whereas a rasterizer has to convert them to triangles first.

The reason why raytracing is actually faster than rasterizing (when done right) is that a raytracer only draws what's on screen. With a spatial acceleration structure, finding what's on screen takes O(log n) time per ray, whereas a rasterizer just brute-forces everything to the screen in linear time.

Also, because raytracing is a much better model of how actual light behaves, lots of useful effects come as a natural extension that are a pain to hack into a rasterizer, such as shadows and reflection.

Eelco, that is exactly what I thought.

So, back to the explicit geometry thing: I'm thinking about rendering explicit geometry via ray tracing.

Example: I have a typical 3D mesh of a cube and/or triangle. I even have a 3D mesh of a terrain.

I do whatever transformations on the geometry, such as rotation.

Then, instead of rendering it through the rasterizer, I raytrace the scene in the viewport (somehow) and then draw the pixels accordingly.

This way I have explicit geometry and 3D models I can easily load, AND I can do ray tracing.

Or is this not a good idea? If not... why not?

Your usage of terms indicates some confusion: a triangle mesh is nothing but a bunch of explicit geometry lumped together.

So yes, you can load the same triangle meshes commonly used for rasterization with DX or OGL, and raytrace those. This usually involves some preprocessing steps, though.

However, getting it fast isn't a walk in the park, so don't see it as a useful extension to your game, but rather as an interesting project to learn from. Even if you have a fast raytracer, it is only faster than rasterization on models with, by today's standards, enormous numbers of triangles.

I can recommend the raytracing tutorial series on flipcode. You can't miss them when you navigate there; they should be on the main page.
