Raytracing for dummies

20 comments, last by cignox1 15 years, 7 months ago
Fellow programmers,

With all that talk about Global Illumination, guys like Vlad woke up my interest in writing a (simple) raytracer. I believe raytracing will take over some day, so I'd better be prepared :) My learning curve is usually pretty long, and I don't have much free time (especially not with my other "normally rendered" hobby project), so the sooner I start, the better. Besides, I think it's fun and very refreshing to do something 'different'.

I know the basic principles of ray tracing, and especially the (dis)advantages compared to the rasterization methods we use every day. But I wouldn't know how/where to start with RT. So, I made a list of questions. I must add that I'm looking for practical (read: fast) implementations. I'm not looking for cutting-edge or highest-quality graphics. Let's say I'd like to program games, not graphics alone. In other words, the technique should be able to produce realtime results in the next ~4 years, suitable for a game.

1.- So... which technique is the most practical (fastest) for realtime/games? I've read a little bit about ray tracing and photon mapping. Are these 2 different things?

2.- Lights. From what I've read so far, I shoot rays from my eyes. They collide somewhere, but at that point we don't know yet whether that point is lit. How do I find that out? Shoot a ray from that point to all possible light sources? And how about indirect lighting then? I could do it reversed, starting at the lights, but then there is no telling if those rays ever reach the camera.

3.- Does a RT still need OpenGL/DirectX/shaders? I guess you can combine both (for example, render a scene normally, and add special effects such as GI/caustics/reflections via RT). What is commonly used? I can imagine a shader is used on top to smooth/blur the somewhat noisy results of a RT-produced screen.

4.- How does RT access textures? I suppose you can use diffuse textures and normal/specular/gloss maps just as well. You just access them via RAM and eventually write your own filtering method? If that is true, it would mean you have lots of 'texture memory' and can directly change a texture as well (draw on it, for example).

5.- Ray tracing has lots to do with collision detection. Now this is the part where I'm getting scared, since my math is not very good. I wrote octrees and several collision detection functions, but I can't imagine them being fast enough to run millions of rays... I mean, 800x600 pixels = 480,000 rays. And that number multiplies if I want reflections/refractions (and we most certainly want that!). Do I underestimate the power of the CPU(s), do I count way too many rays, or is it indeed true that VERY OPTIMIZED algorithms are required here?

6.- Overall, how difficult is writing a RT? Writing a basic OpenGL program is simple, but implementing billions of effects with all kinds of crazy tricks (FBOs, shaders for each and every material type, alpha blending, (cascaded) shadow maps, mirroring, cube maps, probes, @#%$#@$) is difficult as well. At least, it takes a long time before you know all of them. Shoot me if I'm wrong, but I think a raytracer is "smaller"/simpler, because all the effects you can achieve are done in the same way, based on relatively simple physical laws. On the other hand, if you want to write a fast RT, you need to know your optimizations very well. Lousy programming leads to unacceptably slow rendering, I guess. Although this was probably also true with rasterization when writing Quake 1. As the hardware speeds up, the tolerance for "bad programming" grows. But at this point, would you say writing a raytracer is more difficult than a normal renderer (with all the special effects used nowadays)?

7.- I'm not planning to use RT for big projects anywhere soon. I just like to play around for now. But nevertheless, what can I expect in the near future (5 years)? I think some of the raytracers out there are already capable of running simple games. But how does a RT cope with
- Big/open scenes (Far Cry)
- Lots of local lights (Doom 3)
- Lots of dynamic objects (a race game)
- Sprites / particles / fog
- Post FX (blurring, DoF, tone mapping, color enhancement, ...)
- Memory requirements
Or maybe any other big disadvantage that I need to be aware of before using RT blindly?

8.- So, where to start? Is there something like a "ray tracing for dummies", or a NeHe-tutorial kind of website?

Alrighty. Looking forward to your responses!
Rick
Quote:Original post by spek

1.- So... which technique is the most practical (fastest) for realtime/games? I've read a little bit about ray tracing and photon mapping. Are these 2 different things?


Photon mapping is a technique used to solve the global illumination problem, especially the indirect contribution of lighting and things like caustics. Basic raytracing doesn't take indirect illumination into account, so the two techniques can be used together: PM to precompute indirect illumination and caustics, and RT to render the scene, using the photon map to add the indirect contribution.

Quote:
2.- Lights. From what I've read so far, I shoot rays from my eyes. They collide somewhere, but at that point we don't know yet whether that point is lit. How do I find that out? Shoot a ray from that point to all possible light sources? And how about indirect lighting then? I could do it reversed, starting at the lights, but then there is no telling if those rays ever reach the camera.

You are right: you shoot a 'shadow ray' from your point to each light source (at least, those that you want to compute; you can discard those that are too far away or too weak if you want). A small code sketch of this is shown below.
Indirect illumination, as said, is not included in standard RT, so you need to use something else (radiosity, photon mapping, path tracing and so on).

Of course you can also generate rays from the lights: that is what the original raytracing was about; what we call raytracing nowadays is actually backward raytracing. Anyway, bidirectional path tracing and photon mapping both shoot rays from light sources, and both use methods to ensure that rays are not wasted (in both, rays starting from the lights are only one step of the whole rendering: there are rays starting from the eye anyway).
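To make the shadow-ray idea concrete, here is a minimal C++ sketch, assuming a scene made only of spheres; Vec3, Sphere, intersect and isLit are illustrative names, not taken from any particular renderer:

#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator+(const Vec3& b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float dot(const Vec3& b) const { return x * b.x + y * b.y + z * b.z; }
    float length() const { return std::sqrt(dot(*this)); }
};

struct Sphere { Vec3 center; float radius; };

// Distance to the nearest intersection along a normalized ray, or -1 if none.
float intersect(const Sphere& s, const Vec3& origin, const Vec3& dir)
{
    Vec3 oc = origin - s.center;
    float b = oc.dot(dir);
    float c = oc.dot(oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    float t = -b - std::sqrt(disc);
    return (t > 0.0f) ? t : -1.0f;
}

// True if 'point' can see 'lightPos', i.e. no occluder lies between them.
bool isLit(const Vec3& point, const Vec3& lightPos, const std::vector<Sphere>& scene)
{
    Vec3 toLight = lightPos - point;
    float distToLight = toLight.length();
    Vec3 dir = toLight * (1.0f / distToLight);
    Vec3 origin = point + dir * 1e-4f;                   // small offset to avoid self-shadowing
    for (const Sphere& s : scene) {
        float t = intersect(s, origin, dir);
        if (t > 0.0f && t < distToLight) return false;   // something blocks the light
    }
    return true;
}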

Quote:
3.- Does a RT still need OpenGL/DirectX/shaders? I guess you can combine both (for example, render a scene normally, and add special effects such as GI/caustics/reflections via RT). What is commonly used? I can imagine a shader is used on top to smooth/blur the somewhat noisy results of a RT-produced screen.

Raytracing is sometimes used by game engines (IIRC) to achieve some effects, but GPUs are not well suited to these tasks. Many real-time raytracers (like Arauna) render to a texture and then use shaders to perform tone mapping and apply other filters.
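As a small illustration of such a post filter: the operator is the same whether it runs in a fragment shader on the rendered texture or, as in this CPU-side sketch, directly on the ray tracer's HDR buffer (simple Reinhard tone mapping; ColorHDR and toneMap are made-up names):

#include <cstdint>
#include <vector>

struct ColorHDR { float r, g, b; };

// Maps an HDR framebuffer to 8-bit RGB using the Reinhard operator x / (1 + x).
std::vector<uint8_t> toneMap(const std::vector<ColorHDR>& hdr)
{
    std::vector<uint8_t> ldr;
    ldr.reserve(hdr.size() * 3);
    for (const ColorHDR& c : hdr) {
        ldr.push_back((uint8_t)(255.0f * c.r / (1.0f + c.r)));
        ldr.push_back((uint8_t)(255.0f * c.g / (1.0f + c.g)));
        ldr.push_back((uint8_t)(255.0f * c.b / (1.0f + c.b)));
    }
    return ldr;
}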

Quote:
4.- How does RT access textures? I suppose you can use diffuse textures and normal/specular/gloss maps just as well. You just access them via RAM and eventually write your own filtering method? If that is true, it would mean you have lots of 'texture memory' and can directly change a texture as well (draw on it, for example).

As long as you write a software raytracer you can do whatever you want with textures (images, procedural textures, functions of other parameters like distance from the camera, and so on). In my raytracer I can use a texture to modulate another. LightWave lets you use a texture as a render target, so you can have a texture that displays the same scene from another point of view (as in a security camera).
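For example, a minimal sketch of "write your own filtering": manual bilinear sampling of a texture that is just an RGB array in system RAM (all names here are illustrative). Because you own that memory, you can also write into it between frames, e.g. to draw on the texture:

#include <algorithm>
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

struct Texture {
    int width, height;
    std::vector<Color> pixels;                    // row-major, width*height entries

    Color texel(int x, int y) const {             // clamp-to-edge addressing
        x = std::min(std::max(x, 0), width - 1);
        y = std::min(std::max(y, 0), height - 1);
        return pixels[y * width + x];
    }

    // Bilinear sample at normalized coordinates (u, v) in [0, 1].
    Color sample(float u, float v) const {
        float fx = u * width  - 0.5f;
        float fy = v * height - 0.5f;
        int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);
        float tx = fx - x0, ty = fy - y0;
        Color c00 = texel(x0, y0),     c10 = texel(x0 + 1, y0);
        Color c01 = texel(x0, y0 + 1), c11 = texel(x0 + 1, y0 + 1);
        auto lerp = [](Color a, Color b, float t) {
            return Color{ a.r + (b.r - a.r) * t,
                          a.g + (b.g - a.g) * t,
                          a.b + (b.b - a.b) * t };
        };
        return lerp(lerp(c00, c10, tx), lerp(c01, c11, tx), ty);
    }
};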

Quote:
5.- Ray tracing has lots to do with collision detection. Now this is the part where I'm getting scared, since my math is not very good. I wrote octrees and several collision detection functions, but I can't imagine them being fast enough to run millions of rays... I mean, 800x600 pixels = 480,000 rays. And that number multiplies if I want reflections/refractions (and we most certainly want that!). Do I underestimate the power of the CPU(s), do I count way too many rays, or is it indeed true that VERY OPTIMIZED algorithms are required here?

Download Arauna by Jacco Bikker from the web: it is probably the fastest raytracer you can find, so you can see for yourself what you can get from raytracing and what you can't. Be warned that such speed can only be achieved with a VERY HUGE amount of work!
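To give an idea of the kind of "very optimized" building blocks involved: acceleration structures such as kd-trees and BVHs let each ray skip most of the geometry by first testing bounding boxes. Below is a minimal, non-SIMD sketch of the standard ray/AABB slab test (names are illustrative; a production version would be heavily SIMD-optimized):

#include <algorithm>

struct Vec3 { float x, y, z; };

struct AABB { Vec3 min, max; };

// invDir holds 1/dir per component, precomputed once per ray.
bool hitAABB(const AABB& box, const Vec3& origin, const Vec3& invDir,
             float tMin, float tMax)
{
    float t0 = (box.min.x - origin.x) * invDir.x;
    float t1 = (box.max.x - origin.x) * invDir.x;
    tMin = std::max(tMin, std::min(t0, t1));
    tMax = std::min(tMax, std::max(t0, t1));

    t0 = (box.min.y - origin.y) * invDir.y;
    t1 = (box.max.y - origin.y) * invDir.y;
    tMin = std::max(tMin, std::min(t0, t1));
    tMax = std::min(tMax, std::max(t0, t1));

    t0 = (box.min.z - origin.z) * invDir.z;
    t1 = (box.max.z - origin.z) * invDir.z;
    tMin = std::max(tMin, std::min(t0, t1));
    tMax = std::min(tMax, std::max(t0, t1));

    return tMin <= tMax;    // the ray overlaps the box on this interval
}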

Quote:
6.- Overall, how difficult is writing a RT? Writing a basic OpenGL program is simple, but implementing billions of effects with all kinds of crazy tricks (FBOs, shaders for each and every material type, alpha blending, (cascaded) shadow maps, mirroring, cube maps, probes, @#%$#@$) is difficult as well. At least, it takes a long time before you know all of them. Shoot me if I'm wrong, but I think a raytracer is "smaller"/simpler, because all the effects you can achieve are done in the same way, based on relatively simple physical laws.

Writing a raytracer is not all that hard: you must write everything from scratch, but if you have already written spatial structures and ray/triangle routines, then you can get to the interesting part soon. You will discover how easy it is to get new effects once the core is working. Of course, designing a full-featured raytracer is another beast...
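As a rough illustration of that core (not anyone's actual code): a self-contained sketch of a recursive trace() over spheres with one directional light and perfect mirror reflection. Shadows, refraction and better materials slot into the same recursion:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float dot(Vec3 b) const { return x * b.x + y * b.y + z * b.z; }
    Vec3 normalized() const { float l = std::sqrt(dot(*this)); return {x / l, y / l, z / l}; }
};

struct Sphere { Vec3 center; float radius; Vec3 color; float reflectivity; };

// Nearest positive hit distance along a normalized ray, or -1 if missed.
float intersect(const Sphere& s, Vec3 o, Vec3 d) {
    Vec3 oc = o - s.center;
    float b = oc.dot(d), c = oc.dot(oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0) return -1;
    float t = -b - std::sqrt(disc);
    return t > 1e-4f ? t : -1;
}

Vec3 trace(Vec3 o, Vec3 d, const std::vector<Sphere>& scene, Vec3 lightDir, int depth) {
    if (depth <= 0) return {0, 0, 0};                     // recursion cut-off

    const Sphere* hit = nullptr;                          // find nearest intersection
    float tNear = 1e30f;
    for (const Sphere& s : scene) {
        float t = intersect(s, o, d);
        if (t > 0 && t < tNear) { tNear = t; hit = &s; }
    }
    if (!hit) return {0.1f, 0.1f, 0.2f};                  // background color

    Vec3 p = o + d * tNear;
    Vec3 n = (p - hit->center).normalized();
    float diffuse = std::max(0.0f, n.dot(lightDir));      // simple directional light
    Vec3 local = hit->color * diffuse;

    Vec3 r = d - n * (2 * d.dot(n));                      // mirror reflection direction
    Vec3 reflected = trace(p + n * 1e-3f, r, scene, lightDir, depth - 1);

    return local * (1 - hit->reflectivity) + reflected * hit->reflectivity;
}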

Quote:
On the other hand, if you want to write a fast RT, you need to know your optimizations very well. Lousy programming leads to unacceptably slow rendering, I guess. Although this was probably also true with rasterization when writing Quake 1. As the hardware speeds up, the tolerance for "bad programming" grows. But at this point, would you say writing a raytracer is more difficult than a normal renderer (with all the special effects used nowadays)?

The main performance-critical points in raytracing are well known: ray/primitive intersections, bad spatial structures, cache misses, texture sampling and so on. Then there are higher and lower levels of optimization (ray packing to improve cache coherence and SSE usage, multiple importance sampling to reduce noise in Monte Carlo sampling, and so on...).
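A tiny sketch of the "ray packing" idea, assuming SSE: four coherent rays stored in structure-of-arrays form so that a single intrinsic processes all four at once (RayPacket4 and hitPlaneX are made-up names):

#include <xmmintrin.h>   // SSE intrinsics

struct RayPacket4 {
    // Origins and directions of 4 coherent rays, one SSE register per component.
    __m128 ox, oy, oz;
    __m128 dx, dy, dz;
    __m128 tMax;         // current nearest hit distance per ray
};

// Example: intersecting the packet against one axis-aligned plane x = planeX,
// computing t = (planeX - ox) / dx for all four rays in a single operation.
inline __m128 hitPlaneX(const RayPacket4& p, float planeX)
{
    __m128 px = _mm_set1_ps(planeX);
    return _mm_div_ps(_mm_sub_ps(px, p.ox), p.dx);
}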

Quote:
7.- I'm not planning to use RT for big projects anywhere soon. I just like to play around for now. But nevertheless, what can I expect in the near future (5 years)? I think some of the raytracers out there are already capable of running simple games. But how does a RT cope with
- Big/open scenes (Far Cry)
- Lots of local lights (Doom 3)
- Lots of dynamic objects (a race game)
- Sprites / particles / fog
- Post FX (blurring, DoF, tone mapping, color enhancement, ...)
- Memory requirements
Or maybe any other big disadvantage that I need to be aware of before using RT blindly?

IMHO RT should handle open scenes better than rasterization.
Raytracing handles local lights better than rasterization.
Raytracing handles dynamic objects (probably) not as easily as rasterization.
Sprites and so on... no problem.
Post FX: everything you want and even more (you can make separate channels for everything :-)
Memory req.: kd-trees can be a problem with very complex geometry, and a rasterized scene will probably require less memory.

Quote:
8.- So, where to start? Is there something like a "ray tracing for dummies", or a NeHe-tutorial kind of website?

On DevMaster.net you can find a good RT tutorial series.


Good luck with your RT :-)

EDIT: when I say RT handles this better than rasterization, I don't mean that doing the same thing with RT is faster... there are other parameters (quality, special cases to take into account, efficiency, just to name some).

EDIT 2: there are not many good & free resources that cover RT on the web. There are a few tutorials (the one I linked is the best one IMHO) and many papers written by researchers. But if you want to avoid wasting hours, I suggest you write your first small RT following the tutorials, and then buy a book.
You can take a look at ompf.org, where there are highly skilled people working on RT and related techniques.
Maybe you will also find this book interesting:
http://www.pbrt.org

Regarding realtime raytracing you may also find this interesting:
http://www.mpi-inf.mpg.de/~guenther/BVHonGPU/index.html

Quote:Original post by nmi
Maybe you will also find this book interesting:
http://www.pbrt.org

Regarding realtime raytracing you may also find this interesting:
http://www.mpi-inf.mpg.de/~guenther/BVHonGPU/index.html


PBRT is a wonderful book, but I would never suggest it to someone writing his first raytracer: it focuses on physically based rendering, and most of it covers advanced techniques like sampling, global illumination, sampling, BSDFs, sampling, and design issues (sorry for the three 'sampling's, but PBRT really devotes a LOT of pages to this subject)...

I've never read 'Ray Tracing from the Ground Up', but from the table of contents it seems that it might be better suited for beginners (I'm thinking about buying it, since I sometimes find it very hard to understand PBRT)...
Merci beaucoup for this kickstart! I'm excited about writing my first RT. It doesn't have to be high quality at all; I have my other (rasterization) game/hobby project for that. I hope I can find time though. 24 hours per day is just not enough to work, please a girlfriend, hang out with friends, learn cooking, do some sports, and raise a little kid :)

By the way, I forgot one question. Are there already APIs like OpenGL/DirectX for RT? Writing everything yourself is more fun, but... I guess the answer is 'yes', but maybe there are no really high-quality or 'universal' libraries (yet). I've seen the name "Arauna" flash by several times. Is it based on an existing API/tools, or is it a library itself?


And then there is the hardware. From what I understand, Intel's Larrabee is trying to give a boost. But what exactly is it? A specialized CPU, like the GPU? In which ways is it going to help, does it come with an API, ... and when will it be available?

Probably there won't be any computers that get this piece of hardware by default, just like the Ageia physics card. I have no idea if that card works well, but as long as the average user/gamer does not have this equipment, it's not really helping the developer, unless he/she is willing to write additional code that supports it. So, I guess it's wise to just write a RT focusing on my current hardware (Intel dual-core CPU, 2000 MHz).

Ok, time to click your link :)
Quote from DevMaster's Jacco:
"And believe me, you haven't really lived until you see your first colors bleeding from one surface to another due to diffuse photon scattering…"
That reminds me of seeing my first 2D sprite tank moving 8 years ago :) Pure magic

Thanks!
Rick
Quote:Original post by spek
Merci beaucoup for this kickstart! I'm excited about writing my first RT. It doesn't have to be high quality at all; I have my other (rasterization) game/hobby project for that. I hope I can find time though. 24 hours per day is just not enough to work, please a girlfriend, hang out with friends, learn cooking, do some sports, and raise a little kid :)

Yeah, I know; that's why my own RT is currently just waiting on my HDD :-)

Quote:
By the way, I forgot one question. Are there already APIs like OpenGL/DirectX for RT? Writing everything yourself is more fun, but... I guess the answer is 'yes', but maybe there are no really high-quality or 'universal' libraries (yet). I've seen the name "Arauna" flash by several times. Is it based on an existing API/tools, or is it a library itself?

There is something around, but AFAIK nothing really interesting or standard. There used to be a library named OpenRT, but I don't know if it is free, or still being developed.
Arauna has been developed from scratch and has already been used for two small games, so I suppose it can be used as an engine (the author is a member of GameDev; chances are he will reply here as well). If what you want is a game, then you might use existing tools, but really, I think you will feel happier doing it yourself :-)


Quote:
And then there is the hardware. From what I understand, Intel's Larrabee is trying to give a boost. But what exactly is it? A specialized CPU, like the GPU? In which ways is it going to help, does it come with an API, ... and when will it be available?

Larrabee will be an x86-based processor (up to 32 cores, IIRC). It will be much like working with a standard CPU, just optimized for highly parallel tasks. Since Intel has been advertising it with the word 'raytracing' a thousand times, I suppose they will provide a RT API, but I'm not sure. We won't see Larrabee until late 2009 (perhaps 2010), so you still have time to learn RT :-)

Quote:
Probably there won't be any computers that get this piece of hardware by default, just like the Ageia physics card. I have no idea if that card works well, but as long as the average user/gamer does not have this equipment, it's not really helping the developer, unless he/she is willing to write additional code that supports it. So, I guess it's wise to just write a RT focusing on my current hardware (Intel dual-core CPU, 2000 MHz).

Intel states that Larrabee will enter the market as a competitor to nVidia and ATI, and that they will provide OpenGL and DX drivers. Selling it as a specialized device would be suicide. The only question is: will it be able to offer the same performance as nVidia and ATI?

Quote:
Ok, time to click your link :)
Quote from DevMaster's Jacco:
"And believe me, you haven't really lived until you see your first colors bleeding from one surface to another due to diffuse photon scattering…"
That reminds me of seeing my first 2D sprite tank moving 8 years ago :) Pure magic

Well, I haven't implemented GI yet, but even just looking at your first RT-shaded sphere is a wonderful experience.

Quote:Original post by spek
And then there is the hardware. From what I understand, Intel's Larrabee is trying to give a boost. But what exactly is it? A specialized CPU, like the GPU? In which ways is it going to help, does it come with an API, ... and when will it be available?


Here is a paper about the Larrabee architecture:
http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf

Basically they just put many Pentium-class cores onto one chip. Single-threaded applications will not benefit from this, but parallel applications like a RT will.
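A minimal sketch of why that matters for a ray tracer: every pixel (or scanline) is independent, so the render loop parallelizes almost trivially across cores. tracePixel here is just an illustrative stand-in for the real per-pixel trace:

#include <algorithm>
#include <thread>
#include <vector>

struct Color { float r, g, b; };

// Illustrative stand-in for tracing one pixel (the real tracer would cast rays).
Color tracePixel(int x, int y) { return { x * 0.001f, y * 0.001f, 0.5f }; }

void renderFrame(int width, int height, std::vector<Color>& framebuffer)
{
    framebuffer.resize((size_t)width * height);
    unsigned numThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < numThreads; ++t) {
        workers.emplace_back([&, t] {
            // Each thread takes every numThreads-th scanline; rows are independent.
            for (int y = (int)t; y < height; y += (int)numThreads)
                for (int x = 0; x < width; ++x)
                    framebuffer[(size_t)y * width + x] = tracePixel(x, y);
        });
    }
    for (auto& w : workers) w.join();
}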
Glad I am not the only one. I don't understand why the code snippets in the pbrt book recursively refer to other code snippets. Having code there is already a distraction from understanding the concepts.

Do you recommend other real time ray tracers?

Quote:
I've never read 'Ray Tracing from the Ground Up', but from the table of contents it seems that it might be better suited for beginners (I'm thinking about buying it, since I sometimes find it very hard to understand PBRT)...


Quote:Original post by rumble
Glad I am not the only one. I don't understand why the code snippets in the pbrt book recursively refer to other code snippets. Having code there is already a distraction from understanding the concepts.

Do you recommend other real time ray tracers?

Quote:
I've never read 'Ray Tracing from the Ground Up', but from the table of contents it seems that it might be better suited for beginners (I'm thinking about buying it, since I sometimes find it very hard to understand PBRT)...


For code reference there are many open-source raytracers:
- POV-Ray
- YafRay
- PBRT
- WinOSI
- SunFlow
- Blender

Just to name a few off the top of my head. None of them aims at real time, though, and honestly I don't know of any other open-source real-time raytracer except Arauna (I won't be interested in RTRT until I feel comfortable with RT in the first place, because making RT real time is way beyond my abilities :-(
I don't think there are many other open-source RTRTs that can compete with Arauna (and that are still actively maintained), if any.
There's a tutorial here (not yet complete, I know, I still need to translate the rest...):

Raytracer in C++

LeGreg

This topic is closed to new replies.
