spek

OpenGL Raytracing for dummies


Fellow programmers,

With all the talk about Global Illumination, people like Vlad woke up my interest in writing a (simple) raytracer. I believe raytracing will take over some day, so I'd better be prepared :) My learning curve is usually pretty long, and I don't have much free time (especially not with my other "normally rendered" hobby project), so the sooner I start, the better. Besides, I think it's fun and very refreshing to do something 'different'. I know the basic principles of ray tracing, and especially the (dis)advantages compared to the rasterization methods we use every day. But I wouldn't know how or where to start with RT. So, I made a list of questions. I must add that I'm looking for practical (read: fast) implementations. I'm not looking for cutting-edge or highest-quality graphics. Let's say I'd like to program games, not graphics alone. In other words, the technique should be able to produce realtime results in the next ~4 years, suitable for a game.

1.- So... which technique is the most practical (fastest) for realtime/games? I've read a little about ray tracing and photon mapping. Are these two different things?

2.- Lights. From what I've read so far, I shoot rays from my eye. They collide somewhere, but at that point we don't know yet whether that point is lit. How do I find that out? Shoot a ray from that point to all possible light sources? And how about indirect lighting then? I could do it in reverse, starting at the lights, but then there is no telling whether their rays ever reach the camera.

3.- Does an RT still need OpenGL/DirectX/shaders? I guess you can combine both (for example, render a scene normally, and add special effects such as GI/caustics/reflections via RT). What is commonly used? I can imagine a shader is used on top to smooth/blur the somewhat noisy results of a raytraced screen.

4.- How does an RT access textures? I suppose you can use diffuse textures and normal/specular/gloss maps just as well. You just access them via RAM and eventually write your own filtering method? If that is true, it would mean you have lots of 'texture memory' and can directly change a texture as well (draw on it, for example).

5.- Ray tracing has a lot to do with collision detection. Now this is the part where I'm getting scared, since my math is not very good. I wrote octrees and several collision detection functions, but I can't imagine them being fast enough for millions of rays... I mean, 800x600 pixels = 480,000 rays. And that number multiplies if I want reflections/refractions (and we most certainly want that!). Do I underestimate the power of the CPU(s), do I count way too many rays, or is it indeed true that VERY OPTIMIZED algorithms are required here?

6.- Overall, how difficult is writing an RT? Writing a basic OpenGL program is simple, but implementing billions of effects with all kinds of crazy tricks (FBOs, shaders for each and every material type, alpha blending, (cascaded) shadow maps, mirroring, cube maps, probes, @#%$#@$) is difficult as well. At least, it takes a long time before you know all of them. Shoot me if I'm wrong, but I think a raytracer is "smaller"/simpler, because all the effects you can achieve are done in the same way, based on relatively simple physical laws. On the other hand, if you want to write a fast RT, you need to know your optimizations very well. Lousy programming leads to unacceptably slow rendering, I guess. Although this was probably also true of rasterization when writing Quake 1. As the hardware speeds up, the tolerance for "bad programming" grows. But at this point, would you say writing a raytracer is more difficult than a normal renderer (with all the special effects used nowadays)?

7.- I'm not planning to use RT for big projects anytime soon. I just like to play around for now. But nevertheless, what can I expect in the near future (5 years)? I think some of the raytracers out there are already capable of running simple games. But how does an RT cope with:
- Big/open scenes (Far Cry)
- Lots of local lights (Doom 3)
- Lots of dynamic objects (a racing game)
- Sprites / particles / fog
- Post FX (blurring, DoF, tone mapping, color enhancement, ...)
- Memory requirements
Or maybe any other big disadvantage that I need to be aware of before using RT blindly?

8.- So, where to start? Is there something like a "ray tracing for dummies", or a NeHe-tutorial kind of website?

Alrighty. Looking forward to your responses!
Rick

Quote:
Original post by spek

1.- So... which technique is the most practical (fastest) for realtime/games? I've read a little about ray tracing and photon mapping. Are these two different things?


Photon mapping is a technique used to solve the global illumination problem, especially the indirect contribution of lighting, and for things like caustics. Basic raytracing doesn't take indirect illumination into account, so the two techniques can be used together: PM to precompute indirect illumination and caustics, and RT to render the scene, using the photon map to add the indirect contribution.

Quote:

2.- Lights. From what I've read so far, I shoot rays from my eye. They collide somewhere, but at that point we don't know yet whether that point is lit. How do I find that out? Shoot a ray from that point to all possible light sources? And how about indirect lighting then? I could do it in reverse, starting at the lights, but then there is no telling whether their rays ever reach the camera.

You are right. You shoot a 'shadow ray' from your point to each light source (at least those that you want to compute; you can discard those that are too far away or too weak if you want).
Indirect illumination, as said, is not handled by standard RT, so you need to use something else (radiosity, photon mapping, path tracing and so on).
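To make that concrete: a shadow-ray test is just a regular intersection query clipped to the distance of the light. A minimal C++ sketch (the `Vec3`/`Sphere` types, the names and the epsilon are made up for illustration, not from any particular raytracer):

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    double length() const { return std::sqrt(dot(*this)); }
};

struct Sphere { Vec3 center; double radius; };

// True if the segment from 'point' to 'lightPos' is blocked by the sphere,
// i.e. the light does not reach the shaded point.
bool inShadow(const Vec3& point, const Vec3& lightPos, const Sphere& s) {
    Vec3 toLight = lightPos - point;
    double distToLight = toLight.length();
    Vec3 dir = {toLight.x / distToLight, toLight.y / distToLight,
                toLight.z / distToLight};

    // Standard ray/sphere quadratic, solved for the nearest hit distance t.
    Vec3 oc = point - s.center;
    double b = oc.dot(dir);
    double c = oc.dot(oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0.0) return false;           // shadow ray misses the sphere
    double t = -b - std::sqrt(disc);
    const double eps = 1e-4;                // offset against self-intersection
    return t > eps && t < distToLight;      // blocker lies between point and light
}
```

In a real tracer this loops over every occluder (or walks the spatial structure), and the hit point is usually offset by the same epsilon to avoid shadowing itself.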

Of course you can also generate rays from the lights: this is what the original raytracing was about; what we call raytracing today is actually backward raytracing. Anyway, bidirectional path tracing and photon mapping both shoot rays from light sources, and both use methods to ensure that rays are not wasted (in both, rays starting from the lights are only one step of the whole rendering: there are rays starting from the eye anyway).

Quote:

3.- Does an RT still need OpenGL/DirectX/shaders? I guess you can combine both (for example, render a scene normally, and add special effects such as GI/caustics/reflections via RT). What is commonly used? I can imagine a shader is used on top to smooth/blur the somewhat noisy results of a raytraced screen.

Raytracing is sometimes used by game engines (IIRC) to achieve some effects, but GPUs are not well suited for these tasks. Many realtime raytracers (like Arauna) render to a texture and then use shaders to perform tone mapping and apply other filters.

Quote:

4.- How does an RT access textures? I suppose you can use diffuse textures and normal/specular/gloss maps just as well. You just access them via RAM and eventually write your own filtering method? If that is true, it would mean you have lots of 'texture memory' and can directly change a texture as well (draw on it, for example).

As long as you write a software raytracer you can do what you want with textures (images, procedural textures, functions of other parameters like distance from the camera, and so on). In my raytracer I can use a texture to modulate another. Lightwave lets you use a texture as a render target, so you can have a texture that displays the same scene from another point of view (as in a security camera).
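As an illustration of hand-rolled filtering, here is a hedged sketch of bilinear sampling with wrap (tiling) addressing over a plain float buffer in main memory; the `Texture` layout and names are assumptions for illustration, not from any real API:

```cpp
#include <cmath>
#include <vector>

// Minimal software texture: a raw float RGB buffer, width*height*3 floats.
struct Texture {
    int width, height;
    std::vector<float> texels;

    float at(int x, int y, int ch) const {
        x = ((x % width) + width) % width;     // wrap addressing (tiling)
        y = ((y % height) + height) % height;
        return texels[(y * width + x) * 3 + ch];
    }
};

// Bilinear filtering done by hand: blend the four nearest texels.
void sampleBilinear(const Texture& t, float u, float v, float rgb[3]) {
    float x = u * t.width - 0.5f;
    float y = v * t.height - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;
    for (int ch = 0; ch < 3; ++ch) {
        float top = t.at(x0, y0, ch) * (1 - fx) + t.at(x0 + 1, y0, ch) * fx;
        float bot = t.at(x0, y0 + 1, ch) * (1 - fx) + t.at(x0 + 1, y0 + 1, ch) * fx;
        rgb[ch] = top * (1 - fy) + bot * fy;
    }
}
```

Because the buffer is just memory you own, writing into `texels` at runtime (drawing on the texture) is as easy as reading from it.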

Quote:

5.- Ray tracing has a lot to do with collision detection. Now this is the part where I'm getting scared, since my math is not very good. I wrote octrees and several collision detection functions, but I can't imagine them being fast enough for millions of rays... I mean, 800x600 pixels = 480,000 rays. And that number multiplies if I want reflections/refractions (and we most certainly want that!). Do I underestimate the power of the CPU(s), do I count way too many rays, or is it indeed true that VERY OPTIMIZED algorithms are required here?

Download Arauna by Jacco Bikker from the web: it is most probably the fastest raytracer you can find, so you can see for yourself what you can get from raytracing and what you can't. Be warned that such speed can only be achieved with a VERY HUGE amount of work!

Quote:

6.- Overall, how difficult is writing an RT? Writing a basic OpenGL program is simple, but implementing billions of effects with all kinds of crazy tricks (FBOs, shaders for each and every material type, alpha blending, (cascaded) shadow maps, mirroring, cube maps, probes, @#%$#@$) is difficult as well. At least, it takes a long time before you know all of them. Shoot me if I'm wrong, but I think a raytracer is "smaller"/simpler, because all the effects you can achieve are done in the same way, based on relatively simple physical laws.

Writing a raytracer is not all that hard: you must write everything from scratch, but if you have already written spatial structures and ray/triangle routines, then you can get to the interesting part soon. You will discover how easy getting new effects is once the core is working. Of course, designing a full-featured raytracer is another beast...
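For example, the ray/triangle core that everything else builds on can be the classic Möller–Trumbore test; a sketch (the types and names are illustrative):

```cpp
#include <cmath>

struct V3 {
    double x, y, z;
    V3 operator-(const V3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    V3 cross(const V3& o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }
    double dot(const V3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Moeller-Trumbore ray/triangle test: returns true on a hit and writes the
// distance along the ray into tOut.
bool rayTriangle(const V3& orig, const V3& dir,
                 const V3& a, const V3& b, const V3& c, double& tOut) {
    const double eps = 1e-9;
    V3 e1 = b - a, e2 = c - a;
    V3 p = dir.cross(e2);
    double det = e1.dot(p);
    if (std::fabs(det) < eps) return false;    // ray parallel to triangle
    double inv = 1.0 / det;
    V3 s = orig - a;
    double u = s.dot(p) * inv;
    if (u < 0.0 || u > 1.0) return false;      // outside first barycentric bound
    V3 q = s.cross(e1);
    double v = dir.dot(q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;  // outside the triangle
    double t = e2.dot(q) * inv;
    if (t < eps) return false;                 // hit behind the ray origin
    tOut = t;
    return true;
}
```

The barycentric coordinates u and v computed along the way come for free and are exactly what you later need for texture mapping and normal interpolation.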

Quote:

On the other hand, if you want to write a fast RT, you need to know your optimizations very well. Lousy programming leads to unacceptably slow rendering, I guess. Although this was probably also true of rasterization when writing Quake 1. As the hardware speeds up, the tolerance for "bad programming" grows. But at this point, would you say writing a raytracer is more difficult than a normal renderer (with all the special effects used nowadays)?

The main performance-critical points in raytracing are well known: ray/primitive intersections, bad spatial structures, cache misses, texture sampling and so on. Then there are higher and lower levels of optimization (ray packing to enhance cache coherence and SSE usage, multiple importance sampling to reduce noise in Monte Carlo sampling, and so on...).

Quote:

7.- I'm not planning to use RT for big projects anytime soon. I just like to play around for now. But nevertheless, what can I expect in the near future (5 years)? I think some of the raytracers out there are already capable of running simple games. But how does an RT cope with:
- Big/open scenes (Far Cry)
- Lots of local lights (Doom 3)
- Lots of dynamic objects (a racing game)
- Sprites / particles / fog
- Post FX (blurring, DoF, tone mapping, color enhancement, ...)
- Memory requirements
Or maybe any other big disadvantage that I need to be aware of before using RT blindly?

IMHO RT should handle open scenes better than rasterization.
Raytracing handles local lights better than rasterization.
Raytracing handles dynamic objects (probably) not as easily as rasterization.
Sprites and so on... no problem.
Post FX: everything you want and even more (you can make separate channels for everything :-)
Memory req: the kd-tree can be a problem with very complex geometry, and a rasterized scene will probably require less memory.

Quote:

8.- So, where to start? Is there something like a "ray tracing for dummies", or a NeHe-tutorial kind of website?

On DevMaster.net you can find a good RT tutorial series.


Good luck with your RT :-)

EDIT: when I say RT handles this better than rasterization, I don't mean that doing the same thing with RT is faster... there are other parameters (quality, special cases to take into account, efficiency, just to name a few).

EDIT 2: there are not many good & free resources that cover RT on the web. There are a few tutorials (the one I linked is the best one, IMHO) and many papers written by researchers. But if you want to avoid wasting hours, I suggest you write your first small RT following the tutorials, and then buy a book.
You can take a look at ompf.org, where there are highly skilled people working on RT and related techniques.

Quote:
Original post by nmi
Maybe you will also find this book interesting:
http://www.pbrt.org

Regarding realtime raytracing you may also find this interesting:
http://www.mpi-inf.mpg.de/~guenther/BVHonGPU/index.html


PBRT is a wonderful book, but I would never suggest it to someone who is going to write his first raytracer: it focuses on physically based rendering, and most of it covers advanced techniques like sampling, global illumination, sampling, BSDFs, sampling, and design issues (sorry for the three 'sampling's, but PBRT really devotes a LOT of pages to this subject)...

I never read 'Ray Tracing from the Ground Up', but from the table of contents it seems that it might be better suited for beginners (I'm thinking about buying it, since I sometimes find it very hard to understand PBRT)...

Merci beaucoup for this kickstart! I'm excited about writing my first RT. It doesn't have to be high quality at all; I have my other (rasterization) game/hobby project for that. I hope I can find time though. 24 hours per day is just not enough to work, please a girlfriend, hang out with friends, learn cooking, do some sports, and raise a little kid :)

By the way, I forgot one question. Are there already APIs like OpenGL/DirectX for RT? Writing everything yourself is more fun, but... I guess the answer is 'yes', but maybe there are no really high-quality or 'universal' libraries (yet). I've seen the name "Arauna" flashing by several times. Is it based on an existing API/tools, or is it a library itself?


And then there is the hardware. From what I understand, Intel's Larrabee is trying to give a boost. But what exactly is it? A specialized CPU, like the GPU? In which ways is it going to help? Does it come with an API? And when will it be available?

Probably there won't be any computers that get this piece of hardware by default. Just like the Ageia physics card. I have no idea if that card works well, but as long as the average user/gamer does not have this equipment, it's not really helping the developer, unless he/she is willing to write additional code that supports this hardware. So, I guess it's wise to just write an RT focusing on my current hardware (Intel dual-core CPU, 2000 MHz).

Ok, time to click your link :)
Quote from DevMaster's Jacco: "And believe me, you haven't really lived until you see your first colors bleeding from one surface to another due to diffuse photon scattering…"
That reminds me of seeing my first 2D sprite tank moving 8 years ago :) Pure magic

Thanks!
Rick

Quote:
Original post by spek
Merci beaucoup for this kickstart! I'm excited about writing my first RT. It doesn't have to be high quality at all; I have my other (rasterization) game/hobby project for that. I hope I can find time though. 24 hours per day is just not enough to work, please a girlfriend, hang out with friends, learn cooking, do some sports, and raise a little kid :)

Yeah, I know, that's why my current RT is sitting idle on my hard drive :-)

Quote:

By the way, I forgot one question. Are there already APIs like OpenGL/DirectX for RT? Writing everything yourself is more fun, but... I guess the answer is 'yes', but maybe there are no really high-quality or 'universal' libraries (yet). I've seen the name "Arauna" flashing by several times. Is it based on an existing API/tools, or is it a library itself?

There is something around, but AFAIK nothing really interesting or standard. There was a lib named OpenRT somewhere, but I don't know if it is free, or still being developed.
Arauna has been developed from scratch, and has already been used for two small games, so I suppose it can be used as an engine (the author is a member of GameDev; chances are he will reply here as well). If what you want is a game, then you might use existing tools, but really, I think that you will feel happier doing it yourself :-)


Quote:

And then there is the hardware. From what I understand, Intel's Larrabee is trying to give a boost. But what exactly is it? A specialized CPU, like the GPU? In which ways is it going to help? Does it come with an API? And when will it be available?

Larrabee will be an x86-based processor (up to 32 cores, IIRC). It will be much as if we had to work with a standard CPU, just optimized for highly parallel tasks. Since Intel used to advertise it using the word 'raytracing' thousands of times, I suppose they will provide an RT API, but I'm not sure. We won't see Larrabee until late 2009 (perhaps 2010), so you still have time to learn RT :-)

Quote:

Probably there won't be any computers that get this piece of hardware by default. Just like the Ageia physics card. I have no idea if that card works well, but as long as the average user/gamer does not have this equipment, it's not really helping the developer, unless he/she is willing to write additional code that supports this hardware. So, I guess it's wise to just write an RT focusing on my current hardware (Intel dual-core CPU, 2000 MHz).

Intel states that Larrabee will enter the market as a competitor to nVidia and ATI, and that they will provide OpenGL and DX drivers. Selling it as a specialized device would be suicide. The only question is: will it be able to offer the same performance as nVidia and ATI?

Quote:

Ok, time to click your link :)
Quote from DevMaster's Jacco: "And believe me, you haven't really lived until you see your first colors bleeding from one surface to another due to diffuse photon scattering…"
That reminds me of seeing my first 2D sprite tank moving 8 years ago :) Pure magic

Well, I never implemented GI yet, but even looking at your first RT-shaded sphere is a wonderful experience.

Quote:
Original post by spek
And then there is the hardware. From what I understand, Intel's Larrabee is trying to give a boost. But what exactly is it? A specialized CPU, like the GPU? In which ways is it going to help? Does it come with an API? And when will it be available?


Here is a paper about the Larrabee architecture:
http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf

Basically they just put many Pentium processors into one chip. Single-threaded applications will not benefit from this, but parallel applications like a RT will.

Glad I am not the only one. I don't understand why the code snippets in the PBRT book recursively refer to other code snippets. Having code is already a distraction when trying to understand the concepts.

Do you recommend other real time ray tracers?

Quote:

I never read 'Ray Tracing from the Ground Up', but from the table of contents it seems that it might be better suited for beginners (I'm thinking about buying it, since I sometimes find it very hard to understand PBRT)...


Quote:
Original post by rumble
Glad I am not the only one. I don't understand why the code snippets in the PBRT book recursively refer to other code snippets. Having code is already a distraction when trying to understand the concepts.

Do you recommend other real time ray tracers?

Quote:

I never read 'Ray Tracing from the Ground Up', but from the table of contents it seems that it might be better suited for beginners (I'm thinking about buying it, since I sometimes find it very hard to understand PBRT)...


For code reference there are many open-source raytracers:
-PovRay
-YAFRAY
-PBRT
-WinOSI
-SunFlow
-Blender

Just to name a few off the top of my head. None of them aims at real time, though, and honestly I don't know of any other open-source realtime raytracer except Arauna (I won't be interested in RTRT until I feel comfortable with RT in the first place, because making RT real time is way beyond my abilities :-( ).
I don't think there are many other open-source RTRTs that can compete with Arauna (and that are still actively maintained), if any.

Quote:
Original post by LeGreg
There's a tutorial here (not yet complete, I know, I still need to translate the rest...):

Raytracer in C++

LeGreg


Yeah, this is the other RT tutorial I was thinking of when I said 'a few tutorials', but I wasn't able to remember where to find it. Thank you for posting :-)

Thanks for the references, guys :) I'm working through the Jacco Bikker tutorial. Funny, that guy teaches computer science at a school near my place.

Got my first raytracer working. Just a plane and a sphere with simple direct dot(L,N) lighting; no reflections, shadows or other 'wow, I wet my pants' stuff, but it's a start. I love the amount of control you have over everything. Basically your entire computer becomes a shader now, with 'no' limitations.
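For reference, the dot(L,N) shading mentioned above boils down to something like this (a minimal sketch; the parameter names are made up, with n the surface normal and p the hit point):

```cpp
#include <algorithm>
#include <cmath>

// Plain Lambert diffuse term: cosine of the angle between the surface normal
// and the direction towards the light, clamped so back-facing points
// receive no light.
double lambert(const double n[3], const double p[3], const double lightPos[3]) {
    double L[3] = {lightPos[0] - p[0], lightPos[1] - p[1], lightPos[2] - p[2]};
    double len = std::sqrt(L[0] * L[0] + L[1] * L[1] + L[2] * L[2]);
    double ndotl = (n[0] * L[0] + n[1] * L[1] + n[2] * L[2]) / len;
    return std::max(0.0, ndotl);
}
```

Multiply the result by the light color and the material's diffuse color (or a texture sample) to get the final pixel.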

I wonder if my speed is correct though. It still takes ~150 ms to render one 800x600 frame. I don't have any optimizations, of course, but I wonder what there is to optimize in a scene of 1 sphere, 1 light (sphere) and a plane...

Bad programming was my first thought. I tried the demo from the website I'm reading. A similar scene there takes ~2.3 seconds. That's even much slower! I read on. Tutorial 3 from that website contains some Phong/reflections without any optimizations and takes ~9 seconds to update on his laptop. I assume that his 1700 MHz laptop from 2005 or before should not be faster than my dual core (2x 1.66 GHz). But guess what, my laptop needs ~2 minutes(!). Maybe only one core is used or something... but still, 2 minutes is really slow.

Something stinks here. But what could it be? The dual-core processor used wrongly? Windows Vista? Any other particular setting? The raytracer program gets ~50% CPU, and the other processes are sleeping, so that should not be a problem either.

Greetings,
Rick

Quote:
Original post by spek
I wonder if my speed is correct though. It still takes ~150 ms to render one 800x600 frame. I don't have any optimizations, of course, but I wonder what there is to optimize in a scene of 1 sphere, 1 light (sphere) and a plane...


It's hard, and probably wrong, to worry about optimizations at this stage.

Quote:

Bad programming was my first thought. I tried the demo from the website I'm reading. A similar scene there takes ~2.3 seconds. That's even much slower! I read on. Tutorial 3 from that website contains some Phong/reflections without any optimizations and takes ~9 seconds to update on his laptop. I assume that his 1700 MHz laptop from 2005 or before should not be faster than my dual core (2x 1.66 GHz). But guess what, my laptop needs ~2 minutes(!). Maybe only one core is used or something... but still, 2 minutes is really slow.

Chances are you have an issue somewhere. Remember that your code makes no use of the two cores: if you multithread it you can get up to a 2x speedup. In the future you might also think about packing many rays and shooting them together (this does wonders, they say). But most important is implementing a good spatial partitioning scheme.
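To illustrate the multithreading point: every pixel is independent, so splitting the image into horizontal bands parallelizes with no locking at all. A sketch using C++11 `std::thread` (`tracePixel` is a hypothetical stand-in for the real per-pixel trace):

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical per-pixel trace; stands in for the raytracer's real work.
unsigned tracePixel(int x, int y) { return (unsigned)(x * y); }

// One horizontal band of rows per hardware thread. Each band writes to a
// disjoint slice of the framebuffer, so no synchronization is needed.
void renderParallel(std::vector<unsigned>& framebuffer, int width, int height) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i) {
        int y0 = (int)(height * i / n), y1 = (int)(height * (i + 1) / n);
        workers.emplace_back([&framebuffer, width, y0, y1] {
            for (int y = y0; y < y1; ++y)
                for (int x = 0; x < width; ++x)
                    framebuffer[y * width + x] = tracePixel(x, y);
        });
    }
    for (auto& w : workers) w.join();   // wait for all bands to finish
}
```

On a dual core this should get close to the 2x speedup mentioned above, since the workload per row is roughly uniform in simple scenes.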

Quote:

Something stinks here. But what could it be? The dual-core processor used wrongly? Windows Vista? Any other particular setting? The raytracer program gets ~50% CPU, and the other processes are sleeping, so that should not be a problem either.

Greetings,
Rick


Without more information I can't help you much, really... You should debug it to see (for example) that rays are tested against primitives exactly once. On the other hand (and just to be sure): you are compiling in release mode, right?
How many primitives are you using? Without spatial structures, increasing the number of primitives drastically increases the time required.

Well, luckily I can't blame my own code so far. I downloaded the tutorials from DevMaster.net. The very first tutorial already runs too slowly, I think. It's made of:
- 1 ground plane
- 2 spheres
- 2 light sources (spheres)
- No trees or optimizations used. Each ray is checked against all 5 primitives
- Pixels have simple diffuse lighting. No additional rays are cast so far

According to the tutorial, the very same scene should be rendered within a second, and that includes shadow rays, reflections and a specular term. However, the simple program without these effects needs 2 seconds, and with these effects it takes 4 seconds.

My copy of both programs runs quite a lot faster: a 150 ms update interval for the simple scene, 380 ms for the same scene with Phong/shadows/reflections. The tutorials have been recompiled in Visual Studio 2008 (the originals were written in VC6). My copies are written in Delphi.

It's strange that my version runs a lot faster (but still not really fast overall). But even if we forget about my version, the tutorial code also runs a lot slower than the website suggests. Probably only 50% of my CPU is used indeed, so that would be ~1.66 GHz. A little bit slower than the author's 1.7 GHz.

Greetings,
Rick

Quote:
Original post by spek
Probably only 50% of my CPU is used indeed, so that would be ~1.66 GHz. A little bit slower than the author's 1.7 GHz.

Greetings,
Rick


Without explicit use of more threads, there is no way the raytracer uses more than one core, and this is true for the Delphi version as well. It seems a bit strange to me that for the same scene the Delphi version runs so much faster than the C++ version. Have you turned on all the optimizations in the VC compiler settings (there are many of them)? I saw a 10x/15x difference moving from debug to release with all optimizations on.

Hmmm, that could be the reason. Basically the code of both versions is the same, so that shouldn't be a big problem. I'm not very familiar with Visual Studio; I downloaded the free version yesterday. I guess all the options are at their defaults.

The project properties/optimization tab shows that all optimizations are disabled. If I switch to "Maximize Speed" or "Full Optimization", I get a build error though ("'/Ox' and '/RTC1' command-line options are incompatible").

So I enabled "Favor Fast Code (/Ot)" and intrinsic functions. That didn't make a real difference though. Maybe there are other options I missed...

I know it's hard to tell what the speed should be. But for a rather simple program like that first tutorial, would the 8 fps I get with the Delphi program be normal? I know the key to high speed is avoiding as many unnecessary intersection tests as possible. But if there are only a few objects...

Thanks for helping,
Rick

Go to the project property page (under the Project menu) and follow:
Configuration Properties -> C/C++ -> Code Generation

and set Basic Runtime Checks to Default.

Then go to the Optimization tab and set everything to Full Optimization.
You might also try enabling SSE/SSE2 in the Code Generation panel and see if it makes any difference.

That works. A speed gain of ~4x. Still slower than the website claims, but OK. I can't optimize 100% though; the Full Optimization option gives this error:
'/Ox' and '/ZI' command-line options are incompatible

Got refractions, reflections, (hard) shadows and diffuse lighting working now :) It's not for practical game usage, but it's fun to do. At least I'm a little bit prepared if we ever switch over to raytracing.

I'm not sure if the RT-produced scenes look better than "normal renderings" though... I'm not only talking about my own little experiment here, but about the screenshots I've seen in general. The reflections and refractions beat the hell out of a normal renderer, but most of the scenes I've seen still look fake... Too sharp, too reflective, too glossy, too clean, too noisy somehow... While a rasterizer makes "thicker", more dirty/blurred scenes. The RT results remind me a little of older games that used pre-rendered backgrounds (Myst, Phantasmagoria, 7th Guest). Beautiful for that time, but not 100% realistic.

Of course, that has everything to do with the limited speed and therefore relatively simple environments/textures/effects. An RT follows physical laws (well, more or less) and therefore should be able to render truly photo-realistic images, in theory. I guess most cinematics and high-quality 3D renderings use raytracing at a higher level, while we programmers focus on simple scenes showing (too many) reflections and other technical stuff.

Oh well, that's another discussion :)
Rick

If you're just looking for some reference source code (as opposed to complete tutorials), then you could take a look at RayWatch (http://sourceforge.net/projects/raywatch). See screenshots here: http://www.gamedev.net/community/forums/topic.asp?topic_id=481216

RayWatch is a simple raytracer, written in (OS-portable) C++, for educational purposes. It uses SDL for loading images (textures). The source code is written for clarity (not performance), is Object Oriented, and is released under the GPL license.

The realism that RT can achieve depends on many factors: for example, a very basic RT (like the one developed in the tutorial) usually does not implement the following features:
- Fresnel reflection/transmission
- reflection models other than Phong
- HDR rendering with tone mapping operators

Actually, RT can produce stunning images, even without GI. But you need to implement a few more features. You will see that you just need to implement area shadows, bump mapping (normal mapping) and Fresnel to get quite realistic images. Of course, you can't get photorealistic realtime raytracing on common hardware yet, but there is virtually no limit to the accuracy with which a raytracer can render a scene.
The main reason why most renders in RT tutorials look fake is that they are programmer art :-)
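For instance, the Fresnel term mentioned above is commonly added via Schlick's approximation, which only needs the cosine of the incident angle and the two refractive indices (a sketch; the function name is illustrative):

```cpp
#include <cmath>

// Schlick's approximation of Fresnel reflectance: reflectivity rises from
// its head-on value r0 towards 1.0 at grazing angles. n1/n2 are the
// refractive indices on either side of the surface (e.g. air 1.0, glass 1.5).
double fresnelSchlick(double cosTheta, double n1, double n2) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
}
```

The result is typically used to weight the reflected ray against the refracted (or diffuse) contribution, which alone removes much of the "too reflective" look of naive raytraced glass.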

EDIT:
This is what I can get right now with my raytracer. Far from photorealism, but a lot of features still have to be implemented...

[Edited by - cignox1 on September 19, 2008 8:57:17 AM]

You have the textures working :) Also normal mapping (I noticed the light scattered a little on the wall behind). I'm moving on to that part as well; can't live without textures.

You are right about programmer art and the missing effects such as HDR, DoF or tone mapping. I think it's rather easy though to draw the raytraced pixels to an OpenGL (or DX) buffer and then let the shaders do the rest. I was also thinking about interpolating to gain speed. Never tried it of course, but how would it look if I traced only every second pixel in each direction? Each traced pixel would be surrounded by untraced ones, which I can interpolate later on the CPU or GPU. I only have to shoot 25% of the rays, and I get some sort of AA in return. I lose quality and sharpness of course, but ironically, lately much effort is spent on heavy blur shaders (HDR bloom, DoF, more filtering) in the GPU world.

Another interesting application of raycasting might be GI. As far as I know, the Lightsmark benchmark renders everything with normal shader techniques & OpenGL, but... the indirect lighting and reflections are sampled via raytracing and stored in a lightmap. It's still a heavy task, but then again, that lightmap does not have to be fully updated each cycle. I think it's faster than rendering a realtime lightmap on the GPU (I have been doing that a lot lately :) ), and also more accurate. The downside is that the CPU gets occupied. Could be a problem if you want AI and physics as well...

Greetings & good luck with your raytracer!
Rick

Once I tried tracing one pixel every two and unlukily is not all that good as on might think. I also tried tracing one pixel every two and then tracing the missing pixel only if the difference between the sourrounding ones was above a threshold. Better, but performance gain was not so high (I don't remember how much though).
A few (perhaps) better approaches could be:
-If antialiasing is used, you can trace one pixel every two. Then, if the difference is high, you trace only a few rays, taking into account also the surrounding pixels. That is, you improve the interpolated value by shooting a few extra rays (i.e. instead of tracing 16 rays you trace only four).
-You don't interpolate the color of the pixels, but instead the surface properties, if all four pixels belong to the same object (specifically the position of the intersections and the uv coordinates). This might become a bit tricky, for example when the uv coords were modified by a texture mapper (you would need intrinsic surface uv), but this will save a lot of primary rays, which in typical cases are the majority.
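The threshold-based variant described above could be sketched roughly like this for a single scanline. Again, `trace_ray` and the threshold value are just illustrative assumptions, not code from an actual tracer.

```python
# Hypothetical sketch of the adaptive scheme: trace every other pixel along
# a scanline, then either interpolate the skipped pixel or shoot a real ray
# for it when the traced neighbours differ too much (likely an edge).
# trace_ray(x) stands in for a real primary-ray routine.

def trace_ray(x):
    # Placeholder scene: dark left half, bright right half (a hard edge).
    return 0.0 if x < 4 else 1.0

def adaptive_scanline(width, threshold=0.25):
    row = [None] * width
    # Pass 1: trace every other pixel.
    for x in range(0, width, 2):
        row[x] = trace_ray(x)
    # Pass 2: fill the gaps, refining only where neighbours disagree.
    for x in range(1, width, 2):
        left = row[x - 1]
        right = row[x + 1] if x + 1 < width else left
        if abs(left - right) > threshold:
            row[x] = trace_ray(x)          # edge: shoot a real ray
        else:
            row[x] = (left + right) / 2.0  # smooth area: cheap interpolation
    return row
```

In the worst case (edges everywhere) this degenerates to tracing every pixel plus the comparison overhead, which matches the modest gains reported above.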

There are tricks available for shadow rays too, but I wouldn't design my RT around them (unless I were targeting a real-time RT)...


