Cone Tracing and Path Tracing - differences.

Started by
71 comments, last by gboxentertainment 11 years, 3 months ago

Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?


The words "easy" and "global illumination" don't really get along all that well.

Indirect lighting is an advanced technique, so if you want to implement it you'll pretty much have to get your hands dirty.
When it comes to global illumination you have quite a few options, each with its own advantages and disadvantages.

There are precomputed methods like precomputed radiance transfer (PRT), photon mapping, lightmap baking, etc. These techniques are mostly static and won't have any effect on dynamic objects in your scene, but they are very cheap at runtime since all the expensive calculations have been done in a pre-processing step. As far as I know, these only support diffuse indirect light bounces.

When you look at more dynamic approaches, there are VPL-based techniques like instant radiosity, which allow for dynamic objects and a single low-frequency light bounce. You could also render the VPLs directly, but this will require some filtering algorithm if you want smooth results and no flickering.
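To make the VPL idea concrete, here is a toy CPU sketch in Python (a real implementation would run on the GPU, typically seeding VPLs from a reflective shadow map). The `surfaces` list of (point, normal, albedo) samples stands in for actual ray casting from the light, visibility between the shading point and each VPL is ignored, and all names are illustrative:

```python
import math
import random

def sample_vpls(light_flux, surfaces, n=64):
    """Scatter virtual point lights (VPLs): conceptually, shoot rays from
    the light and place a VPL wherever a ray first hits the scene. Here
    `surfaces` is a list of (point, normal, albedo) samples standing in
    for real ray casting."""
    vpls = []
    for _ in range(n):
        p, nrm, albedo = random.choice(surfaces)  # stand-in for a ray hit
        # Each VPL carries a share of the light's flux, tinted by the albedo
        vpls.append((p, nrm, [a * f / n for a, f in zip(albedo, light_flux)]))
    return vpls

def indirect_at(x, n_x, vpls):
    """One diffuse indirect bounce at point x with normal n_x: sum the
    VPL contributions (visibility between x and each VPL is ignored)."""
    total = [0.0, 0.0, 0.0]
    for p, nrm, flux in vpls:
        d = [a - b for a, b in zip(p, x)]          # direction x -> VPL
        dist2 = sum(c * c for c in d) + 1e-4       # clamp the singularity
        d = [c / math.sqrt(dist2) for c in d]
        cos_x = max(0.0, sum(a * b for a, b in zip(n_x, d)))
        cos_v = max(0.0, -sum(a * b for a, b in zip(nrm, d)))
        for i in range(3):
            total[i] += flux[i] * cos_x * cos_v / (math.pi * dist2)
    return total
```

The 1/dist² falloff is what causes the flickering and splotches mentioned above when VPLs land close to the shading point, which is why real implementations clamp it and filter the result.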

Another interesting dynamic approach is the light propagation volume technique used by Crytek, which uses reflective shadow maps to inject indirect lighting values into a 3D grid, and then applies a propagation algorithm to fill the rest of the grid. This is fast, but it also only allows for a single low-frequency diffuse bounce.

There's also screen-space indirect lighting, which is an extension of SSAO. Of course, this technique can only use the information available on screen, so it may not give satisfying results.

I gets all your texture budgets!


[quote name='MrOMGWTF']
Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?
[/quote]

I suggest you attempt to write a mirror/shadow-only ray tracer. Nothing beyond high-school-level (or college-level, depending on your point of view) math is required. This isn't true GI, but it's a very good start.

[quote name='MrOMGWTF' timestamp='1344152867' post='4966291']
Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?


I suggest you attempt to write a mirror/shadow-only ray tracer. Nothing beyond high-school-level (or college-level, depending on your point of view) math is required. This isn't true GI, but it's a very good start.
[/quote]

That's a good idea, I'll do it.
Just finished writing a working ray-sphere intersection test.
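For anyone following along, a minimal ray-sphere intersection looks like this (a Python CPU sketch for clarity; function and variable names are illustrative). It solves the quadratic |o + t·d - c|² = r² for the ray parameter t, assuming a normalized direction:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the (normalized)
    ray, or None if the ray misses the sphere."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c            # a == 1 for a normalized direction
    if disc < 0.0:
        return None                   # no real roots: the ray misses
    sqrt_disc = math.sqrt(disc)
    for t in ((-b - sqrt_disc) / 2.0, (-b + sqrt_disc) / 2.0):
        if t > 1e-6:                  # nearest hit in front of the origin
            return t
    return None
```

The small epsilon on t avoids re-hitting the surface a secondary ray starts on, which otherwise shows up as "shadow acne."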
Recently, Ritschel et al. wrote a nice state-of-the-art report on interactive global illumination techniques. Most of them require a solid amount of work, but it’s good to get an overview of what’s out there.

Once you get your ray tracer going, you can quite easily extend it to a path tracer or, with a little more effort, to a photon mapper. Extending to a path tracer is easier, but with a photon mapper you could compute indirect lighting for real-time applications (see McGuire et al.), or you could look into progressive photon mapping (PPM, SPPM, photon beams) and all its extensions if you want photometrically correct lighting, which, however, takes a few more hours to compute.
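As a sketch of the path-tracing extension: the core addition to a mirror/shadow ray tracer is choosing a random bounce direction at each diffuse hit. A common choice is cosine-weighted hemisphere sampling, shown here in illustrative Python (the helper names are my own):

```python
import math
import random

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cosine_sample_hemisphere(normal):
    """Sample a bounce direction with pdf cos(theta)/pi about `normal`.
    For a diffuse path tracer this cancels the cosine in the rendering
    equation, so the throughput update reduces to multiplying by albedo."""
    r1, r2 = random.random(), random.random()
    r = math.sqrt(r1)
    phi = 2.0 * math.pi * r2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - r1))
    # Build an orthonormal basis (u, v, w) around the normal
    w = normal
    a = (0.0, 1.0, 0.0) if abs(w[0]) > 0.9 else (1.0, 0.0, 0.0)
    u = normalize(cross(a, w))
    v = cross(w, u)
    return [x * u[i] + y * v[i] + z * w[i] for i in range(3)]
```

At each hit you recurse (or loop) along the sampled direction, multiply the path throughput by the albedo, and terminate with Russian roulette after a few bounces.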

Cheers!

[quote]
Recently, Ritschel et al. wrote a nice state-of-the-art report on interactive global illumination techniques. Most of them require a solid amount of work, but it’s good to get an overview of what’s out there.

Once you get your ray tracer going, you can quite easily extend it to a path tracer or, with a little more effort, to a photon mapper. Extending to a path tracer is easier, but with a photon mapper you could compute indirect lighting for real-time applications (see McGuire et al.), or you could look into progressive photon mapping (PPM, SPPM, photon beams) and all its extensions if you want photometrically correct lighting, which, however, takes a few more hours to compute.

Cheers!
[/quote]


Hey, thanks for the good information.
And thanks for getting me into photon mapping! This technique is awesome and easy to understand. I'll try to do some optimizations for it.
Voxelizing the geometry before mapping photons will be a big speed-up, I think.
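For reference, the gather step of a basic photon mapper can be sketched like this (illustrative CPU Python; in practice a k-d tree replaces the full sort, and all names here are my own). The radiance estimate divides the power of the k nearest photons by the area of the disc they cover:

```python
import math

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def estimate_radiance(photons, x, k=20):
    """Density estimate at surface point x. `photons` is a list of
    (position, power_rgb) pairs produced by tracing particles from the
    lights and storing a photon at each diffuse hit."""
    nearest = sorted(photons, key=lambda ph: dist2(ph[0], x))[:k]
    r2 = max(dist2(nearest[-1][0], x), 1e-8)   # squared gather-disc radius
    area = math.pi * r2
    out = [0.0, 0.0, 0.0]
    for p, power in nearest:
        for i in range(3):
            out[i] += power[i]
    return [c / area for c in out]
```

The sort makes the lookup the hot spot, which is exactly why a spatial structure over the photons (k-d tree, grid, or the voxelization idea above) is such a big speed-up.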
YES FINALLY I WROTE THE BASE FOR THE RAY TRACER!

[spoiler]cKWqN.png[/spoiler]

Here are the normals of the sphere:

[spoiler]ZK625.png[/spoiler]

It's just the base; it doesn't support lighting and many other things yet. I'll work on lighting now.
I will say this paper is very complicated, and perhaps impossible to implement directly from the paper alone. Additionally, the technique was implemented using some of the latest GPU features, some of which are only available in OpenGL 4.3 or DirectX 11 and/or through Nvidia-specific extensions, so unless you have the latest and greatest Nvidia hardware (I think only Kepler), you will not be able to fully implement it. For example, voxelizing the dynamic objects each frame requires access to a compute shader. In the absence of a compute shader, I guess you could do something with CUDA or OpenCL, but that would require OpenGL interop to write data to a buffer those libraries can use to voxelize and build the octree. You could always pre-build the octree, but then your scene would have to be static, with no dynamic geometry.

That being said, if you would like more information on these techniques, you should check out Cyril Crassin's webpage, as it contains other papers on the techniques this method builds on (http://maverick.inria.fr/Members/Cyril.Crassin/). Also, the newest Unreal Engine 4 takes advantage of this technique (see the demos on YouTube).

[quote]
I will say this paper is very complicated, and perhaps impossible to implement directly from the paper alone. Additionally, the technique was implemented using some of the latest GPU features, some of which are only available in OpenGL 4.3 or DirectX 11 and/or through Nvidia-specific extensions, so unless you have the latest and greatest Nvidia hardware (I think only Kepler), you will not be able to fully implement it. For example, voxelizing the dynamic objects each frame requires access to a compute shader. In the absence of a compute shader, I guess you could do something with CUDA or OpenCL, but that would require OpenGL interop to write data to a buffer those libraries can use to voxelize and build the octree. You could always pre-build the octree, but then your scene would have to be static, with no dynamic geometry.

That being said, if you would like more information on these techniques, you should check out Cyril Crassin's webpage, as it contains other papers on the techniques this method builds on (http://maverick.inria.fr/Members/Cyril.Crassin/). Also, the newest Unreal Engine 4 takes advantage of this technique (see the demos on YouTube).
[/quote]


I can do voxelization in a vertex/fragment shader.
The technique is explained here: http://graphics.snu.ac.kr/class/graphics2011/materials/paper09_voxel_gi.pdf
You basically render the model's vertex coordinates into a texture, and for each pixel of that texture you add a voxel at the position given by that pixel's value.

Maybe I explained it wrong; see the paper for the best explanation.
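That idea, sketched on the CPU in Python for clarity (the shader version writes vertex positions into a texture and reads them back; the function and parameter names here are my own): each vertex position is snapped to the grid cell that contains it.

```python
import math

def voxelize_vertices(vertices, grid_min, cell_size, grid_res):
    """Mark each grid cell that contains a vertex. `vertices` stands in
    for the position texture; returns the set of occupied (i, j, k)
    cell indices inside a grid_res^3 grid starting at grid_min."""
    voxels = set()
    for v in vertices:
        idx = tuple(int(math.floor((v[a] - grid_min[a]) / cell_size))
                    for a in range(3))
        if all(0 <= idx[a] < grid_res for a in range(3)):
            voxels.add(idx)
    return voxels
```

One caveat with the vertex-only version: large triangles with few vertices leave holes in the grid, so the mesh must be tessellated finely enough relative to the cell size (or the triangles rasterized instead).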

[quote]
I can do voxelization in a vertex/fragment shader.
The technique is explained here: http://graphics.snu....09_voxel_gi.pdf
You basically render the model's vertex coordinates into a texture, and for each pixel of that texture you add a voxel at the position given by that pixel's value.

Maybe I explained it wrong; see the paper for the best explanation.
[/quote]


Well yeah, but that's not enough to implement the technique presented in the paper by Crassin. For cone tracing to work you'll need to generate mipmap data for your voxels stored in an octree, and to maintain any kind of performance this octree structure should be held entirely in GPU memory in a linear layout and be recalculated on each scene update by the GPU. You'll really need a compute shader solution to do this.
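The mipmapping step can be illustrated with a CPU sketch in Python (real implementations do this per octree level on the GPU; this assumes a dense n³ grid with n even, and the names are mine): each parent voxel simply averages its 2×2×2 children, and repeating the step yields the coarser levels the cones sample from.

```python
def downsample(grid):
    """One mip level: average each 2x2x2 block of a voxel grid (a nested
    n x n x n list of scalar occupancy/radiance values, n even) into one
    parent voxel."""
    half = len(grid) // 2
    out = [[[0.0] * half for _ in range(half)] for _ in range(half)]
    for i in range(half):
        for j in range(half):
            for k in range(half):
                s = 0.0
                for di in range(2):
                    for dj in range(2):
                        for dk in range(2):
                            s += grid[2*i + di][2*j + dj][2*k + dk]
                out[i][j][k] = s / 8.0
    return out
```

During cone tracing, a wider cone footprint simply reads from a coarser level of this pyramid, which is what makes the glossy/diffuse cone queries cheap.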

I gets all your texture budgets!


[quote]
Well yeah, but that's not enough to implement the technique presented in the paper by Crassin. For cone tracing to work you'll need to generate mipmap data of your voxels stored in an octree, and to maintain any kind of performance this octree structure should be held entirely in GPU memory in a linear layout and it should be recalculated on each scene update by the GPU, and you'll really need a compute shader solution to do this.
[/quote]

This is exactly right; the key is that it is an octree structure and that it is generated, updated, and accessed entirely on the GPU.

You might be in luck. A book titled OpenGL Insights just came out, and Cyril Crassin has a chapter in it entitled "Octree-Based Sparse Voxelization Using the GPU Hardware Rasterizer", in which he explains the octree voxelization technique presented in the paper. It shows how to use the compute shader and all that. What's more, it's your lucky day: the website has a link to sample chapters you can download for free as PDFs, and this chapter is one of them.

See http://openglinsights.com/

This topic is closed to new replies.
