
Cone Tracing and Path Tracing - differences.


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
72 replies to this topic

#1 MrOMGWTF   Members   -  Reputation: 433


Posted 26 July 2012 - 05:23 AM

Hi.
I'm trying to implement this technique: http://blog.icare3d....tion-using.html
The paper about it isn't detailed and it's missing some info.
I did some research about voxels and found a technique for fast voxelization. But I still don't know about cone tracing; there are no papers about this technique. I know the basic difference between path tracing and cone tracing: you trace cones instead of rays, so instead of shooting hundreds of rays you just shoot a few cones. But I'm still missing a lot of information needed to implement it. Can somebody explain cone tracing to me, or give me a link to a paper? I WAS using Google and found nothing; all I found is an undetailed wiki page about cone tracing. Thanks in advance. Sorry for my poor English.

Edited by MrOMGWTF, 26 July 2012 - 05:25 AM.



#2 Pottuvoi   Members   -  Reputation: 258


Posted 26 July 2012 - 07:08 AM

In cone tracing you sample prefiltered geometry (mipmaps).

Depending on the radius of the cone you step down to a coarser mip level, so when the cone hits something you get the sum of all the information in the smaller voxels further down the mipmap tree. This is why you don't need to shoot so many rays/cones.
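A minimal CPU-side sketch of that marching loop (illustrative only; in a real implementation `sample` would be a 3D texture fetch with hardware trilinear/mip filtering, and the constants here are assumptions):

```python
import math

def cone_trace(sample, half_angle, max_dist, base_voxel_size):
    """March a cone through prefiltered (mipmapped) voxel data.

    `sample(mip, t)` is a caller-supplied lookup returning (color, alpha)
    for the prefiltered voxel the cone covers at distance t -- a stand-in
    for a 3D texture fetch in a shader.
    """
    color, opacity = 0.0, 0.0
    t = base_voxel_size                 # start one voxel out to avoid self-hits
    while t < max_dist and opacity < 0.99:
        # the cone footprint (diameter) grows linearly with distance
        diameter = max(base_voxel_size, 2.0 * t * math.tan(half_angle))
        mip = math.log2(diameter / base_voxel_size)   # coarser data farther out
        c, a = sample(mip, t)
        # front-to-back compositing, as in volume rendering
        color += (1.0 - opacity) * a * c
        opacity += (1.0 - opacity) * a
        t += diameter * 0.5             # step proportional to cone width
    return color, opacity
```

One such march replaces the hundreds of thin rays a path tracer would average over the same solid angle.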

Older papers give more information on their cone tracing:
http://www.icare3d.org/research-cat/publications/beyond-triangles-gigavoxels-effects-in-video-games.html

#3 MrOMGWTF   Members   -  Reputation: 433


Posted 26 July 2012 - 08:58 AM

In cone tracing you sample prefiltered geometry (mipmaps).

Depending on the radius of the cone you step down to a coarser mip level, so when the cone hits something you get the sum of all the information in the smaller voxels further down the mipmap tree. This is why you don't need to shoot so many rays/cones.

Older papers give more information on their cone tracing:
http://www.icare3d.o...ideo-games.html


Is it possible to do cone tracing on polygons?

Edited by MrOMGWTF, 26 July 2012 - 10:18 AM.


#4 Pottuvoi   Members   -  Reputation: 258


Posted 26 July 2012 - 10:11 AM

Cone tracing of polygons is possible, but beam/cone tracing is quite different when using polygons. Voxels simply make it easy since they are easily prefilterable; polygons are not.

The easiest way to get it to work with polygons is to voxelize them and use voxel cone tracing (which is apparently what Unreal Engine 4 does).

#5 MrOMGWTF   Members   -  Reputation: 433


Posted 01 August 2012 - 07:17 AM

Oh, I have another question: if a cone intersects a voxel, where is the hit point? A ray has no thickness, so it hits at one point, but a cone has volume.

Edited by MrOMGWTF, 01 August 2012 - 07:18 AM.


#6 Necrolis   Members   -  Reputation: 1276


Posted 01 August 2012 - 08:30 AM

The paper about it isn't detailed and it's missing some info.

Just FYI, the first part, relating to the voxelization, was released as a free chapter from the OpenGL Insights book; you can find it here (according to Crassin's Twitter page, the full source will be released soon, probably to the git repo here).

#7 MrOMGWTF   Members   -  Reputation: 433


Posted 02 August 2012 - 05:37 AM


The paper about it isn't detailed and it's missing some info.

Just FYI, the first part, relating to the voxelization, was released as a free chapter from the OpenGL Insights book; you can find it here (according to Crassin's Twitter page, the full source will be released soon, probably to the git repo here).


I know how to do fast voxelization. I'm looking for info about cone tracing.

#8 Pottuvoi   Members   -  Reputation: 258


Posted 03 August 2012 - 09:52 AM

Oh, I have another question: if a cone intersects a voxel, where is the hit point? A ray has no thickness, so it hits at one point, but a cone has volume.

I'm quite sure the cone's size is a byproduct of the mipmaps: the voxels get bigger, so the ray basically gets 'volume' as well.
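Put differently, the 'volume' lives in the mip level you sample: each level up doubles the voxel footprint, so you pick the level whose voxels are about as wide as the cone at that distance. A tiny illustration (function name and numbers are just examples):

```python
import math

def mip_for_cone(distance, half_angle, base_voxel_size):
    # footprint (diameter) of the cone at this distance along its axis
    diameter = 2.0 * distance * math.tan(half_angle)
    # pick the mip whose voxels are roughly as large as that footprint
    return max(0.0, math.log2(max(diameter, base_voxel_size) / base_voxel_size))

# farther along the cone -> wider footprint -> coarser mip level
assert mip_for_cone(1.0, 0.25, 0.1) < mip_for_cone(8.0, 0.25, 0.1)
```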

#9 jameszhao00   Members   -  Reputation: 271


Posted 04 August 2012 - 04:36 PM

Paper's at http://maverick.inria.fr/Publications/2011/CNSGE11b/index.php

Seems extremely tricky to implement well.

#10 MrOMGWTF   Members   -  Reputation: 433


Posted 05 August 2012 - 01:47 AM

Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?

#11 Radikalizm   Crossbones+   -  Reputation: 2793


Posted 05 August 2012 - 06:22 AM

Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?


The words 'easy' and 'global illumination' don't really go together all that well.

Indirect lighting is an advanced technique, so if you want to implement it you'll pretty much have to get your hands dirty.
When it comes to global illumination you have quite a few options, each with their own advantages and disadvantages.

There are precomputed methods like precomputed radiance transfer (PRT), photon mapping, lightmap baking, etc. These techniques are mostly static and won't have any effect on dynamic objects in your scene, but they are very cheap to run since all the expensive calculations have been done in a pre-processing step. These only support diffuse indirect light bounces as far as I know.

When you look at more dynamic approaches you have VPL-based approaches like instant radiosity, which allow for dynamic objects and a single low-frequency light bounce. You could also directly use VPLs, but this will require some filtering algorithm if you want to get smooth results and prevent flickering.

Another interesting dynamic approach is the light propagation volume approach used by Crytek which uses reflective shadow maps to set up a 3D grid with indirect lighting values, and which then applies a propagation algorithm to correctly fill the grid. This is fast, but also only allows for a single low-frequency diffuse bounce.

There's also screen-space indirect lighting, which is an extension of SSAO. Of course this technique can only use the information available on screen, so it may not give satisfying results.

I gets all your texture budgets!


#12 jameszhao00   Members   -  Reputation: 271


Posted 05 August 2012 - 10:33 AM

Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?


I suggest you attempt to write a mirror/shadow-only ray tracer. Nothing beyond high-school-level (or college, depending on your POV) math is required. This isn't true GI, but it's a very good start.

Edited by jameszhao00, 05 August 2012 - 10:35 AM.


#13 MrOMGWTF   Members   -  Reputation: 433


Posted 05 August 2012 - 12:24 PM


Well, I was trying hard, but my 13-year-old brain can't handle cone tracing.
Do you know any other easy-to-implement, real-time global illumination techniques?


I suggest you attempt to write a mirror/shadow-only ray tracer. Nothing beyond high-school-level (or college, depending on your POV) math is required. This isn't true GI, but it's a very good start.


That's a good idea, I'll do it.
Just finished a working ray-sphere intersection test.
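For reference, the standard ray-sphere test solves a quadratic in the ray parameter t (this is the textbook form, not code from this thread; `direction` is assumed normalized):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance t to the nearest hit, or None on a miss."""
    # oc = vector from sphere center to ray origin
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * e for d, e in zip(direction, oc))   # dot(direction, oc)
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - c                                # discriminant / 4
    if disc < 0.0:
        return None                                 # ray misses the sphere
    t = -b - math.sqrt(disc)                        # nearer of the two roots
    return t if t > 0.0 else None                   # hit behind the origin
```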

#14 Tsus   Members   -  Reputation: 1002


Posted 07 August 2012 - 01:59 AM

Recently, Ritschel et al. wrote a nice state-of-the-art report on interactive global illumination techniques. Most of them require a solid amount of work, but it’s good to get an overview of what’s out there.

Once you get your ray tracer going, you can quite easily extend it to a path tracer or – with a little more effort – to a photon mapper. Extending to a path tracer is easier, but with a photon mapper you could compute indirect lighting for real-time applications, see McGuire et al., or you could look into progressive photon mapping (PPM, SPPM, Photon Beams) and all its extensions if you want photometrically correct lighting, which, however, takes a few more hours to compute.

Cheers!

Acagamics e.V. – IGDA Student Game Development Club (University of Magdeburg, Germany)


#15 MrOMGWTF   Members   -  Reputation: 433


Posted 07 August 2012 - 02:15 PM

Recently, Ritschel et al. wrote a nice state-of-the-art report on interactive global illumination techniques. Most of them require a solid amount of work, but it’s good to get an overview of what’s out there.

Once you get your ray tracer going, you can quite easily extend it to a path tracer or – with a little more effort – to a photon mapper. Extending to a path tracer is easier, but with a photon mapper you could compute indirect lighting for real-time applications, see McGuire et al., or you could look into progressive photon mapping (PPM, SPPM, Photon Beams) and all its extensions if you want photometrically correct lighting, which, however, takes a few more hours to compute.

Cheers!


Hey, thanks for the good information.
And thanks for getting me into photon mapping! This technique is awesome and easy to understand. I'll try to do some optimizations for it.
Voxelizing the geometry before mapping photons would be a big speedup, I think.

#16 MrOMGWTF   Members   -  Reputation: 433


Posted 08 August 2012 - 02:29 PM

YES FINALLY I WROTE THE BASE FOR THE RAY TRACER!

[screenshot in spoiler]


Here are the normals of the sphere:

[screenshot in spoiler]


It's just the base; it doesn't support lighting and many other things. I'll work on lighting now.

Edited by MrOMGWTF, 08 August 2012 - 02:41 PM.


#17 scyfris   Members   -  Reputation: 168


Posted 08 August 2012 - 03:29 PM

I will say this paper is very complicated, and perhaps impossible to implement directly from the paper. Additionally, the technique was implemented taking advantage of some of the latest GPU features, some of which are only available in OpenGL 4.3 or DirectX 11 and/or Nvidia-specific extensions, so unless you have the latest and greatest Nvidia hardware (I think only Kepler), you will not be able to fully implement it. For example, voxelizing the dynamic objects each frame requires access to a compute shader. In the absence of a compute shader, I guess you could do something with CUDA or OpenCL, but it would require OpenGL interop to write data to a buffer that those libraries can use to build the octree or voxelize. You could always pre-build the octree, but then your scene would have to be static, with no dynamic geometry.

That being said, if you would like more information on these techniques, you should check out Cyril Crassin's webpage, as it contains other papers on techniques that this method uses (http://maverick.inria.fr/Members/Cyril.Crassin/). Also, the newest Unreal Engine 4 takes advantage of this technique (see the demos on YouTube).

#18 MrOMGWTF   Members   -  Reputation: 433


Posted 09 August 2012 - 12:51 AM

I will say this paper is very complicated, and perhaps impossible to implement directly from the paper. Additionally, the technique was implemented taking advantage of some of the latest GPU features, some of which are only available in OpenGL 4.3 or DirectX 11 and/or Nvidia-specific extensions, so unless you have the latest and greatest Nvidia hardware (I think only Kepler), you will not be able to fully implement it. For example, voxelizing the dynamic objects each frame requires access to a compute shader. In the absence of a compute shader, I guess you could do something with CUDA or OpenCL, but it would require OpenGL interop to write data to a buffer that those libraries can use to build the octree or voxelize. You could always pre-build the octree, but then your scene would have to be static, with no dynamic geometry.

That being said, if you would like more information on these techniques, you should check out Cyril Crassin's webpage, as it contains other papers on techniques that this method uses (http://maverick.inria.fr/Members/Cyril.Crassin/). Also, the newest Unreal Engine 4 takes advantage of this technique (see the demos on YouTube).


I can do voxelization in a vertex/fragment shader.
The technique is explained here: http://graphics.snu.ac.kr/class/graphics2011/materials/paper09_voxel_gi.pdf
You basically render the model's vertex coordinates into a texture, and for each pixel of that texture you add a voxel at the position stored in that pixel.

Maybe I explained it wrong; see the paper for the best explanation.
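A CPU-side sketch of that quantization step (the paper does it on the GPU by rendering positions into a texture; names here are illustrative). Note it only captures vertices, not whole triangle surfaces, so large or sparsely tessellated triangles can leave holes:

```python
def voxelize_points(positions, voxel_size):
    """Quantize vertex positions into a sparse voxel set -- the CPU
    analogue of scattering position-texture pixels into a voxel grid."""
    voxels = set()
    for x, y, z in positions:
        voxels.add((int(x // voxel_size),
                    int(y // voxel_size),
                    int(z // voxel_size)))
    return voxels
```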

#19 Radikalizm   Crossbones+   -  Reputation: 2793


Posted 09 August 2012 - 02:56 AM

I can do voxelization in a vertex/fragment shader.
The technique is explained here: http://graphics.snu....09_voxel_gi.pdf
You basically render the model's vertex coordinates into a texture, and for each pixel of that texture you add a voxel at the position stored in that pixel.

Maybe I explained it wrong; see the paper for the best explanation.


Well yeah, but that's not enough to implement the technique presented in the paper by Crassin. For cone tracing to work you'll need to generate mipmap data for your voxels stored in an octree, and to maintain any kind of performance this octree structure should be held entirely in GPU memory in a linear layout and recalculated on each scene update by the GPU. You'll really need a compute shader solution to do this.
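The mip data itself is conceptually just repeated 2x2x2 averaging of the finer level. A dense-grid sketch of one downsampling step (the paper does this over a sparse octree with a compute shader, so this only shows the idea, not their implementation):

```python
def downsample(grid):
    """Average 2x2x2 blocks of a dense voxel grid (edge length must be
    even) to produce the next coarser mip level."""
    n = len(grid) // 2
    out = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for x in range(n):
        for y in range(n):
            for z in range(n):
                total = sum(grid[2*x + dx][2*y + dy][2*z + dz]
                            for dx in (0, 1)
                            for dy in (0, 1)
                            for dz in (0, 1))
                out[x][y][z] = total / 8.0
    return out
```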

I gets all your texture budgets!


#20 scyfris   Members   -  Reputation: 168


Posted 09 August 2012 - 10:12 AM

Well yeah, but that's not enough to implement the technique presented in the paper by Crassin. For cone tracing to work you'll need to generate mipmap data for your voxels stored in an octree, and to maintain any kind of performance this octree structure should be held entirely in GPU memory in a linear layout and recalculated on each scene update by the GPU. You'll really need a compute shader solution to do this.


This is exactly right, the key being that it is an octree structure and that it is entirely generated/updated/accessed on the GPU.

You might be in luck. A book titled OpenGL Insights just came out, and Cyril Crassin has a chapter in it entitled "Octree-Based Sparse Voxelization Using the GPU Hardware Rasterizer" in which he explains the octree voxelization technique presented in the paper. It shows how to use the compute shader and all that. What's more, it's your lucky day, because the website has a link to sample chapters you can download for free as PDFs, and this chapter is one of them.

See http://openglinsights.com/



