Cone Tracing and Path Tracing - differences.

I'm not sure whether I can follow your train of thought exactly here, but I would like to point a couple of things out:

First of all, be wary of comparing reflective shadow maps to voxels. While an RSM can be used to generate a limited set of voxels, it does not contain the same data you would expect from a proper set of voxels. With voxels, the world-space position of a sampled voxel is inferred from its neighbours and the size of your voxel volume (note that, formally speaking, voxels themselves do not have a size, just like pixels don't), whereas the world-space position in an RSM is obtained by reconstructing a light-space position from the stored depth and then transforming it into world space with the inverse light transformation.
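If it helps, here is a minimal sketch of what that RSM reconstruction typically looks like (the function and parameter names are made up, and I'm assuming GLM and a GL-style depth stored in [0,1]):

[code]
#include <glm/glm.hpp>

// Hypothetical helper: reconstruct the world-space position of an RSM texel.
// 'uv' is the texel coordinate in [0,1]^2, 'depth' the stored depth in [0,1],
// 'invLightViewProj' the inverse of the light's view-projection matrix.
glm::vec3 rsmTexelToWorld(glm::vec2 uv, float depth, const glm::mat4& invLightViewProj)
{
    // Back-project to normalized device coordinates (assumes GL conventions).
    glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);

    // Apply the inverse light transform and undo the perspective divide.
    glm::vec4 world = invLightViewProj * ndc;
    return glm::vec3(world) / world.w;
}
[/code]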

The RSM method reminds me more of the global illumination technique described by Kaplanyan and Dachsbacher, but they create a limited low-frequency voxel representation of the scene (non-mipmapped!) which they use as a starting point for creating a light volume with a propagation algorithm.
The method of spawning VPLs using an RSM also sounds more like a technique called instant radiosity, which as far as I know has very little in common with the voxel cone tracing paper.


Second, the factor used for energy conservation in Lambertian lighting (N.L lighting) is a fixed constant of 1/Pi. Since a cone trace in the presented paper is actually just a path trace of a correctly filtered mip-map of your high-frequency voxel data (if I understand correctly, I haven't studied it in depth yet), there's no need to include any other factor in your BRDF to maintain energy conservation while doing cone tracing.
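For reference, the 1/Pi is just the normalization of the Lambertian BRDF so that a surface never reflects more energy than it receives (a quick sanity check, with \rho the albedo):

\[ f_r = \frac{\rho}{\pi}, \qquad \int_{\Omega} f_r \cos\theta \,\mathrm{d}\omega = \frac{\rho}{\pi}\int_{0}^{2\pi}\!\!\int_{0}^{\pi/2} \cos\theta \,\sin\theta \,\mathrm{d}\theta \,\mathrm{d}\phi = \rho \le 1 \]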

Your assumption about determining the correct mip level based on the cone angle and the distance to the sampled surface sounds correct as far as I understand it.
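For what it's worth, one common way of picking that level (I'm not claiming this is exactly what the paper does) is to match the cone's footprint at distance t along the axis to the voxel size of the mip, with \theta the cone aperture and v_0 the leaf voxel size:

\[ d(t) = 2\,t\,\tan\!\left(\tfrac{\theta}{2}\right), \qquad \mathrm{level}(t) = \max\!\left(0,\; \log_2\frac{d(t)}{v_0}\right) \]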


[quote]Since a cone trace in the presented paper is actually just a path trace of a correctly filtered mip-map of your high-frequency voxel data (if I understand correctly, I haven't studied it in depth yet), there's no need to include any other factor in your BRDF to maintain energy conservation while doing cone tracing.[/quote]


So basically there are no actual cones in this technique? Just ray tracing of filtered geometry?

[quote]So basically there are no actual cones in this technique? Just ray tracing of filtered geometry?[/quote]

If I understand correctly, it is just a path trace of your pre-filtered voxel data, but doing such a path trace is still a cone trace, so technically there are cones involved ;)

You can look at a cone trace as tracing a bundle of paths and weighting the results of each path, which is basically an integration over a disk-shaped surface. In this technique your voxel data is actually pre-integrated (i.e. downsampled) for each step along your cone axis, which means you only have to do a path trace on the pre-integrated data.
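A very rough sketch of what I mean, with a made-up sampleVoxelMip() standing in for whatever lookup you use into the pre-filtered voxel data (just an illustration of the accumulation, not the paper's actual implementation):

[code]
#include <glm/glm.hpp>
#include <cmath>
#include <algorithm>

// Hypothetical lookup into the pre-filtered voxel data: rgb = pre-integrated
// radiance, a = pre-integrated opacity at the given position and mip level.
glm::vec4 sampleVoxelMip(const glm::vec3& pos, float mipLevel);

// Sketch of a single cone trace over the mip chain. 'halfAngle' is half the cone
// aperture, 'voxelSize' the edge length of a leaf voxel, 'maxDist' the cut-off.
glm::vec3 traceCone(glm::vec3 origin, glm::vec3 dir, float halfAngle,
                    float voxelSize, float maxDist)
{
    glm::vec3 radiance(0.0f);
    float occlusion = 0.0f;
    float t = voxelSize;  // start a little away from the origin to avoid self-sampling

    while (t < maxDist && occlusion < 1.0f)
    {
        float diameter = 2.0f * t * std::tan(halfAngle);             // cone footprint at distance t
        float mip = std::max(0.0f, std::log2(diameter / voxelSize)); // pick the matching mip
        glm::vec4 s = sampleVoxelMip(origin + t * dir, mip);

        // Front-to-back compositing: samples behind opaque voxels contribute less.
        radiance  += (1.0f - occlusion) * s.a * glm::vec3(s);
        occlusion += (1.0f - occlusion) * s.a;

        t += diameter * 0.5f;  // step proportionally to the footprint
    }
    return radiance;
}
[/code]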


Has anyone tried to implement the soft-shadow cone tracing explained in Crassin's thesis (p.162)? I think I might give this one a go first because it seems to be a lot simpler to understand and possibly much simpler to implement, so it would be a good starting point.

It is just a "cone" with its apex at the light's position, traced in the light's direction. Opacity values are accumulated along it, which I believe can be based on the percentage of the shadow caster lying within the cone at each mip-map level corresponding to the cone's radius.
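Something like this is what I have in mind for the accumulation, reusing the made-up sampleVoxelMip() from the snippet above (just a sketch of my understanding, not taken from the thesis):

[code]
// Sketch of the opacity accumulation for a shadow cone; assumes the same includes
// and the hypothetical sampleVoxelMip() declaration as the traceCone() sketch above.
// Returns 0 = fully lit, 1 = fully shadowed.
float traceShadowCone(glm::vec3 origin, glm::vec3 dir, float halfAngle,
                      float voxelSize, float maxDist)
{
    float occlusion = 0.0f;
    float t = voxelSize;
    while (t < maxDist && occlusion < 1.0f)
    {
        float diameter = 2.0f * t * std::tan(halfAngle);
        float mip = std::max(0.0f, std::log2(diameter / voxelSize));
        float alpha = sampleVoxelMip(origin + t * dir, mip).a;  // pre-filtered opacity
        occlusion += (1.0f - occlusion) * alpha;                 // accumulate front-to-back
        t += diameter * 0.5f;
    }
    return occlusion;
}
[/code]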
With the voxelization of the scene, would you voxelize planar surfaces down to the most detailed level? Crassin's voxel cone tracing video shows that the floor of the Sponza scene was fully voxelized, but that seems like a waste of memory considering that planar objects are very cheap in traditional rasterization because they need so few triangles. But I guess for cone-traced GI you would need the more detailed voxels to correctly transfer specular reflections onto planar surfaces.

I don't understand one thing in this voxel cone tracing: when should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between the ray and some geometry. What about voxel cone tracing?

[quote]With the voxelization of the scene, would you voxelize planar surfaces down to the most detailed level? Crassin's voxel cone tracing video shows that the floor of the Sponza scene was fully voxelized, but that seems like a waste of memory considering that planar objects are very cheap in traditional rasterization because they need so few triangles. But I guess for cone-traced GI you would need the more detailed voxels to correctly transfer specular reflections onto planar surfaces.[/quote]


That would be up to your required level of detail and your rendering budget, I suppose.


[quote]I don't understand one thing in this voxel cone tracing: when should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between the ray and some geometry. What about voxel cone tracing?[/quote]


As I said in my previous post, you should look at cone tracing as tracing a dense bundle of paths, so the same rules apply for cone tracing as they do for path tracing


[quote]As I said in my previous post, you should look at cone tracing as tracing a dense bundle of paths, so the same rules apply for cone tracing as they do for path tracing[/quote]


So, as the path gets longer, we lower the mipmap level?

[quote]I don't understand one thing in this voxel cone tracing: when should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between the ray and some geometry. What about voxel cone tracing?[/quote]


My thought would be that you just define a maximum range for your cones, and you'd probably determine it by approximating the distance at which the illumination contribution becomes unnoticeable, because the further away an object is, the less it contributes due to distance attenuation.
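For example, if you assume the contribution simply falls off as 1/d^2, you could pick the smallest contribution \varepsilon you still care about and cut the cone off where it drops below that (just my own reasoning, not something from the paper), with E the strongest radiance you expect to gather:

\[ \frac{E}{d^2} < \varepsilon \quad\Longrightarrow\quad d_{\max} = \sqrt{\frac{E}{\varepsilon}} \]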

I think I was quite a bit wrong with my previous understanding.

I've thought about it a bit more and here's my new understanding:

Let's assume that:
- the octree structure is already created
- the normals are stored in the leaves and averaged/filtered to the parent nodes.
- colors are stored in the leaves (but not averaged/filtered).

Referring to the attached drawing below:
For simplicity's sake, if we just take a single light ray/photon that hits a point on the surface of a blue wall:

[attached drawing: wv1w4.jpg]

- Sample the color and world position of this pixel from the light's point of view and do a lookup in the octree to find the index of this leaf voxel.
- The color values are filtered to higher levels by dividing by two (I'm not sure how correct this is).

Now for simplicity's sake, let's take a single pixel from our camera's point of view that we want to illuminate - let's assume this pixel is a point on the floor surface.

- If we trace just one "theoretical" cone in the direction of the wall (in reality you would trace several cones in several directions), the largest voxels that fall within the radius of the cone at each distance along the cone's range would be taken into consideration, as highlighted by the black squares. You wouldn't actually intersect a cone volume with the voxel volumes, because that would be inefficient; instead you would just use a function that specifies, for a given distance along the cone, which voxel level should be considered.
- For each captured voxel, you would calculate NdotL/(distance squared), where N is the normal stored in that voxel prior to rendering (a filtered value for higher-level voxels) and L is the direction from the position on the wall to the point on the floor surface. The values calculated this way for each captured voxel would then be added onto the color of the pixel corresponding to that point on the floor; a rough sketch of this gather is shown below.
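Here's how I currently picture that gather (made-up names again; sampleVoxelMip() is the same idea as in the earlier snippets, plus a hypothetical sampleVoxelNormalMip() for the filtered normals, and the same includes are assumed):

[code]
// Rough sketch of the diffuse gather described above, for a single cone.
glm::vec3 sampleVoxelNormalMip(const glm::vec3& pos, float mipLevel);

glm::vec3 gatherDiffuseCone(glm::vec3 floorPos, glm::vec3 coneDir, float halfAngle,
                            float voxelSize, float maxDist)
{
    glm::vec3 bounce(0.0f);
    float t = voxelSize;
    while (t < maxDist)
    {
        float diameter = 2.0f * t * std::tan(halfAngle);
        float mip = std::max(0.0f, std::log2(diameter / voxelSize)); // voxel level for this distance
        glm::vec3 p = floorPos + t * coneDir;

        glm::vec4 voxel = sampleVoxelMip(p, mip);    // color injected from the light
        glm::vec3 N = sampleVoxelNormalMip(p, mip);  // filtered normal
        glm::vec3 L = -coneDir;                      // direction from the voxel back to the floor point

        // NdotL / d^2, weighted by the voxel's color and opacity.
        bounce += glm::vec3(voxel) * voxel.a
                * std::max(0.0f, glm::dot(N, L)) / (t * t);

        t += diameter * 0.5f;
    }
    return bounce;
}
[/code]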

For speculars:

[attached drawing: MBVfG.jpg]

- You would make the "cone" radius smaller, so at greater distances from the point on the floor, lower-level (more detailed) voxels would be captured. In this case, one leaf voxel is captured and its contribution is added. I think for speculars you would use a different lighting formula that takes the camera direction into account.

@gboxentertainment:
What if we have a situation like this:

[attached drawing: IKR09.png]

The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.

