
Cone Tracing and Path Tracing - differences.


72 replies to this topic

#21 scyfris   Members   -  Reputation: 168


Posted 09 August 2012 - 10:15 AM

That being said, I still wouldn't recommend implementing this paper if you are new to this field, although it would be interesting to implement the octree-only part, as you'll learn a lot about how all these data structures fit together on the GPU. If you decide to do that, please let us know how it went :-)


#22 MrOMGWTF   Members   -  Reputation: 440


Posted 09 August 2012 - 11:42 AM

Well yeah, actually I have just about the worst GPU ever.
This is my GPU: http://www.geforce.com/hardware/desktop-gpus/geforce-9500-gt/specifications
So I can't do anything with compute shaders.

#23 gboxentertainment   Members   -  Reputation: 770


Posted 14 August 2012 - 06:39 AM

I think it's also the way the paper is organized that can be confusing. As for the cone-tracing section, this is how I've interpreted it:

1. Capture direct illumination first (9.7.2) to store the incoming radiance into each leaf. This is done by placing a camera at the light's position and direction to create a light-view map (just like shadow mapping). I think each pixel location is transformed into a world-space position and the index of the leaf corresponding to that position is derived. Two pieces of information, the direction distribution and the energy (which I think is the color?) of that pixel, are then stored in the leaf.

2. Correct me if I've misinterpreted the paper, but I think the values are averaged at each level from the bottom leaves to the top of the octree (is the direction distribution also averaged?).

3. The actual cone-tracing part is what I have the most trouble understanding: if the color values are already stored and averaged out across all nodes in the octree, wouldn't it just be a matter of projecting those colors onto the screen (or a screen texture) to obtain the indirect lighting?

#24 CryZe   Members   -  Reputation: 768


Posted 14 August 2012 - 06:51 AM

3. The actual cone-tracing part is what I have the most trouble understanding: if the color values are already stored and averaged out across all nodes in the octree, wouldn't it just be a matter of projecting those colors onto the screen (or a screen texture) to obtain the indirect lighting?

The octree only represents the scene with its direct illumination, and also contains different three-dimensional mip-map levels of the scene. To obtain indirect lighting you still need to trace rays, or even better cones, through the scene as you would with Path Tracing. Cone Tracing an SVO is just a feasible way to realise Path Tracing in a real-time application.

Edited by CryZe, 14 August 2012 - 06:52 AM.


#25 gboxentertainment   Members   -  Reputation: 770


Posted 15 August 2012 - 04:02 AM

you still need to trace rays, or even better cones, through the scene as you would with Path Tracing


So for each pixel on the screen, I would send out a cone with its apex starting at the pixel?
Or do the apexes of the cones start from a point on every surface?

Edit: Okay, I think I get it now. For every pixel on the screen, a number of cones are spawned from the surface point corresponding to that pixel in world space. These are used to sample the pre-integrated information from the voxelized volumes intersecting the cones. The final gathering involves averaging the total of all the information collected by the cones, and this is projected onto the pixel.

Am I correct?

Edited by gboxentertainment, 15 August 2012 - 05:37 AM.


#26 CryZe   Members   -  Reputation: 768


Posted 15 August 2012 - 05:57 AM

Yes, you're mostly correct. The information is not just averaged together, though: the incoming radiance of the cones is evaluated against the BRDF of the surface to solve the rendering equation.
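
In rough shader terms it would look something like the sketch below (just my own illustration, not code from the paper; coneRadiance() stands in for whatever the actual cone march through the SVO returns):

// hypothetical helper: marches one cone through the SVO and returns the incoming radiance
float3 coneRadiance(float3 origin, float3 dir, float halfAngle);

static const int NUM_CONES = 5;
static const float PI = 3.14159265;

float3 gatherIndirectDiffuse(float3 P, float3 N, float3 albedo,
                             float3 coneDirs[NUM_CONES], float coneWeights[NUM_CONES])
{
    float3 result = 0;
    for (int i = 0; i < NUM_CONES; i++)
    {
        float3 L  = coneDirs[i];                         // cone axes spread over the hemisphere around N
        float3 Li = coneRadiance(P, L, radians(30.0));   // incoming radiance gathered by this cone
        // weight by the surface BRDF (Lambertian: albedo/Pi) and the cosine term,
        // rather than simply averaging the cone results
        result += coneWeights[i] * Li * saturate(dot(N, L)) * albedo / PI;
    }
    return result;
}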

Edited by CryZe, 15 August 2012 - 05:58 AM.


#27 gboxentertainment   Members   -  Reputation: 770


Posted 17 August 2012 - 06:38 AM

Okay, now I finally understand that the incoming radiance is "splatted" into the leaves of the structure, which correspond to the surfaces of the scene. This is done using the same concepts as Reflective Shadow Maps, but instead of generating Virtual Point Lights, color values are directly added to the leaves, then the values are transferred to neighbouring bricks and filtered upwards to each parent node.
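
To check my understanding, this is roughly how I picture that injection pass (just a sketch with my own made-up names and layout, not the paper's; findLeafIndex() is assumed to be an octree lookup from a world position to a leaf index):

struct LeafVoxel { float3 radiance; float3 normal; uint count; };
RWStructuredBuffer<LeafVoxel> leaves;
Texture2D<float4> rsmWorldPos;   // world-space position per light-view texel
Texture2D<float4> rsmNormal;     // surface normal per light-view texel
Texture2D<float4> rsmFlux;       // reflected color/flux per light-view texel

uint findLeafIndex(float3 worldPos); // assumed: descends the octree to the leaf containing worldPos

[numthreads(8, 8, 1)]
void InjectLight(uint3 id : SV_DispatchThreadID)
{
    float3 P    = rsmWorldPos[id.xy].xyz;
    float3 flux = rsmFlux[id.xy].rgb;

    uint leaf = findLeafIndex(P);
    // in practice this accumulation needs atomics or a scatter-then-average pass,
    // since many light-view texels can land in the same leaf
    leaves[leaf].radiance += flux;
    leaves[leaf].normal   += rsmNormal[id.xy].xyz;
    leaves[leaf].count    += 1;
}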

However, if we are using leaves to transfer the light, does that mean we need to subdivide planar surfaces like floors and walls to the lowest level as well in order for these surfaces to contribute to the bounce lighting?

#28 MrOMGWTF   Members   -  Reputation: 440


Posted 17 August 2012 - 10:25 AM

Oh god yes, I've been waiting for this paper for so long.
http://www.decom.ufop.br/sibgrapi2012/eproceedings/technical/ts5/102157_3.pdf

Ambient occlusion using cone tracing with scene voxelization.
It's only ambient occlusion but it still explains CONE TRACING! YES
http://www.youtube.com/watch?v=P3ALwKeSEYs

@edit:
This technique is more like cone-sphere tracing:

The volume of each cone is sampled by a series of spheres. The obstructed volumes of the spheres are used to estimate the fraction of rays that are blocked by the scene geometry.
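
Something like this is how I imagine it (just my own rough sketch of the idea, assuming a mipmapped occupancy texture instead of whatever structure the paper actually uses; worldToVolume() is an assumed mapping into the volume's texture space):

Texture3D<float> occupancy;       // voxelized scene, mip i stores average occupancy over 2^i voxels
SamplerState linearClamp;

float3 worldToVolume(float3 p);   // assumed: maps a world position into [0,1]^3

float coneOcclusion(float3 origin, float3 dir, float halfAngle, float maxDist, float leafSize)
{
    float visibility = 1.0;
    float t = leafSize;                        // start a little away from the surface
    while (t < maxDist && visibility > 0.05)
    {
        float radius = t * tan(halfAngle);     // sphere radius that fills the cone at distance t
        float mip = log2(max(2.0 * radius / leafSize, 1.0));
        float occ = occupancy.SampleLevel(linearClamp, worldToVolume(origin + t * dir), mip);
        visibility *= saturate(1.0 - occ);     // each sphere blocks a fraction of the rays
        t += radius;                           // advance so consecutive spheres roughly touch
    }
    return 1.0 - visibility;                   // occlusion estimate for this cone
}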


Edited by MrOMGWTF, 17 August 2012 - 10:28 AM.


#29 gboxentertainment   Members   -  Reputation: 770


Posted 18 August 2012 - 04:34 AM

I'm starting to get the feeling that voxel octree cone tracing is very similar to, if not an upgrade of, "voxel-based global illumination" by Thiedemann et al.
Thiedemann uses voxelization of RSMs with ray tracing. I think Crassin vastly improved on this by introducing an octree structure for the voxels, and was thus able to approximate the ray tracing with cone tracing by exploiting the uniform multi-level structure of the octree, which can approximate a cone-like shape using voxels that increase in size.

#30 gboxentertainment   Members   -  Reputation: 770


Posted 20 August 2012 - 07:37 AM

I did a little research on RSMs in order to understand how voxel cone tracing works.


With RSMs, if I render the world-position map from the light's point of view into a texture, I can sample that texture to locate the world-space position (corresponding to each texel of the light's position map) of each bounce - the bounce is simulated by spawning VPLs at a number of these points. The indirect illumination is calculated the same way as for a direct light by taking I = NdotL/attenuation, where L = (VPL position - triangle position), for every triangle and every VPL. I is divided by a factor to account for energy conservation and this becomes the output color of the triangles. Along the way the result is also filtered using some filtering algorithm to reduce artifacts.
Now, doing this accurately would be too expensive, because for a 1280x720 display the cost would be 1280*720 = 921,600 lights, each evaluated against the number of triangles on the screen (for a deferred approach). A tile-based approach could probably improve it, but it would be even more expensive if you wanted a second bounce.
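
As a sanity check for myself, the per-pixel VPL accumulation I described would look roughly like this (my own sketch, names made up; the VPL list is assumed to have been built by sampling the RSM):

struct VPL { float3 pos; float3 normal; float3 flux; };
StructuredBuffer<VPL> vpls;

float3 indirectFromVPLs(float3 P, float3 N, float3 albedo, uint vplCount)
{
    float3 sum = 0;
    for (uint i = 0; i < vplCount; i++)
    {
        float3 d      = vpls[i].pos - P;
        float  distSq = max(dot(d, d), 0.01);              // clamp to avoid blow-ups right next to a VPL
        float3 L      = d * rsqrt(distSq);
        float  NdotL  = saturate(dot(N, L));
        float  emit   = saturate(dot(vpls[i].normal, -L)); // a VPL only emits into its own hemisphere
        sum += vpls[i].flux * NdotL * emit / distSq;       // NdotL / attenuation, per VPL
    }
    // 1/Pi is the energy-conservation factor for a Lambertian surface;
    // the flux is assumed to already be scaled by the number of VPLs
    return albedo * sum / 3.14159265;
}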

Now, from this understanding, this is how I currently understand voxel cone tracing:

With a voxel octree structure, say for a single cone, the leaf-voxel corresponding to the world-space position from a light's point of view would be sampled by a texel from the light-position-map in the same way as RSMs.

After defining a cone angle, the largest voxel level that falls within this cone at each distance along the cone axis would receive the reflected color, multiplied by the percentage emission factor stored in the first leaf voxel and divided by the size of the voxel to conserve energy, with attenuation also taken into account. Then I assume these values are inherited by their children down to the leaf nodes (then the NdotL BRDF is solved to get the correct surface distribution, where N is stored in each receiving leaf voxel and L is the vector from each leaf to the leaf at the surface of the first bounce). Emission/absorption coefficients stored in this leaf also affect the resulting color.

For the voxel intersection test I'm assuming that you don't need to actually perform an intersection between a cone and the voxels at every level. Because the direction of the cone is predefined (I'm setting it to the normal direction of the bounce surface), you can just take the rate of increase of the voxel size with the distance the cone has travelled (so for a 90-degree cone, a maximum voxel size of 4 should fit inside the cone at distance 6) - I'm sure with some trigonometry you can work out the relationship between voxel size and cone distance, so I won't go into detail here. You check all voxel sizes along the cone axis to find the position at which each size corresponds to the distance.
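
For reference, I think the trigonometry works out to something like this (my own derivation, assuming leafSize is the edge length of a level-0 leaf voxel): a cone with half-angle a has diameter 2*t*tan(a) at distance t along its axis, and the voxel level to use is the one whose voxel size matches that diameter.

// sketch: which mip/octree level a cone should sample at distance t from its apex
float coneMipLevel(float t, float halfAngle, float leafSize)
{
    float diameter = 2.0 * t * tan(halfAngle);
    return log2(max(diameter / leafSize, 1.0));   // 0 = leaf level, larger = coarser voxels
}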


I'm going to give some pseudo-HLSL code a try (assuming the octree structure is built and the leaf voxels are filled with pre-integrated material values):
[When I use "for" I mean allocate a thread for it, in either the pixel shader or a compute shader; and I use 'direct' to refer to the surface receiving direct lighting for the first bounce]


float3 coneOrigin = directPositionMap.Sample(TextureSampler, input.uv).xyz;
float3 coneDir = directNormalMap.Sample(TextureSampler, input.uv).xyz;
float3 coneColor = directColorMap.Sample(TextureSampler, input.uv).rgb;
float emissiveFactor = directMaterialMap.Sample(TextureSampler, input.uv).x;
float3 voxelPosInCone[numLevels];
float voxelSize;

// renamed from "distance" so it doesn't clash with the HLSL intrinsic of the same name
float coneDistance(int voxelLevel)
{
    voxelSize = ..some function of voxelLevel.. ; // should be easily worked out
    float dist = ..some function of voxelSize.. ; // based on cone angle
    return dist;
}

// run through every voxel level to find the corresponding positions
for (int i = 0; i < numLevels; i++)
{
    voxelPosInCone[i] = coneOrigin + coneDistance(i) * coneDir;
    uint voxelIdx = ..derived from voxelPosInCone[i].. ; // still need to find out the most efficient way of doing this
    voxel[voxelIdx].color += coneColor * emissiveFactor / (coneDistance(i) * coneDistance(i)) / voxelVolume;

    // for all levels of children beneath voxel[voxelIdx] (also need to find out the most efficient way of doing this)
    {
        childVoxel.color = voxel[voxelIdx].color;
        float3 N = normalize(leafVoxel.normal);
        float3 L = normalize(coneOrigin - leafVoxel.pos);
        leafVoxel.color = dot(N, L) * leafVoxel.absorptionFactor;
    }
}

The latter part still needs a bit more thought - I think maybe the color values from the leaf could be transferred to the triangles that fall inside them and then the NdotL BRDF is evaluated. Also, I need to find out the quickest way of getting voxel index from position (currently, I just traverse the octree structure from top-to-bottom for each voxel position until I get to the required level).
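
For completeness, the top-to-bottom traversal I currently do for the position-to-index lookup is roughly the following (sketch only, with a made-up node layout where each node stores the index of its first child and the 8 children of a node are stored contiguously):

struct OctreeNode { uint firstChild; uint dataIndex; };   // firstChild == 0 means no children
StructuredBuffer<OctreeNode> nodes;

// returns the index of the node containing worldPos at the requested level (0 = root)
uint lookupNode(float3 worldPos, float3 volumeMin, float volumeSize, uint targetLevel)
{
    uint nodeIdx = 0;                                     // start at the root
    float halfSize = 0.5 * volumeSize;
    float3 center = volumeMin + halfSize;
    for (uint level = 0; level < targetLevel; level++)
    {
        uint firstChild = nodes[nodeIdx].firstChild;
        if (firstChild == 0)
            break;                                        // empty space: stop at this node
        // pick the child octant that worldPos falls into
        uint octant = (worldPos.x > center.x ? 1u : 0u)
                    | (worldPos.y > center.y ? 2u : 0u)
                    | (worldPos.z > center.z ? 4u : 0u);
        float3 s = float3(worldPos.x > center.x ? 1.0 : -1.0,
                          worldPos.y > center.y ? 1.0 : -1.0,
                          worldPos.z > center.z ? 1.0 : -1.0);
        nodeIdx   = firstChild + octant;
        halfSize *= 0.5;
        center   += s * halfSize;                         // move to the chosen child's center
    }
    return nodeIdx;
}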

Maybe I'll draw some diagrams as well to help explain the cone traversal.

Now all of this has just been a rough, educated guess so please let me know if parts of it are correct/incorrect or if I am completely off the mark.

Edited by gboxentertainment, 20 August 2012 - 07:54 AM.


#31 Radikalizm   Crossbones+   -  Reputation: 2985


Posted 20 August 2012 - 08:44 AM

I'm not sure whether I can follow your train of thought exactly here, but I would like to point a couple of things out:

First of all, be wary of making a direct comparison between reflective shadow maps and voxels. While an RSM can be used to generate a limited set of voxels, it does not contain the same data as would be expected from a set of voxels. When working with voxels, the world-space position of a sampled voxel, for example, is inferred from its neighbours and the size of your voxel volume (note: formally, voxels themselves do not have a size, just like pixels don't have a size), whereas the world position in an RSM is determined by reconstructing a light-space position from the stored depth, which you then transform into a world-space position by applying the inverse light transformation.

The RSM method reminds me more of the global illumination technique described by Kaplanyan and Dachsbacher, but they create a limited low-frequency voxel representation of the scene (non-mipmapped!) which they use as a starting point for building a light volume with a propagation algorithm.
The method of spawning VPLs using an RSM also sounds more like a technique called instant radiosity, which as far as I know has very little in common with the voxel cone tracing paper.


Second, the factor used for energy conservation in Lambertian lighting (N.L lighting) is a fixed constant of 1/Pi. Since a cone trace in the presented paper is actually just a path trace of a correctly filtered mip-map of your high-frequency voxel data (if I understand correctly, I haven't studied it in depth yet), there's no need to include any other factor in your BRDF to maintain energy conservation while doing cone tracing.
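
For anyone wondering where the 1/Pi comes from: integrating the cosine term over the hemisphere gives Pi, so with f_Lambert = albedo / Pi the total reflected fraction works out to the albedo itself, which is at most 1 - that's exactly what keeps the reflected energy from exceeding the incoming energy.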

Your assumption about determining the correct mip level based on the cone angle and the distance to the sampled surface sounds correct the way I understand it.



#32 MrOMGWTF   Members   -  Reputation: 440


Posted 24 August 2012 - 09:31 AM

Since a cone trace in the presented paper is actually just a path trace of a correctly filtered mip-map of your high-frequency voxel data (if I understand correctly, I haven't studied it in depth yet), there's no need to include any other factor in your BRDF to maintain energy conservation while doing cone tracing.


So basically there are no actual cones in this technique? Just ray tracing of filtered geometry?

#33 Radikalizm   Crossbones+   -  Reputation: 2985


Posted 24 August 2012 - 11:33 AM

Since a cone trace in the presented paper is actually just a path trace of a correctly filtered mip-map of your high-frequency voxel data (if I understand correctly, I haven't studied it in depth yet), there's no need to include any other factor in your BRDF to maintain energy conservation while doing cone tracing.

So basically there are no actual cones in this technique? Just ray tracing of filtered geometry?


If I understand correctly it is just a path trace of your pre-filtered voxel data, but doing such a path trace is still a cone trace, so technically there are cones involved ;)

You can look at a cone trace as tracing a bundle of paths and weighting the results of each path, which is basically an integration over a disk-shaped surface. In this technique your voxel data is actually pre-integrated (=downsampled) for each step along your cone axis which means you only have to do a path trace on the pre-integrated data.
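
Per cone, that boils down to something like the following (just a sketch under the assumption that the pre-integrated data lives in a mipmapped 3D texture storing radiance and opacity; the actual paper keeps it in octree bricks, and worldToVolume() is an assumed mapping into texture space):

Texture3D<float4> voxelRadiance;   // rgb = pre-filtered radiance, a = opacity, mipmapped
SamplerState linearClamp;

float3 worldToVolume(float3 p);    // assumed: maps a world position into [0,1]^3

float4 traceCone(float3 origin, float3 dir, float halfAngle, float maxDist, float leafSize)
{
    float3 color = 0;
    float  alpha = 0;
    float  t = leafSize;                              // small offset to avoid self-sampling
    while (t < maxDist && alpha < 0.99)
    {
        float diameter = max(2.0 * t * tan(halfAngle), leafSize);
        float mip = log2(diameter / leafSize);        // the cone footprint picks the mip level
        float4 s = voxelRadiance.SampleLevel(linearClamp, worldToVolume(origin + t * dir), mip);
        // front-to-back compositing: nearer pre-integrated samples occlude farther ones
        color += (1.0 - alpha) * s.a * s.rgb;
        alpha += (1.0 - alpha) * s.a;
        t += 0.5 * diameter;                          // step size proportional to the cone footprint
    }
    return float4(color, alpha);
}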



#34 gboxentertainment   Members   -  Reputation: 770


Posted 24 August 2012 - 09:04 PM

Has anyone tried to implement the soft-shadow cone tracing explained in Crassin's thesis (p.162)? I think I might give this one a go first because it seems a lot simpler to understand and possibly much simpler to implement, so it would be a good starting point.

It is just a "cone" with its apex at the light's position, traced in the light's direction. Opacity values are accumulated, which I believe can be based on the percentage of the shadow caster lying within the cone at each mip-map level corresponding to the cone's radius.
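
My rough idea of how that accumulation could look (sketch only, assuming the same kind of mipmapped opacity volume as for the GI cones; I've written it with the apex at the shaded point and the cone pointing at the light, but the accumulation is the same from either end):

Texture3D<float4> voxelOpacity;   // a = pre-filtered opacity, mipmapped
SamplerState linearClamp;

float3 worldToVolume(float3 p);   // assumed: maps a world position into [0,1]^3

float shadowCone(float3 P, float3 toLight, float lightDist, float halfAngle, float leafSize)
{
    float alpha = 0;
    float t = leafSize;
    while (t < lightDist && alpha < 0.99)             // stop early once fully occluded
    {
        float diameter = max(2.0 * t * tan(halfAngle), leafSize);
        float mip = log2(diameter / leafSize);
        float occ = voxelOpacity.SampleLevel(linearClamp, worldToVolume(P + t * toLight), mip).a;
        alpha += (1.0 - alpha) * occ;                 // accumulate opacity along the cone
        t += 0.5 * diameter;
    }
    return 1.0 - alpha;                               // 1 = fully lit, 0 = fully shadowed; a wider cone gives a softer penumbra
}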

#35 gboxentertainment   Members   -  Reputation: 770


Posted 25 August 2012 - 09:17 AM

With the voxelization of the scene, would you voxelize planar surfaces to the most detailed level? Crassin's voxel cone tracing video shows that the floor of the Sponza scene was fully voxelized, but it seems like that would waste memory, considering that planar objects are most efficient in traditional rasterization due to having fewer triangles. But I guess for cone-traced GI you would need the more detailed voxels to correctly transfer specular reflections to planar surfaces.

#36 MrOMGWTF   Members   -  Reputation: 440


Posted 25 August 2012 - 10:03 AM

I don't understand one thing about this voxel cone tracing. When should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between a ray and some geometry. What about voxel cone tracing?

#37 Radikalizm   Crossbones+   -  Reputation: 2985


Posted 25 August 2012 - 05:18 PM

With the voxelization of the scene, would you voxelize planar surfaces to the most detailed level? Crassin's voxel cone tracing video shows that the floor of the Sponza scene was fully voxelized, but it seems like that would waste memory, considering that planar objects are most efficient in traditional rasterization due to having fewer triangles. But I guess for cone-traced GI you would need the more detailed voxels to correctly transfer specular reflections to planar surfaces.


That would be up to your required level of detail and your rendering budget, I suppose.

I don't understand one thing about this voxel cone tracing. When should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between a ray and some geometry. What about voxel cone tracing?


As I said in my previous post, you should look at cone tracing as tracing a dense bundle of paths, so the same rules apply for cone tracing as they do for path tracing



#38 MrOMGWTF   Members   -  Reputation: 440


Posted 26 August 2012 - 12:28 AM

As I said in my previous post, you should look at cone tracing as tracing a dense bundle of paths, so the same rules apply for cone tracing as they do for path tracing


So, as the path gets longer, we lower the mipmap level?

#39 gboxentertainment   Members   -  Reputation: 770


Posted 26 August 2012 - 12:56 AM

I don't understand one thing about this voxel cone tracing. When should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between a ray and some geometry. What about voxel cone tracing?


My thoughts would be that you would just define a maximum range for your cones, and you would probably determine this by approximating the distance at which the illumination contribution becomes unnoticeable - because the further away an object is, the less it contributes (attenuation in the BRDF).

I think I was quite a bit wrong with my previous understanding.

I've thought about it a bit more and here's my new understanding:

Let's assume that:
- the octree structure is already created
- the normals are stored in the leaves and averaged/filtered to the parent nodes.
- colors are stored in the leaves (but not averaged/filtered).

Referring to the attached drawing below:
For simplicity's sake, if we just take a single light ray/photon that hits a point on the surface of a blue wall:

[attached diagram: a light ray hitting a point on the blue wall, with a cone traced from a point on the floor toward it]

- Sample the color and world position of this pixel from the light's point of view and do a lookup in the octree to find the index of this leaf voxel.
- The color values are filtered to higher levels by dividing by two (I'm not sure how correct this is).

Now for simplicity's sake, let's take a single pixel from our camera's point of view that we want to illuminate - let's assume this pixel is a point on the floor surface.

- If we trace just one "theoretical" cone in the direction of the wall (in reality you would trace several cones in several directions), the largest voxels that fall within the radius of the cone at each distance along the cone's range would be taken into consideration, as highlighted by the black squares. You wouldn't actually intersect a cone volume with voxel volumes, because that would be inefficient; instead, you would just have a function that says that at a certain distance, this is the voxel level to consider.
- For each voxel captured, you would calculate NdotL/(distance squared), where N is a normal value stored in that voxel prior to rendering (a filtered value at higher-level voxels) and L is the direction from the position on the wall to the point on the floor surface. The value calculated for each captured voxel would be added to the color of the pixel corresponding to that point on the floor.

For speculars:

[attached diagram: a narrower cone traced from the floor for the specular case, capturing a single leaf voxel on the wall]

- You would make the "cone" radius smaller, so at further distances from the point on the floor, lower-level (more detailed) voxels would be captured. In this case, one leaf voxel is captured and its contribution is added. I think for speculars you would use a different formula to take the camera direction into account.
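
So for the specular case I imagine something like this (sketch only; traceCone() stands for whatever the general cone march ends up being, same idea as the diffuse cones):

// assumed general cone march: returns accumulated radiance (rgb) and opacity (a)
float4 traceCone(float3 origin, float3 dir, float halfAngle, float maxDist, float leafSize);

float3 specularCone(float3 P, float3 N, float3 V, float roughness, float maxDist, float leafSize)
{
    float3 R = reflect(-V, N);                        // mirror direction of the view vector
    float halfAngle = lerp(0.02, 0.5, roughness);     // narrow cone = sharp reflection, wide = glossy
    return traceCone(P, R, halfAngle, maxDist, leafSize).rgb;
}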

Edited by gboxentertainment, 26 August 2012 - 12:56 AM.


#40 MrOMGWTF   Members   -  Reputation: 440


Posted 26 August 2012 - 01:39 AM

@gboxentertainment:
What if we have situation like this:
[attached diagram: the green wall behind the blue wall, both facing the floor]
The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.



