MrOMGWTF

Cone Tracing and Path Tracing - differences.


Yes, you're mostly correct, though the information is not simply averaged together. The incoming radiance of the cones is evaluated against the BRDF of the surface to solve the rendering equation, rather than just being averaged. Edited by CryZe

Okay, now I finally understand that the incoming radiance is "splatted" into the leaves of the structure, which correspond to the surfaces of the scene. This is done using the same concepts as Reflective Shadow Maps, but instead of generating Virtual Point Lights, color values are directly added to the leaves, then the values are transferred to neighbouring bricks and filtered upwards to each parent node.

However, if we are using leaves to transfer the light, does that mean we need to subdivide planar surfaces like floors and walls to the lowest level as well in order for these surfaces to contribute to the bounce lighting?

Oh god yes, I've been waiting for this paper for so long.
http://www.decom.ufop.br/sibgrapi2012/eproceedings/technical/ts5/102157_3.pdf

Ambient occlusion using cone tracing with scene voxelization.
It's only ambient occlusion but it still explains CONE TRACING! YES
[media]http://www.youtube.com/watch?v=P3ALwKeSEYs[/media]

@edit:
This technique is more like cone-sphere tracing:
[quote]The volume of each cone is sampled by a series of spheres. The obstructed volumes of the spheres are used to estimate the amount of rays that are blocked by the scene geometry.[/quote] Edited by MrOMGWTF

I'm starting to get the feeling that voxel octree cone tracing is very similar to, if not an upgrade of, "voxel-based global illumination" by Thiedemann et al.
Thiedemann uses voxelization of RSMs with ray tracing. I think Crassin vastly improved on this by introducing an octree structure for the voxels, which let him replace the ray tracing with cone tracing: the uniform multi-level structure of the octree can approximate a cone-like shape using voxels that increase in size.

I did a little research on RSMs in order to understand how voxel cone tracing works.


With RSMs, if I render the world-position map from the light's point of view into a texture, I can sample that texture to locate the world position (corresponding to each texel of the light's position map) of each bounce - this is simulated by spawning VPLs at a number of these points. The indirect illumination is then calculated the same way as for a direct light, by taking I = NdotL/attenuation, where L = (VPL position - position of triangle), for every triangle and every VPL. I is divided by a factor to account for energy conservation, and this becomes the output color of the triangles. The result is also filtered with some filtering algorithm to reduce artifacts.
Doing this accurately would be too expensive: for a 1280x720 display the cost would be 1280*720 = 921,600 lights, each evaluated against the geometry on screen (in a deferred approach). A tile-based approach could probably improve it, but it'd be even more expensive if you wanted a second bounce.
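
Just to make that gather step concrete, here's roughly what I mean in HLSL (purely illustrative - the buffer names and the 1/numVPLs normalization are my own assumptions, not anything from the RSM paper):

[CODE]
// Hypothetical VPL gather: every shaded point sums the contribution of every
// VPL spawned from the RSM. Buffer names and normalization are assumptions.
StructuredBuffer<float3> vplPosition;  // world-space positions sampled from the RSM
StructuredBuffer<float3> vplFlux;      // color/flux of each VPL
uint numVPLs;

float3 GatherVPLs(float3 P, float3 N, float3 albedo)
{
    float3 indirect = 0;
    for (uint i = 0; i < numVPLs; ++i)
    {
        float3 toVPL = vplPosition[i] - P;
        float  dist2 = dot(toVPL, toVPL);
        float3 L = toVPL * rsqrt(dist2);
        // NdotL at the receiver, divided by squared distance for attenuation
        indirect += vplFlux[i] * saturate(dot(N, L)) / max(dist2, 1e-4);
    }
    // divide by the VPL count so adding more VPLs doesn't add more energy
    return albedo * indirect / numVPLs;
}
[/CODE]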

Now, from this understanding, this is how I currently understand voxel cone tracing:

With a voxel octree structure, say for a single cone, a texel of the light's position map is used to look up the leaf voxel at that world-space position - the same idea as with RSMs.

After defining a cone angle, the largest voxel level that falls within this cone at each distance along the cone axis would receive the reflected color, multiplied by the percentage emission factor stored in the first leaf voxel and divided by the size of the voxel to conserve energy, with attenuation also taken into account. I then assume these values are inherited by their children down to the leaf nodes, where the NdotL BRDF is solved to get the correct surface distribution (N is stored in each receiving leaf voxel and L is the vector between each leaf and the leaf at the surface of the first bounce). Emission/absorption coefficients stored in the receiving leaf also affect the resulting color.

For the voxel intersection test, I'm assuming you don't need to actually perform an intersection between a cone and voxels at every level. Because the direction of the cone is predefined (I'm setting it to the normal direction of the bounce surface), you can just use the rate at which the voxel size increases with the distance the cone has travelled (so for a 90-degree cone, a maximum voxel size of 4 should fit inside the cone at distance 6). With some trigonometry you can work out the exact relationship between voxel size and cone distance, so I won't go into detail here. You check all voxel sizes along the cone axis to find the position corresponding to each size.
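
For what it's worth, the trigonometry is just this (my own sketch, not from the paper - it picks the voxel level whose size matches the cone's width at a given distance):

[CODE]
// Width of the cone's cross-section at distance d from the apex,
// for a cone with the given half-angle (in radians).
float ConeDiameter(float d, float halfAngle)
{
    return 2.0 * d * tan(halfAngle);
}

// Octree/mip level whose voxel size best matches that width.
// leafVoxelSize is the world-space size of a leaf (level-0) voxel.
float LevelFromDistance(float d, float halfAngle, float leafVoxelSize)
{
    float diameter = max(ConeDiameter(d, halfAngle), leafVoxelSize);
    return log2(diameter / leafVoxelSize);
}
[/CODE]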


I'm going to give some pseudo-HLSL-code a try (assuming octree structure is built and leaf voxels filled with pre-integrated material values):
[When I use "for" I mean allocate a thread for, with either the pixel shader or compute shader; and I also use 'direct' to represent the surface receiving direct lighting for the first bounce]


[CODE]float3 coneOrigin     = directPositionMap.Sample(TextureSampler, input.uv).xyz;
float3 coneDir        = directNormalMap.Sample(TextureSampler, input.uv).xyz;
float3 coneColor      = directColorMap.Sample(TextureSampler, input.uv).xyz;
float  emissiveFactor = directMaterialMap.Sample(TextureSampler, input.uv).x;

float3 voxelPosInCone[numLevels];
float  voxelSize;

// Distance along the cone axis at which a voxel of the given level fits the cone.
float distanceForLevel(int voxelLevel)
{
    voxelSize = ...;   // some function of voxelLevel - should be easy to work out
    float dist = ...;  // some function of voxelSize, based on the cone angle
    return dist;
}

// Run through every voxel level to find the corresponding position along the cone.
for (int i = 0; i < numLevels; i++)
{
    voxelPosInCone[i] = coneOrigin + distanceForLevel(i) * coneDir;
    uint voxelIdx = ...;   // derived from voxelPosInCone[i] - still need to find the most efficient way of doing this

    // Inject the bounced energy, attenuated by squared distance and normalized by voxel volume.
    voxel[voxelIdx].color += coneColor * emissiveFactor
                             / (distanceForLevel(i) * distanceForLevel(i)) / voxelVolume;

    // For all levels of children beneath voxel[voxelIdx]:
    {
        childVoxel.color = voxel[voxelIdx].color;
        float3 N = normalize(leafVoxel.normal);
        float3 L = normalize(coneOrigin - leafVoxel.pos);
        leafVoxel.color = childVoxel.color * dot(N, L) * leafVoxel.absorptionFactor;
    }   // Also need to find out the most efficient way of doing this
}[/CODE]

The latter part still needs a bit more thought - I think maybe the color values from the leaves could be transferred to the triangles that fall inside them and the NdotL BRDF evaluated there. I also need to find the quickest way of getting a voxel index from a position (currently I just traverse the octree from top to bottom for each voxel position until I reach the required level).
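
For reference, the naive top-down lookup I describe would look something like this (a sketch with a made-up node layout, not Crassin's actual structure):

[CODE]
struct OctreeNode
{
    uint childPtr;  // index of the first of 8 consecutive children, 0 if none
    uint payload;   // color/brick reference - unused in this lookup
};

StructuredBuffer<OctreeNode> nodes;  // node pool, root at index 0
float3 sceneMin;                     // world-space min corner of the voxelized region
float  sceneSize;                    // size of the (cubic) voxelized region

// Descend from the root until the requested level is reached (0 = root).
uint FindNode(float3 worldPos, uint targetLevel)
{
    float3 p = saturate((worldPos - sceneMin) / sceneSize);  // normalize to [0,1]^3
    p = min(p, 0.9999);                                      // avoid landing exactly on 1.0
    uint nodeIdx = 0;
    for (uint level = 0; level < targetLevel; ++level)
    {
        uint childPtr = nodes[nodeIdx].childPtr;
        if (childPtr == 0)
            break;                               // sparse: no children here, stop early
        uint3 child = (uint3)floor(p * 2.0);     // which octant does p fall into?
        nodeIdx = childPtr + child.x + child.y * 2 + child.z * 4;
        p = frac(p * 2.0);                       // re-normalize into the child's space
    }
    return nodeIdx;
}
[/CODE]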

Maybe I'll draw some diagrams as well to help explain the cone traversal.

Now all of this has just been a rough, educated guess so please let me know if parts of it are correct/incorrect or if I am completely off the mark. Edited by gboxentertainment

I'm not sure whether I can follow your train of thought exactly here, but I would like to point a couple of things out:

First of all, be wary of comparing reflective shadow maps and voxels. While an RSM can be used to generate a limited set of voxels, it does not contain the same data you would expect from a set of voxels. With voxels, the world-space position of a sampled voxel is inferred from its neighbours and the size of your voxel volume (note: formally, voxels themselves do not have a size, just like pixels don't), whereas the world position in an RSM is determined by reconstructing a light-space position from the stored depth and transforming it to world space with the inverse light transformation.

The RSM method reminds me more of the global illumination technique described by Kaplanyan and Dachsbacher, but they create a limited low-frequency voxel representation of the scene (non-mipmapped!) which they use as a starting point for building a light volume with a propagation algorithm.
The method of spawning VPLs using an RSM also sounds more like a technique called instant radiosity, which as far as I know has very little in common with the voxel cone tracing paper.


Second, the factor used for energy conservation in Lambertian lighting (N.L lighting) is a fixed constant of 1/Pi. Since a cone trace in the presented paper is really just a path trace through a correctly filtered mip-map of your high-frequency voxel data (if I understand correctly; I haven't studied it in depth yet), there's no need to include any other factor in your BRDF to maintain energy conservation while cone tracing.
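
In code, that normalization is just the familiar (trivial sketch, nothing paper-specific):

[CODE]
static const float PI = 3.14159265;

// Lambertian (diffuse) BRDF with its 1/PI normalization - this is what keeps a
// surface from reflecting more energy than it receives.
float3 LambertBRDF(float3 albedo)
{
    return albedo / PI;
}

// Reflected radiance for one incoming direction: Lo = BRDF * Li * cos(theta)
float3 ShadeLambert(float3 albedo, float3 N, float3 L, float3 Li)
{
    return LambertBRDF(albedo) * Li * saturate(dot(N, L));
}
[/CODE]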

Your assumption about determining the correct mip level based on the cone angle and the distance to the sampled surface sounds correct the way I understand it.

[quote name='Radikalizm' timestamp='1345473867' post='4971480']Since a cone trace in the presented paper is really just a path trace through a correctly filtered mip-map of your high-frequency voxel data (if I understand correctly; I haven't studied it in depth yet), there's no need to include any other factor in your BRDF to maintain energy conservation while cone tracing.[/quote]

So basically there are no actual cones in this technique? Just ray tracing of filtered geometry?

[quote name='MrOMGWTF' timestamp='1345822309' post='4973012']
[quote name='Radikalizm' timestamp='1345473867' post='4971480']Since a cone trace in the presented paper is really just a path trace through a correctly filtered mip-map of your high-frequency voxel data (if I understand correctly; I haven't studied it in depth yet), there's no need to include any other factor in your BRDF to maintain energy conservation while cone tracing.[/quote]

So basically there are no actual cones in this technique? Just ray tracing of filtered geometry?
[/quote]

If I understand correctly it is just a path trace of your pre-filtered voxel data, but doing such a path trace is still a cone trace, so technically there are cones involved ;)

You can look at a cone trace as tracing a bundle of paths and weighting the results of each path, which is basically an integration over a disk-shaped surface. In this technique your voxel data is pre-integrated (i.e. downsampled) for each step along your cone axis, which means you only have to do a single path trace over the pre-integrated data.
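
To sketch what that single pre-integrated "path" looks like in practice (assuming the filtered voxel data lives in a mip-mapped 3D texture; the step size, names and the WorldToVoxelUVW helper are my own assumptions, not code from the paper):

[CODE]
Texture3D<float4> voxelTexture;  // rgb = pre-filtered radiance, a = opacity
SamplerState voxelSampler;       // trilinear, so mip levels interpolate as well

// March one cone: a single ray whose sample footprint (mip level) grows with distance.
float4 TraceCone(float3 origin, float3 dir, float halfAngle,
                 float leafVoxelSize, float maxDistance)
{
    float3 radiance  = 0;
    float  occlusion = 0;
    float  dist = leafVoxelSize;  // start a little in front of the surface

    while (dist < maxDistance && occlusion < 1.0)
    {
        float diameter = max(2.0 * dist * tan(halfAngle), leafVoxelSize);
        float mip = log2(diameter / leafVoxelSize);

        float3 uvw = WorldToVoxelUVW(origin + dist * dir);  // hypothetical helper
        float4 s = voxelTexture.SampleLevel(voxelSampler, uvw, mip);

        // Front-to-back compositing: whatever is still visible is weighted by (1 - occlusion).
        radiance  += (1.0 - occlusion) * s.a * s.rgb;
        occlusion += (1.0 - occlusion) * s.a;

        dist += diameter * 0.5;  // step roughly half a footprint at a time
    }
    return float4(radiance, occlusion);
}
[/CODE]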

Has anyone tried to implement the soft-shadow cone tracing explained in Crassin's thesis (p.162)? I think I might give this one a go first because it seems a lot simpler to understand, and possibly much simpler to implement, so it would be a good starting point.

It is just a "cone" with its apex from the light's position traced in its direction. Opacity values are accumulated, which I believe can be based on the percentage of the shadow-caster lying within the cone at each mip-map level corresponding to the cone's radius.

With the voxelization of the scene, would you voxelize planar surfaces to the most detailed level? Crassin's voxel cone tracing video shows that the floor of Sponza was fully voxelized, but that seems like a waste of memory considering that planar objects are the cheapest case in traditional rasterization because they need so few triangles. I guess for cone-traced GI you would still need the more detailed voxels to correctly transfer specular reflections to planar surfaces.

I don't understand one thing about voxel cone tracing. When should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between a ray and some geometry. What about voxel cone tracing?

[quote name='gboxentertainment' timestamp='1345907842' post='4973256']
With the voxelization of the scene, would you voxelize planar surfaces to the most detailed level? Crassin's voxel cone tracing video shows that the floor of Sponza was fully voxelized, but that seems like a waste of memory considering that planar objects are the cheapest case in traditional rasterization because they need so few triangles. I guess for cone-traced GI you would still need the more detailed voxels to correctly transfer specular reflections to planar surfaces.
[/quote]

That would be up to your required level of detail and your rendering budget, I suppose.

[quote name='MrOMGWTF' timestamp='1345910630' post='4973264']
I don't understand one thing about voxel cone tracing. When should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between a ray and some geometry. What about voxel cone tracing?
[/quote]

As I said in my previous post, you should look at cone tracing as tracing a dense bundle of paths, so the same rules apply to cone tracing as they do to path tracing.

[quote name='Radikalizm' timestamp='1345936681' post='4973360']As I said in my previous post, you should look at cone tracing as tracing a dense bundle of paths, so the same rules apply to cone tracing as they do to path tracing.[/quote]

So, as the path gets longer, we sample from a coarser (more filtered) mip level?

[quote name='MrOMGWTF' timestamp='1345910630' post='4973264']
I don't understand one thing about voxel cone tracing. When should I stop tracing a cone? In normal path tracing, I have to find the closest intersection between a ray and some geometry. What about voxel cone tracing?
[/quote]

My thought would be that you just define a maximum range for your cones, and you'd probably determine this by approximating the distance at which the illumination contribution becomes unnoticeable - the further away an object is, the less it contributes, due to distance attenuation.

I think I was quite a bit wrong with my previous understanding.

I've thought about it a bit more and here's my new understanding:

Let's assume that:
- the octree structure is already created
- the normals are stored in the leaves and averaged/filtered to the parent nodes.
- colors are stored in the leaves (but not averaged/filtered).

Referring to the attached drawing below:
For simplicity's sake, if we just take a single light ray/photon that hits a point on the surface of a blue wall:

[img]http://i.imgur.com/wv1w4.jpg[/img]

- Sample the color and world position of this pixel from the light's point of view and do a lookup in the octree to find the index of this leaf voxel.
- The color values are filtered up to higher levels by dividing by two (I'm not sure how correct this is).

Now for simplicity's sake, let's take a single pixel from our camera's point of view that we want to illuminate - let's assume this pixel is a point on the floor surface.

- If we trace just one "theoretical" cone in the direction of the wall (in reality you would trace several cones in several directions), the largest voxels that fall within the radius of the cone at each distance along the cone's range are taken into consideration - as highlighted by the black squares. You wouldn't actually intersect a cone volume with voxel volumes, because that would be inefficient; instead you would specify a function that maps distance along the cone to the voxel level that should be considered.
- For each voxel captured, you would calculate NdotL/(distance squared), where N is a normal value stored in that voxel prior to rendering (a filtered value at higher-level voxels) and L is the direction from the position on the wall to the point on the floor surface. The values calculated for each captured voxel would then be added to the color of the pixel corresponding to that point on the floor (sketched below).
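
To be explicit about that gather, one cone would look something like this (my own sketch - SampleVoxel and DistanceForLevel are made-up helpers, not anything from the paper):

[CODE]
struct VoxelSample
{
    float3 color;   // filtered color stored at this level
    float3 normal;  // filtered normal stored at this level
};

int numLevels;  // assumed to come from a constant buffer

// One "theoretical" cone from the shaded point on the floor: at each level,
// sample the largest voxel that fits the cone at that distance and add its
// contribution weighted by NdotL at the emitter and squared-distance falloff.
float3 GatherDiffuseCone(float3 floorPos, float3 coneDir)
{
    float3 result = 0;
    for (int level = 0; level < numLevels; ++level)
    {
        float  d = DistanceForLevel(level);             // hypothetical: where a voxel of this level fits the cone
        float3 samplePos = floorPos + d * coneDir;
        VoxelSample v = SampleVoxel(samplePos, level);  // hypothetical octree lookup

        float3 L = normalize(floorPos - samplePos);     // from the sampled voxel towards the floor point
        result += v.color * saturate(dot(v.normal, L)) / max(d * d, 1e-4);
    }
    return result;
}
[/CODE]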

For speculars:

[img]http://i.imgur.com/MBVfG.jpg[/img]

- You would make the "cone" radius smaller, thus at further distances from the point on the floor, lower level (more detailed) voxels would be captured. In this case, one leaf voxel is captured and the contribution is added. I think for speculars you would use a different formula for specular lighting to take into account the camera direction. Edited by gboxentertainment

@gboxentertainment:
What if we have a situation like this:
[img]http://i.imgur.com/IKR09.png[/img]
The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.

[quote name='MrOMGWTF' timestamp='1345966775' post='4973422']
The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.
[/quote]

That's something that's crossed my mind.
In Crassin's thesis, he explains cone-traced soft shadows:
He says that you accumulate opacity values as well, and once the value "saturates", you stop the trace.
I guess "saturates" means opacity = 1.

[quote name='gboxentertainment' timestamp='1345973198' post='4973432']
[quote name='MrOMGWTF' timestamp='1345966775' post='4973422']
The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.
[/quote]

That's something that's crossed my mind.
In Crassin's thesis, he explains cone-traced soft shadows:
He says that you accumulate opacity values as well, and once the value "saturates", you stop the trace.
I guess "saturates" means opacity = 1.
[/quote]

Most surfaces will have an opacity of 1, so you basically stop at the first intersection you find?

The guys from Unreal Engine have just released the slides from the talk they did at SIGGRAPH: http://www.unrealengine.com/resources/
About a third of the slides are about how they do cone tracing. They say they are using the method from the paper with a few optimizations.

Thanks! I was beginning to think they had promised to speak at SIGGRAPH but then didn't and covered it up, because I couldn't find any news or videos on it.

I still don't quite understand how the concept of "bricks" fits into the whole data structure.
Instead of having just nodes to represent voxels and storing all voxel and indexing data in the nodes, it seems that they separate the two, so that nodes only contain the pointers/indexes to the bricks, which hold all the actual data.
In the papers, they state that there are bricks at every level, so at each level, the nodes point to bricks. If so, what's the purpose of having a "constant color" stored in each node, if filtered color data is already stored in the bricks at that level?

Also, how does grouping bricks as 8x8x8 voxels work? So say that for a complete 9-level octree structure you would have 512x512x512 voxels at the lowest level - this means that you would have 64x64x64 bricks. Then at the 3rd level, where you have 8x8x8 voxels, this would be a single 1x1x1 brick. Does this mean that bricks only work up to the 3rd level?

Are the voxels in each brick grouped according to their positions in world space? If so then for a sparse structure you would have bricks that have empty voxels in them?

Which paper did you read? I remember that in the original GigaVoxels paper they used larger bricks, which they ray traced through using multiple texture lookups. But that was for volume data that is not always sparse. In that paper they only used a constant color at a node if the whole brick had a constant color, so they could skip the ray tracing step.

In the global illumination paper they don't seem to store a constant color at the nodes; they only use the bricks.
This is also because bricks are always 2x2x2 and only require one lookup (well, actually the bricks are 3x3x3, because you need an extra border for the interpolation to work).

Yes, it seems that you will have bricks that partially lie in empty space. I assume they set the alpha for those bricks to zero, so the interpolated lookup gives smooth transitions and thus less aliasing.
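
If it helps, locating a brick from the index stored in a node would be something simple like this (my own sketch; the names and packing order are assumptions):

[CODE]
static const uint BRICK_SIZE = 3;  // 2x2x2 payload voxels + 1 voxel border for interpolation

Texture3D<float4> brickPool;       // all bricks packed tightly into one 3D texture
uint3 brickPoolDimInBricks;        // how many bricks fit along each axis of the pool

// Convert the linear brick index stored in a node into the texel-space origin
// of its 3x3x3 block inside the brick pool.
uint3 BrickOriginFromIndex(uint brickIndex)
{
    uint3 b;
    b.x =  brickIndex % brickPoolDimInBricks.x;
    b.y = (brickIndex / brickPoolDimInBricks.x) % brickPoolDimInBricks.y;
    b.z =  brickIndex / (brickPoolDimInBricks.x * brickPoolDimInBricks.y);
    return b * BRICK_SIZE;  // texel coordinates of the brick's corner
}
[/CODE]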

Yes, those were the papers I read.

I'm still having trouble visualizing how these bricks are stored in 3d-textures.
In the GPU Gems 2 "Octree Textures" chapter, it seems they store 2x2x2 bricks in a 3d texture (each 1x1x1 voxel as a texel) top-down: the brick for the root node goes at 0,0,0, the brick at level 1 at 2,0,0, the brick at level 2 at 4,0,0, and so on until they reach the deepest non-empty level. Then in the next set of texels they start over from a higher-level brick.

Crassin says that instead of this he uses a tiling scheme. Does this mean that say for an octree of 9 levels, where there are a maximum of 512x512x512 voxels at the lowest level, he stores the bricks at that level sparsely in a 512x512x512 3d texture? So for the next level up, he'll use a 256x256x256 3d texture, then a 128x128x128 3d texture...
and all of these textures are stored as mip-map levels in a 9 level mip-mapped 3d texture?

Are the positions of the bricks in each 3d texture scaled versions of world positions? i.e. if two bricks are next to each other in world space does that mean that they are next to each other in the 3d texture? So for a sparse octree, you'll have 3d textures where there are lots of empty texels?

In section 4.1 of this paper of his: http://maverick.inria.fr/Publications/2011/CNSGE11b/GIVoxels-pg2011-authors.pdf he says that each node contains a pointer to a brick, which makes me believe the bricks are stored in a compact way in the 3d texture and there is no correlation between a brick's position and the world-space position of its voxels.
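
So the node layout would presumably be something like this (just my reading of it, not the paper's actual structure):

[CODE]
// Hypothetical node layout matching that reading: the node pool holds the tree
// topology, and each node just carries an index into the separately packed
// brick pool, so a brick's position in the 3D texture says nothing about the
// world-space position of its voxels.
struct OctreeNode
{
    uint childPtr;  // index of this node's first child in the node pool, 0 = no children
    uint brickPtr;  // linear index of this node's brick in the brick pool
};

StructuredBuffer<OctreeNode> nodePool;
Texture3D<float4>            brickPool;
[/CODE]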

This stuff is awesome! But I'm worried about huge distances, like in giant open-world games. Memory is going to be eaten up - what, roughly linearly with area, since you're also using a progressively lower-LOD octree in the distance? I'll figure it out later, but the point is it's a lot of memory. But what else are you going to do, especially for specular? It's going to be dead obvious that the highly reflective floor should really be reflecting that distant mountain, and it will break the visual suspension of disbelief if it doesn't.

I'm thinking there's got to be a faster, far less memory-intensive way to get diffuse/specular reflections beyond some reasonable voxelization distance. "Realtime GI and Reflections in Dust 514" from here: http://advances.realtimerendering.com/s2012/index.html might offer promise, if the presentation ever goes up. The idea of a heightfield-like structure could work well for 99% of games at far distances.

[quote name='PjotrSvetachov' timestamp='1346262028' post='4974500']
he says that each node contains a pointer to a brick, which makes me believe the bricks are stored in a compact way in the 3d texture
[/quote]

Yeah, I think it might just be this way. One thing I don't understand is how trilinear filtering works with such a compact structure. Wouldn't there be artifacts due to blending two bricks that are neighbours in the 3d texture but are not actually neighbouring voxels in world space?

Here's a quote from chapter 5 of his thesis:

[quote]the brick pool is implemented in texture memory (cf. Section 1.3), in order to be able to use hardware texture interpolation, 3D addressing, as well as a caching optimized for 3D locality (cf. Section A.1). Brick voxels can store multiple scalar or vector values (Color, normal, texture coordinates, material information ...). These values must be linearly interpolable in order to ensure a correct reconstruction when sampled during rendering. Each value is stored in separate "layer" of the brick pool, similarly to the memory organization used for the node pool.[/quote]

Maybe someone can make sense of all of this. What's the difference between 3d addressing and 3d locality?
He says each value (color, normal, texture coordinates, ...) is stored in a separate "layer" of the brick pool - does this mean that, for a 3d texture with uvw coordinates, each "layer" is a certain number of w-levels deep? Edited by gboxentertainment

