Cone Tracing and Path Tracing - differences.

Started by
71 comments, last by gboxentertainment 11 years, 3 months ago

The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.


That's something that's crossed my mind.
In Crassin's thesis, he explains cone tracing soft shadows:
He says that you would accumulate opacity values as well and once the value "saturates", you would stop the trace.
I guess "saturates" means opacity = 1.

[quote name='MrOMGWTF' timestamp='1345966775' post='4973422']
The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.


That's something that's crossed my mind.
In Crassin's thesis, he explains cone tracing soft shadows:
He says that you would accumulate opacity values as well and once the value "saturates", you would stop the trace.
I guess "saturates" means opacity = 1.
[/quote]

Most surfaces will have an opacity of 1, so you basically stop at the first intersection you find?
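For what it's worth, the "accumulate until saturated" rule can be sketched as classic front-to-back alpha compositing. This is my own minimal illustration, not Crassin's actual shader, and the `Sample`/`coneTrace` names are made up:

```cpp
// Front-to-back accumulation along a cone: each sample's premultiplied
// contribution is weighted by the remaining transmittance (1 - acc.a).
// Once alpha saturates (reaches ~1), later samples can no longer
// contribute, so the trace stops early.
struct Sample { float r, g, b, a; };
struct Accum  { float r = 0, g = 0, b = 0, a = 0; };

Accum coneTrace(const Sample* samples, int n) {
    Accum acc;
    for (int i = 0; i < n; ++i) {
        float w = 1.0f - acc.a;          // remaining transmittance
        acc.r += w * samples[i].a * samples[i].r;
        acc.g += w * samples[i].a * samples[i].g;
        acc.b += w * samples[i].a * samples[i].b;
        acc.a += w * samples[i].a;
        if (acc.a >= 0.99f) break;       // "saturated": effectively opaque
    }
    return acc;
}
```

With a fully opaque first sample (a = 1) the loop exits immediately, which matches the "stop at the first intersection" intuition; with semi-transparent voxels the trace keeps going until enough opacity has piled up.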
The guys from Unreal Engine have just released the slides from the talk they did on Siggraph: http://www.unrealengine.com/resources/
A third of the slides are about how they do cone tracing. They say they are using the method from the paper with a few optimizations.
Thanks! I was beginning to think they had promised a Siggraph talk and then quietly dropped it, because I couldn't find any news or videos on it.
I still don't quite understand how the concept of "bricks" fits into the whole data structure.
So instead of having just nodes to represent voxels, with all voxel and indexing data stored in the nodes,
it seems that they separate the two: the nodes only contain pointers/indexes to the bricks, which hold all the actual data.
In the papers, they state that there are bricks at every level, so at each level, the nodes point to bricks. If so, what's the purpose of having a "constant color" stored in each node, if filtered color data is already stored in the bricks at that level?

Also, how does grouping bricks as 8x8x8 voxels work? So say that for a complete 9-level octree structure you would have 512x512x512 voxels at the lowest level - this means that you would have 64x64x64 bricks. Then at the 3rd level, where you have 8x8x8 voxels, this would be a single 1x1x1 brick. Does this mean that bricks only work up to the 3rd level?

Are the voxels in each brick grouped according to their positions in world space? If so then for a sparse structure you would have bricks that have empty voxels in them?
Which paper did you read? I remember reading that in the original GigaVoxels paper they used larger bricks, which they ray-traced through using multiple texture lookups. But that was for volume data that is not always sparse. In that paper they stored a constant color at a node only if the whole brick had a constant color, so they could skip the ray-tracing step.

In the global illumination paper they don't seem to store a constant color at the nodes, they only use the bricks.
This is also because bricks are always 2x2x2 and only require one lookup (well, actually the bricks are 3x3x3, because you need an extra border for the interpolation to work).
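The border's purpose can be sketched with plain trilinear interpolation (my own illustration, not code from the papers): a lookup always touches the 2x2x2 texels surrounding the sample position, so as long as samples stay inside a 3x3x3 brick, all eight fetched texels belong to that brick and the one-voxel border supplies the neighbour data.

```cpp
// Trilinear interpolation inside a single 3x3x3 brick (indexed b[z][y][x],
// texels at integer coordinates 0..2). A lookup reads the 2x2x2 cell whose
// lower corner is floor(p); for p in [0, 2) per axis, every fetched texel
// lies inside this brick, so no neighbouring brick ever bleeds in.
float trilinear(const float b[3][3][3], float x, float y, float z) {
    int x0 = (int)x, y0 = (int)y, z0 = (int)z;  // lower corner of the cell
    float fx = x - x0, fy = y - y0, fz = z - z0;
    float v = 0.0f;
    for (int dz = 0; dz <= 1; ++dz)
        for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx)
                v += b[z0 + dz][y0 + dy][x0 + dx]
                   * (dx ? fx : 1 - fx)
                   * (dy ? fy : 1 - fy)
                   * (dz ? fz : 1 - fz);
    return v;
}
```

This is the same arithmetic the texture hardware performs; the brick layout just guarantees its 8-texel footprint never crosses a brick boundary.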

Yes it seems that you will have bricks that partially lie in empty space. I assume they set the alpha for those bricks to zero so when doing the interpolated lookup they would have smooth transitions and thus less aliasing.
Yes, those were the papers I read.

I'm still having trouble visualizing how these bricks are stored in 3d-textures.
In the GPU Gems 2 "Octree Textures" chapter, it seems that they store 2x2x2 bricks in a 3D texture (with each voxel as a texel) top-down: the root node's brick at (0,0,0), a level-1 brick at (2,0,0), then a level-2 brick at (4,0,0), and so on until they reach the deepest non-empty level. Then, in the next set of texels, they start over from a higher-level brick.

Crassin says that instead of this he uses a tiling scheme. Does this mean that, say, for an octree of 9 levels, where there are a maximum of 512x512x512 voxels at the lowest level, he stores the bricks at that level sparsely in a 512x512x512 3D texture? So for the next level up he'd use a 256x256x256 3D texture, then a 128x128x128 3D texture...
and all of these textures are stored as mip-map levels in a 9 level mip-mapped 3d texture?

Are the positions of the bricks in each 3d texture scaled versions of world positions? i.e. if two bricks are next to each other in world space does that mean that they are next to each other in the 3d texture? So for a sparse octree, you'll have 3d textures where there are lots of empty texels?
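Under the scheme hypothesised here (brick position = scaled world position, one mip level per octree level), the mapping from a point in the unit cube to a texel would just be a scale-and-floor per level. A sketch, with names of my own invention:

```cpp
// Map a world-space point in [0,1)^3 to integer voxel coordinates at a
// given octree level: level L has 2^L voxels per axis, so the texel is
// floor(p * 2^L), clamped to the volume. At level 9 this indexes the full
// 512^3 texture, at level 8 the 256^3 mip, and so on.
struct VoxelCoord { int x, y, z; };

VoxelCoord voxelAt(float px, float py, float pz, int level) {
    int res = 1 << level;                       // 2^level voxels per axis
    auto cell = [res](float p) {
        int v = (int)(p * res);
        return v < 0 ? 0 : (v >= res ? res - 1 : v);
    };
    return { cell(px), cell(py), cell(pz) };
}
```

Two points adjacent in world space land in adjacent texels, which is exactly why such a layout wastes memory on empty texels for sparse scenes.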
In section 4.1 of this paper of his: http://maverick.inria.fr/Publications/2011/CNSGE11b/GIVoxels-pg2011-authors.pdf, he says that each node holds a pointer to a brick, which makes me believe the bricks are stored in a compact way in the 3D texture, with no correlation between a brick's position and the world-space position of its voxels.
This stuff is awesome! But I'm worried about huge distances, like in giant open-world games. Memory is going to be eaten up, what, roughly linearly with area, since you're also keeping a progressively lower-LOD octree? I'd have to work it out properly, but the point is: a lot of memory. Yet what else are you going to do, especially for specular? It's going to be dead obvious that the highly reflective floor should really be reflecting that distant mountain, and it breaks the visual suspension of disbelief if it doesn't.

I'm thinking there's got to be a faster, far less memory-intensive way to get diffuse/specular reflections beyond some reasonable voxelization distance. The "Realtime GI and Reflections in Dust 514" talk from here: http://advances.realtimerendering.com/s2012/index.html might offer promise, if the presentation ever goes up. The idea of just a heightfield-like structure could work well for 99% of games and far distances.

[quote]he says that in each node there is a pointer to a brick which makes me believe the bricks are stored in a compact way in the 3d texture[/quote]


Yeah, I think it might just be this way. One thing I don't understand with this compact structure is how trilinear filtering works with it. Wouldn't there be artifacts from blending two bricks that are neighbours in the 3D texture but are not actually neighbouring voxels in world space?

Here's a quote from chapter 5 of his thesis:

[quote]the brick pool is implemented in texture memory (cf. Section 1.3), in order to be able to use hardware texture interpolation, 3D addressing, as well as a caching optimized for 3D locality (cf. Section A.1). Brick voxels can store multiple scalar or vector values (Color, normal, texture coordinates, material information ...). These values must be linearly interpolable in order to ensure a correct reconstruction when sampled during rendering. Each value is stored in a separate "layer" of the brick pool, similarly to the memory organization used for the node pool.[/quote]

Maybe someone can make sense of all of this. What's the difference between 3d addressing and 3d locality?
He says each value: color, normal, texture coordinates,... are stored in a separate "layer" - does this mean that for a 3d Texture of uvw coordinates, each "layer" is w-levels deep?
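My reading of the "layer" part (an illustration only; the thesis may pack things differently, and all names here are mine): each attribute gets its own pool with an identical layout, so a single brick pointer computed from the node addresses every attribute at once, like several parallel 3D textures rather than one texture that is "deeper" in w.

```cpp
#include <vector>
#include <cstddef>

// One address scheme shared by several attribute "layers": a node's brick
// pointer yields a texel offset that is valid in every pool, so fetching
// color and normal is two lookups at the same coordinates.
struct BrickPool {
    int dim;                                  // pool is dim x dim x dim texels
    std::vector<float> color;                 // layer 0 (one channel for brevity)
    std::vector<float> normal;                // layer 1 (one channel for brevity)
    explicit BrickPool(int d) : dim(d), color(d * d * d), normal(d * d * d) {}
    std::size_t texel(int x, int y, int z) const {
        return (std::size_t)z * dim * dim + (std::size_t)y * dim + (std::size_t)x;
    }
};
```

The design point is that the layers never need separate addressing logic: write once, read the same offset from any layer.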

