Cone Tracing and Path Tracing - differences.



#41 gboxentertainment   Members   -  Reputation: 766


Posted 26 August 2012 - 03:26 AM

The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.


That's something that's crossed my mind.
In Crassin's thesis, he explains cone-traced soft shadows:
He says that you would accumulate opacity values as well, and once the value "saturates", you would stop the trace.
I guess "saturates" means the accumulated opacity reaches 1.
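
For concreteness, here is a minimal HLSL sketch of that front-to-back accumulation along a single cone. sampleVoxels(), voxelSize and maxTraceDistance are placeholder helpers/constants, not code from the thesis:

    // Sketch of front-to-back accumulation along one cone; the trace stops
    // once the accumulated opacity saturates (reaches 1).
    float4 coneTrace(float3 origin, float3 dir, float halfAngle)
    {
        float3 accumColor = 0;
        float  accumAlpha = 0;            // accumulated opacity
        float  t = voxelSize;             // start a voxel out to avoid self-sampling
        while (accumAlpha < 1.0 && t < maxTraceDistance)
        {
            float radius = t * tan(halfAngle);                  // cone footprint at distance t
            float mip    = max(0.0, log2(radius / voxelSize));  // matching filtered level
            float4 s     = sampleVoxels(origin + t * dir, mip); // pre-filtered color + opacity
            // front-to-back compositing: nearer samples occlude farther ones
            accumColor += (1.0 - accumAlpha) * s.a * s.rgb;
            accumAlpha += (1.0 - accumAlpha) * s.a;
            t += max(radius, 0.5 * voxelSize);                  // step scales with footprint
        }
        return float4(accumColor, accumAlpha);
    }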


#42 MrOMGWTF   Members   -  Reputation: 439


Posted 26 August 2012 - 05:11 AM


The green wall will still be illuminating the floor, but it shouldn't, because the blue wall is occluding the green wall.


That's something that's crossed my mind.
In Crassin's thesis, he explains cone-traced soft shadows:
He says that you would accumulate opacity values as well, and once the value "saturates", you would stop the trace.
I guess "saturates" means the accumulated opacity reaches 1.


Most surfaces will have an opacity of 1, so you basically stop at the first intersection you find?

#43 PjotrSvetachov   Members   -  Reputation: 546


Posted 27 August 2012 - 05:30 AM

The guys from Unreal Engine have just released the slides from the talk they did at SIGGRAPH: http://www.unrealengine.com/resources/
About a third of the slides are about how they do cone tracing. They say they are using the method from the paper with a few optimizations.

#44 gboxentertainment   Members   -  Reputation: 766


Posted 27 August 2012 - 06:40 AM

Thanks! I was beginning to think that they had promised to speak at SIGGRAPH but then didn't and covered it up, because I couldn't find any news or videos on it.

#45 gboxentertainment   Members   -  Reputation: 766


Posted 28 August 2012 - 03:13 AM

I still don't quite understand how the concept of "bricks" fits into the whole data structure.
Instead of having just nodes to represent voxels and storing all voxel and indexing data in the nodes, it seems that they separate the two, so that nodes only contain the pointers/indices to the bricks, which contain all the actual data.
In the papers, they state that there are bricks at every level, so at each level the nodes point to bricks. If so, what's the purpose of having a "constant color" stored in each node, if filtered color data is already stored in the bricks at that level?

Also, how does grouping voxels into 8x8x8 bricks work? Say that for a complete 9-level octree you would have 512x512x512 voxels at the lowest level - this means you would have 64x64x64 bricks. Then at the 3rd level, where you have 8x8x8 voxels, this would be a single 1x1x1 brick. Does this mean that bricks only work up to the 3rd level?

Are the voxels in each brick grouped according to their positions in world space? If so, then for a sparse structure you would have bricks that contain empty voxels?

#46 PjotrSvetachov   Members   -  Reputation: 546


Posted 28 August 2012 - 03:41 PM

Which paper did you read? I remember reading that in the original GigaVoxels paper they used larger bricks, which they ray traced through using multiple texture lookups. But that was for volume data, which is not always sparse. In that paper they used a constant color at the nodes only if the whole brick had a constant color, so they could skip the ray tracing step.

In the global illumination paper they don't seem to store a constant color at the nodes; they only use the bricks.
This is also because the bricks are always 2x2x2 and only require one lookup (well, actually the bricks are 3x3x3, because you need an extra border for the interpolation to work).

Yes, it seems that you will have bricks that partially lie in empty space. I assume they set the alpha of those empty voxels to zero, so the interpolated lookups give smooth transitions and thus less aliasing.
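
For what it's worth, here is a rough HLSL sketch of how a node might carry a brick pointer; the field packing is illustrative, not the papers' exact layout:

    // Illustrative node layout: the node pool holds topology, the brick pool
    // (a 3D texture) holds the actual filtered voxel data.
    struct OctreeNode
    {
        uint childPtr;   // index of the first of 8 children in the node pool; 0 = leaf
        uint brickPtr;   // packed texel coordinate of this node's 3x3x3 brick
    };

    // Unpack an assumed 10/10/10-bit brick coordinate into brick-pool texel space.
    uint3 unpackBrickCoord(uint p)
    {
        return uint3(p & 0x3FF, (p >> 10) & 0x3FF, (p >> 20) & 0x3FF);
    }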

#47 gboxentertainment   Members   -  Reputation: 766


Posted 29 August 2012 - 04:25 AM

Yes, those were the papers I read.

I'm still having trouble visualizing how these bricks are stored in 3D textures.
In the GPU Gems 2 "Octree Textures" chapter, they seem to store 2x2x2 bricks in a 3D texture (with each voxel as a texel) top-down: the brick for the root node goes at (0,0,0), the brick at level 1 at (2,0,0), the brick at level 2 at (4,0,0), and so on until they reach the deepest non-empty level. Then with the next set of texels they start over from a higher-level brick.

Crassin says that instead of this he uses a tiling scheme. Does this mean that, say, for an octree of 9 levels, where there are at most 512x512x512 voxels at the lowest level, he stores the bricks at that level sparsely in a 512x512x512 3D texture? So for the next level up he'd use a 256x256x256 3D texture, then a 128x128x128 3D texture...
and all of these textures are stored as mip levels of a 9-level mip-mapped 3D texture?

Are the positions of the bricks in each 3D texture scaled versions of their world positions? I.e. if two bricks are next to each other in world space, are they next to each other in the 3D texture? So for a sparse octree, you'd have 3D textures with lots of empty texels?

#48 PjotrSvetachov   Members   -  Reputation: 546


Posted 29 August 2012 - 11:40 AM

In section 4.1 of this paper of his: http://maverick.inria.fr/Publications/2011/CNSGE11b/GIVoxels-pg2011-authors.pdf he says that in each node there is a pointer to a brick, which makes me believe the bricks are stored in a compact way in the 3D texture, and there is no correlation between brick position and the world-space position of the voxels.
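
If that's right, brick allocation could be as simple as bumping a counter and unflattening it into texel coordinates. A sketch under that assumption (brickCounter and bricksPerAxis are made-up names; each brick is 3x3x3 texels):

    // Allocate the next free 3x3x3 brick slot, with no relation to world position.
    uint3 allocateBrick(RWStructuredBuffer<uint> brickCounter, uint bricksPerAxis)
    {
        uint idx;
        InterlockedAdd(brickCounter[0], 1, idx);   // grab a unique linear slot
        uint3 b;
        b.x = idx % bricksPerAxis;                 // unflatten the linear index
        b.y = (idx / bricksPerAxis) % bricksPerAxis;
        b.z = idx / (bricksPerAxis * bricksPerAxis);
        return b * 3;                              // texel origin of the brick
    }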

#49 Frenetic Pony   Members   -  Reputation: 1311


Posted 29 August 2012 - 02:47 PM

This stuff is awesome! But I'm worried about huge view distances, like in giant open-world games. Memory is going to be eaten up - what, roughly linearly with area, since you'd also be using a progressively lower-LOD octree in the distance? I'll figure out the exact growth later, but the point is: a lot of memory. Yet what else are you going to do, especially for specular? It will be dead obvious that the highly reflective floor should really be reflecting that distant mountain, and it will break the visual suspension of disbelief if it doesn't.

I'm thinking there has to be a faster, far less memory-intensive way to get diffuse/specular reflections beyond some reasonable voxelization distance. The "Realtime GI and Reflections in Dust 514" talk from here: http://advances.realtimerendering.com/s2012/index.html might offer promise, if the presentation ever goes up. A simple heightfield-like structure could work well for far distances in 99% of games.

#50 gboxentertainment   Members   -  Reputation: 766


Posted 29 August 2012 - 04:28 PM

he says that in each node there is a pointer to a brick, which makes me believe the bricks are stored in a compact way in the 3D texture


Yeah, I think it might just be this way. One thing I don't understand is how trilinear filtering works with a compact structure. Wouldn't there be artifacts due to blending between two bricks that are neighbours in the 3D texture but are not actually neighbouring voxels in world space?

Here's a quote from chapter 5 of his thesis:

the brick pool is implemented in texture memory (cf. Section 1.3), in order to be able to use hardware texture interpolation, 3D addressing, as well as a caching optimized for 3D locality (cf. Section A.1). Brick voxels can store multiple scalar or vector values (Color, normal, texture coordinates, material information ...). These values must be linearly interpolable in order to ensure a correct reconstruction when sampled during rendering. Each value is stored in separate "layer" of the brick pool, similarly to the memory organization used for the node pool.


Maybe someone can make sense of all of this. What's the difference between 3D addressing and 3D locality?
He says each value (color, normal, texture coordinates, ...) is stored in a separate "layer" - does this mean that for a 3D texture with uvw coordinates, each "layer" sits at a different depth in the w direction?



#51 Necrolis   Members   -  Reputation: 1327


Posted 30 August 2012 - 12:36 AM


he says that in each node there is a pointer to a brick, which makes me believe the bricks are stored in a compact way in the 3D texture


Yeah, I think it might just be this way. One thing I don't understand is how trilinear filtering works with a compact structure. Wouldn't there be artifacts due to blending between two bricks that are neighbours in the 3D texture but are not actually neighbouring voxels in world space?

That's what the border on the 2x2x2 voxel chunks is for (which effectively makes them 3x3x3). In one of the papers Crassin explains how the trilinear sampling works, pointing out that he samples from the center of each voxel, and thus requires the 3x3x3 brick to prevent artifacts.
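
A sketch of why the border makes hardware trilinear filtering safe (names here are assumptions; brickOrigin is the texel coordinate taken from the node's brick pointer):

    // Voxel centers of a 3x3x3 brick sit at 0.5, 1.5 and 2.5 texels inside the
    // brick, so interpolation between them never reads a neighbouring brick.
    float4 sampleBrick(Texture3D<float4> brickPool, SamplerState trilinear,
                       uint3 brickOrigin, float3 localPos, float3 poolSizeInTexels)
    {
        float3 texel = (float3)brickOrigin + 0.5 + localPos * 2.0; // localPos in [0,1]^3
        return brickPool.SampleLevel(trilinear, texel / poolSizeInTexels, 0);
    }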

#52 gboxentertainment   Members   -  Reputation: 766


Posted 30 August 2012 - 03:58 AM

Ah, I think I get it now - the 3x3x3 bricks are conceptual, and are only used to obtain the trilinearly filtered color for each node.
So in the 3D texture I would store the bricks by taking offsets of 0.5 * voxel size from each node, storing 3x3x3 voxels.
For sampling, if it were a 512x512x512 3D texture, we would just sample the first node at texture coordinate (1,1,1) and the second node at (3,1,1), etc.? So every node would be 2 texels apart?

Also, if I want to store other information like normals, would I store that in the w direction of the 3D texture? Like, the first node at texture coordinate (1,1,4) and the second node at (3,1,4), etc.?



#53 PjotrSvetachov   Members   -  Reputation: 546


Posted 30 August 2012 - 09:38 AM

The node structure is stored in linear memory. The bricks are stored in the 3D texture, but there are pointers in the node structure that tell you where the bricks are. So the brick belonging to the first node could be at (1,1,1), while the brick belonging to the second node could be at (36,4,13). Of course that's not likely, but it can be done. As for other information like normals, I'm not sure, but I believe it is stored in a separate 3D texture using the same brick layout. This way you only need one pointer to access all the information.
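
So one brick pointer could address several attribute pools that share the same layout. A sketch, reusing the hypothetical unpackBrickCoord() from the earlier node-layout sketch (texture names are assumptions):

    Texture3D<float4> colorBricks;    // rgb + opacity
    Texture3D<float4> normalBricks;   // xyz normal, same brick layout
    SamplerState      trilinearSampler;

    void fetchVoxel(uint brickPtr, float3 localPos, float3 poolSizeInTexels,
                    out float4 color, out float3 normal)
    {
        float3 texel = (float3)unpackBrickCoord(brickPtr) + 0.5 + localPos * 2.0;
        float3 uvw = texel / poolSizeInTexels;
        color  = colorBricks.SampleLevel(trilinearSampler, uvw, 0);      // one coordinate
        normal = normalBricks.SampleLevel(trilinearSampler, uvw, 0).xyz; // serves both pools
    }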

#54 gboxentertainment   Members   -  Reputation: 766


Posted 01 September 2012 - 02:29 AM

I'm going to begin actually implementing this technique now, and I'll continue to provide updates on each step with some of my code. All coding is done in DirectX 11 with C++.

I'll start off by voxelizing my simple scene, which consists of a plane for a floor, 3 walls, and several spheres and cubes. There are only 3 models loaded into a single vertex buffer (cube, sphere, plane), with instances of the cube and sphere placed in the scene.

I'm pretty much going to follow the GPU Pro 3 example "Practical Binary Surface and Solid Voxelization with Direct3D 11", and I will use conservative surface voxelization as in Crassin's OpenGL Insights chapter.

For the moment I will use three 3D textures to store the position (with the scene transformed to voxel space [0...1]), color (with opacity stored in the alpha channel) and normal of each voxel. These will be format R8G8B8A8_UNORM, but I will work out a more compact way of storing my voxels later. I'm leaving out texture coordinates for the time being and am just going to assign flat material colors to each instance to be voxelized.

I am going to use the compute shader to voxelize, because for some reason I can't seem to get OMSetRenderTargetsAndUnorderedAccessViews to write to any buffer objects in the pixel shader.

For now, my lowest octree level will be 512x512x512 nodes - each pointing to a brick of 3x3x3 voxels. So there would be 513x513x513 voxels in a full grid - but since this is a sparse structure, I think the 3D textures that I will use will be 513x513x3.
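
For reference, the three UAVs described above might be declared like this in the compute shader (register slots are arbitrary):

    // R8G8B8A8_UNORM resources map to unorm float4 in HLSL.
    RWTexture3D<unorm float4> voxelPosition : register(u0); // xyz in [0,1] voxel space
    RWTexture3D<unorm float4> voxelColor    : register(u1); // rgb + opacity in alpha
    RWTexture3D<unorm float4> voxelNormal   : register(u2); // normal remapped to [0,1]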



#55 MrOMGWTF   Members   -  Reputation: 439


Posted 01 September 2012 - 09:27 AM

I'm going to begin actually implementing this technique now, and I'll continue to provide updates on each step with some of my code. All coding is done in DirectX 11 with C++.

I'll start off by voxelizing my simple scene, which consists of a plane for a floor, 3 walls, and several spheres and cubes. There are only 3 models loaded into a single vertex buffer (cube, sphere, plane), with instances of the cube and sphere placed in the scene.

I'm pretty much going to follow the GPU Pro 3 example "Practical Binary Surface and Solid Voxelization with Direct3D 11", and I will use conservative surface voxelization as in Crassin's OpenGL Insights chapter.

For the moment I will use three 3D textures to store the position (with the scene transformed to voxel space [0...1]), color (with opacity stored in the alpha channel) and normal of each voxel. These will be format R8G8B8A8_UNORM, but I will work out a more compact way of storing my voxels later. I'm leaving out texture coordinates for the time being and am just going to assign flat material colors to each instance to be voxelized.

I am going to use the compute shader to voxelize, because for some reason I can't seem to get OMSetRenderTargetsAndUnorderedAccessViews to write to any buffer objects in the pixel shader.

For now, my lowest octree level will be 512x512x512 nodes - each pointing to a brick of 3x3x3 voxels. So there would be 513x513x513 voxels in a full grid - but since this is a sparse structure, I think the 3D textures that I will use will be 513x513x3.


Great, will you share the source code if you manage to successfully implement it?

#56 gboxentertainment   Members   -  Reputation: 766


Posted 01 September 2012 - 08:03 PM

Like I said, I will share some of my code - the complete shader code and the relevant setup code for the textures, buffers, etc. I won't provide my entire engine source code yet - it's only experimental at this stage.

Anyway, at the moment I am trying to find the most efficient way of visualizing my voxels and octree structure for debugging. Is raycasting generally the best method to use? And would it be faster to raycast in the pixel shader or the compute shader (the GPU Pro 3 example does it in the pixel shader)?

#57 MrOMGWTF   Members   -  Reputation: 439


Posted 08 September 2012 - 04:52 AM

Like I said, I will share some of my code - the complete shader code and the relevant setup code for the textures, buffers, etc. I won't provide my entire engine source code yet - it's only experimental at this stage.

Anyway, at the moment I am trying to find the most efficient way of visualizing my voxels and octree structure for debugging. Is raycasting generally the best method to use? And would it be faster to raycast in the pixel shader or the compute shader (the GPU Pro 3 example does it in the pixel shader)?


What's up with your implementation? It's been a while.

#58 gboxentertainment   Members   -  Reputation: 766


Posted 09 September 2012 - 01:04 AM

Glad you asked. I've spent all this time trying to get the voxelization working. I can't seem to get OMSetRenderTargetsAndUnorderedAccessViews working to voxelize via the pixel shader, so I have to use the compute shader, which I've had to learn first.

Anyway, a lot of time was spent trying to pass my instanced objects into the compute shader - i.e. using a single vertex buffer holding each distinct mesh, plus another buffer storing per-instance information, which I use to index into the vertex buffer to extract my scene in the compute shader.

I've just managed to get all my triangles into voxel space (so the minimum vertex in my scene is now located at (0.5, 0.5, 0.5), with the scene contained in a 513x513x513 voxel coordinate system). I've also computed the bounding box of each triangle in the compute shader; now I just need to find the most efficient method of testing each triangle against each voxel within its bounding box, using 1 thread per triangle.
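
A brute-force version of that loop might look like the sketch below, writing into the voxelColor UAV from the earlier declaration sketch. Triangle, loadTriangle(), triangleOverlapsVoxel() and numTriangles are placeholders; the overlap test itself would be the triangle/box test from the GPU Pro 3 chapter:

    // One thread per triangle: test every voxel in the triangle's bounding box.
    [numthreads(64, 1, 1)]
    void VoxelizeCS(uint3 id : SV_DispatchThreadID)
    {
        if (id.x >= numTriangles) return;
        Triangle tri = loadTriangle(id.x);        // read from the instanced vertex buffer
        uint3 bbMin = (uint3)floor(tri.aabbMin);  // bounding box in voxel coordinates
        uint3 bbMax = (uint3)ceil(tri.aabbMax);
        for (uint z = bbMin.z; z <= bbMax.z; ++z)
        for (uint y = bbMin.y; y <= bbMax.y; ++y)
        for (uint x = bbMin.x; x <= bbMax.x; ++x)
        {
            if (triangleOverlapsVoxel(tri, uint3(x, y, z)))
                voxelColor[uint3(x, y, z)] = float4(tri.albedo, 1.0); // mark occupied voxel
        }
    }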

Once I have successfully voxelized, I will post that code as a start.
MrOMGWTF, you mentioned that you already know how to do fast scene voxelization? Have you implemented it successfully? In the "Practical Binary Surface and Solid Voxelization with Direct3D 11" code, there is one thing under the conservative surface voxelization method that I have trouble understanding: the statement "if((flatDimensions & 3) >= 22)", used when they test the case of 1D bounding boxes (objects 1 voxel thick). I know that & is a bitwise AND, but there does not seem to be any way that this condition can ever be true?

#59 MrOMGWTF   Members   -  Reputation: 439


Posted 09 September 2012 - 04:53 AM

MrOMGWTF, you mentioned that you already know how to do fast scene voxelization? Have you implemented it successfully? In the "Practical Binary Surface and Solid Voxelization with Direct3D 11" code, there is one thing under the conservative surface voxelization method that I have trouble understanding: the statement "if((flatDimensions & 3) >= 22)", used when they test the case of 1D bounding boxes (objects 1 voxel thick). I know that & is a bitwise AND, but there does not seem to be any way that this condition can ever be true?


Yep, I know how to do fast voxelization.
And as I said before, it's in this paper:
http://graphics.snu.ac.kr/class/graphics2011/materials/paper09_voxel_gi.pdf (page 6).
Since you're doing cone tracing, do you have to voxelize a few times, each time at a lower resolution?

#60 gboxentertainment   Members   -  Reputation: 766


Posted 09 September 2012 - 05:26 AM

I'm only voxelizing once, at the highest resolution. Then, when it comes to creating the octree, I can use the voxels to subdivide the octree structure - much, much faster than having to test triangle-box overlap at every level. I'm thinking that maybe it's better to pre-voxelize objects into a file and then load that file, so I won't have to voxelize every time I compile the project.
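
Something like the sketch below is one way that subdivision pass could work: each voxel walks down from the root and flags the child node it falls into, so no triangle-box tests are needed per level (nodeFlags and childNodeIndex() are assumed, not from the papers):

    // Flag every node on the path from the root down to this voxel for subdivision.
    void flagNodePath(RWStructuredBuffer<uint> nodeFlags, uint3 voxel, uint levels)
    {
        uint node = 0;                                   // start at the root
        for (uint level = 0; level < levels; ++level)
        {
            uint shift = levels - 1 - level;
            uint3 oct = (voxel >> shift) & 1;            // octant at this level
            uint childIndex = oct.x | (oct.y << 1) | (oct.z << 2);
            nodeFlags[node * 8 + childIndex] = 1;        // mark child as occupied
            node = childNodeIndex(node, childIndex);     // assumed child lookup
        }
    }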



