jcabeleira

DX11 Results on Voxel Cone Tracing


This looks pretty cool!

- How are you choosing your grid bounds?
- Is the voxelization done at runtime? I assume no? ("voxelization is performed on the CPU with raytracing")
- The intersection generated by the +X ray is injected into the -X 3D texture?
- During cone trace, how are you dealing with occlusion?
- "which makes them look displaced from the scene even for glossy reflections." What does this mean? Shouldn't Eye <-> Glossy <-> Diffuse <-> Light work?

Also, is there SSAO in those screenshots?

Awesome stuff! :D

Any chance of getting the source code?
And does it support indirect occlusion?

[quote]
- During cone trace, how are you dealing with occlusion?
[/quote]

This.

Thanks for your replies, guys.

[quote name='D.V.D' timestamp='1348106382' post='4981886']
I downloaded the demo on your site, I'm pretty sure it's the same one, but I only get a black screen O.o
[/quote]

Sorry to disappoint you, but the demo on my site is a bit old and doesn't include these new features. The black screen must be caused by some incompatibility with your graphics card; unfortunately the demo was only tested on an NVIDIA GTX 260, so it should work on any NVIDIA card above that. Also, make sure you have updated drivers.

[quote name='jameszhao00' timestamp='1348115315' post='4981916']
- How are you choosing your grid bounds?
- Is the voxelization done at runtime? I assume no? ("voxelization is performed on the CPU with raytracing")
- The intersection generated by the +X ray is injected into the -X 3D texture?
- During cone trace, how are you dealing with occlusion?
- "which makes them look displaced from the scene even for glossy reflections." What does this mean? Shouldn't Eye <-> Glossy <-> Diffuse <-> Light work?

Also, is there SSAO in those screenshots?
[/quote]

The grid is fixed at the world origin with dimensions of 30x30x30 meters.
The voxelization can be redone at runtime, but it is not real-time. The raytracing takes about 1 second, and inserting the points into the volumes is slow as hell, taking about 8 seconds. This step is far from optimized because I'm not even using vertex buffers for rendering the points; I'm rendering them with glBegin/glEnd (shame on me, I know). EDIT: I've replaced the point injection with vertex arrays and now it is done instantaneously.
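For anyone curious, the change is basically going from immediate mode to a single client-side vertex array draw call. A minimal sketch (the array names are illustrative, not my actual code):

[code]
#include <GL/gl.h>
#include <cstddef>

// Point cloud produced by the CPU raytracing step (illustrative names).
extern const float* positions;  // pointCount * 3 floats (x, y, z)
extern const float* colors;     // pointCount * 4 floats (radiance + opacity)
extern size_t pointCount;

void InjectPointsImmediate()
{
    // Immediate mode: one driver call per vertex, painfully slow for
    // large point clouds (this is what took ~8 seconds).
    glBegin(GL_POINTS);
    for (size_t i = 0; i < pointCount; ++i)
    {
        glColor4fv(&colors[i * 4]);
        glVertex3fv(&positions[i * 3]);
    }
    glEnd();
}

void InjectPointsVertexArray()
{
    // Vertex arrays: the whole point cloud goes to the driver in one call.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);
    glColorPointer(4, GL_FLOAT, 0, colors);
    glDrawArrays(GL_POINTS, 0, (GLsizei)pointCount);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
[/code]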
The intersection generated by the +X ray is not necessarily injected into the -X volume. The rays are cast in all 6 directions only to ensure the voxel representation of the scene has no holes. The injection into the 6 destination volumes depends only on the normal of the point, which is what represents the direction of the radiance.
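To make that concrete, the weighting can be as simple as the clamped dot product between the point normal and each volume's axis. A rough sketch of the idea (simplified, my actual code differs a bit; the 128^3 resolution is just the figure mentioned later in this thread):

[code]
#include <algorithm>

struct Vec3 { float x, y, z; };

// The grid from above: fixed at the world origin, 30x30x30 meters,
// at e.g. 128^3 voxels.
const float kGridSize   = 30.0f;
const int   kResolution = 128;

// Map a world position (grid centered on the origin) to voxel coordinates.
void WorldToVoxel(const Vec3& p, int voxel[3])
{
    voxel[0] = (int)((p.x / kGridSize + 0.5f) * kResolution);
    voxel[1] = (int)((p.y / kGridSize + 0.5f) * kResolution);
    voxel[2] = (int)((p.z / kGridSize + 0.5f) * kResolution);
}

// Weight of each of the 6 directional volumes for a point with normal n.
// A surface facing +X contributes mostly to the +X volume, regardless of
// which of the 6 voxelization rays generated the point.
void InjectionWeights(const Vec3& n, float w[6])
{
    w[0] = std::max( n.x, 0.0f); // +X volume
    w[1] = std::max(-n.x, 0.0f); // -X volume
    w[2] = std::max( n.y, 0.0f); // +Y volume
    w[3] = std::max(-n.y, 0.0f); // -Y volume
    w[4] = std::max( n.z, 0.0f); // +Z volume
    w[5] = std::max(-n.z, 0.0f); // -Z volume
}
[/code]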
Occlusion is handled by keeping track of the accumulated opacity of all the samples processed for that cone. Think of it as regular transparency blending, where each sample is a semi-transparent window that partially occludes the next sample.
The reflections don't look good because the voxel representation of the scene contains only direct lighting. So what you'll see in the reflection is the scene lit by your light sources but completely black in the shadowed areas.
Regarding the SSAO: yes, I have it. But the cool thing about this technique is that if you use enough cones you don't even need SSAO, since the technique provides that effect for free and with much more realism.

[quote name='jcabeleira' timestamp='1348159250' post='4982080']
...
[/quote]

What about my question, hm?

[quote name='MrOMGWTF' timestamp='1348120592' post='4981930']
Any chance of getting the source code?
And does it support indirect occlusion?
[/quote]

[quote name='MrOMGWTF' timestamp='1348161293' post='4982092']
What about my question, hm?
[/quote]

Hehe. Sorry MrOMGWTF, I must have missed your questions.
I may release the source code along with a demo in the future, but for now the source code is a bit too messy for publishing.
What do you mean by indirect occlusion?

[quote name='jcabeleira' timestamp='1348168964' post='4982134']
...
[/quote]

I mean that there is a white wall, a green wall, and a blue wall occluding the green wall. The green wall will still be illuminating the white wall, but it shouldn't, because the blue wall is occluding the green wall. Shouldn't you stop tracing at the first intersection you find? Also, you do the cone tracing for each pixel, yeah?

To the users that -1 my post:
Let the hate flow through you.

[quote name='MrOMGWTF' timestamp='1348208565' post='4982249']
...
[/quote]

Cone tracing voxel mipmaps means you progressively look at higher and higher mipmap levels. A higher-level mipmap stores an occlusion distribution built from its child voxels (and not from concrete occluders). In his case, I think he's just storing an average occlusion per voxel, and not something that varies by direction/position/etc.

[quote name='MrOMGWTF' timestamp='1348208565' post='4982249']
...
[/quote]

[quote name='jameszhao00' timestamp='1348211462' post='4982262']
...
[/quote]

jameszhao00, you're right. The voxels of the highest-resolution mipmap are either completely opaque or completely transparent, but the lower-resolution voxels are usually partially transparent due to the averaging. Therefore, calculating occlusion is not as simple as looking for the first intersection (i.e. the first fully opaque voxel); when sampling the voxels we need to keep track of the accumulated opacity of all the voxels we have sampled so far. When the accumulated opacity reaches 1.0 we can stop tracing, because the next samples would be completely occluded.
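In pseudo-C++ the loop looks roughly like this (a simplified sketch: the mipmap selection and the directional volume lookup are hidden behind a hypothetical SampleVoxels helper):

[code]
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; }; // rgb = radiance, a = opacity

// Hypothetical helper: samples the pre-filtered voxel volumes at 'pos'
// with footprint 'diameter', i.e. it picks the matching mipmap level.
Vec4 SampleVoxels(const Vec3& pos, const Vec3& dir, float diameter);

// Front-to-back accumulation along one cone. Each sample acts like a
// semi-transparent window: its radiance is weighted by the visibility
// left over from the samples in front of it.
Vec4 TraceCone(const Vec3& origin, const Vec3& dir, float coneRatio, float maxDist)
{
    Vec4 accum = { 0.0f, 0.0f, 0.0f, 0.0f };
    float dist = 0.05f; // small offset to avoid sampling the surface itself
    while (dist < maxDist && accum.a < 1.0f) // stop once fully occluded
    {
        float diameter = std::max(0.01f, coneRatio * dist); // cone widens with distance
        Vec3 pos = { origin.x + dir.x * dist,
                     origin.y + dir.y * dist,
                     origin.z + dir.z * dist };
        Vec4 s = SampleVoxels(pos, dir, diameter);

        float vis = 1.0f - accum.a; // how visible this sample still is
        accum.r += vis * s.a * s.r;
        accum.g += vis * s.a * s.g;
        accum.b += vis * s.a * s.b;
        accum.a += vis * s.a;       // accumulate opacity
        dist += diameter * 0.5f;    // step proportional to the footprint
    }
    return accum;
}
[/code]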

Hey man, I read your thesis, nice work there! Too specialized for your particular scenery (not scalable enough) in my opinion, but still interesting, notably the novel part where you use screen-space sky visibility.
Anyway, the sparse voxel octree will let you go down to 512x512x512 leaf precision, which is better than your 128 precision, particularly for light leaks in interiors.
Also, I haven't read about a leak suppression technique in Crassin's paper, but it should clearly be possible to apply the central difference scheme used in Crytek's LPV to try to suppress leaks. Careful though: I have implemented it, and I can tell you it sometimes causes severe artifacts depending on the empirical strength of the anisotropy value you choose. But the good part is that with a 512 division it should be much less noticeable.

About the specular, you should really try to fix it, because it is one of the things that gives this technique most of its wow factor, and it only requires ONE cone trace where you already have 20! So it should not hurt your performance.
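It really is just one extra cone along the reflected eye vector, with the aperture driven by glossiness. Rough sketch (this reuses the Vec3/Vec4 types and the TraceCone sketch from the post above; the glossiness-to-aperture mapping is only an example):

[code]
// Mirror the view direction about the surface normal.
Vec3 Reflect(const Vec3& v, const Vec3& n)
{
    float d = 2.0f * (v.x * n.x + v.y * n.y + v.z * n.z);
    return { v.x - d * n.x, v.y - d * n.y, v.z - d * n.z };
}

// One narrow cone for glossy reflections: the glossier the surface,
// the smaller the cone ratio (aperture).
Vec4 SpecularCone(const Vec3& pos, const Vec3& viewDir, const Vec3& normal,
                  float glossiness)
{
    Vec3 r = Reflect(viewDir, normal);
    float coneRatio = std::max(0.05f, 1.0f - glossiness);
    return TraceCone(pos, r, coneRatio, 30.0f); // trace across the whole grid
}
[/code]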

About the performance when using the sparse octree: you will get lower performance. I'm sure you are familiar with the horrors Crassin had to cope with, using a global shared list of sleeping threads and multiple passes until it is empty, not to mention the two-pass lateral communication between bricks. This is just hell on earth, and I feel like he must be a god of GPU debugging to get that working OK, or he is just bullshitting us with his paper.

The only point is to get better precision while avoiding the otherwise necessary 9 GB of data of a dense grid. Reminder: in his paper he already needs 1 GB, which is huge.

One question for you: I didn't follow very thoroughly the part in Crassin's paper where he talks about the light-view buffer to perform multi-bounce GI, though it seems like a crucial part of his method. Did you implement that at all?

