Cascaded Voxel Cone Tracing reflections - cones go through walls at coarser LoDs

Started by
13 comments, last by Anfaenger 4 years, 5 months ago
3 hours ago, Anfaenger said:

but its surfel-based GI requires heavy precomputation.

I've been working on an automated tool to generate regular surfel LOD hierarchies and seamless UVs to apply them to the mesh for more than 2 years now, so don't use surfels!!!! haha :) (Especially not with terraforming.) This also means I'm currently focused on geometry processing, and I've got rusty with GI. I'm also not up to date on ambient dice, which is interesting but just another format to represent the data (you surely know this: https://mynameismjp.wordpress.com/2016/10/09/sg-series-part-1-a-brief-and-incomplete-history-of-baked-lighting-representations/)

But I can mention some lesser-known alternatives to tracing for calculating GI.

With the cascaded SH grid I tried two methods. The first was Bunnell's interesting anti-radiosity trick to avoid expensive visibility determination altogether, mentioned at the bottom here: https://developer.nvidia.com/gpugems/GPUGems2/gpugems2_chapter14.html He uses surfels, but it works with a volume grid too. Basically, each voxel needs to gather a volume of neighbouring voxels from all cascades. Simple but bandwidth-heavy. He tried to make games from the surfel method on last-gen consoles. Here's a result, but it never became a thing:

You see it is fast and looks good, but the visibility trick causes color shifting and bleeding through walls, making it impractical for complex interiors. He proposed fixing this by labeling rooms, but that's just a hack. For outdoor games, however, the approach seems very well suited, and I don't know why it has never been used.
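The gather-with-anti-light idea can be sketched like this. A minimal, scalar 1-D toy in C++, not Bunnell's actual implementation; the 1/d² form factor, names, and the iteration scheme are illustrative:

```cpp
#include <vector>
#include <cmath>

// Bunnell-style gather without visibility rays (GPU Gems 2, ch. 14),
// adapted from surfels to a grid. Each cell gathers from all others
// as if nothing occludes; solid cells additionally emit a *negative*
// contribution scaled by what they received in the previous pass,
// which approximately cancels the light that should have been blocked.
struct Voxel { float emission; bool solid; };

std::vector<float> gatherPass(const std::vector<Voxel>& grid,
                              const std::vector<float>& receivedPrev) {
    std::vector<float> received(grid.size(), 0.0f);
    for (size_t i = 0; i < grid.size(); ++i)
        for (size_t j = 0; j < grid.size(); ++j) {
            if (i == j) continue;
            float d   = std::fabs(float(i) - float(j));
            float ff  = 1.0f / (d * d);          // crude form factor
            float out = grid[j].emission;        // positive light
            if (grid[j].solid)
                out -= receivedPrev[j];          // anti-light term
            received[i] += ff * out;
        }
    return received;
}
```

Iterating a few passes converges toward occlusion-corrected values; the well-known downside (visible in the video) is that the negative term over-darkens and shifts color rather than blocking light exactly.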

 

The second approach I tried was diffusion. (Similar to Crytek's LPV, but not relying on screen space.) This solves the bandwidth issue, but the speed of light becomes noticeably slow. A single SH volume also cannot represent multiple bounces well; for that, multiple volumes, or at least a second one to handle the indirect bounce, would be necessary. This seems to be an example:
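A minimal sketch of one diffusion step (a scalar 2-D grid instead of an SH volume, purely illustrative) shows why the "speed of light" is limited: light advances at most one cell per iteration.

```cpp
#include <vector>
#include <cmath>

// One propagation step of a diffusion-style GI volume (similar in
// spirit to LPV, but scalar for brevity). Each cell keeps half its
// own energy and gathers the rest from its direct neighbours, so a
// point light needs N iterations to illuminate a cell N steps away.
std::vector<float> diffuseStep(const std::vector<float>& src, int w) {
    int h = int(src.size()) / w;
    std::vector<float> dst(src.size(), 0.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            float sum = 0.0f; int n = 0;
            for (int k = 0; k < 4; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                sum += src[ny * w + nx]; ++n;
            }
            dst[y * w + x] = 0.5f * src[y * w + x]
                           + 0.5f * (n ? sum / n : 0.0f);
        }
    return dst;
}
```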

 

The reason I personally gave up on those approaches was the inability to represent complex interiors with voxel mipmaps, and the resulting leaking in indoor scenes.

Sharp specular reflections are out of reach with any of this, ofc. 

 

8 hours ago, Frantic PonE said:

Those are the two major ones I can think of if you don't want to try split cone tracing, or waiting to see if Cryengine explains what they're doing in a readable manner

IIRC, they offer two modes; the cheaper one uses the voxels only for visibility determination, so each voxel could be represented by a single bit. Less memory -> higher resolution to solve leaking, eventually. I guess it works somewhat like this: https://software.intel.com/en-us/articles/layered-reflective-shadow-maps-for-voxel-based-indirect-illumination
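The single-bit idea could look like this. A hedged CPU-side sketch (the engine would of course keep this in a GPU resource; layout and names are my own):

```cpp
#include <vector>
#include <cstdint>

// 1-bit-per-voxel occupancy volume for visibility-only tracing:
// 32x less memory than an 8-bit volume, so the grid can be much
// denser before leaking starts.
struct BitVolume {
    int w, h, d;
    std::vector<uint32_t> bits;
    BitVolume(int w_, int h_, int d_)
        : w(w_), h(h_), d(d_), bits((w_ * h_ * d_ + 31) / 32, 0u) {}
    int index(int x, int y, int z) const { return (z * h + y) * w + x; }
    void set(int x, int y, int z) {
        int i = index(x, y, z);
        bits[i >> 5] |= 1u << (i & 31);          // mark voxel solid
    }
    bool solid(int x, int y, int z) const {
        int i = index(x, y, z);
        return (bits[i >> 5] >> (i & 31)) & 1u;  // test occupancy
    }
};
```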

 

13 hours ago, Anfaenger said:

But I'm afraid that the cost of tracing the whole scene through several cascades would be prohibitively high,

Maybe it's fast enough :)

EDIT: I'm optimistic, looking at this for example: http://teardowngame.com/

It seems he doesn't do anything special: http://blog.tuxedolabs.com/2018/10/17/from-screen-space-to-voxel-space.html



Thanks, it's all very interesting and inspirational!

After watching the presentation "Global Illumination in 'Tom Clancy's The Division'" [GDC 2016] (https://www.gdcvault.com/play/1023273/Global-Illumination-in-Tom-Clancy) I decided to go with the HL2 ambient cube basis instead of SH2. (The HL2 ambient cube is also what Enlisted is using, and the math is very simple to understand compared to the SH nightmare.)
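The evaluation really is simple; a scalar sketch of the HL2 ambient cube (six colours blended by the squared normal components, with the sign of each component picking the face; the `c[0..5]` = +X, -X, +Y, -Y, +Z, -Z ordering is my assumption):

```cpp
#include <cmath>

// Ambient cube evaluation: weight = squared normal component per
// axis, face selected by the component's sign. Weights sum to 1
// for a unit normal, so no normalisation is needed.
float evalAmbientCube(const float c[6], float nx, float ny, float nz) {
    float n2x = nx * nx, n2y = ny * ny, n2z = nz * nz;
    return n2x * c[nx >= 0.0f ? 0 : 1]
         + n2y * c[ny >= 0.0f ? 2 : 3]
         + n2z * c[nz >= 0.0f ? 4 : 5];
}
```

In a shader this becomes the same three multiply-adds per colour channel.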

In that presentation, the speaker mentioned that they are using a single 6-channel volume texture ("irradiance volume") to store ambient cube coefficients. But there is no 6-channel texture format in DirectX (https://docs.microsoft.com/en-us/windows/win32/api/dxgiformat/ne-dxgiformat-dxgi_format). Should I resort to bit packing or use a structured buffer?

Btw, there's another way to solve light leaking with irradiance probes: using a visibility map per probe, as in DDGI (https://morgan3d.github.io/articles/2019-04-01-ddgi/) (I'm sure you already know it). We don't need RTX; voxel/SDF ray marching or cone tracing will work too.
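The DDGI visibility term boils down to a Chebyshev test per probe: each probe stores the mean and mean-squared distance along each direction, and a shading point farther away than the mean gets its weight cut by a variance-based bound. A small sketch following Majercik et al. 2019, simplified to scalars:

```cpp
#include <algorithm>
#include <cmath>

// Chebyshev-style probe visibility weight. mean / mean2 are the
// stored distance moments for this direction, d is the distance
// from the probe to the shading point. Probes behind walls (d far
// beyond the mean occluder distance) fade to ~0 instead of leaking.
float chebyshevWeight(float mean, float mean2, float d) {
    if (d <= mean) return 1.0f;  // point is in front of the occluder
    float variance = std::max(mean2 - mean * mean, 1e-6f);
    float diff = d - mean;
    return variance / (variance + diff * diff);
}
```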

 

Sounds like performance is a huge concern for your current target, so probe-grid GI with ambient cubes sounds like a good solution. Take a look at the RTX one though; there's a cheap, entirely non-raytracing extension they use to prevent leaks that might be helpful.

I'll try to make the voxels as small as possible. If there's too much light leaking, I'll try the probe visibility approach in Precomputed Light Field Probes [2017] / Dynamic Diffuse Global Illumination with Ray-Traced Irradiance Fields [2019].

I decided to store the irradiance volume (HL2 ambient cube) in six R11G11B10 volume textures. That means 6 texture fetches per pixel. Maybe I'm missing something obvious, but I have absolutely no idea how they store 6 channels in a single texture...
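One possible saving, assuming the standard squared-normal weighting: because the sign of each normal component picks exactly one of the two faces per axis, a lookup only ever touches 3 of the 6 volumes, not all 6. A sketch of the selection (face ordering +X, -X, +Y, -Y, +Z, -Z is my assumption; in a shader the same selection avoids half the fetches):

```cpp
#include <cmath>

// For a given normal, return which 3 of the 6 cube-face volumes are
// needed and the blend weight of each. The unused faces all have
// zero weight, so they never need to be sampled.
void selectFaces(float nx, float ny, float nz,
                 int faceIdx[3], float weight[3]) {
    faceIdx[0] = nx >= 0.0f ? 0 : 1;  weight[0] = nx * nx;
    faceIdx[1] = ny >= 0.0f ? 2 : 3;  weight[1] = ny * ny;
    faceIdx[2] = nz >= 0.0f ? 4 : 5;  weight[2] = nz * nz;
}
```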

This topic is closed to new replies.
