Hybrid Frustum Traced Shadows


I came across this article: https://developer.nvidia.com/hybrid-frustum-traced-shadows-0

I am having a little difficulty following it, I'm missing something. If someone could explain what exactly they're doing it would be appreciated.

Thank You.

-potential energy is easily made kinetic-


Instead of rasterizing potential light-blocking triangles and doing the shadow test against the rasterization result, they do the shadow test directly on the triangles themselves, testing each texel against the planes built from the triangle's edges and the light source position.

To ensure no edge-intersecting texel is missed, conservative rasterization is necessary.
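To make that concrete, here's a minimal sketch of the point-vs-edge-frustum test (my own illustration, not NVIDIA's code; the float3 helpers and the plane orientation convention are assumptions):

```cpp
// Rough sketch of the test described above. A sample point p is shadowed
// by triangle (v0, v1, v2) if it lies inside the frustum spanned by the
// light position and the triangle's three edges.
struct float3 { float x, y, z; };

static float3 sub(float3 a, float3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float3 cross(float3 a, float3 b) { return { a.y * b.z - a.z * b.y,
                                                   a.z * b.x - a.x * b.z,
                                                   a.x * b.y - a.y * b.x }; }
static float  dot(float3 a, float3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool insideEdgeFrustum(float3 light, float3 v0, float3 v1, float3 v2, float3 p)
{
    const float3 v[3] = { v0, v1, v2 };
    for (int i = 0; i < 3; ++i)
    {
        // Plane through the light and edge (v[i], v[i+1]).
        float3 n = cross(sub(v[i], light), sub(v[(i + 1) % 3], light));
        // Orient the plane so the opposite vertex is on the positive side.
        if (dot(n, sub(v[(i + 2) % 3], light)) < 0.0f)
            n = { -n.x, -n.y, -n.z };
        if (dot(n, sub(p, light)) < 0.0f)
            return false; // outside one side plane -> not shadowed by this triangle
    }
    // A real implementation would also test against the triangle's own
    // plane, so only points behind the occluder count as shadowed.
    return true;
}
```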

The pro is robustness leading to pixel-perfect shadows (reminds me of shadow volumes).

The con is that it trashes the entire idea behind the efficiency of shadow maps.

There should be better ways to spend the PC-over-console performance advantage... just my opinion :)

Yeah, I'm still lost... can you go step by step? I understand the concept of testing a point against the frustum created by the triangle, but I don't really get much else.

-potential energy is easily made kinetic-

During the caster pass, instead of storing depth at each pixel, they store the triangle's plane equation coefficients.

During the receiver pass, instead of doing depthAtReceiver >= depthAtShadowmap test like in regular shadow mapping, they perform a depthAtReceiver >= calculateDepthAt( planeEquationCoefficients, x, y );

This effectively becomes a form of ray tracing, since it's a ray vs. triangle intersection test.
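In other words, something like this (just my sketch of the idea; the light-space plane convention a*x + b*y + c*z + d = 0 is an assumption, not necessarily their exact storage format):

```cpp
// Sketch of the receiver-side test. Assumes the caster pass stored plane
// coefficients per shadow-map texel, with the occluder triangle's plane
// in light space satisfying a*x + b*y + c*z + d = 0.
struct Plane { float a, b, c, d; };

// Solve the plane equation for z to get the occluder depth at (x, y).
float calculateDepthAt(Plane p, float x, float y)
{
    return -(p.a * x + p.b * y + p.d) / p.c; // assumes p.c != 0 (plane not edge-on)
}

bool inShadow(Plane storedPlane, float x, float y, float depthAtReceiver)
{
    // Regular shadow mapping compares against a stored depth; here the
    // depth is reconstructed exactly from the stored plane instead.
    return depthAtReceiver >= calculateDepthAt(storedPlane, x, y);
}
```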

So where does this https://developer.nvidia.com/sites/default/files/akamai/gameworks/Frustum_Trace.jpg fit into what you just described? Also, how does the irregular z-buffer fit into this?

-potential energy is easily made kinetic-

So where does this https://developer.nvidia.com/sites/default/files/akamai/gameworks/Frustum_Trace.jpg fit into what you just described? Also, how does the irregular z-buffer fit into this?

That's, AFAIK, the ray vs. triangle intersection test. You construct your frustum as in the image, then test whether the on-screen pixel is inside that frustum. I don't remember what the irregular Z-buffer was for; I only glanced through the paper and concluded that Sebastian's "virtual shadow mapping" (about 3/4 of the way down) would give similar image quality while running a lot faster.

If you're really going for "make high-end PC stuff useful" as JoeJ suggests, I've found that just having everything be scalable in the engine is a good idea anyway. That way you can turn things (SSR/SSAO samples, shadow map resolution, G-buffer quality, LOD distance/quality, HDR buffer quality, etc.) down and/or up as needed to hit any platform and target framerate.

I probably have some details wrong, but I guess it works somehow like this:

Instead of storing depth per shadow map texel, store all triangles (or their planes) touching the texel in a list.

How many triangles do we need at least per texel?

How do we sort out occluded or less important triangles in the likely case that we have too many triangles?

Then, in the shadowing pass, link every screen pixel to its corresponding shadow map texel.

Why? We could calculate the shadowing in place just by iterating over all triangles from the corresponding texel(s).

So why do we need that 'irregular Z-buffer list' they talk about?

Seems I have more questions than answers - so maybe I shouldn't make sarcastic comments at this point yet.

But I miss the days when they did public research instead of marketing for the masses and GameWorks for the devs.

Ha - better idea.

1. Think of the screen pixels as rays from the light source.

Link each ray to its shadow map texel - because rays have no area, each ray links to only one texel, which is why the "fixed memory footprint" works.

2. Render the shadow map.

For each texel occupied by the current triangle, check the rays for occlusion.

That's pretty fast and nice. Really an interesting use case for conservative rasterization.
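In code, step 1 (the linking) might look something like this - a CPU-side sketch with made-up names; headIndex/nextIndex are my assumptions, not NVIDIA's actual data layout:

```cpp
#include <vector>
#include <cstdint>

static const uint32_t kEndOfList = 0xFFFFFFFFu;

// Light-space linked lists: one list head per shadow-map texel, one "next"
// link per screen pixel. Memory is fixed because each pixel appears in
// exactly one list.
struct PixelLists {
    std::vector<uint32_t> headIndex; // shadowMapWidth * shadowMapHeight entries
    std::vector<uint32_t> nextIndex; // screenWidth * screenHeight entries
};

// Step 1: project each screen pixel into the shadow map and prepend it to
// the list of the texel it lands in (on the GPU this would be an atomic
// exchange on headIndex).
void linkPixel(PixelLists& lists, uint32_t pixel, uint32_t texel)
{
    lists.nextIndex[pixel] = lists.headIndex[texel];
    lists.headIndex[texel] = pixel;
}
```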

Apologies to the green team ;)

Also, how does the irregular z-buffer fit into this?

They don't use an irregular Z-buffer. They don't even need a Z-buffer. Pay attention again: instead of storing depth at each pixel, they store the triangle's plane equation coefficients. A Z-buffer is used to store depth; if they don't store depth, they are not using a Z-buffer.

So where does this https://developer.nvidia.com/sites/default/files/akamai/gameworks/Frustum_Trace.jpg fit into what you just described?

The picture is a visual description of "depthAtReceiver >= calculateDepthAt( planeEquationCoefficients, x, y );"

I don't think they store a plane equation. One texel may cover many triangles - a single plane would be an approximation, and it would be impossible to calculate pixel-perfect shadows.

If I'm right, all they store in the shadow map is an index to a screen pixel, and that pixel indexes the next pixel falling into the same texel - ending up with a linked list of all the pixels the texel may shadow.

Then they render each potentially occluding triangle from the light's position, and for each texel it covers they check the list of pixels and mark them shadowed if they are inside all of the triangle's planes.
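So step 2 would walk those lists, roughly like this (again just a sketch, reusing float3/insideEdgeFrustum and PixelLists/kEndOfList from the sketches earlier in the thread; pixelLightSpacePos and markShadowed are hypothetical helpers):

```cpp
// Placeholders for however the engine stores receiver positions and
// shadow results - hypothetical, not from the NVIDIA article:
float3 pixelLightSpacePos(uint32_t pixel); // look up the receiver's light-space position
void   markShadowed(uint32_t pixel);       // write the shadow result for that pixel

// Step 2: while rasterizing an occluder triangle over the shadow map
// (conservatively), walk the pixel list of each covered texel and test
// each pixel against the triangle's edge frustum.
void shadeCoveredTexel(const PixelLists& lists, uint32_t texel,
                       float3 light, float3 v0, float3 v1, float3 v2)
{
    for (uint32_t p = lists.headIndex[texel]; p != kEndOfList; p = lists.nextIndex[p])
    {
        if (insideEdgeFrustum(light, v0, v1, v2, pixelLightSpacePos(p)))
            markShadowed(p);
    }
}
```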

No need to store planes. Seems you made a similar wrong assumption to the one I made initially?

Still expensive, but it might be a practical replacement for sun cascades.

To avoid the need for conservative rasterization, occluder edges could be extended in the geometry shader, which would also allow totally robust and accurate soft shadows.
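For reference, the edge extension could look roughly like this - my sketch of the classic "dilate each edge outward" trick, done here on the CPU in 2D for clarity (a GS version would do the same per triangle; this is not HFTS's actual implementation):

```cpp
#include <cmath>

// Enlarge a 2D triangle so it covers every texel it touches, by pushing
// each edge outward along its normal by r (e.g. half a texel diagonal)
// and re-intersecting adjacent edges. Assumes counter-clockwise winding
// and a non-degenerate triangle.
struct float2 { float x, y; };

static float2 normalize2(float2 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    return { v.x / len, v.y / len };
}

void dilateTriangle(const float2 in[3], float r, float2 out[3])
{
    float2 n[3]; // outward edge normals
    float  d[3]; // pushed-out line distances
    for (int i = 0; i < 3; ++i)
    {
        float2 e = { in[(i + 1) % 3].x - in[i].x, in[(i + 1) % 3].y - in[i].y };
        n[i] = normalize2({ e.y, -e.x });               // outward for CCW winding
        d[i] = n[i].x * in[i].x + n[i].y * in[i].y + r; // offset line: dot(n, x) = d
    }
    for (int i = 0; i < 3; ++i)
    {
        // New vertex i = intersection of the offset lines of the two
        // edges meeting at the original vertex i.
        const float2 a = n[(i + 2) % 3], b = n[i];
        const float  da = d[(i + 2) % 3], db = d[i];
        float det = a.x * b.y - a.y * b.x;
        out[i] = { (da * b.y - db * a.y) / det, (db * a.x - da * b.x) / det };
    }
}
```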

