I'm just kind of day-dreaming for ideas here

We've gotten to the point now where it's possible to make a real-time renderer with 1000 dynamic lights, but the problem is that we can't really generate 1000 real-time shadow maps yet.

Most games only have a handful of dynamic shadow-casting lights, plus either a large number of small point lights without shadows, or a large number of static lights with baked shadows.

What if for all these lights where we can't afford to generate shadows for them, we spun the problem around backwards --- instead of calculating the visibility from the perspective of each light, what if we calculate the approximate visibility from each surface?

That's crazy talk, Hodgman! There's millions of surfaces (pixels) that need to be shaded, and only thousands of lights, so it should be more expensive... at least until we've also got millions of dynamic lights...

However, the results don't have to be perfect -- approximate blurry shadows are better than no shadows for all these extra lights.

And if it's possible, it's a fixed cost; you calculate this per-pixel visibility once, and then use it to get approximate shadows for any number of lights.

There are only a few techniques that come to mind when thinking along these lines:

- DSSDO -- an SSAO type effect, but you store occlusion per direction in an SH basis per pixel. When shading, you can retrieve an approximate occlusion value in the direction of each light, instead of an average occlusion value as with SSAO.
- Screen-space shadow tracing -- not sure what people call this one. Similar to SSAO, but you check occlusion along a straight line (*in the direction of your light source*) instead of over a hemisphere. I've used it on PS3/360, and IIRC it was used in Crysis 1 too.
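To make the DSSDO idea concrete: the stored SH coefficients are just evaluated in the light's direction to get a directional occlusion estimate, rather than SSAO's single averaged value. A minimal sketch in Python, assuming band-0/1 (4-coefficient) real SH per pixel (the function names are mine, not from any particular implementation):

```python
def sh_basis(d):
    # band 0 and band 1 real spherical harmonic basis,
    # for a unit direction d = (x, y, z)
    x, y, z = d
    return (0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x)

def directional_occlusion(sh_coeffs, light_dir):
    # evaluate the stored SH occlusion function in the light's direction
    return sum(c * b for c, b in zip(sh_coeffs, sh_basis(light_dir)))
```

E.g. if the occlusion was projected mostly from the +z hemisphere, evaluating toward +z returns more occlusion than evaluating toward -z.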

The problem with #2 is that it's still per-light -- for each light, you'd have to trace an individual ray, and save out thousands of these occlusion results...

The problem with #1 is that it's just an occlusion value, disregarding distance -- you might find an occluder that's 2m away but have a light that's only 1m away, and the light will still be treated as occluded. This means it can only be used for very fine details (*smaller than the distance from the light to the object*).

To make technique #1 more versatile with ranges: instead of storing occlusion percentage values, what if we stored depth values, like a typical depth map / shadow map? You could still store it in SH, as long as you use a shadowing algorithm like VSM that tolerates blurred depth maps (*in this case you would have one set of SH values to store z, and another set to store z^{2} for the VSM algorithm*).
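For reference, the VSM reconstruction from those two moments is just Chebyshev's inequality. A minimal sketch, with plain floats standing in for the values you'd actually fetch from the per-pixel SH data (names are mine):

```python
def vsm_visibility(mu, m2, t, min_variance=1e-4):
    """Chebyshev upper bound on the fraction of the filter region
    that does NOT occlude a receiver at depth t, given the blurred
    moments mu = E[z] and m2 = E[z^2]."""
    if t <= mu:
        return 1.0  # receiver is closer than the mean occluder depth
    variance = max(m2 - mu * mu, min_variance)  # clamp to avoid div-by-zero
    d = t - mu
    return variance / (variance + d * d)
```

When the receiver is well behind a tight cluster of occluders (small variance, large `t - mu`), this falls off towards zero, i.e. fully shadowed.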

You could then generate this data per-pixel using a combination of techniques -- bake these "depth hemispheres" per texel for static objects, bake out "depth probes" for mid-range, do screen-space ray-tracing for very fine details, and then merge the results together.

Then when lighting a pixel, you could read its z and z^{2} values for the specific lighting direction and apply the VSM algorithm to approximately shadow the light.

I haven't tried implementing this yet, it's just day-dreaming, but can anyone point out any obvious flaws in the idea?

To make technique #2 work for more than one light, what if we only use it to shadow specular reflections, not diffuse light? We can assume that any light source that contributes a visible specular reflection must be located somewhere in a cone that's centred on the reflection vector, whose angle is defined by the surface roughness.

Yeah, this assumption isn't actually true for microfacet specular models (*it is true for Phong*), but it's close to true a lot of the time.

So, if we trace a ray down the R vector, and also trace some more rays that are randomly placed in this cone, find the distance to the occluder on each ray (*or use 0 if no occluder is found*), and then average all these distances, we've got a filtered z map. If we square the distances and average them, we've got a filtered z^{2} map, and we can do VSM.

When shading any light, we can use these per-pixel values to shadow just the specular component of the light.

Not sure what you'd call this... Screen Space Specular Shadow Variance Ray Tracing is a mouthful

I *have* tried implementing this one over the past two days. I hadn't implemented decent SSR/SSRT/RTLR before, so I did that first, with 8 coarse steps at 16 pixels, then 16 fine steps one pixel at a time to find the intersection point. When using this technique to generate depth maps instead of reflection colours, I found that I could completely remove the "fine" tracing part (i.e. use a large step distance) with minimal artefacts -- this is because the artefact is basically just a depth bias, where occluders are pushed slightly towards the light.
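As a sketch of just the coarse march, flattened to 1D for clarity -- `depth_buffer` is the row of depth samples under the ray's screen projection, the ray's own depth varies linearly, and all the names are mine, not real shader code:

```python
def coarse_march(depth_buffer, x0, dx, z0, dz, steps=8):
    """Step `steps` times, `dx` pixels at a time; return the number of
    steps to the first sample where the ray passes behind the stored
    depth, or None if no occluder is found. The hit is quantised to a
    coarse step, which (as noted above) acts like a depth bias that
    pushes occluders slightly towards the light."""
    x, z = x0, z0
    for i in range(1, steps + 1):
        x += dx
        z += dz
        if x < 0 or x >= len(depth_buffer):
            return None  # marched off-screen
        if z > depth_buffer[int(x)]:  # ray is behind the geometry here
            return i
    return None
```

The full SSR version would follow this with the per-pixel fine steps; the point above is that for depth-map generation you can skip that refinement entirely.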

At the moment, tracing 5 coarse rays in this cone costs 3.5ms at 2048x1152 on my GeForce GTX 460.

In this GIF, there's a green line-light in the background, reflecting off the road. Starting with a tiny cone centred around the reflection vector, the cone grows to an angle of 90º:

The cone width is animated there just for testing purposes; my next step is to read the roughness value out of the G-buffer and use that to determine the cone width instead.

This effect will work best for extremely smooth surfaces, where the cone is small, so that the results are the most accurate. For rough surfaces, you're using the average depth found in a very wide cone, which is a big approximation, but the shadows fade out in this case and it still seems to give a nice "contact" hint.