Cascaded Shadow Maps Optimization Concept?

2 comments, last by kauna 9 years, 11 months ago

I just had a random thought the other day that sounds feasible, but I wonder if any of you guys can poke a hole in the concept.

The depth renders for cascaded shadow maps are orthographic, which means there isn't any perspective foreshortening on the model renders. Every object with the same model/rotation/scale should have the same relative depth extents regardless of where it sits on the shadow map.

I'm thinking it might be possible to render the model depth once to a render target texture, and then for every instance of that model on the shadow cascade you just draw quads and read the depths from the depth texture that was rendered up front. You would also have to offset the depths read from the texture by the instance's distance from the orthographic camera. Then you would just specify the depth output in the pixel shader to allow for depth-testing like normal.
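To make it concrete, here's a rough HLSL sketch of the pixel-shader half. All of the names (g_ModelDepth, g_InstanceDepthOffset, and so on) are made up for illustration:

```hlsl
Texture2D<float> g_ModelDepth : register(t0); // per-model depth, rendered once up front
SamplerState     g_PointClamp : register(s0);

cbuffer cbInstance : register(b0)
{
    float g_InstanceDepthOffset; // instance's distance from the ortho camera, in NDC depth units
};

struct PSInput
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0; // quad UVs into the pre-rendered depth texture
};

float PSMain(PSInput input) : SV_Depth
{
    float modelDepth = g_ModelDepth.Sample(g_PointClamp, input.uv);

    // Texels the model never covered still hold the far-plane clear value;
    // discard them so they don't write bogus depth into the cascade.
    if (modelDepth >= 1.0f)
        discard;

    // Orthographic depth is linear, so a constant per-instance offset is valid.
    return modelDepth + g_InstanceDepthOffset;
}
```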

Depth rendering is pretty fast so this might be counterproductive because of the texture reading. You would probably have to atlas several model depth renders onto the same render target to minimize the number of binds/state-changes.
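With an atlas, the per-instance remap would be cheap, something like this (again, names invented for the sketch):

```hlsl
// Each instance supplies its model's sub-rectangle within the shared depth atlas
// as (offsetU, offsetV, scaleU, scaleV); g_AtlasRect is a made-up name.
cbuffer cbInstance : register(b0)
{
    float4 g_AtlasRect;
};

float2 RemapToAtlas(float2 quadUV)
{
    return g_AtlasRect.xy + quadUV * g_AtlasRect.zw;
}
```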

Can anyone see a flaw with the idea, or should I try out an implementation of it and tell you all how it goes?

You would also have to offset the depths read from the texture by the instance's distance from the orthographic camera. Then you would just specify the depth output in the pixel shader to allow for depth-testing like normal.

Outputting depth from the pixel shader disables early-Z, so that can come at a pretty significant performance cost. If you had a lot of identical high-poly models with the same orientation/scale, I suppose the savings might outweigh that cost. Only one way to find out!

Outputting depth from the pixel shader disables early-Z, so that can come at a pretty significant performance cost.

On DX11+ hardware you can do texture reads in the vertex shader. You could tessellate the quad up to some level, read the depths, and interpolate them across the verts. That would let you keep early-Z! Then you would have to worry about the tessellation/quality trade-off, and how many polys end up in the tessellated quad vs. the model. This would also reduce the texture reads.

Also, I think the different zoom scales of the cascades in CSM would actually suit the tessellation: more detail up close, less further away.

Edit: D'oh, you would have to sample the depth texture in the domain shader, because tessellation happens after the vertex shader.
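Here's roughly what I'm picturing for that domain shader. Treat it as a sketch under my own assumptions; none of these names come from real code:

```hlsl
Texture2D<float> g_ModelDepth : register(t0);
SamplerState     g_PointClamp : register(s0);

cbuffer cbInstance : register(b0)
{
    float g_InstanceDepthOffset; // instance's depth offset from the ortho camera
};

struct HSConstants
{
    float edges[4]  : SV_TessFactor;
    float inside[2] : SV_InsideTessFactor;
};

struct ControlPoint
{
    float4 pos : POSITION;  // clip-space corner of the instance's quad
    float2 uv  : TEXCOORD0; // matching corner UV into the depth texture
};

struct DSOutput
{
    float4 pos : SV_Position;
};

[domain("quad")]
DSOutput DSMain(HSConstants constants,
                float2 loc : SV_DomainLocation,
                const OutputPatch<ControlPoint, 4> patch)
{
    // Bilinearly interpolate the four corners out to this tessellated vertex
    // (corner ordering must match whatever the hull shader emits).
    float4 pos = lerp(lerp(patch[0].pos, patch[1].pos, loc.x),
                      lerp(patch[3].pos, patch[2].pos, loc.x), loc.y);
    float2 uv  = lerp(lerp(patch[0].uv,  patch[1].uv,  loc.x),
                      lerp(patch[3].uv,  patch[2].uv,  loc.x), loc.y);

    // No derivatives outside the pixel shader, so use SampleLevel with an explicit mip.
    float depth = g_ModelDepth.SampleLevel(g_PointClamp, uv, 0.0f);

    DSOutput output;
    output.pos = pos;
    output.pos.z = depth + g_InstanceDepthOffset; // ortho projection: w == 1, z is NDC depth
    return output;
}
```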

Also, it may be better to pre-bake some tessellated quads instead of letting the tessellator do a bunch of redundant work, since it would tessellate every object quad the same way anyway. Then you could just use the vertex shader for the texture sampling. Managing those vertex buffers might be a pain in the ass though, so I don't know if it's worth it.
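The pre-baked variant collapses to an ordinary vertex shader, something like this (names made up as before):

```hlsl
Texture2D<float> g_ModelDepth : register(t0);
SamplerState     g_PointClamp : register(s0);

cbuffer cbInstance : register(b0)
{
    float4x4 g_QuadToClip;         // hypothetical quad-to-clip-space transform
    float    g_InstanceDepthOffset;
};

struct VSInput
{
    float4 pos : POSITION;  // pre-tessellated grid vertex, quad-local space
    float2 uv  : TEXCOORD0; // matching UV into the depth texture
};

float4 VSMain(VSInput input) : SV_Position
{
    // Vertex texture fetch: SampleLevel works in the VS on SM4+ hardware,
    // and since depth comes out through SV_Position, early-Z survives.
    float depth = g_ModelDepth.SampleLevel(g_PointClamp, input.uv, 0.0f);

    float4 pos = mul(input.pos, g_QuadToClip);
    pos.z = depth + g_InstanceDepthOffset; // ortho, so w == 1 and z is NDC depth
    return pos;
}
```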

Well, nothing prevents you from rendering impostors for 3D models. You could use a simplified version of your model or a 2D silhouette. There's no need for rendering tricks such as outputting depth.

Of course, this will result in inaccurate shadows in places, but with objects such as trees or plants the errors will go unnoticed. Continuous meshes, such as terrain, will produce more visible shadowing artifacts.
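For the 2D silhouette case, the shadow pass can be as simple as this sketch (the names are invented for illustration):

```hlsl
Texture2D    g_SilhouetteMask : register(t0);
SamplerState g_LinearClamp    : register(s0);

void PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0)
{
    // Depth comes from the quad's own rasterized position; no SV_Depth write needed.
    float alpha = g_SilhouetteMask.Sample(g_LinearClamp, uv).a;
    clip(alpha - 0.5f); // throw away texels outside the silhouette
}
```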

Cheers!
