
Cascaded Shadow Maps Optimization Concept?


BrentMorris    1223

I just had a random thought the other day that sounds feasible, but I wonder if any of you guys can poke a hole in the concept.

 

The depth renders for cascaded shadow maps are orthographic, which means there isn't any perspective foreshortening on the model renders. Every object with the same model/rotation/scale has the same relative depth extents regardless of where it sits on the shadow map.
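To make that concrete, here's a tiny Python sketch (everything here is illustrative, not real rendering code): under an orthographic projection, depth is just view-space z, so translating a model only shifts its depth values by a constant and the per-vertex depth "shape" never changes.

```python
# Orthographic depth is linear in position, so moving a model around
# only offsets its depth range; the relative extents stay identical.
def ortho_depths(vertices, translation):
    """Depths of translated vertices, with depth = view-space z."""
    tx, ty, tz = translation
    return [z + tz for (x, y, z) in vertices]

model = [(-1, 0, 0.0), (0, 1, 0.5), (1, 0, 1.0)]
a = ortho_depths(model, (10, 0, 0))   # slid across the shadow map
b = ortho_depths(model, (-3, 2, 5))   # moved 5 units deeper

# a == [0.0, 0.5, 1.0] and b == [5.0, 5.5, 6.0]: b is just a + 5
```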

 

I'm thinking it might be possible to render the model depth once to a render target texture, and then for every instance of that model on the shadow cascade you just draw quads and read the depths from the depth texture that was rendered up front. You would also have to offset the depths read from the texture by the instance's distance from the orthographic camera. Then you would just output that depth from the pixel shader to allow for depth-testing like normal.
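Here's a rough CPU-side Python sketch of what I mean (all the names are made up, and the real thing would be a quad draw on the GPU sampling the baked depth texture): bake the model's depth tile once, then "splat" it per instance with a per-instance depth offset and a normal less-than depth test.

```python
# Illustrative sketch only: reuse one baked orthographic depth tile
# for many instances, offsetting by each instance's camera distance.
INF = float("inf")

def bake_model_depth(model_depths):
    """Stand-in for the one-time offscreen depth render of the model."""
    return model_depths  # 2D list; None = texel not covered by the model

def splat_instance(shadow_map, tile, x0, y0, instance_depth):
    """Draw the baked tile as a quad at (x0, y0), offsetting each texel's
    depth by the instance's distance from the orthographic camera,
    with a standard less-than depth test."""
    for ty, row in enumerate(tile):
        for tx, d in enumerate(row):
            if d is None:
                continue
            depth = d + instance_depth
            y, x = y0 + ty, x0 + tx
            if depth < shadow_map[y][x]:
                shadow_map[y][x] = depth

shadow_map = [[INF] * 8 for _ in range(8)]
tile = bake_model_depth([[None, 0.2, None],
                         [0.1,  0.0, 0.1],
                         [None, 0.2, None]])
splat_instance(shadow_map, tile, 1, 1, 5.0)  # instance 5 units out
splat_instance(shadow_map, tile, 2, 2, 4.0)  # nearer instance wins overlaps
```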

 

Depth rendering is pretty fast, so this might be counterproductive because of the texture reads. You would probably have to atlas several model depth renders onto the same render target to minimize the number of binds/state-changes.

 

Can anyone see a flaw with the idea, or should I try out an implementation of it and tell you all how it goes?


phil_t    8084


You would also have to offset the depths read from the texture by the instance's distance from the orthographic camera. Then you would just output that depth from the pixel shader to allow for depth-testing like normal.

 

Outputting depth from the pixel shader disables early-Z, so that can come at a pretty significant performance cost. If you had a lot of identical high-poly models with the same orientation/scale, I suppose the savings might outweigh that cost. Only one way to find out!

BrentMorris    1223

 

Outputting depth from the pixel shader disables early-Z, so that can come at a pretty significant performance cost.

 

On DX11+ hardware you can do texture reads in the vertex shader. You could tessellate the quad up to some level, read the depths, and interpolate them across the verts. That would let you keep early-Z! Then you would have to worry about the tessellation/quality trade-off, and how many polys are in the tessellated quad vs. the model. This would also reduce the texture reads.

 

Also, the different zoom scales of the cascades in CSM would actually play well with the tessellation, I think. More detail up close, and less further away.

 

Edit: D'oh, you would have to sample the depth texture in the geometry shader, because tessellation happens after the vertex shader.

 

Also, it may be better to pre-bake some tessellated quads instead of letting the tessellator do a bunch of redundant work, since it would tessellate all of the object quads the same way. Then you could just use the vertex shader for the texture sampling. Managing those vertex buffers might be a pain in the ass though, so I don't know if it's worth it.
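For the pre-baked quad idea, here's a little Python sketch (again just illustrative; the grid and sampling would live on the GPU): build one tessellated quad grid up front, then sample the baked depth tile once per vertex and let interpolation fill in the rest, which is what keeps early-Z intact.

```python
# Illustrative sketch: a pre-baked tessellated quad whose vertices each
# fetch one depth from the baked tile (per-vertex instead of per-pixel).
def tessellated_quad(n):
    """Vertices of an (n x n)-cell quad grid in [0,1]^2 UV space."""
    step = 1.0 / n
    return [(i * step, j * step) for j in range(n + 1) for i in range(n + 1)]

def sample_depth(tile, u, v):
    """Nearest-texel fetch, standing in for a vertex-shader texture read."""
    h, w = len(tile), len(tile[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tile[y][x]

# Per-vertex depths for a 4x4-cell quad over a baked 8x8 depth tile;
# the rasterizer would interpolate these between vertices.
tile = [[(x + y) / 14.0 for x in range(8)] for y in range(8)]
verts = tessellated_quad(4)
depths = [sample_depth(tile, u, v) for (u, v) in verts]
```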


kauna    2922

Well, nothing prevents you from rendering impostors for 3D models. You could use a simplified version of your model or a 2D silhouette. There's no need for rendering tricks like outputting depth.

 

Of course this will result in inaccurate shadows in some places, but with objects such as trees or plants the errors will go unnoticed. Continuous meshes, such as terrain, will produce more visible shadowing artifacts.

 

Cheers!

