theagentd

OpenGL EVSM performance tip!


The first time I saw this trick mentioned was in the tech post-mortem for LittleBigPlanet (they rolled it into the blurring pass since they didn't use MSAA), so it's been around for a while! In fact, Andrew Lauritzen used it in his sample distribution shadow maps sample. Early on with VSMs it wasn't possible, since DX9 hardware didn't support sampling MSAA textures, and DX10 hardware didn't support sampling MSAA depth buffers (DX10.1 added this). But now on DX11-level hardware it's totally doable.

You'd probably get even better performance if you didn't use a fragment shader at all for rendering the shadow maps, since the hardware will run faster with the fragment shader disabled. Explicitly writing out depth through gl_FragDepth will also slow things down a bit, since the hardware won't be able to skip occluded fragments. Edited by MJP

Haha, I knew I couldn't be the first one to discover this... =S

I'd like to point out that this does work on my GTX 295 which doesn't support DirectX 10.1, so it's actually doable with OpenGL but not with DirectX it seems?

I would not recommend using non-linear depth. The precision distribution is not made to work well with floats, and the additional steps all reduce the effective precision. The much improved precision of using eye-space distance can be spent on a higher C value to reduce bleeding. However, you are correct that performance is better thanks to early z rejection before the shader is run. It might be worth using both a color attachment for the eye-space distance and a depth buffer for early z rejection if your scene has lots of overdraw.


theagentd wrote:
> I'd like to point out that this does work on my GTX 295 which doesn't support DirectX 10.1, so it's actually doable with OpenGL but not with DirectX it seems?


Yeah, it was common knowledge that the GTX 200 series supported *most* of the DX10.1 feature set, but Nvidia never bothered to make it fully compliant.


theagentd wrote:
> I would not recommend using non-linear depth. The precision distribution is not made to work well with floats, and the additional steps all reduce the effective precision. The much improved precision of using eye-space distance can be spent on a higher C value to reduce bleeding. However, you are correct that performance is better thanks to early z rejection before the shader is run. It might be worth using both a color attachment for the eye-space distance and a depth buffer for early z rejection if your scene has lots of overdraw.


Well, for orthographic projections (directional lights) the depth value is already linear, and for a perspective projection you can flip the near and far planes if you're using a floating-point depth buffer, which mostly balances out the precision issue. But of course, your mileage may vary. :)

Oh, I didn't know any of that! How lame of Nvidia! I might try that flipping trick too, it sounds interesting! How would I use the flipped depth value? Would I need to use GL_GREATER instead of GL_LESS in my depth test? How would I treat the value when resolving the MSAA depth texture? Do I need to invert it or something?

For any value from the depth buffer, you can convert to a linear depth value (the z component of the view-space position) using a few values from the projection matrix. Just follow the math in the matrix multiplication and you can derive it pretty easily. This still works fine with a projection that uses "flipped" clipping planes. So for EVSM you can just render using "flipped" depth (make sure you switch the direction of the depth test), and then when you perform the EVSM conversion/MSAA resolve you can convert to a linear depth value right after sampling the depth texture.
