RobMaddison

Linear or non-linear shadow maps?


I asked this on the back of someone else's post but didn't get much response, so I thought I'd see how it gets on in its own thread.

I understand that, for aliasing mitigation, it might be beneficial to use linear depth maps as opposed to non-linear depth maps when rendering to a shadow map floating-point texture, but how is that normally achieved? Is it as simple as converting the regular float depth value to a 16.16 fixed-point value, or something like that?

Secondly, I've read in several places that there is some hardware assistance when dealing with shadow maps - could someone please explain what this assistance is exactly?

Thanks in advance

I don't have any direct experience with shadow mapping, but I have written linear depth buffer shaders before: the projection matrix requires a couple of elements to be tweaked, the vertex shader requires a custom transform function, and the pixel shader needs to manually write the depth value after calculating it correctly. This eliminates the possibility of early-z rejection (afaik). If you can't find anything via Google I can dig up the shaders and matrix code, but I'm at work atm.
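For anyone curious what that projection tweak can look like, here's a minimal C++ sketch of one known approach (my own illustration, not necessarily the poster's exact code; the standard D3D depth terms are noted in the comments):

```cpp
// Hedged sketch: tweak a D3D-style (row-vector) perspective matrix so
// clip z passes view-space z straight through, then finish linearizing
// depth in the shader.
struct Mat4 { float m[4][4]; };

void MakeDepthLinearFriendly(Mat4& proj)
{
    // Standard D3D perspective depth terms:
    //   m[2][2] = f / (f - n),   m[3][2] = -n * f / (f - n)
    // Replace them so clip.z = viewZ (clip.w is already viewZ):
    proj.m[2][2] = 1.0f;
    proj.m[3][2] = 0.0f;
    // After this, clip.z / clip.w = 1 for every vertex, so either the
    // vertex shader rescales (e.g. clip.z = viewZ * clip.w / farPlane)
    // or the pixel shader outputs viewZ / farPlane as depth itself --
    // which is what disables early-z, as noted above.
}
```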
The only hardware assistance I know of for shadow maps is the special shadow map texture and sampler types, which return a one-bit pass/fail comparison result per sample (iirc).

Quote:
Original post by RobMaddison
I understand that, for aliasing mitigation, it might be beneficial to use linear depth maps as opposed to non-linear depth maps when rendering to a shadow map floating-point texture, but how is that normally achieved? Is it as simple as converting the regular float depth value to a 16.16 fixed-point value, or something like that?


First, some background on depth buffers. The "problem" with depth buffers is that with a perspective projection, the depth value you get by dividing z by w increases non-linearly from 0 to 1 as you move from the near clip plane to the far clip plane. Most of your viewable range ends up mapped into [0.9, 1.0], which gives you a very uneven distribution of precision across the depth buffer.
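To make the non-linearity concrete, here's a small standalone C++ sketch (my own illustration, not from the thread) that evaluates z/w for a standard D3D-style perspective projection at a few view-space depths:

```cpp
// Shows how post-projection depth z/w bunches up near 1.0 with a
// perspective projection mapping view-space z in [n, f] to [0, 1].
#include <cstdio>

int main()
{
    const float n = 1.0f;     // near clip plane
    const float f = 1000.0f;  // far clip plane

    // For a standard D3D perspective matrix, the depth terms give
    // z' = z * f/(f-n) - f*n/(f-n) and w' = z, so:
    //   z'/w' = f/(f-n) * (1 - n/z)
    for (float z = n; z <= f; z *= 10.0f)
    {
        float depth = (f / (f - n)) * (1.0f - n / z);
        std::printf("view z = %8.1f  ->  z/w = %f\n", z, depth);
    }
    // The output shows ~99% of the [0, 1] depth range is consumed by
    // the first ~10% of the view distance.
}
```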

Now for shadow maps, whether this is an issue depends on how you're storing your shadow map and on the type of light source. For lights where you use a perspective projection to generate the shadow map (usually spot and point lights), this can be a problem if you store z/w in your shadow map. If you're manually writing the depth to a floating-point render target, the problem is even worse: floating-point values have more precision close to 0 than close to 1, and this compounds with the perspective-projection problem to give you even more error. But for this particular case there's a very simple remedy: store something other than z/w. For instance, you can store the world-space distance from the light position to the pixel position, or you can store view-space z (see the sketch below). For directional lights an orthographic projection is typically used, and with an orthographic projection z/w increases linearly, so you won't have precision-distribution problems.
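Here's a rough C++ illustration of those two alternatives (the names and structure are mine, not from the thread):

```cpp
// Hedged sketch: two common "linear" values you might write to a
// floating-point shadow map instead of z/w.
#include <cmath>

struct Vec3 { float x, y, z; };

static float Length(const Vec3& v)
{
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Option 1: world-space distance from the light to the shaded point
// (works for point/spot lights); compare against the same distance
// computed at shadow-lookup time.
float LightDistance(const Vec3& lightPos, const Vec3& worldPos)
{
    Vec3 d = { worldPos.x - lightPos.x,
               worldPos.y - lightPos.y,
               worldPos.z - lightPos.z };
    return Length(d);
}

// Option 2: view-space z in the light's camera, remapped to [0, 1],
// which distributes precision evenly between the clip planes.
float NormalizedViewZ(float viewZ, float nearClip, float farClip)
{
    return (viewZ - nearClip) / (farClip - nearClip);
}
```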

Where things get tricky is if you don't write your shadow map values into a render target, and instead directly sample a depth buffer. This is very common in commercial games, since it saves you from having to write to a render target in your shadow map generation step. That saves bandwidth and memory, and most modern GPUs also have some sort of "fast-z" mode that lets them operate more quickly when only writing z values. In these cases you can't directly control the z value written to your depth buffer unless you output depth from the pixel shader, and doing that is a pretty bad idea since it disables early-z optimizations. Likewise, if you try to muck with the z and w values in your vertex shader, you'll usually end up breaking some combination of rasterization, early-z, or z compression. The only remedy I know of for this situation is to use a floating-point depth buffer (if one is available) and flip the near and far planes in your projection.
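Here's a hedged C++ sketch of such a flipped ("reversed-z") projection matrix, assuming the usual left-handed D3D row-vector conventions; the standard depth terms are noted in the comments:

```cpp
// Sketch (not from the thread): a D3D-style perspective matrix with
// the near/far planes exchanged in the depth terms. With a float
// depth buffer, this pairs float precision (dense near 0) with the
// far field, evening out overall depth precision.
#include <cmath>

struct Mat4 { float m[4][4]; };

Mat4 PerspectiveReversedZ(float fovY, float aspect, float n, float f)
{
    const float yScale = 1.0f / std::tan(fovY * 0.5f);
    const float xScale = yScale / aspect;

    Mat4 p = {};
    p.m[0][0] = xScale;
    p.m[1][1] = yScale;
    p.m[2][2] = n / (n - f);       // standard form: f / (f - n)
    p.m[2][3] = 1.0f;
    p.m[3][2] = -f * n / (n - f);  // standard form: -n * f / (f - n)
    return p;
}
// The near plane now maps to depth 1 and the far plane to depth 0,
// so remember to flip the depth test (GREATER instead of LESS) and
// clear the depth buffer to 0 instead of 1.
```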

Quote:
Original post by RobMaddison
Secondly, I've read in several places that there is some hardware assistance when dealing with shadow maps - could someone please explain what this assistance is exactly?

Thanks in advance


This typically refers to Nvidia's hardware PCF extension, which has been around for a long time. Basically it performs 2x2 PCF on the depth values for you in the texture unit, so that you don't have to write pixel shader code to do the 4 samples, the 4 compares, and the filtering. It's still available as a "driver hack" in D3D9, but in D3D10/11 it's no longer necessary since the API has generalized support for performing a compare operation during a texture fetch. ATI actually supports the D3D9 hack on their most recent hardware.
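To show exactly what the hardware is doing for you, here's an illustrative software version of 2x2 PCF in C++ (all names here are hypothetical; real hardware performs this in the texture unit):

```cpp
// Hedged sketch of what 2x2 hardware PCF computes: four depth
// compares, bilinearly filtered.
#include <algorithm>
#include <cmath>
#include <vector>

struct ShadowMap
{
    int width = 0, height = 0;
    std::vector<float> depth;  // stored light-space depth per texel

    float Texel(int x, int y) const
    {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return depth[y * width + x];
    }
};

// Returns a shadow factor in [0, 1]: 0 = fully shadowed, 1 = fully
// lit. uv is in [0, 1]; receiverDepth is the shaded point's depth in
// light space (same units as the stored values).
float Pcf2x2(const ShadowMap& sm, float u, float v, float receiverDepth)
{
    // Locate the 2x2 texel footprint around the sample point.
    float tx = u * sm.width  - 0.5f;
    float ty = v * sm.height - 0.5f;
    int   x0 = static_cast<int>(std::floor(tx));
    int   y0 = static_cast<int>(std::floor(ty));
    float fx = tx - x0;  // bilinear weights
    float fy = ty - y0;

    // Compare first (a binary result per tap), then filter the
    // comparison results -- not the raw depths.
    auto lit = [&](int x, int y) {
        return receiverDepth <= sm.Texel(x, y) ? 1.0f : 0.0f;
    };
    float top    = lit(x0, y0)     * (1 - fx) + lit(x0 + 1, y0)     * fx;
    float bottom = lit(x0, y0 + 1) * (1 - fx) + lit(x0 + 1, y0 + 1) * fx;
    return top * (1 - fy) + bottom * fy;
}
```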
