Depth precision and shadow bias

3 comments, last by Aressera 7 months ago

I've been reworking the shadow mapping in my engine and noticed some weird behavior. I tried changing the depth texture format between the 16-bit, 24-bit, 32-bit, and 32F formats, but I don't observe any difference at all in the amount of bias required to eliminate shadow acne. For instance, with a 24-bit format I should theoretically need 256 times less bias than with a 16-bit format, yet they behave the same in practice. This doesn't make sense to me: a higher-precision format should represent the depth from the light's perspective more accurately, and should therefore need less bias, in the same way that less bias is needed when the light's depth range is reduced.

I queried GL_TEXTURE_DEPTH_SIZE to make sure I am setting up the textures correctly. It returns the expected values. Is the driver or hardware implementation choosing a specific format and ignoring what I request? It seems like the only explanation, but then why does GL_TEXTURE_DEPTH_SIZE return the same precision I request? For my main framebuffer it seems to use the correct 24-bit depth that I request (using 16-bit depth there increases Z-fighting), but shadows behave differently.

Based on the bias required, it seems like I'm getting about 16-bit depth. For instance, with a 13-meter depth range (first cascade), I would expect 24-bit depth to give about 13/(2^24) ≈ 7.7e-7 m precision, yet I seem to need 1-2 cm of bias. Even 16-bit depth should theoretically give about 2.0e-4 m precision, which is still much smaller than the bias that seems to be required.
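For reference, the arithmetic above can be checked directly. This is a minimal sketch assuming depth is distributed linearly over the cascade's range (as with the orthographic projections typically used for cascaded shadow maps; a perspective projection distributes precision non-linearly):

```python
# Smallest representable depth step for a 13 m linear depth range
# at various fixed-point depth formats, to compare against the
# ~1-2 cm bias observed in practice.
depth_range_m = 13.0
for bits in (16, 24, 32):
    step = depth_range_m / (2 ** bits)
    print(f"{bits}-bit: smallest depth step = {step:.3g} m")
```

Even the coarsest format here yields a step of roughly 0.2 mm, orders of magnitude below a 1-2 cm bias, which supports the suspicion that format precision is not the limiting factor.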

Does anyone have any ideas for what could be going on?


Aressera said:
For instance, with a 24 bit format, I should theoretically need 256 times less bias than a 16 bit format

Not really. The main problem is probably not depth precision but spatial quantization: the complex geometry inside the frustum pyramid of a single texel is approximated by one depth value.
To reduce this quantization, you need to increase the shadow map resolution, which makes the pyramids narrower.
Sadly : )
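A rough estimate makes this point concrete: for a surface tilted at angle θ to the light direction, the single depth value stored for a texel can differ from the true surface depth anywhere inside that texel by up to about texel_world_size · tan(θ), independent of the depth format. A sketch with hypothetical numbers (a 13 m wide cascade, a 2048×2048 shadow map, a 45° slope; none of these come from the thread except the 13 m range):

```python
import math

# Worst-case depth error from representing a tilted surface with one
# depth value per shadow-map texel. Numbers are assumed for illustration.
cascade_width_m = 13.0        # from the first cascade in the thread
sm_resolution = 2048          # hypothetical shadow map size
slope_deg = 45.0              # hypothetical surface tilt to the light

texel_world_size = cascade_width_m / sm_resolution       # ~6.3 mm
depth_error = texel_world_size * math.tan(math.radians(slope_deg))
print(f"texel size: {texel_world_size * 1000:.1f} mm, "
      f"worst-case depth error: {depth_error * 1000:.1f} mm")
```

The resulting error is on the order of several millimeters, which is in the same ballpark as the 1-2 cm bias observed, unlike the sub-millimeter steps of any of the depth formats.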

If you want really high accuracy, something like this might be needed:

https://ktstephano.github.io/rendering/stratusgfx/svsm

Or ray tracing.

JoeJ said:
you need to increase SM resolution to make the pyramids narrower.

I realized this after I made the post (I edited the post, but the edit didn't stick due to the crappy server). By increasing the resolution, you reduce the size of the stair-stepping in the depth map and therefore need less bias. I'm now getting decent, consistent results by scaling the bias by the size of a texel in world space.
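A bias heuristic along these lines might look like the following sketch. The idea of scaling by the texel's world-space size is from the post above; the slope term, the constants, and the clamping are my own hypothetical choices, not Aressera's actual implementation:

```python
import math

def shadow_bias(texel_world_size, n_dot_l,
                k_const=0.5, k_slope=1.0, max_bias=0.01):
    """Bias scaled by the shadow-map texel footprint in world space.

    n_dot_l is dot(N, L); it is clamped away from zero so the slope
    term does not blow up at grazing angles. All constants here are
    hypothetical tuning values.
    """
    n_dot_l = max(n_dot_l, 0.05)
    tan_slope = math.sqrt(max(0.0, 1.0 - n_dot_l * n_dot_l)) / n_dot_l
    bias = texel_world_size * (k_const + k_slope * tan_slope)
    return min(bias, max_bias)

# First cascade: 13 m range across a hypothetical 2048-texel map,
# surface at 45 degrees to the light.
print(shadow_bias(13.0 / 2048, n_dot_l=math.cos(math.radians(45.0))))
```

Because the texel size doubles with each cascade (for typical cascade splits) and halves when the shadow map resolution doubles, a bias computed this way adapts automatically, matching the observation that higher resolution needs less bias.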

This topic is closed to new replies.
