For integer depth buffers:
Bias = scale * max(|dzdx|, |dzdy|) + offset * 2^(-bits_in_z_format)
For floating-point depth buffers:
Bias = scale * max(|dzdx|, |dzdy|) + offset * 2^(exponent(max_z_in_primitive) - mantissa_bits_in_z_format)
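For context, here is how I am currently setting these values; a minimal sketch assuming D3D11, where (if I am reading the docs right) DepthBias corresponds to the "offset" term and SlopeScaledDepthBias to the "scale" term above (the actual numbers are just placeholders):

// Rasterizer state bound while rendering the scene into the shadow map.
// My understanding (possibly wrong): the hardware computes the bias from
// these two values per the formulas above and adds it to each fragment's
// depth before it is written to the depth buffer.
D3D11_RASTERIZER_DESC rsDesc = {};
rsDesc.FillMode             = D3D11_FILL_SOLID;
rsDesc.CullMode             = D3D11_CULL_BACK;
rsDesc.DepthClipEnable      = TRUE;
rsDesc.DepthBias            = 100000;  // placeholder: the integer "offset" term
rsDesc.DepthBiasClamp       = 0.0f;
rsDesc.SlopeScaledDepthBias = 1.0f;    // placeholder: the "scale" term

ID3D11RasterizerState* shadowRS = nullptr;
device->CreateRasterizerState(&rsDesc, &shadowRS);
context->RSSetState(shadowRS); // bound only for the shadow-map pass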
I am a bit confused about something, however. Is this a calculation I implement inside my HLSL code when sampling the depth map for shadows, or does it happen outside of the HLSL code entirely?
I assume dzdx/dzdy come from cameraPos - vertexPos, so this would be done in the shader, correct?
Since these are render states set outside of the shader, how do they feed into the final calculation in my shader that actually applies the shadow? What are they doing to the rendering of the primitives?
I think I am just confused about which parts of these calculations I am supposed to implement inside the shader and which happen outside of it.
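For reference, this is roughly the shadow lookup in my pixel shader right now; it applies no bias of its own, which is why I am unsure whether the render-state bias is supposed to cover that (shadowMap, shadowSampler, and lightSpacePos are just names from my own code):

Texture2D shadowMap : register(t1);
SamplerComparisonState shadowSampler : register(s1);

float ComputeShadow(float4 lightSpacePos)
{
    // Perspective divide, then remap NDC xy to [0,1] texture space.
    float3 proj = lightSpacePos.xyz / lightSpacePos.w;
    float2 uv = proj.xy * float2(0.5f, -0.5f) + 0.5f;

    // Hardware depth compare: 1 = lit, 0 = shadowed. No manual bias here;
    // my assumption is the DepthBias / SlopeScaledDepthBias render states
    // already offset the depths stored in the shadow map.
    return shadowMap.SampleCmpLevelZero(shadowSampler, uv, proj.z);
}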