whattapancake

D3D11 depth texture seems odd and inaccurate



I'm working with SharpDX. I've worked pretty extensively with OpenGL, and so far my project with D3D11 has worked exactly as expected. Now, however, I'm working on a depth buffer that I can also bind to a shader. I've got the shader resource view bound, and it appears to be passed in correctly. The problem is the range of the values: when I view the image in the graphics debugger, the entire depth buffer is between 0.987 and 0.997. The range does change as the camera moves, and yes, I understand that depth is not linear in view space. The issue is that with my near and far planes set at 0.1 and 200.0, the whole scene (even at 10-ish meters) sits around 0.99 in depth. The only way I can get any lower values is to have the camera directly intersect a triangle, and even then it still fades to solid white very rapidly. The issue persists even if I don't bind the shader resource view. Here's the code that creates the depth texture:

var depthTexture = new D3D11.Texture2D(renderer.GetDevice(),
    new D3D11.Texture2DDescription()
    {
        // Typeless, so the same resource can be viewed as D32_Float by the
        // depth-stencil view and as R32_Float by the shader resource view.
        Format = SharpDX.DXGI.Format.R32_Typeless,
        ArraySize = 1,
        MipLevels = 1,
        Width = Camera.GetWidth(),
        Height = Camera.GetHeight(),
        SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
        Usage = D3D11.ResourceUsage.Default,
        // Bound both as a depth target and as a shader input.
        BindFlags = D3D11.BindFlags.DepthStencil | D3D11.BindFlags.ShaderResource,
        CpuAccessFlags = D3D11.CpuAccessFlags.None,
        OptionFlags = D3D11.ResourceOptionFlags.None
    });

Here's the code that creates the depth stencil view:

D3D11.DepthStencilViewDescription depthDesc = new D3D11.DepthStencilViewDescription()
{
    // Reinterprets the R32_Typeless resource as 32-bit float depth.
    Format = SharpDX.DXGI.Format.D32_Float,
    Dimension = D3D11.DepthStencilViewDimension.Texture2D,
    Flags = D3D11.DepthStencilViewFlags.None,
};
// The view itself, created from the description above.
var depthStencilView = new D3D11.DepthStencilView(renderer.GetDevice(), depthTexture, depthDesc);

Here's the code that creates the Shader Resource View:

gBufferShaderViews[2] = new D3D11.ShaderResourceView(renderer.GetDevice(), depthTexture, new D3D11.ShaderResourceViewDescription() {
    // Reinterprets the same typeless resource as a readable float texture.
    Format = SharpDX.DXGI.Format.R32_Float,
    Dimension = SharpDX.Direct3D.ShaderResourceViewDimension.Texture2D,
    Texture2D = new D3D11.ShaderResourceViewDescription.Texture2DResource()
    {
        MipLevels = 1,
        MostDetailedMip = 0
    }
});

And in case it's relevant, here's some code that seems related:

viewport = new Viewport(0, 0, Width, Height);
viewport.MaxDepth = 1.0F;
viewport.MinDepth = 0.0F;
d3dDeviceContext.Rasterizer.SetViewport(viewport);

I've tried using R24G8_Typeless / R24_UNorm_X8_Typeless as the formats as well; it had exactly the same issue, in exactly the same range. Is there something I'm missing here? Could it have to do with the projection matrix? I'm using the Matrix.PerspectiveFovLH function built into SharpDX; here's the call that builds it:

projection = Matrix.PerspectiveFovLH(fov * 0.0174533F, aspect, zNear, zFar); // 0.0174533 = pi / 180 (degrees to radians)

Setting aside the depth texture, everything renders fine, with no z-fighting or other depth artifacts.

Thanks in advance for any help. If more code is needed, just let me know and I'll post it.


> When I view the image in the graphics debugger, the entire depth buffer is between 0.987 and 0.997.

For a 24-bit z-buffer there are about 17 bits of precision between those two values ((0.997 - 0.987) * 2^24 ~= 168,000 ~= 2^17.4 distinct steps), which is plenty for a z-buffer to work without z-fighting.

> with my near and far planes set at 0.1 and 200.0, the whole scene (even at 10-ish meters) sits around 0.99 in depth

That is entirely as expected. Given those numbers and a 24-bit buffer, at 10 units the z-buffer value will be 16617752/16777216, or ~0.9905.
You can check the math :)
a = zFar / (zFar - zNear)           = 200 / 199.9            ~=  1.00050
b = (zFar * zNear) / (zNear - zFar) = 20 / (-199.9)          ~= -0.10005
new_z = a + b / z                   = 1.00050 - 0.10005 / 10 ~=  0.99050
z_buffer_value = new_z * 2^24       ~= 16617752
You don't have a problem. Everything is working as normal.
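If you want to sanity-check this yourself, here's a minimal C# sketch of the same math (the zNear/zFar constants match the post; the program itself is illustrative, not from the original code):

// Computes the value a standard LH projection writes to the depth buffer
// for a handful of view-space depths, using the post's near/far planes.
using System;

class DepthCheck
{
    static void Main()
    {
        const float zNear = 0.1f, zFar = 200.0f;

        float a = zFar / (zFar - zNear);           // ~  1.00050
        float b = (zFar * zNear) / (zNear - zFar); // ~ -0.10005

        foreach (float z in new[] { 0.5f, 1.0f, 10.0f, 50.0f, 200.0f })
        {
            float depth = a + b / z; // value written to the depth buffer
            Console.WriteLine($"view z = {z,5:0.0}  ->  depth = {depth:0.000000}");
        }
    }
}

At z = 10 this prints ~0.990495, right in the middle of the 0.987 to 0.997 range seen in the debugger capture.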

> and yes, I understand that depth is not linear in view space

This thread is a good illustration of just how extreme that non-linearity is in practice.

The recommended way to mitigate this is to use a 32-bit floating-point depth buffer, and to construct your projection matrix so the far plane maps to 0.0 after projection and the near plane maps to 1.0 (the opposite of the traditional convention). The reason this works is that floating point is also non-linear: it has more precision close to zero. The two non-linear curves then mostly cancel out. The "reversed" projection matrix pushes most depth values very close to zero, and the floating-point format dedicates most of its bits to values close to zero, so you end up with close-to-linear precision across the entire depth range.
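If you want to try that with the setup above, here's a rough sketch of the usual changes. The R32_Typeless / D32_Float texture from the original post is already a 32-bit float depth buffer, so only the projection, the depth test, and the clear value need to change. This assumes the renderer, d3dDeviceContext, and depthStencilView objects from the earlier snippets, and that PerspectiveFovLH accepts the swapped arguments without validating their order:

// 1. Swap near/far in the projection so far -> 0.0 and near -> 1.0.
projection = Matrix.PerspectiveFovLH(fov * 0.0174533F, aspect, zFar, zNear);

// 2. Flip the depth test: nearer fragments now have LARGER depth values.
var reversedDepthState = new D3D11.DepthStencilState(renderer.GetDevice(),
    new D3D11.DepthStencilStateDescription()
    {
        IsDepthEnabled = true,
        DepthWriteMask = D3D11.DepthWriteMask.All,
        DepthComparison = D3D11.Comparison.Greater, // instead of Less
    });
d3dDeviceContext.OutputMerger.SetDepthStencilState(reversedDepthState);

// 3. Clear depth to 0.0 instead of 1.0, since 0.0 is now "farthest".
d3dDeviceContext.ClearDepthStencilView(depthStencilView,
    D3D11.DepthStencilClearFlags.Depth, 0.0f, 0);

With this in place, most of the scene will sit near 0.0 instead of 0.99, and the available precision is spread far more evenly across the 0.1 to 200.0 range.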
