Does D3D12 default to a logarithmic depth buffer?

1 comment, last by Adam Miles 6 years, 6 months ago

I was trying to debug some z-fighting issues in my D3D12 application. I was using a depth stencil with the D32_FLOAT format, and my camera was set to a perspective projection with the near plane at 0.0001f and the far plane at 20.0f. The camera was 5 units away and the 3D model in question was approximately 1 unit deep. I was experiencing z-fighting, which I found strange since the camera range was so narrow. When I used the graphics debugger, I found that the depth buffer only used values in the range of 0.9999 to 1.0, which seems like a really narrow range given how the camera was set up. Changing the near plane to 0.01f fixed the z-fighting, but the z-buffer is still only utilizing 0.998 to 1.0. I thought that depth buffers were linear by default; have they changed to logarithmic, or how is the depth calculated?


With a perspective projection, depth has always been non-linear; nothing has changed in D3D12, you probably just hadn't noticed it before.
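You can see this non-linearity by evaluating the standard D3D perspective depth mapping yourself. Here's a minimal sketch (not from the thread) using the usual formula for a left-handed projection with a [0,1] depth range, depth(z) = f(z - n) / (z(f - n)), plugged with your original near/far values:

```cpp
#include <cstdio>
#include <initializer_list>

// Standard D3D-style perspective depth mapping (left-handed, [0,1] depth):
// depth(z) = f * (z - n) / (z * (f - n))
float NdcDepth(float z, float n, float f)
{
    return f * (z - n) / (z * (f - n));
}

int main()
{
    const float n = 0.0001f, f = 20.0f; // the original poster's values
    for (float z : { 0.001f, 0.01f, 0.1f, 1.0f, 5.0f, 20.0f })
        printf("view z = %6.3f -> depth = %.7f\n", z, NdcDepth(z, n, f));
    // With n = 0.0001, everything farther than ~0.1 units already maps
    // above 0.999, which is why the debugger shows the whole scene
    // packed into 0.9999+.
}
```

At view-space z = 5 (where your model sits), this evaluates to roughly 0.999985, matching what you saw in the graphics debugger: almost all of the depth range is spent on the first fraction of a unit in front of the near plane.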

If your units are metres, then having a near clip plane set to 0.1 millimetres is excessively close. Even your new 1cm near clip is closer than you probably need. Try to keep the ratio between the near and far clip planes as small as possible.

The trick you should start using is the reverse depth buffer trick (Google "Reverse Depth Buffer"). This should significantly reduce any z-fighting you're getting, even with larger near/far ratios.

Here's one such link: https://developer.nvidia.com/content/depth-precision-visualized
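For reference, here's a minimal sketch of what a reverse-Z setup looks like in D3D12 with DirectXMath. The function name and parameter values are illustrative, not from the thread; the D32_FLOAT format you're already using is exactly the one that benefits from this trick, since it spreads float precision evenly across the range:

```cpp
#include <d3d12.h>
#include <DirectXMath.h>
using namespace DirectX;

// Reverse-Z setup sketch (illustrative names, not the poster's code).
void BuildReverseZState(float fovY, float aspect, float nearZ, float farZ,
                        XMMATRIX& projOut, D3D12_DEPTH_STENCIL_DESC& dsOut)
{
    // 1) Swap near/far in the projection so the near plane maps to depth 1.0
    //    and the far plane to 0.0.
    projOut = XMMatrixPerspectiveFovLH(fovY, aspect, farZ, nearZ);

    // 2) Invert the depth test: nearer fragments now have LARGER depth values.
    dsOut = {};
    dsOut.DepthEnable    = TRUE;
    dsOut.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ALL;
    dsOut.DepthFunc      = D3D12_COMPARISON_FUNC_GREATER;

    // 3) Remember to clear depth to 0.0f instead of 1.0f each frame, e.g.:
    // cmdList->ClearDepthStencilView(dsv, D3D12_CLEAR_FLAG_DEPTH, 0.0f, 0, 0, nullptr);
}
```

The reason this works so well with a floating-point depth format is that floats have far more precision near 0 than near 1; reversing the mapping puts that dense precision at the far end of the scene, where the perspective divide would otherwise crush everything together.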

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group
