How precise is the (standard) depth buffer for position reconstruction?

Hi, I just finished my deferred renderer and now I'd like to save a render target (I currently use one RT for a customized depth buffer). I'm thinking I could sample the standard depth buffer instead.
However, I'm not sure, since some people say the depth buffer isn't precise enough, and that it's not linear. I understand the non-linear part, but I don't know whether it's precise enough for position reconstruction.
I believe I read somewhere that an FPS game uses the standard depth buffer for position reconstruction, but since it's an FPS (where there isn't much depth in a first-person view and the far plane won't be that far anyway), precision is less of a problem there. Has anyone used the standard depth buffer for this kind of thing? How did it look compared to a customized depth buffer?
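Just so we're talking about the same thing, here's roughly how I understand the linearization step. This is only a sketch assuming a standard OpenGL perspective projection, and the names are placeholders for my own camera values:

```cpp
// Sketch: convert a depth-buffer sample back to linear view-space distance,
// assuming a standard OpenGL perspective projection.
float LinearizeDepth(float depthSample, float nearPlane, float farPlane)
{
    float zNdc = depthSample * 2.0f - 1.0f;   // depth buffer [0,1] -> NDC [-1,1]
    return (2.0f * nearPlane * farPlane) /
           (farPlane + nearPlane - zNdc * (farPlane - nearPlane));
}
```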

Please enlighten this n00b so he can get back to work on his project soon. (I'm stuck at this point.)

Best regards,
Bow Vernon
I'd guess that nearly every console game out there that implements deferred rendering does it by sampling the hardware depth buffer. There are definitely some precision problems as you get further from the camera, but it's not worth using up the extra memory and bandwidth required to explicitly write out depth. On PC it's probably more of a mixed bag, due to more memory/bandwidth being available, as well as API restrictions.
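To make that concrete, here's a rough sketch of the reconstruction; this is the math that would run per pixel in the lighting shader, written out as plain C++ for readability. It assumes a standard symmetric OpenGL-style perspective projection, and all the names are placeholders rather than code from any particular engine:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Sketch: reconstruct a view-space position from a hardware depth-buffer
// sample and the pixel's screen-space texture coordinates. Assumes a
// standard symmetric perspective projection with the camera looking down -Z.
Vec3 ReconstructViewPos(float depthSample,        // depth-buffer value in [0,1]
                        float u, float v,         // screen-space texcoords in [0,1]
                        float nearPlane, float farPlane,
                        float fovY, float aspect)
{
    // Undo the non-linear depth mapping to get linear distance from the camera.
    float zNdc = depthSample * 2.0f - 1.0f;
    float linearZ = (2.0f * nearPlane * farPlane) /
                    (farPlane + nearPlane - zNdc * (farPlane - nearPlane));

    // NDC xy in [-1,1].
    float ndcX = u * 2.0f - 1.0f;
    float ndcY = v * 2.0f - 1.0f;

    // Scale by the frustum extents at that depth.
    float tanHalfFov = std::tan(fovY * 0.5f);
    Vec3 p;
    p.x = ndcX * linearZ * tanHalfFov * aspect;
    p.y = ndcY * linearZ * tanHalfFov;
    p.z = -linearZ;   // OpenGL view space looks down the negative Z axis
    return p;
}
```

In a shader, u/v come from the full-screen pass's texture coordinates and depthSample from the bound depth texture; the precision problems mentioned above show up as error in linearZ far from the camera.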

If you're interested in precision, I made a blog post examining the resulting error from a bunch of different depth/precision formats.
You may also find this interesting: Logarithmic Depth Buffer
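For what it's worth, the core of that logarithmic depth trick is only a couple of lines. Here's a sketch of the mapping, shown as plain C++ (in GLSL it would go at the end of the vertex shader); "C" is the usual tuning constant and the names are placeholders:

```cpp
#include <cmath>

// Sketch: logarithmic depth, which distributes precision far more evenly
// across a large depth range than the standard hyperbolic mapping.
// "w" is the clip-space w (the view-space distance for a perspective
// projection) and "C" is a tuning constant, often just 1.0.
float LogarithmicDepth(float w, float farPlane, float C = 1.0f)
{
    // Result is in NDC [-1,1]. In a vertex shader you would multiply this
    // by w before writing it out, so the hardware's perspective divide
    // cancels the w again.
    return 2.0f * std::log(C * w + 1.0f) / std::log(C * farPlane + 1.0f) - 1.0f;
}
```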
You won't reconstruct the position 100% exactly, but I can't think of an effect that would need a more precise position.

There are many games apart from FPSs that use it (GTA4, for example). I also tried it myself in FPS- and RTS-type games and it works fine without any tricks like logarithmic depth buffers.
Not related to the actual depth precision, but take a look at this page (apologies if it's not relevant for you):

http://aras-p.info/texts/D3D9GPUHacks.html

which lists an important caveat regarding Direct3D, the INTZ depth format, and AMD GPUs: there is a performance penalty for doing depth testing and sampling from the INTZ depth buffer at the same time.

However, according to my experiments this does not seem to be an issue on AMD GPUs when running WinXP, only when running Vista or Windows 7.
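In case it helps, here's a rough sketch of how the INTZ depth texture from that page gets created and hooked up in D3D9; variable names are placeholders and error handling is trimmed:

```cpp
#include <d3d9.h>

// FOURCC format code for the INTZ hack (readable depth buffer on D3D9 hardware).
const D3DFORMAT FOURCC_INTZ = (D3DFORMAT)MAKEFOURCC('I', 'N', 'T', 'Z');

// Sketch: create a depth buffer as an INTZ texture so it can later be bound
// and sampled like a normal texture.
IDirect3DTexture9* CreateSampleableDepthBuffer(IDirect3DDevice9* device,
                                               UINT width, UINT height)
{
    IDirect3DTexture9* depthTex = NULL;
    if (FAILED(device->CreateTexture(width, height, 1, D3DUSAGE_DEPTHSTENCIL,
                                     FOURCC_INTZ, D3DPOOL_DEFAULT, &depthTex, NULL)))
        return NULL;

    // Bind level 0 as the depth-stencil surface for normal rendering...
    IDirect3DSurface9* depthSurf = NULL;
    depthTex->GetSurfaceLevel(0, &depthSurf);
    device->SetDepthStencilSurface(depthSurf);
    depthSurf->Release();

    // ...and later bind the texture itself to sample depth in a shader:
    //   device->SetTexture(samplerIndex, depthTex);
    // (per the caveat above, avoid depth testing and sampling at the same
    // time on AMD GPUs).
    return depthTex;
}
```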
Anyway, I tried it and it worked, but there's slight banding. What bothers me is that the banding becomes VERY visible whenever I rotate or translate my camera. Has anyone had the same experience? I'll post a screenshot later, as I'm going on an outing with my friends tonight.

[Edited by - Bow_vernon on November 11, 2010 5:43:44 AM]
Sorry for bumping, but I found a simple solution that removes the banding artifacts altogether: simply set the filter of the depth texture to GL_NEAREST (it was set to GL_LINEAR). Anyway, thanks for your time, and a moderator can flag this thread as "solved". Thanks, now back to coding :)
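For anyone who finds this later, the fix amounts to something like this (a sketch; the texture handle name is a placeholder for whatever object holds your depth texture):

```cpp
#include <GL/gl.h>

// Force point sampling on the depth texture so neighbouring depth values
// are never blended together when the buffer is sampled.
void UseNearestFilteringForDepth(GLuint depthTexture)
{
    glBindTexture(GL_TEXTURE_2D, depthTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
```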
