patw

Writing Z-Buffer Values


Recommended Posts

I've been thinking about the value that gets written to the z-buffer, and I have some questions I'm hoping someone can shed some light on. Normally we output from a vertex shader:

out_position = mul(WorldViewProjection, in_position);

and then, AFAIK, the value that gets written to the z-buffer is out_position.z / out_position.w, which makes the stored value screen-space Z. The GPU doesn't actually care what the value of Z is, so long as all polygons that need z-testing produce their values for z in the same way. Screen-space Z is not particularly useful for post-processing, though; the standard storage for that is linear eye-space depth (Crytek style).

So shouldn't it be not only possible, but desirable, to output something other than screen-space Z to the depth buffer, provided you plan on running entirely on a programmable pipeline rather than the fixed-function one? For hardware/APIs that can read back a depth buffer, this would make life much more awesome. And if you couldn't read back the depth, you'd be calculating and outputting this value to a render target anyway, so it doesn't impact performance at all.

Writing out depth values from the pixel shader is bad, so I guess the idea would be to pre-multiply the output of the vertex shader by W? Has anyone messed around with this?
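Here is a minimal sketch of that idea in HLSL-style code (WorldView, FarPlane, and the positive-z view space are my assumptions; only WorldViewProjection and in_position come from the post above):

float4x4 WorldViewProjection;
float4x4 WorldView;   // assumption: transform into view space, for eye-space z
float    FarPlane;    // assumption: far clip distance, to normalize depth to 0..1

float4 main(float4 in_position : POSITION) : POSITION
{
    // Standard path: the rasterizer divides by w, so the depth buffer
    // receives out_position.z / out_position.w (non-linear screen-space Z).
    float4 out_position = mul(WorldViewProjection, in_position);

    // The pre-multiply-by-W idea: write a linear depth times w into z, so the
    // hardware's divide by w leaves a linear 0..1 value in the depth buffer.
    float eyeZ = mul(WorldView, in_position).z;
    out_position.z = (eyeZ / FarPlane) * out_position.w;

    return out_position;
}

One caveat: the hardware interpolates z linearly in screen space, so z/w produced this way is only exactly linear at the vertices, which relates to the large-polygon issue mentioned further down in the thread.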

There is a reason the z value stored in the depth buffer is not linear: the depth buffer (even at 32 bits) is not precise enough to resolve lots of small depth differences if you have a fairly large viewing range along the z axis. The perspective divide ensures that the depth buffer is much more precise close to the near plane (where you would otherwise spot any small z artifact instantly) and pushes the loss of precision (and the resulting render errors such as Z-fighting) out toward the far plane.
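To make the non-linearity concrete, here is a small sketch of the post-divide depth for a standard D3D-style projection (the near/far values below are hypothetical, not from the post):

float NonLinearDepth(float z, float n, float f)
{
    // Depth-buffer value as a function of eye-space z, near plane n, far plane f:
    // 0 at z == n, 1 at z == f, but heavily biased toward the near plane.
    return f / (f - n) - (f * n) / ((f - n) * z);
}

For example, with n = 0.1 and f = 1000 this already reaches about 0.5 at z ≈ 0.2, so half of the representable depth range is spent on a tiny slice of the scene right in front of the near plane, where the extra precision is most visible.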

Still, it is perfectly valid to manipulate the z value in the vertex shader to compensate for the perspective divide, but you should take into consideration that this can hurt your depth buffer precision.

And as you already pointed out, writing a depth value from the pixel shader is bad because it automatically disables early-z culling, which comes to your performance rescue in scenes with lots of overdraw when rendering front to back.

Finally, rendering a linear depth to a separate render target does not hurt too much. For post-processing effects you are already rendering to a couple of render targets, so pumping your own depth into a texture that you can directly use as input in another post-processing step, without reading the depth values back from a resource, can be done without a performance penalty.
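A minimal sketch of that approach (HLSL-style; FarPlane and the eyeDepth interpolator are assumed names, with eyeDepth being view-space z passed down from the vertex shader): the hardware depth buffer is still filled by the usual non-linear z/w, while the pixel shader writes linear depth into its own render target.

float FarPlane; // assumption: far clip distance, to normalize depth to 0..1

float4 main(float eyeDepth : TEXCOORD0) : COLOR0
{
    // Store normalized linear eye-space depth in a floating-point render target;
    // the depth buffer keeps its usual non-linear z/w for depth testing.
    return float4(eyeDepth / FarPlane, 0.0, 0.0, 0.0);
}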

Like WaterWalker said, the issue is precision. Even so, depending on what you are trying to do, it may not matter whether you are using the non-linear depth or not. For example, in SSAO you can account for the non-linearity in your calculations automatically without processing the depth value any further: by modifying your sampling kernel size based on the depth value, you effectively neutralize the non-linearity (see the sketch below).
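An illustrative sketch of that kernel-size adjustment (all names here are hypothetical, not from the post): scaling the screen-space sample offsets by the reciprocal of the sampled depth so the kernel covers a roughly constant world-space footprint.

float BaseRadius; // hypothetical tuning value: kernel radius at unit depth

float2 ScaleSampleOffset(float2 offset, float sceneDepth)
{
    // Nearby pixels get a larger screen-space kernel, distant pixels a smaller one.
    return offset * (BaseRadius / sceneDepth);
}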

Also like WaterWalker said, writing depth to a single floating-point render target will let you fill the z-buffer with the appropriate non-linear values while writing the linear depth to the RT. This allows for an early-z pass and gives you the linear depth for all the other goodness you can come up with.

It is indeed possible to store linear depth values in a z-buffer...see this.

Personally I just do things the Crytek way and write out linear depth to a floating-point RT.

I went ahead and implemented this, and the only issues I have seen so far are with super-huge polygons (like two huge triangles providing a default 'floor'). I am doing pre-pass rendering, so devoting half my G-buffer to storing depth, when I have access to the depth buffer, seems kind of silly. Since I am already trusting the linear depth value to provide lighting values, I think it is probably worth recovering half my G-buffer; that, combined with a readable stencil area, gives me space for at least a velocity vector.

In my experience with pre-pass, I have found that 32 bits is sufficient for linear depth, but I am curious to know how it works out for D24S8 and D24FS8. I am also curious about Hi-Z impact.

EDIT: I later found this from Humus (http://www.humus.name/index.php?page=News&ID=255). Precision varies from case to case, but the reasons discussed at that link outline why linear depth is a no-go for the depth buffer.

[Edited by - patw on June 28, 2009 2:04:10 PM]
