Why non-linear depth buffer?

It seems that to get good results with newer techniques like SSAO and soft particles, you have to convert your depth buffer to linear. My question, then, is why are we using non-linear depth buffers in the first place? I don't think the results could be that different. I just implemented a linear depth buffer for SSAO and deferred lighting in a project, and I'm about to do the same in my game engine. Thoughts?


Differentiating depth between close objects is more important than differentiating depth between far objects.

If two objects are at 2.5 and 3 depth units and they're right in front of you, you'll be able to tell the difference between 2.5 and 3. If they're 10000 depth units away, you probably won't be able to tell whether they're 2 or 20 units apart, because they're probably tiny on screen. Hence, it's better to have more resolution up close, where it matters.
outRider has mentioned the chief reason, but also read this:
http://www.humus.name/index.php?page=News&ID=255
Recently I've been working on converting Z textures (non-linear) into linear maps via look-up tables. In the process, I made a little Excel spreadsheet that, given near and far plane values, graphs the resulting distribution of precision in the Z-buffer.
The results of these graphs scared me quite a bit: in a common case in our game engine, 90% of the precision was used to store the first 10% of possible depth values.

Since I'm currently working with very few bits per pixel, I have to be very careful about the near/far plane values in order to get a usable linear map out of the Z-buffer. If I had the choice of using a W-buffer, I'd definitely give it a try.
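To make that concrete, here's a minimal C++ sketch (the near/far planes and bit depth are assumptions, not the poster's actual values) that computes the kind of number the spreadsheet graphs: the fraction of depth codes spent on the first 10% of the view range, using a standard D3D-style 0..1 post-projection depth.

```cpp
// Sketch: how much of the depth buffer's precision goes to the first
// 10% of the view range, for given near/far planes.
#include <cstdio>
#include <cmath>

int main() {
    const double n = 1.0, f = 10000.0;   // hypothetical near/far planes
    const double bits = 24.0;            // common depth-buffer precision
    const double codes = std::pow(2.0, bits);

    // Post-projection depth for a view-space depth z (D3D convention):
    // d = f * (z - n) / (z * (f - n)), which maps n -> 0 and f -> 1.
    auto ndcDepth = [&](double z) { return f * (z - n) / (z * (f - n)); };

    // View-space depth at 10% of the way from near to far.
    const double z10 = n + 0.1 * (f - n);
    const double d10 = ndcDepth(z10);

    std::printf("Fraction of depth codes used by first 10%% of range: %.4f\n", d10);
    std::printf("That is about %.0f of %.0f codes.\n", d10 * codes, codes);
}
```

With n = 1 and f = 10000 this prints roughly 0.999, i.e. even more skewed than the 90%/10% case described above.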
I have implemented linear depth writes on a system where I could read back the Z-buffer, and I do not recommend it. In my opinion, the precision issue is a red herring. The real reason, early-Z optimizations, is discussed in the Humus link.
Quote: While W is linear in view space, it's not linear in screen space. Z, which is non-linear in view space, is on the other hand linear in screen space.


Hmmm. Isn't it the other way around?

In regard to DirectX, view space is after the world/view matrix, right? So any position's z value increases linearly as it gets further away from view.pos.

Whereas screen space is after the world/view/projection matrix, right? That is, the result after pos.z/pos.w, which is a non-linear curve for the z value.

What am I missing here? :P
Perhaps now that's more of a reason, but earlier hardware never had early-out.
Think of a 16-bit depth buffer with near/far values of 1 and 10000 (pretty typical).
With linear depth that's only accurate to ~0.15 units, which is visually going to lead to terrible z-fighting on screen.
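For anyone who wants to check those numbers, a quick sketch (assuming the 16-bit buffer and 1/10000 planes from the post, and the same D3D-style depth mapping as above) comparing the per-code step size of a linear buffer against the usual reciprocal one:

```cpp
// Sketch: smallest resolvable view-space step per depth code, for a
// linear 16-bit buffer vs. a 1/z-style one, near and far.
#include <cstdio>

int main() {
    const double n = 1.0, f = 10000.0;
    const double codes = 65536.0;        // 2^16 depth values

    // Linear buffer: the step is uniform across the whole range.
    std::printf("linear step everywhere: %.3f units\n", (f - n) / codes);

    // 1/z-style buffer: d = f*(z - n) / (z*(f - n)), so dd/dz = f*n / (z^2*(f - n)).
    // One depth code therefore spans dz = z^2*(f - n) / (f*n*codes).
    auto step = [&](double z) { return z * z * (f - n) / (f * n * codes); };
    std::printf("1/z step at z = near:   %.6f units\n", step(n));
    std::printf("1/z step at z = far:    %.1f units\n", step(f));
}
```

With these planes, the linear step is ~0.15 units everywhere, while the reciprocal buffer resolves ~0.000015 units at the near plane but only ~1500 units at the far plane.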
Z is nonlinear because perspective-correct rasterization requires linear interpolation of 1/z -- linear interpolation of z itself does not produce the correct results. The hardware must calculate 1/z at each vertex and interpolate it across a triangle, so it's convenient to just write that value to the depth buffer instead of performing an expensive division at every pixel to recover z.

The fact that you get more z precision closer to the near plane is just a side effect and has nothing to do with the motivation behind 1/z interpolation.
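A minimal sketch of what that interpolation looks like in practice (the endpoint values are made up for illustration, and clip-space w stands in for view-space z here): the rasterizer linearly interpolates attr/w and 1/w in screen space, then divides per pixel to recover the perspective-correct value.

```cpp
// Sketch: perspective-correct interpolation via 1/w, compared with
// naive linear interpolation of the attribute itself.
#include <cstdio>

int main() {
    // Two projected vertices: attribute value and clip-space w at each end.
    const double a0 = 0.0, w0 = 1.0;    // near end of the edge
    const double a1 = 1.0, w1 = 10.0;   // far end of the edge

    for (double t = 0.0; t <= 1.0; t += 0.25) {  // t = screen-space fraction
        // Linear interpolation in screen space of attr/w and 1/w...
        double aOverW   = (1 - t) * (a0 / w0) + t * (a1 / w1);
        double oneOverW = (1 - t) * (1.0 / w0) + t * (1.0 / w1);
        // ...then one divide recovers the perspective-correct attribute.
        double correct = aOverW / oneOverW;
        // Naive linear interpolation of the attribute, for contrast.
        double naive = (1 - t) * a0 + t * a1;
        std::printf("t=%.2f  correct=%.3f  naive=%.3f\n", t, correct, naive);
    }
}
```

At t = 0.5 the correct value is ~0.091 rather than the naive 0.5, because most of the far half of the edge is compressed into a few pixels.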
Thanks for clearing that up, Eric.

