It seems that to get good results from newer techniques such as SSAO and soft particles, you're expected to convert your depth buffer to linear. My question, then: why are we using non-linear depth buffers in the first place? I don't see how the results could be any different.
Right now I've implemented a linear depth buffer for SSAO and deferred lighting in a project, and I'm about to do the same for my game engine.
Thoughts?
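For reference, the conversion the OP mentions is usually something like the following. A minimal sketch, assuming a D3D-style projection that maps view-space depth z in [n, f] to a stored depth d in [0, 1]; the function name is mine:

```python
def view_z_from_depth(d, n, f):
    """Invert the D3D-style mapping d = (f / (f - n)) * (1 - n / z).

    Returns the linear view-space depth z in [n, f] for a stored
    depth-buffer value d in [0, 1].
    """
    return n * f / (f - d * (f - n))

# Sanity check: the near and far planes map back to themselves.
print(view_z_from_depth(0.0, 1.0, 1000.0))  # 1.0 (near plane)
print(view_z_from_depth(1.0, 1.0, 1000.0))  # 1000.0 (far plane)
```

Note how lopsided the mapping is: with n = 1 and f = 1000, a stored depth of 0.5 maps back to a view-space z of only about 2, so half of all depth codes are spent on the first couple of units in front of the camera.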

# Why non-linear depth buffer?

Started by dpadam450, Jun 27 2009 11:40 AM

### #2
Members - Reputation: **852**

Posted 27 June 2009 - 11:57 AM

Differentiating depth between close objects is more important than differentiating depth between far objects.

If two objects are at depths of 2.5 and 3 units and they're right in front of you, you'll be able to tell the difference between 2.5 and 3. If they're 10000 units away, you probably won't be able to tell the difference even if they're 2 or 20 units apart, because they're probably tiny on screen. Hence it's better to have more resolution up close, where it matters.

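The effect is easy to quantify. A quick sketch, assuming a D3D-style 0..1 depth mapping and a 24-bit buffer; the near/far values are made up for illustration:

```python
def depth_step(z, n, f, bits=24):
    """Size of one quantization step of view-space depth at distance z,
    for the non-linear mapping d = (f / (f - n)) * (1 - n / z)."""
    eps = 1.0 / (2 ** bits - 1)           # one depth-buffer code
    d = (f / (f - n)) * (1.0 - n / z)     # stored depth at z
    # Invert the mapping one code further along and take the difference.
    z_next = n * f / (f - min(d + eps, 1.0) * (f - n))
    return z_next - z

n, f = 1.0, 10000.0
for z in (2.0, 100.0, 5000.0):
    print(f"z = {z:7.1f}: smallest distinguishable step ~ {depth_step(z, n, f):.4g}")
```

The step size grows roughly with z squared: sub-micron resolution right in front of the camera, but steps of a unit or more out near the far plane.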
### #4
Moderators - Reputation: **30432**

Posted 27 June 2009 - 08:35 PM

Recently I've been working on converting Z textures (non-linear) into linear maps via look-up tables. In the process, I made a little Excel spreadsheet that, given near and far plane values, graphs the resulting distribution of precision in the Z-buffer.

The results of these graphs did scare me quite a bit - in a common case in our game engine, 90% of the precision was used to store the first 10% of possible depth values.

Seeing that I'm currently working in very low bits-per-pixel, I've got to be very careful about the near/far plane values in order to get a usable linear map out of the Z-buffer. If I had the choice of using a W-buffer, I'd definitely give it a try.

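The same experiment is easy to reproduce without a spreadsheet. A sketch, again assuming a D3D-style 0..1 depth mapping; the near/far values here are made up, not the poster's:

```python
def depth_at(z, n, f):
    """Stored (non-linear) depth for view-space distance z."""
    return (f / (f - n)) * (1.0 - n / z)

n, f = 1.0, 1000.0
# Fraction of the depth buffer's code range spent on the first 10%
# of the view-space distance range [n, f].
z10 = n + 0.10 * (f - n)
print(f"depth codes used for z in [{n}, {z10}]: {depth_at(z10, n, f):.1%}")
```

With these particular planes the skew is even worse than the poster's 90/10 figure: the first tenth of the distance range consumes around 99% of the available codes.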
### #5
Members - Reputation: **223**

Posted 28 June 2009 - 08:42 AM

I have implemented linear depth writes on a system where I could read back the Z-buffer, and I do not recommend it. In my opinion, the precision issue is a red herring. The real reason, early-Z optimizations, is discussed in the Humus link.

### #6
Members - Reputation: **191**

Posted 28 June 2009 - 09:09 AM

Quote:

While W is linear in view space it's not linear in screen space. Z, which is non-linear in view space, is on the other hand linear in screen space.

Hmmm, isn't it the other way around?

In regard to DirectX, view space is the result of the world/view matrices, right? So a position's depth increases linearly as it gets further from the view position.

Whereas screen space is the result of the world/view/projection matrices, i.e. what you get after pos.z/pos.w, which is a non-linear curve for the z value.

What am I missing here? :P

### #7
Members - Reputation: **291**

Posted 28 June 2009 - 09:10 AM

Perhaps that's more of a reason now, but earlier hardware never had early-out.

Think of a 16-bit depth buffer with near/far values of 1 and 10000 (pretty typical).

With linear depth that's only accurate to ~0.15 units, which is visually going to lead to terrible z-fighting on screen.

### #8
Crossbones+ - Reputation: **2333**

Posted 01 July 2009 - 01:19 PM

Z is nonlinear because perspective-correct rasterization requires linear interpolation of 1/z -- linear interpolation of z itself does not produce the correct results. The hardware must calculate 1/z at each vertex and interpolate it across a triangle, so it's convenient to just write that value to the depth buffer instead of performing an expensive division at every pixel to recover z.

The fact that you get more z precision closer to the near plane is just a side effect and has nothing to do with the motivation behind 1/z interpolation.

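A small sketch of the interpolation point with made-up numbers: take an edge whose endpoints sit at view-space depths 1 and 3. Halfway along that edge *in screen space*, the perspective-correct depth is the harmonic mean of the endpoints, which you get by interpolating 1/z linearly and inverting, not by interpolating z itself:

```python
z0, z1 = 1.0, 3.0
t = 0.5  # halfway along the edge in *screen space*

# Wrong: linear interpolation of z in screen space.
z_linear = (1 - t) * z0 + t * z1           # 2.0

# Right: interpolate 1/z linearly, then invert.
inv_z = (1 - t) * (1 / z0) + t * (1 / z1)  # (1 + 1/3) / 2 = 2/3
z_correct = 1 / inv_z                      # 1.5

print(z_linear, z_correct)
```

The discrepancy (2.0 vs 1.5) is exactly why the rasterizer interpolates 1/z, and why the value that ends up in the depth buffer is non-linear in view-space distance.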
I work on this stuff: C4 Engine | The 31st | Mathematics for 3D Game Programming and Computer Graphics | Game Engine Gems | OpenGEX