
# Logarithmic Depth Buffer


I assume pretty much every 3D programmer runs into depth buffer issues sooner or later. Especially when doing planetary rendering; the distant stuff can be a thousand kilometers away but you still would like to see fine details right in front of the camera.

Previously I dealt with the problem by splitting the depth range in two, using one part for near geometry and the other for distant geometry. The boundary was floating, somewhere around 5 km: quad-tree tiles up to a certain level used the distant part, while the more detailed tiles that, by the law of LOD, occur nearer the camera used the other part.
Most of the time this worked, but in one case it failed miserably: when a more detailed tile appeared behind a less detailed one.
I was thinking about ways to fix it, grumbling about why we can't have a depth buffer with a better distribution, when it occurred to me that maybe we can.

Steve Baker's document explains common problems with depth buffers (Z-buffers). In short, the stored depth values are proportional to the reciprocal of Z. This gives plenty of precision near the camera but very little off in the distance. The common method is then to move your near clip plane further away, which helps but also brings its own problem, mainly that .. the near clip plane is too far away [rolleyes]

A much better Z-value distribution is a logarithmic one. It also plays nicely with LOD used in large scale terrain rendering.
Use the following equation to modify the depth value after it has been transformed by the projection matrix:
z = log(C*z + 1) / log(C*Far + 1) * w;         //DirectX with 0..1 depth range
or
z = (2*log(C*z + 1) / log(C*Far + 1) - 1) * w; //OpenGL with -1..1 depth range
where C is a constant that determines the resolution near the camera, and the multiplication by w undoes in advance the implicit division by w later in the pipeline.
The resolution at distance x, for a given C and an n-bit depth buffer, can be computed as
Res = log(C*Far + 1) / ( (2^n - 1) * C/(C*x + 1) )

So for example for a far plane at 10,000 km and 24-bit Z-buffer this gives the following resolutions:
|         | 1m     | 10m    | 100m   | 1km   | 10km  | 100km | 1Mm   | 10Mm |
|---------|--------|--------|--------|-------|-------|-------|-------|------|
| C=1     | 1.9e-6 | 1.1e-5 | 9.7e-5 | 0.001 | 0.01  | 0.096 | 0.96  | 9.6  |
| C=0.001 | 0.0005 | 0.0005 | 0.0006 | 0.001 | 0.006 | 0.055 | 0.549 | 5.49 |

All resolutions are in meters.

Along with the better utilization of the z-value space, it also (almost) rids us of the near clip plane.

And here comes the result.

Looking at the nose while keeping an eye on the distant mountains ..

10 thousand kilometers, no near Z clipping and no Z-fighting! HOORAY!

### More details

(at the request of y2kiah)

The C basically changes the resolution near the camera; I used C=1 for the screenshots, which gives a theoretical resolution of 1.9e-6 m. However, the resolution near the camera cannot be fully utilized unless the geometry is also finely tessellated, because depth is interpolated linearly, not logarithmically. On models such as the guy in the screenshots it is perfectly fine to put the camera on his nose, but on models with long strips of polygons with vertices a few meters apart, artifacts from the interpolation can become visible. We will deal with it by requiring a certain minimum tessellation.

Also I think I've read somewhere that some forthcoming generation of hardware will support different modes of interpolation too.

So yes, modifying C changes the resolution near the camera; setting it to a value that gives the largest acceptable resolution may be desirable, to achieve a more linear distribution in the near range and thus minimize the interpolation problem.

The near clip plane can be placed arbitrarily 'near', but not at zero, because of the 1/w division. I have set it to 0.0001 m. This is using a standard perspective projection setup.

### Negative Z artifact fix

Ysaneya suggested a fix for the artifacts occurring with thin or huge triangles when Z goes behind the camera, by writing the correct Z-value at the pixel shader level. This disables fast-Z mode but he found the performance hit to be negligible.

That's pretty cool.

That is really cool, actually. I've never done any explicit z-buffer management. How do you do this in code?

Quote:
 Original post by Ravuya
That is really cool, actually. I've never done any explicit z-buffer management. How do you do this in code?
It really is the one line of code:
z = log(C*z + 1) / log(C*Far + 1) * w

that you use in the vertex shader after the position has been transformed by the model-view-projection matrix. Of course, the 1/log(C*Far + 1) part is constant and can even be baked into the matrix itself.

neat!

can you shed some more light on what varying C between .001, 1, 10, 100 etc. actually does? What is the ideal range of C, and what value did you use in these screenshots? Does setting a higher C basically just shift more of the precision away from the camera at the expense of precision close to the camera?

How close do you place your near plane if you can "almost get rid of it"?

Quote:
 Original post by y2kiah
can you shed some more light on what varying C between .001, 1, 10, 100 etc. actually does?

The C basically changes the resolution near the camera; I used C=1 there, which gives a theoretical resolution of 1.9e-6 m. However, the resolution near the camera cannot be fully utilized unless the geometry is also finely tessellated, because depth is interpolated linearly, not logarithmically. On models such as the guy in the screenshots it is perfectly fine to put the camera on his nose, but on models with long strips of polygons with vertices a few meters apart, artifacts from the interpolation can become visible. We will deal with it by requiring a certain minimum tessellation.

Also I think I've read somewhere that some forthcoming generation of hardware will support different modes of interpolation too.

So yes, modifying C changes the resolution near the camera; setting it to a value that gives the largest acceptable resolution may be desirable, to achieve a more linear distribution in the near range and thus minimize the interpolation problem.

The near clip plane can be placed arbitrarily 'near', but not at zero, because of the 1/w division. I have set it to 0.0001 m. This is using a standard perspective projection setup.
But I think it should be possible to set up the projection matrix in such a way that w always comes out as 1 and the near plane is at zero ... I should try it [smile]

Quote:
 Original post by cameni
Also I think I've read somewhere that some forthcoming generation of hardware will support different modes of interpolation too.

Now that you mention it, I'm surprised that state wasn't introduced to FFP hardware many years ago.

Quote:
 the resolution near the camera cannot be utilized fully as long as the geometry isn't finely tessellated too, because the depth is interpolated linearly and not logarithmically

So does that manifest as z-fighting near the middle of the polygon where the interpolation diverges the most? It seems odd to me that if you're placing your vertex depths on a logarithmic scale, that linear interpolation between the corrected depths would still be good enough.

Sorry I'm just having a hard time picturing the artifact that you describe... trying though

Quote:
 Original post by y2kiah
So does that manifest as z-fighting near the middle of the polygon where the interpolation diverges the most? It seems odd to me that if you're placing your vertex depths on a logarithmic scale, that linear interpolation between the corrected depths would still be good enough.
The problem manifests mainly when one vertex gets behind the camera. The logarithmic function goes into negative values quite rapidly; here's how it looks:

The red curve is for C=0.001 and the blue one is for C=1
You can see that linear interpolation in the blue case, with vertices a few meters apart, will produce quite large errors.

interesting, now I see what you mean.

I plotted resolution with respect to distance for a range of C values to get a better visual picture of the resolution function. It's interesting to actually see how quickly the standard z-buffer precision diverges compared to the logarithmic scale.

C = 0.001 seems like a good "middle ground" value where you still have about half a millimeter of precision near zero depth (assuming your units are meters), which should be more than enough AFAIK, while the depth function plots to a relatively linear curve between -100 and +100. It shouldn't be too difficult to enforce tessellation at sub-200-meter sizes, I would think :)

Quote:
 Original post by y2kiah
C = 0.001 seems like a good "middle ground" value where you still have about half a millimeter of precision near zero depth (assuming your units are meters), which should be more than enough AFAIK, while the depth function plots to a relatively linear curve between -100 and +100. It shouldn't be too difficult to enforce tessellation at sub-200-meter sizes, I would think :)
But you know what? When I use 0.001 instead of 1, it doesn't help with the error at all on a test building with vertices some 5 meters apart [embarrass]
So it will not be that simple ..

Great Idea!

You may have already seen this, but in case not: Ysaneya has found a solution to the problem of lerping the z-values by solving for the correct z-value at the pixel shader level.

Perhaps something like this could be done in the vertex shader rather than calling log per pixel and losing the hierarchical Z.

signZ = sign(z);
z = signZ * (log(C * signZ * z + offset) / log(C * far + offset));


Essentially, for negative z values this inverts the graph given by positive z values.
Cheers,
Martin

P.S. Double posting (I posted on Ysaneya's thread as well) in case people don't visit there.