Z-buffer precision/length?

4 comments, last by Kiz 21 years, 10 months ago
I was wondering: is there a direct link between the near/far clipping planes (e.g. near=0.1f, far=40.0f) and the z-buffer values? If a polygon is drawn at a Z distance of about 40.0f from the camera (right at the edge of the far clipping plane), does that mean its z-buffer pixel values are going to be somewhere around 1.0f? How do you set your z-buffer range? With a 16-bit z-buffer, many near polygons can show artifacts. Does anybody know how to map the z-buffer range to a specific distance from the camera? Example: zbufferlow = 0.00001 (at 0.1 units), zbufferhigh = 1.0 (at 40.0 units). Kiz
Kiz
The z-buffer is dependent on the values of the near and far planes, such that z = 0.0 at the near plane and z = 1.0 at the far plane. You should be aware, however, that the z-buffer is not linearly mapped to the distance between the near and far planes, so a point halfway between them does not land at z = 0.5. This is because most of the precision of the z-buffer is devoted to discriminating objects near the camera (close to the near plane) rather than those that are far away. For this reason, you should not set the near plane too close, because it will suck up all the depth precision for close objects and leave you with very coarse discrimination for far objects (possibly leading to artifacts). This is especially true if you only have a 16-bit buffer. If you really need the exact equation for mapping distance to z, I can look it up; otherwise, I am too lazy to do it now.
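To make the non-linearity concrete, here is a small sketch using the standard OpenGL perspective depth mapping and the near/far values from the original question (the function name is mine, not from the thread):

```python
def window_depth(d, n=0.1, f=40.0):
    """Window-space depth for an eye-space distance d, given near plane n
    and far plane f, under the standard OpenGL perspective mapping:
        z_win = f * (d - n) / (d * (f - n))
    Returns 0.0 at the near plane and 1.0 at the far plane."""
    return f * (d - n) / (d * (f - n))

# The mapping is heavily front-loaded: halfway through the view volume
# in eye space is already almost 1.0 in depth-buffer space.
print(window_depth(0.1))   # 0.0   (near plane)
print(window_depth(20.0))  # ~0.9975, not 0.5!
print(window_depth(40.0))  # 1.0   (far plane)
```

This is why distant polygons z-fight first: almost the entire [0, 1] depth range is spent near the camera, leaving only a sliver for everything far away.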
This is exactly what I needed to confirm; I thought it was nonlinear because polys were showing artifacts only at larger distances from the camera.
It would still be nice, though, if you could find me that formula for calculating the exact z-buffer value from distance... although you've been a great help already.

Thanx.

Kiz
Kiz
You can set the z-buffer bit depth to 32-bit by running in 32-bit color. I would like to know if there's another way myself, though...

I've been curious about this myself. More precisely: does the depth value come from the z value after the perspective divide in the projection?
Keys to success: Ability, ambition and opportunity.
From the OpenGL faq:
http://www.frii.com/~martz/oglfaq/depthbuffer.htm

quote:
After the projection matrix transforms the clip coordinates, the XYZ-vertex values are divided by their clip coordinate W value, which results in normalized device coordinates. This step is known as the perspective divide. The clip coordinate W value represents the distance from the eye. As the distance from the eye increases, 1/W approaches 0. Therefore, X/W and Y/W also approach zero, causing the rendered primitives to occupy less screen space and appear smaller. This is how computers simulate a perspective view.

As in reality, motion toward or away from the eye has a less profound effect for objects that are already in the distance. For example, if you move six inches closer to the computer screen in front of your face, its apparent size should increase quite dramatically. On the other hand, if the computer screen were already 20 feet away from you, moving six inches closer would have little noticeable impact on its apparent size. The perspective divide takes this into account.

As part of the perspective divide, Z is also divided by W with the same results. For objects that are already close to the back of the view volume, a change in distance of one coordinate unit has less impact on Z/W than if the object is near the front of the view volume. To put it another way, an object coordinate Z unit occupies a larger slice of NDC-depth space close to the front of the view volume than it does near the back of the view volume.

In summary, the perspective divide, by its nature, causes more Z precision close to the front of the view volume than near the back.
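As a rough illustration of what this costs with a 16-bit buffer, here is a sketch (again assuming near=0.1 and far=40.0 as in the original question, with the standard OpenGL depth mapping):

```python
# Count how much of the [0, 1] depth range is spent on the first
# eye-space unit past the near plane, with n=0.1 and f=40.0.
n, f = 0.1, 40.0
depth = lambda d: f * (d - n) / (d * (f - n))

# Fraction of the depth range covered by eye distances 0.1..1.0:
front = depth(1.0) - depth(0.1)
print(front)                   # ~0.90 -> about 90% of all depth values

# With a 16-bit buffer, roughly this many of the 65536 distinct depth
# values land in the first 0.9 units, leaving only ~6400 for the
# remaining 39 units of the view volume:
print(round(front * 65535))
```

Pushing the near plane out even a little (say from 0.1 to 1.0) redistributes a huge share of those values toward the far end, which is why the earlier advice not to set the near plane too close matters so much.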


Section 12.050 is also about what you're asking...
----------------Amusing quote deleted at request of owner

