AQ

why not use un-projected z for depth comparisons in z-buffer


Hi, I have had this question for quite some time, and maybe someone can answer it. As I understand it, the z-buffer precision problems occur because the value that gets written into the z-buffer is not z, but 1/z. This happens because, after projection, the z coordinate collapses to a constant (the projection-plane value) and the original z gets encoded in w. The projection stage is therefore followed by a perspective-divide stage, where the projected vertex position is divided by the 4th coordinate, w, giving the correct x and y values and turning the z coordinate into 1/z.
world            projected          after perspective divide
[x y z 1]  --->  [x y 1 z]  ----->  [x/z  y/z  1/z  1]
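To make the divide step concrete, here is a minimal sketch in plain C++ (the Vec4 type and the function name are just my own illustration, not any particular API):

#include <cstdio>

struct Vec4 { float x, y, z, w; };

// Perspective divide: after the projection step has moved the view-space z
// into w, dividing by w yields the screen x/y and a 1/z-derived depth term.
Vec4 perspectiveDivide(const Vec4& v)
{
    return Vec4{ v.x / v.w, v.y / v.w, v.z / v.w, 1.0f };
}

int main()
{
    // A view-space point at z = 5, already run through the simple
    // projection sketched above: [x y z 1] -> [x y 1 z].
    Vec4 projected{ 2.0f, 3.0f, 1.0f, 5.0f };
    Vec4 screen = perspectiveDivide(projected);
    std::printf("%f %f %f\n", screen.x, screen.y, screen.z); // 0.4 0.6 0.2 (= 1/z)
    return 0;
}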
So if 1/z is problematic (is it not?), then why can't the un-projected z be retained as is and forwarded directly to the z-buffer for depth comparisons? Also related to this: when people say w-buffer, what value is actually stored and used? Thanks

Quote:
So if 1/z is problematic (is it not?), then why can't the un-projected z be retained as is and forwarded directly to the z-buffer for depth comparisons?

Because, for a perspective projection, Z doesn't coincide with the distance to the projection point (the camera's eye). That is:
In an ortho projection, where the distances are parallel to the z-axis (in view space), you can use Z values directly. But for a perspective projection, you must consider the distances from the polygons to the projection point (the camera's eye), and these distances are not parallel to the z-axis. To compute them, we transform the perspective projection into an ortho projection by using a relation like this: h(z) = 1 / (1 - Z/Zproj). So, we compare Z*h.

The problem with the z-buffer is that it is a non-linear method. This means it can be very accurate close to the eye and much less accurate off in the distance.

Quote:
Also related to this: when people say w-buffer, what value is actually stored and used?

The w-buffer is a linear method, so it keeps the same precision everywhere, which can be very good for large ranges. On the other hand, close to the eye it is less accurate than the z-buffer.
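To put rough numbers on both behaviours, here is a small sketch (plain C++; I'm assuming the usual 1/z-style depth mapping d = (Zf/(Zf-Zn))*(1 - Zn/z), and the near/far planes are just example values):

#include <cstdio>

// Standard perspective depth mapping: view-space z in [zn,zf] -> d in [0,1].
float depthValue(float z, float zn, float zf)
{
    return (zf / (zf - zn)) * (1.0f - zn / z);
}

int main()
{
    const float zn = 1.0f, zf = 1000.0f;
    // How much of the [0,1] depth range does each slice of the scene use?
    const float slices[][2] = { {1.0f, 10.0f}, {10.0f, 100.0f}, {100.0f, 1000.0f} };
    for (const auto& s : slices)
    {
        float used = depthValue(s[1], zn, zf) - depthValue(s[0], zn, zf);
        std::printf("z in [%6.0f, %6.0f] uses %5.1f%% of the z-buffer range\n",
                    s[0], s[1], used * 100.0f);
    }
    // Prints roughly 90.1%, 9.0%, 0.9%. A w-buffer stores (scaled) z itself,
    // so each slice would instead get a share proportional to its length:
    // about 0.9%, 9.0%, 90.1%.
    return 0;
}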
For more information, read this paper.

(re-edited for a better understanding)

[Edited by - adriano_usp on March 15, 2005 11:35:06 AM]

Thanks for the insight ... that led me to think, and I believe I now understand why we cannot do what I had asked ... please comment on whether my understanding is correct!


Using unprojected z would be plain wrong. Reasons:

We only have the unprojected z value available at the vertices of the polygon. When we scan-convert a polygon, we NEVER have the true z value for each and every pixel. We only have 1/z (or some related value) for each pixel.

Hence, during depth comparisons for every pixel, while we could compare un-projected z coords at the vertices, the only correct value we can use at each pixel is in fact 1/z.

So unless the hardware does an invert and then compares, I think using plain unprojected z would be mathematically wrong - or in fact, it is not even readily available for each pixel without performing additional mathematical operations on the interpolated values.

Perhaps that explains why we have to have 1/z values in the z-buffers.
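A small sketch of that point, assuming a single edge being scan-converted between two vertices (all numbers are made up for illustration): interpolating z linearly in screen space gives the wrong depth, while interpolating 1/z and inverting gives the right one.

#include <cstdio>

int main()
{
    // View-space depths at the two endpoints of an edge:
    const float z0 = 2.0f, z1 = 10.0f;

    // Halfway across the edge *in screen space* (t = 0.5):
    const float t = 0.5f;

    // Wrong: linear interpolation of z in screen space.
    float zLinear = z0 + t * (z1 - z0);                     // 6.0

    // Right: 1/z is linear in screen space, so interpolate it and invert.
    float invZ = (1.0f / z0) + t * (1.0f / z1 - 1.0f / z0); // 0.3
    float zCorrect = 1.0f / invZ;                           // ~3.33

    std::printf("linear z = %f, perspective-correct z = %f\n", zLinear, zCorrect);
    // Note the hardware never needs the invert just to *compare* depths:
    // 1/z is monotonic in z, so comparing the interpolated 1/z values
    // (with the test direction flipped) orders surfaces the same way.
    return 0;
}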

Yes, you are right. But just so no doubts remain, let's work through a few points:

Firstly, what is the math behind the perspective projection?

Suppose that P(px,py,pz) is a point of a polygon, expressed in the coordinate system of the view space.
Suppose that Pr(prx,pry,prz) is the projection point, that is, the camera's eye. Usually, Pr is placed on the z-axis.
Between Pr and P is the projection surface (the plane onto which the polygon will be projected). Usually, the projection surface is orthogonal to the z-axis. So, suppose the projection surface is the plane z = k.

Let's write the equation for the line defined by P and Pr:

(x,y,z) = Pr + t*(P-Pr), where t is a parameter

This means:

x = prx + t*(px-prx)
y = pry + t*(py-pry)
z = prz + t*(pz-prz)

or:

t = (x-prx)/(px-prx) = (y-pry)/(py-pry) = (z-prz)/(pz-prz)

So, we can express x and y as a function of z:

x = prx + (px-prx)*(z-prz)/(pz-prz)
y = pry + (py-pry)*(z-prz)/(pz-prz)

We know that this line intersects the projection surface. Since all the points on the projection surface have the same z value, we can determine the x and y coordinates of the intersection point by setting z = k:

x = prx + (px-prx)*(k-prz)/(pz-prz)
y = pry + (py-pry)*(k-prz)/(pz-prz)

Well, this is a general solution for finding the projected coordinates of a 3D point. Now, let's consider Direct3D's definitions. In Direct3D, Pr = (0,0,0) and the projection surface is the plane z = 1. So, the projected coordinates simplify to:

x = px*(1/pz)
y = py*(1/pz)
z = 1

Bingo! We have found the famous 1/pz projection relation.
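As a quick check of that derivation in code, here is a sketch that evaluates the general line-plane intersection from above and confirms it reduces to (px/pz, py/pz) for the Direct3D case (the names are mine; everything else is just the formulas restated):

#include <cstdio>

// General projection of P through the projection point Pr onto the plane
// z = k, using x = prx + (px-prx)*(k-prz)/(pz-prz) from the derivation above.
void project(const float P[3], const float Pr[3], float k, float out[2])
{
    float s = (k - Pr[2]) / (P[2] - Pr[2]);
    out[0] = Pr[0] + (P[0] - Pr[0]) * s;
    out[1] = Pr[1] + (P[1] - Pr[1]) * s;
}

int main()
{
    const float P[3]  = { 4.0f, 6.0f, 8.0f };   // arbitrary view-space point
    const float Pr[3] = { 0.0f, 0.0f, 0.0f };   // Direct3D-style eye at origin
    float out[2];
    project(P, Pr, 1.0f, out);                  // projection plane z = 1
    // With Pr = (0,0,0) and k = 1 this must equal (px/pz, py/pz):
    std::printf("general: (%f, %f)  direct: (%f, %f)\n",
                out[0], out[1], P[0] / P[2], P[1] / P[2]);
    return 0;
}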


OK, let's go to the depth buffer now:
For a perspective projection, we must compare the distances from the points to the projection point. We could simply do:

dist = sqrt( (px-prx)^2 + (py-pry)^2 + (pz-prz)^2 )

In fact, this works, but it is computationally 'expensive'. A fast solution can be obtained by using:

z = (pz-1)*(1/pz)

This means that, for the z value, we are 'transforming' the perspective into an ortho projection, where we can compare the z distances directly.
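To see why that cheap value is a valid stand-in, note that (pz-1)*(1/pz) = 1 - 1/pz is strictly increasing in pz, so it sorts points exactly as pz does. A minimal sketch (the sample depths are arbitrary):

#include <cstdio>

// The cheap depth term from above: z' = (pz - 1) / pz = 1 - 1/pz.
float cheapDepth(float pz) { return (pz - 1.0f) / pz; }

int main()
{
    // Depths beyond the z = 1 projection plane, in increasing order.
    const float depths[] = { 1.5f, 2.0f, 5.0f, 50.0f };
    for (float pz : depths)
        std::printf("pz = %5.1f  ->  z' = %f\n", pz, cheapDepth(pz));
    // The outputs (0.333, 0.5, 0.8, 0.98) are strictly increasing too,
    // so comparing z' gives the same front/back decision as comparing pz,
    // with no square root per pixel.
    return 0;
}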

[re-edited to correct some mistakes]

[Edited by - adriano_usp on March 18, 2005 3:26:46 PM]

I wanted to ask which book you would recommend to understand this. I think I understand it now, but without asking further questions and the aid of a diagram I cannot explain what my confusion is.


Specifically, I understand your explanation that z cannot be used, as what we require is the distance of a vertex to the eye, so we should compute the full distance instead of using the original z value. That is understood. However, what I do not understand is how this 1/z can be used in place of the proper distance (using the distance formula).

2. On a side note, I think after projection we are NOT comparing the actual distances. Because if we were, then consider this scenario: you are in eye space and there is a point (vertex) on the z-axis in front of you at 5 z units. If you rotate this point to the right, it is still 5 units away from the eye, but its z coordinate is now less than 5. Now two questions. What value do we use for the z-comparison (in other words, what does the hardware do)? Does it use the absolute distance (which would be the same in both cases), or does it use the z value only (in which case the point after rotation is closer)?

I think what the hardware does is compute the z-distance from the projection plane and NOT to the eye!





\    |    /
 \   | *  /   B
C \  | ** /   A
   \__|__/
    \ | /
     \|/
      .



I have tried to highlight this in the picture. The two stars are on the same projection line and hence will both project to the same point on the projection plane (== near plane). But there are two z-values (distances) we could use for them: their actual distances (computed using the distance formula), or just their z-distances (which will appear as perpendicular lines from the near plane to the point).

If we choose the actual distances, then I cannot understand how the 1/z value can be used in their place. If we use the perpendicular distance, then why can we not just use the original z value?

Furthermore, vertex C is at the same z depth as far as the eye is concerned. However, its correct distance from the eye is less than that for vertex A. Hence I think we must not use the CORRECT geometric distance and instead realize that the eye perceives the distances as they are measured from the near plane (or the plane of the eye, for that matter).

Hi AQ, don't worry. The math behind the projection/depth-buffer is not really easy to visualize. A lot of people don't understand it exactly [smile].

Usually, applying a perspective projection gives us the idea that the view frustum assumes a conic/pyramidal shape. Actually, we define the perspective by setting the fov of the viewing frustum, but it's important to notice that the projection transformation converts the frustum into a box shape. Since the near end of the viewing frustum is smaller than the far end, this has the effect of expanding objects that are near the camera. Look at the following picture:

[Image: proj.jpg - (A) a cube projected in perspective onto the projection plane; (B) the same cube after the projection transformation]

(A) illustrates a cube projected in perspective onto the projection plane. (B) shows the cube transformed by those projection relations. Notice that the cube is deformed by the transformation from a "perspective" to an "ortho" projection, but the projected image (on the projection plane) is the same.

Well, since at the end of the process we have a parallel projection, the z-buffer method will store z-values (which could be the distances from the points to the projection plane, as you said), but these z-values have already been transformed by those projection relations (which means the distance to the eye has already been accounted for).

Actually, you could use pure Z values directly, as you said. But considering that the depth method has a limited memory space, you can get problems like these (see the sketch below):
- Overlapping polygons: intersection lines will be less accurate.
- Parallel and very close polygons: Z interpolation may cause more z-fighting.
- Very large polygons: the interpolation will be more prone to errors.
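To put numbers on those problems, here is a rough sketch of how far apart two surfaces must be before a fixed-point depth buffer can tell them apart (I'm assuming the usual mapping d = (Zf/(Zf-Zn))*(1 - Zn/z) and a 24-bit buffer; both are just example assumptions):

#include <cstdio>

// Invert the depth mapping d = (Zf/(Zf-Zn))*(1 - Zn/z) to recover z from d.
double zFromDepth(double d, double zn, double zf)
{
    return zn / (1.0 - d * (zf - zn) / zf);
}

int main()
{
    const double zn = 1.0, zf = 1000.0;
    const double lsb = 1.0 / 16777216.0;   // one step of an assumed 24-bit buffer

    const double zs[] = { 2.0, 100.0, 900.0 };
    for (double z : zs)
    {
        double d = (zf / (zf - zn)) * (1.0 - zn / z);
        // World-space gap covered by one depth step at this distance:
        double gap = zFromDepth(d + lsb, zn, zf) - z;
        std::printf("at z = %6.1f one depth step spans %g units\n", z, gap);
    }
    // The gap grows roughly with z^2: nearby surfaces are finely separated,
    // while distant, nearly coplanar polygons share depth cells and fight.
    return 0;
}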

I guess I should stop asking questions because it seems this would go on and on. But maybe just a few last ones.

You mentioned that:

Quote:
z = 1*(1/pz)

This means that, for the z value, we are 'transforming' the perspective into an ortho projection, where we can compare the z distances directly.


Two questions.

1. Isn't the z-value the same as the near-plane value for ALL points after projection? So my understanding so far is that, since it would be a waste of space to duplicate the same value in the z-coordinates of all vertices, we use that slot to store a more 'meaningful' value so that we can do depth comparisons.

This can be either the original z or 1/z. One reason we use 1/z is that it can be interpolated linearly in screen space.


2. [Separate from the above.] How can it be shown mathematically that going from z --> 1/z is the same as going from a perspective to an orthographic projection?


Aren't the above two separate issues?

Once again, thanks for all the great help. It has really made me think!!

And also, if you can recommend a suitable book.




Oh... I'm so sorry, AQ. I wrote it wrong! I don't know what the hell I had in mind when I wrote z = 1*(1/pz) [headshake]. It should have been z = (pz-1)*(1/pz) instead. I'm sorry again.

OK, let's try to repair these mistakes. Well, those relations could be represented by the following projection matrix:

| 1 0 0 0 |
| 0 1 0 0 |
| 0 0 1 1 |
| 0 0 -1 0 |

Look at "What Is the Projection Transformation?" in DX SDK to understand how to determine this matrix.
When we transform a vertex ( px, py, pz, 1 ) by this matrix we get the vector ( px, py, pz-1, pz ). To homogenize this vector, we divide its components by pz (the w component), resulting in:

x = px*(1/pz)
y = py*(1/pz)
z = (pz-1)*(1/pz)

This works but, even so, in practice the z coordinate is obtained in a different way, because this matrix doesn't consider the fov, and that could result in depth/z problems for polygons in the distance. Actually, the projection matrix used by Direct3D is:

| C 0 0 0 |
| 0 C 0 0 |
| 0 0 Q S |
| 0 0 -Q*Zn 0 |

where:
Zn = near z plane
Zf = far z plane
C = cos(fov/2)
S = sin(fov/2)
Q = S/(1-Zn/Zf)

So, when we transform a vertex ( px, py, pz, 1 ) by this matrix we get the vector ( C*px, C*py, Q*pz - Q*Zn, S*pz ). To homogenize this vector, we divide its components by S*pz (the w component), resulting in:

x = px/(pz*tan(fov/2))
y = py/(pz*tan(fov/2))
z = (Zf/(Zf-Zn))*(1-Zn/pz) -> this is the value compared in the z-buffer
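As a sanity check, here is a small sketch that runs a point through that matrix by hand and compares the result against the closed forms above (the fov, Zn, Zf and point values are just example numbers):

#include <cstdio>
#include <cmath>

int main()
{
    const float fov = 3.14159265f / 2.0f;   // 90 degrees, example value
    const float zn = 1.0f, zf = 100.0f;
    const float c = std::cos(fov / 2.0f);
    const float s = std::sin(fov / 2.0f);
    const float q = s / (1.0f - zn / zf);

    const float px = 3.0f, py = 4.0f, pz = 10.0f;

    // Row vector times the matrix from the post gives
    // ( C*px, C*py, Q*pz - Q*Zn, S*pz ); then divide by w = S*pz.
    float w = s * pz;
    float x = (c * px) / w;
    float y = (c * py) / w;
    float z = (q * pz - q * zn) / w;

    // Closed forms given above:
    float xRef = px / (pz * std::tan(fov / 2.0f));
    float zRef = (zf / (zf - zn)) * (1.0f - zn / pz);

    std::printf("x = %f vs %f,  y = %f,  z = %f vs %f\n", x, xRef, y, z, zRef);
    return 0;
}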

You can read more about this projection matrix in Jim Blinn's book. Some of Watt's books also treat these subjects. I'm really sorry for the confusion, and also for my bad English.

No .. excellent .. you have been extremely helpful. Yes, I just discovered those two books :) and I could relate them to what you wrote, and I think I understand it properly now.

In fact, I understand the projection all right (my favourite book is Peter Lin's 3D graphics programming for Linux); however, my original question was why not use the real z directly, given that the z value being used has non-linear behaviour.

As it turns out, 1/z encodes the depth information and is perhaps easier to work with during other stages of the pipeline.




