I have written my own software rasterizer, and I have been trying in vain to create and use a Z buffer. I use matrices for projection and transformations, and I get my screen coordinates as the result of a projection matrix multiply. My polygons don't draw correctly when I use the z values produced by that operation for the buffer, and linear interpolation of those z values is obviously out. Can anyone explain what I need to do to get the z values for my buffer, preferably from the screen coordinates I generate, and how I would interpolate those values to get the depth at an arbitrary point on the polygon?

# Z buffer use

Started by Ectara, Feb 17 2010 05:20 PM

9 replies to this topic

### #2 - Members - Reputation: **175**

Posted 17 February 2010 - 11:09 PM

In OpenGL, the linear interpolation for the polygon primitives (triangles) uses barycentric coordinates. This is explained in the specification (found on opengl.org under Documentation - Specifications; see the section on polygon rasterization). There's a more complete article on Wikipedia. Of course, there may be other ways.

### #3 - Members - Reputation: **553**

Posted 18 February 2010 - 05:52 AM

Maybe you should first try to get it working with an identity "view matrix" and an orthographic projection for the projection matrix.

I mean, place your model a bit down the z axis (+10 along Z in a left-handed coordinate system, for example), place your camera at the center of the coordinate system, and point it down the z axis towards the model. Then, in "view space", the vertices' depth values are simply their Z coordinates.

Now take a triangle. Pick an edge of that triangle and get the two vertices that form it. Look at the difference between their projected Y values, for example.

That gives you how many scanlines it takes to go from the top to the bottom of the 2D projection. Now iterate, and for every scanline linearly interpolate between the top vertex's Z value and the bottom vertex's Z value along the current edge. Store the interpolated Z value for every scanline in a temporary buffer. Do the same with the other edge. Actually, these buffers aren't exactly temporary, since they hold the Z values along the edges; those values aren't discarded, but become part of the final Z buffer once the triangle is completely filled.

Now, when rasterizing the triangle, for every scanline take the Z values stored in the two edge buffers and interpolate between them to get a Z value for every pixel of the triangle.

In other words: fill every pixel along the edges with values interpolated between the Z coordinates of that edge's two vertices. Then, for every scanline, interpolate between the values stored at the edges to fill the pixels of the current scanline with Z values.

I haven't implemented this myself, and there are probably better and faster ways; it's just how I would go about it.
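The edge-walking scheme above can be sketched like this; a minimal hypothetical version, assuming integer scanline rows and plain linear Z (which, as discussed later in the thread, is only valid for view-space Z or an orthographic projection):

```python
def edge_z_values(y_top, z_top, y_bottom, z_bottom):
    """One linearly interpolated Z value per scanline along a triangle edge.

    y_top/y_bottom are the integer scanline rows of the edge's endpoints;
    z_top/z_bottom are their depth values. Plain linear interpolation of Z
    is only valid for orthographic projection (or pure view-space Z).
    """
    if y_bottom <= y_top:
        return [z_top]  # degenerate (horizontal) edge: one scanline
    dz = (z_bottom - z_top) / (y_bottom - y_top)
    return [z_top + dz * i for i in range(y_bottom - y_top + 1)]
```

When rasterizing, the two per-edge lists give the left and right Z of each span, and the same linear interpolation is repeated horizontally across the span.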

### #4 - Crossbones+ - Reputation: **3089**

Posted 18 February 2010 - 06:18 AM

@Farfadet: Reading this paper now, thanks for the link.

@solenoidz: I used a method like that for my Gouraud shading, but it seems that it doesn't work for a z buffer after the projection matrix multiplication; it doesn't give a perspective-correct z value.

### #6 - Members - Reputation: **175**

Posted 18 February 2010 - 08:44 AM

Linear interpolation using barycentric coordinates:

1) transform vertices to screen coordinates --> X, Y, Z (screen)

2) forget world coordinates

3) compute the barycentric coordinates l1, l2, l3 from the X, Y values at the vertices, as explained in the links

4) now you can interpolate ANY function for which you know the values at the vertices:

f(r) = l1 f(r1) + l2 f(r2) + l3 f(r3), where f(ri) is the value of the function at vertex i. "Any function" means you can also interpolate Z. OpenGL also uses this to interpolate values such as color, normal vectors, texture coordinates, etc.
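Steps 3) and 4) can be sketched as follows; a minimal version using the signed-area formulation, with function and variable names that are my own rather than from any particular source:

```python
def barycentric(px, py, ax, ay, bx, by, cx, cy):
    """Barycentric coordinates (l1, l2, l3) of screen point P=(px,py)
    in triangle A, B, C, via the signed-area formulation."""
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    l3 = 1.0 - l1 - l2
    return l1, l2, l3

def interpolate(l1, l2, l3, f1, f2, f3):
    """Interpolate any per-vertex value f (Z, color, UV, ...) at P."""
    return l1 * f1 + l2 * f2 + l3 * f3
```

For example, the centroid region of a triangle with vertex depths 10, 20, 30 gets a weighted blend of those three values.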

### #7 - Members - Reputation: **100**

Posted 18 February 2010 - 10:33 AM

I assume you are using scanline conversion and not ray tracing, right?

Anyway: since in 3D you divide x and y by z to transform to screen space, the value 1/z is what varies linearly in screen space.

Given the z values at two points on the screen, you interpolate between them like this:

screenZ1 = 1 / Z1

screenZ2 = 1 / Z2

then interpolate between screenZ1 and screenZ2.

It is possible to use the 1/z value in your z buffer directly, and I did so in my own software rasterizer, but if you want the z of a given point between the two values, just flip it back around again, i.e. take the reciprocal:

realZ = 1 / interpolatedScreenZ

If Z changes as you move down either edge of the triangle, you will have to convert your z values to 1/z before you start interpolating down the edges, then interpolate the 1/z values across the triangle for each scanline. You can then take the reciprocal (the actual view-space Z value) if you need to.

Make sense?
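The interpolate-then-invert step above can be sketched as a small helper; a minimal version, assuming t is the fraction along the segment measured in screen space:

```python
def perspective_z(z1, z2, t):
    """Perspective-correct depth at screen-space fraction t between two
    endpoints with view-space depths z1 and z2.

    1/z is linear in screen space, so interpolate the reciprocals
    (screenZ = 1/Z) and invert the result (realZ = 1/interpolatedScreenZ).
    """
    inv = (1.0 - t) / z1 + t / z2
    return 1.0 / inv
```

The same helper works both for walking down an edge and for walking across a span, since both are linear traversals in screen space.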

### #8 - Crossbones+ - Reputation: **3089**

Posted 18 February 2010 - 11:40 AM

Yeah, I'm using a scanline rasterizer. That makes sense, but I can't seem to get anything to work properly. Maybe it's something in my rasterization process. Succinctly: the three points of a triangle come in and are multiplied by a model transformation matrix, then the resulting vectors are multiplied by a projection matrix. That gives me my screen coordinates; I then interpolate between vertices to compute an array of horizontal spans. After that, I interpolate color information in the span arrays and Gouraud shade at the same time I rasterize. Up until this point, I have been trying to put the z information into the spans, then interpolate a second time horizontally when I shade and rasterize. It isn't working out. No matter which of the methods presented here I use, it generates the same result; different from no z-buffer, but still incorrect.

### #10 - Members - Reputation: **100**

Posted 18 February 2010 - 12:30 PM

I don't know how it will fit into your project exactly, but it sounds like you have the right idea. You DO have to interpolate some Z value, but before you interpolate it, you need to take its reciprocal, 1/Z. The reason is that you are working and interpolating in screen space. In screen space, Z is not linear, so you can't use linear interpolation on it; its reciprocal is linear, though. After the points have been transformed by the model matrix and then positioned relative to the camera by the view matrix, take the reciprocal of Z at each vertex. Project the other components of the vertices as you normally would, and interpolate your 1/z values along the edges and spans with them. When you reach a point somewhere inside the triangle, take the reciprocal of your interpolated 1/z value to find the real Z.

Z = 1 / (1/z)

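As a quick sanity check of why the reciprocal matters (the depth values here are arbitrary): naive linear interpolation of Z in screen space gives a different midpoint than the perspective-correct 1/z form:

```python
z1, z2 = 2.0, 4.0                        # view-space depths at two endpoints
naive_mid = 0.5 * (z1 + z2)              # plain linear Z: wrong under perspective
inv_mid = 0.5 / z1 + 0.5 / z2            # interpolate 1/Z instead
correct_mid = 1.0 / inv_mid              # Z = 1 / (1/z): the real midpoint depth
```

The naive midpoint comes out at 3.0, while the perspective-correct value is 8/3; the gap grows as the depth range across the triangle grows, which is why z-fighting shows up with naive interpolation.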