How is the depth buffer calculated?

3 comments, last by panic 14 years, 2 months ago
Just a quick question, guys: how does OpenGL calculate the values in the depth buffer? It looks like it might be 1.0f - (zpos/farClippingPlane), but I'm not completely sure.
The 3D positions of your vertices are turned into 4D vectors by appending a 1 on the end (x,y,z,1). This 4D vector is transformed by the world/view/projection matrices.
The projection matrix contains the near/far plane information and sets up the division you're guessing at ;)
You can look up "perspective projection matrix" and "matrix multiplication" to see how this math actually works.

These matrix multiplications result in a "projected" 4D vector which is interpolated across your triangles.
For each pixel in the triangle, its depth value is computed as pos.z / pos.w (i.e. pos[2] / pos[3]).
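For concreteness, here's a small pure-Python sketch of that pipeline (the matrix follows the classic gluPerspective layout; the function and variable names are my own, not OpenGL API calls):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix, written row-major for clarity."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def mat_vec(m, v):
    """4x4 matrix times 4D column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A view-space point 10 units in front of the camera (OpenGL looks down -z).
proj = perspective(60.0, 16 / 9, near=0.1, far=100.0)
clip = mat_vec(proj, [0.0, 0.0, -10.0, 1.0])  # clip-space (x, y, z, w)
ndc_z = clip[2] / clip[3]                     # perspective divide -> [-1, 1]
depth = ndc_z * 0.5 + 0.5                     # default glDepthRange(0, 1)
```

Note how most of the depth range is spent near the near plane: a point a tenth of the way to the far plane already maps to a depth of about 0.99.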
OK, so I'm implementing deferred lighting, which requires a depth buffer image. Would I be better off writing the depth into a texture with a shader, or is there an easy way to grab the existing depth buffer? And once I have that depth image, how would I get the z position of the pixel I'm targeting? (Multiplying by pos.w doesn't make much sense when all I'm working with is an image.)
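One common approach is to attach a GL_DEPTH_COMPONENT texture to an FBO so the depth buffer is sampleable directly, then invert the projection per pixel to get eye-space z back. The inversion itself is just algebra on the near/far planes; here is a Python sketch of that math (the function name is my own, and it assumes the standard OpenGL perspective projection with glDepthRange(0, 1)):

```python
def eye_depth_from_buffer(d, near, far):
    """Invert a nonlinear depth-buffer sample d in [0, 1] back to the
    positive eye-space distance along -z. Assumes a standard OpenGL
    perspective projection and the default glDepthRange(0, 1)."""
    ndc_z = d * 2.0 - 1.0  # window depth back to NDC [-1, 1]
    return 2.0 * near * far / (far + near - ndc_z * (far - near))
```

For example, with near = 0.1 and far = 100, a depth sample of roughly 0.991 recovers a point 10 units in front of the camera. In a deferred shader you'd evaluate this same expression per fragment, passing near/far in as uniforms.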
I've been meaning to ask this question for some time now...

If w == 1, why bother with the division? You'll always end up with z.

No, I am not a professional programmer. I'm just a hobbyist having fun...

Quote:Original post by maspeir
I've been meaning to ask this question for some time now...

If w == 1, why bother with the division? You'll always end up with z.


w doesn't stay 1 after the transformation by the modelview-projection matrix: the bottom row of a perspective projection matrix writes -z (eye space) into w.
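A minimal sketch of why: the bottom row of the standard perspective matrix is (0, 0, -1, 0), so the multiply replaces w with the eye-space distance in front of the camera:

```python
# Bottom row of an OpenGL perspective projection matrix.
row = [0.0, 0.0, -1.0, 0.0]
eye = [1.0, 2.0, -5.0, 1.0]  # a point 5 units in front of the camera
w_clip = sum(r * e for r, e in zip(row, eye))  # dot product -> clip-space w
# w_clip is 5.0, not 1.0
```

So the later division by w is a genuine perspective divide, not a no-op.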

