bluntman

OpenGL: Implementing linear-z


I am trying to do this at the moment, but running into a couple of problems. My current implementation: I divide the z column of the projection matrix by zFar:
proj.m33 /= camera->getFarPlane();
proj.m43 /= camera->getFarPlane();

My shaders all calculate the final vertex position like so:
float4 applyModelViewProj(float3 vpos, float4x4 modelViewProj)
{
	float4 OUTposition = mul(modelViewProj, float4(vpos, 1.0));
	// Pre-multiply z by w so the hardware's later divide by w leaves z linear.
	OUTposition.z = OUTposition.z * OUTposition.w;
	return OUTposition;
}

This article was my source for this technique, but it is directed at DirectX users. I think OpenGL handles the z coordinate a little differently, but I can't find out how. The visible symptom is that depth does not seem to be interpolated correctly between vertices, and depth also appears to increase again behind the camera. Could anybody please explain the OpenGL-specific way of doing a linear z-buffer?

EDIT: While I believe that the principles mentioned herein are correct, I made some mistakes, namely some sign errors; I correct them in a post further below. /EDIT


I don't know the solution directly, but I know that OpenGL does handle the perspective projection differently. The following is what I think gives the solution, but please double-check it. I'm also not sure about any side effects. If you try it out, please let me know what happens ...

If using glFrustum, then the last 2 rows of the projection matrix are

[ 0 0 (f+n)/(n-f) 2fn/(n-f) ]
[ 0 0 -1 0 ]
When multiplying this with an arbitrary vertex position [ x y z 1 ]T and normalizing the result, then
z' := (f+n)/(f-n) - 2fn/((f-n) z)
is the transformed z. So, the limits are
z'(z=n) = -1
z'(z=f) = +1

In D3D the equivalent stuff is (see matrix in the cited article)
z' := f/(f-n) - fn/((f-n) z)
and hence the limits are
z'(z=n) = 0
z'(z=f) = +1
as is affirmed by the article. The trick was to alter the z' formula by dividing by f and multiplying by z, so that
z" := z'*z/f = z/(f-n) - n/(f-n)
and the limits are still
z"(z=n) = 0
z"(z=f) = +1

Now you're doing the same in OpenGL, yielding
z" := z (f+n)/((f-n) f) - 2n/(f-n)
with the limits
z"(z=n) = -n/f > -1
z"(z=f) = +1

Since z clipping is done on [-1,+1] but z"(z=n) is approximately 0, geometry located in front of the near "clipping" plane will be visible.
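To put concrete numbers on that: with n = 1 and f = 100, z"(z=n) = -1/100 = -0.01 and even z"(z=0) = -2/99 ≈ -0.02, so everything between the eye and the near plane stays well inside [-1,+1] and survives clipping.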

What you want instead is a linear function
z"(z) := a*z + b
that fulfills OpenGL's limits. That leads to 2 conditions
z"(z=n) = a*n + b == -1
z"(z=f) = a*f + b == +1
which can be solved to
a = 2/(f-n)
b = -(n+f)/(f-n)
so that
z" = 2z/(f-n) - (n+f)/(f-n)

Dividing by w=-z and comparing the coefficients with OpenGL's matrix values shows that
proj.m33 = -a
proj.m43 = -b
would produce the correct matrix.
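
As a minimal sketch of that patch, taken literally from the statement above (keep the EDIT note in mind and double-check the signs; getNearPlane() is an assumed accessor mirroring the OP's getFarPlane()):

const float n = camera->getNearPlane();  // assumed accessor
const float f = camera->getFarPlane();
const float a = 2.0f / (f - n);          // slope of the linear mapping
const float b = -(f + n) / (f - n);      // offset of the linear mapping
proj.m33 = -a;  // z-row coefficient of z_eye
proj.m43 = -b;  // z-row constant term (which element this actually names is discussed below)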

Oh well, I hope that I made no mistake :)

[Edited by - haegarr on May 27, 2008 3:44:27 AM]

Or alternatively, create a GL_RGBA32F render-to-texture target and output the eye-space depth values:

[VERTEX SHADER]
varying vec4 myvertex;

void main()
{
	myvertex    = gl_ModelViewMatrix * gl_Vertex;  // eye-space position
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

[FRAGMENT SHADER]
varying vec4 myvertex;

void main()
{
	gl_FragColor = myvertex;  // eye-space position; .z carries the depth
}
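
For reference, a rough sketch of how such a float target can be created (using ARB_texture_float and EXT_framebuffer_object, the usual route on GL of that era; width and height are assumed to be defined, and error checking is omitted):

GLuint tex = 0, fbo = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
             GL_RGBA, GL_FLOAT, 0);  // 32-bit float per channel

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);
// ... also attach a depth renderbuffer and check glCheckFramebufferStatusEXT ...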

Or, even more alternatively, output eyespace.z / farplane. This will give you a value ranging from 0 to 1, as the far plane is the maximum value that an eye-space z value can have.

The other way is to just convert the value into a linear value when you need it in the shader (this will allow you to use the existing depth buffer).
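
As a sketch of that second option, assuming the standard glFrustum depth mapping, with the depth buffer bound as a depth texture and the clip planes passed in as uniforms (all names here are illustrative):

uniform sampler2D depthTex;  // the existing depth buffer, bound as a texture
uniform float n;             // near plane distance
uniform float f;             // far plane distance

// Reads the hyperbolic depth-buffer value at uv and returns the
// eye-space distance in [n, f].
float linearDepth(vec2 uv)
{
	float d = texture2D(depthTex, uv).r;  // window-space depth in [0, 1]
	float zNdc = 2.0 * d - 1.0;           // undo the [0,1] viewport mapping
	return 2.0 * n * f / (f + n - zNdc * (f - n));
}

Dividing the result by f then yields the same 0-to-1 value as the first option.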

Quote:
Original post by stramit
Or, even more alternatively, output eyespace.z / farplane. This will give you a value ranging from 0 to 1, as the far plane is the maximum value that an eye-space z value can have.

The other way is to just convert the value into a linear value when you need it in the shader (this will allow you to use the existing depth buffer).

Can you explain this a bit more deeply, please? IMHO that is what the OP / cited article primarily did, isn't it? And it doesn't work for the OP because he is using OpenGL: the projection maps from eye space to clip space, and OpenGL's clip space doesn't range from 0 to +1 but from -1 to +1. The article states using the z buffer and adding as few extra shader operations as possible as its goals. So following your approach would either waste half of the z buffer's range and require an additional clipping plane, or else add more per-vertex operations, wouldn't it? Please disprove this point if I'm wrong.

Quote:
Original post by Ashkan
Linearized Depth Using Vertex Shaders

Well, if you would please take the time to read at least the OP, you'll see that exactly the cited article is what was implemented, and it doesn't work "as is" for OpenGL. This entire thread is about adapting the method to work well with OpenGL.

I thought that a GL matrix was transposed (notationally, if not in memory layout) with respect to a Direct3D matrix. So to modify this technique for OpenGL, shouldn't it be the 33 & 34 elements that are modified, rather than 33 and 43?

After all, if a point in GL is a column vector, then row 3 of a GL matrix is what determines Z'. This implies that to linearize Z', you need to modify elements on the 3rd row, possibly in the way that haegarr is suggesting.

Or am I being a noob?

Quote:
Original post by tweduk
I thought that a GL matrix was transposed (notationally, if not in memory layout) with respect to a Direct3D matrix. So to modify this technique for OpenGL, shouldn't it be the 33 & 34 elements that are modified, rather than 33 and 43?

Thanks for hinting at this point. Yep, mathematically the matrices of OpenGL are the transposed matrices of D3D. To be precise, OpenGL uses column vectors while D3D uses row vectors.

Additionally, the (2D) matrices also have a memory layout when stored in (1D) linear memory. Here OpenGL uses column-major order, while D3D uses row-major order. In sum, column vectors with column-major order and row vectors with row-major order yield the identical layout of matrices in memory, so there is no need to re-arrange matrix values when switching from one form to the other. (Opposed to that, e.g. COLLADA uses column vectors and row-major order, and hence requires re-arrangement when used with either OpenGL or D3D.)
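
To illustrate the point with a quick sketch (not from either API's headers, just the index arithmetic):

float m[16] = {0};
// OpenGL: column vectors, column-major storage.
// Element "row 3, column 4" (the b term of the z row) lives at
// index (col-1)*4 + (row-1) = 3*4 + 2 = 14.
float gl_m34 = m[14];
// D3D: row vectors, row-major storage.
// Element "_43" (row 4, column 3) lives at
// index (row-1)*4 + (col-1) = 3*4 + 2 = 14.
float d3d_m43 = m[14];
// Same float either way, so the 16-float array needs no re-arrangement.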

Quote:
Original post by tweduk
After all, if a point in GL is a column vector, then row 3 of a GL matrix is what determines Z'. This implies that to linearize Z', you need to modify elements on the 3rd row, ...

Yep a 2nd time. I haven't verified from the D3D docs which elements m33 and m43 are, but concluded from the math in the cited article that both scalars are the ones also affected in the OpenGL matrix. For clarification, I meant the substitution

[ 0 0 a b ]
for the 3rd row.

Quote:
Original post by haegarr
[...] I'm not totally sure which elements m33 and m43 are when speaking of D3D, but concluded from the math in the cited article that both scalars are the ones also affected in the OpenGL matrix. For clarification, I meant the substitution

[ 0 0 a b ]
[ 0 0 -1 0 ]
for the 3rd and 4th rows.


The 4,3 element of a D3D matrix is row 4, column 3. So depending on the memory layout of 'proj', it's possible that the OP is clobbering the -1 element rather than setting the element with value 'b'.
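
To illustrate with the rows from above: in GL-style column-vector notation the patched matrix should end with

[ 0 0 -a -b ]
[ 0 0 -1 0 ]

so if proj.m43 actually names row 4, column 3, the OP's code would be dividing the -1 by the far plane instead of setting the b term, which would not produce the intended linear mapping.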

To OP: how is 'proj' defined?
