cpplus

OpenGL OpenGL Projection Matrix Clarifications


Recommended Posts

Hello! I would really appreciate a review and a few answers on this topic.

 

These days I have been reading about the math behind the projection matrix. I understand almost everything, except a few things, so I ask that you read what I have learned so far and correct anything I get wrong. I will also ask a question or two.

 

So from what I know, OpenGL renders everything from -1 to 1 on all axes. (Why does everyone say that the camera is at 0,0,0 then?!)

 

What we need to do is project the scene onto a plane that is parallel to the XY plane and has its Z between -1 and 1, BUT... we need the Z values for the depth buffer to work, so we project the scene into a cube instead of onto a plane, if that makes sense.

 

Basically the matrix scales the vertices' X according to the aspect ratio, so that the scene doesn't look squashed or stretched in NDC. It also scales the scene inside the specified frustum into a range of size 2 (well, not exactly a cube, but a range) and translates it into the -1 to 1 range on all three axes (again, why do all the articles and videos I watch say that the camera is positioned at 0,0,0??).
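To make the above concrete, here is a minimal sketch in Python (the thread has no code, so the function names and example values are mine) of the standard OpenGL-style perspective matrix being described: X is scaled by the aspect ratio, Y by cot(fov/2), and the frustum's Z range is mapped so that it lands in [-1, 1] once the GPU divides by W.

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective matrix (column vectors, rows listed top to bottom)."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],   # scale X by 1/aspect so NDC isn't stretched
        [0.0, f, 0.0, 0.0],            # scale Y by cot(fov/2)
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],         # W_clip = -Z_view (the camera looks down -Z)
    ]

def mul(m, v):
    """Multiply a 4x4 matrix by a 4-component column vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

P = perspective(90.0, 16.0 / 9.0, 0.1, 100.0)
near_clip = mul(P, [0.0, 0.0, -0.1, 1.0])    # a point on the near plane
far_clip = mul(P, [0.0, 0.0, -100.0, 1.0])   # a point on the far plane
# After dividing by W, Z lands on -1 at the near plane and +1 at the far plane,
# which is exactly the "-1 to 1 range" mapping described above.
```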

 

First question:

What is bothering me is that the GPU divides gl_Position's X, Y and Z by Z, which makes Z essentially lost. OK... we keep it in the W value by using the matrix, but that means one of two things:

- the depth test happens before GPU's perspective divide by Z

- the depth test tests against gl_Position's W value

When does the depth test occur and does it use the W or the Z value?

 

So this is the tutorial that I base all of my assumptions on (related to my second question): http://ogldev.atspace.co.uk/www/tutorial12/tutorial12.html

 

Second question: from the matrix I construct based on the tutorial, I project the X and Y coordinates of a vertex onto a plane whose Z is defined by the field of view angle, but the Z component of the vertex is projected onto a plane with Z = 1 (because the matrix keeps the Z component in the W component, so after division by W the vertex's Z component becomes 1, i.e. it lands on the Z = 1 plane). So the vertex is not exactly projected; it only looks as if it is, because X and Y are projected onto a different plane than the Z value. What am I getting wrong, or is this relatively right?

Edited by cpplus


So from what I know OpenGL renders everything from -1 to 1 on all axes. (Why does everyone say that the camera is at 0,0,0 then?!)

Good question! It's because OpenGL is an oddball with unpopular conventions.

What is bothering me is that the GPU divides gl_Position's X, Y and Z by Z, which makes Z essentially lost. OK... we keep it in the W value by using the matrix, but that means one of two things:
- the depth test happens before GPU's perspective divide by Z
- the depth test tests against gl_Position's W value
When does the depth test occur and does it use the W or the Z value?

No, the GPU divides XYZ by W, and then throws W away. The W value is used to create perspective, allowing the scaling of X and Y to change based on distance.
Another way to think of it is that the projection matrix doesn't map into a cube that goes from -1 to +1; it actually maps into a cube that goes from -W to +W. The GPU then normalizes these coordinates by dividing by W, resulting in the -1 to +1 NDC coordinates.
The depth test occurs after the divide by W, using the new NDC Z' value (where Z' = Z/W).
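A small sketch of what the hardware does with gl_Position, per the answer above: divide X, Y and Z by W, then discard W. (The point values here are made up for illustration.)

```python
def perspective_divide(clip):
    """The fixed-function step after the vertex shader: XYZ / W, then W is discarded."""
    x, y, z, w = clip
    return (x / w, y / w, z / w)  # the resulting triple is the NDC position

# Two clip-space points with the same X and Y but different W (distance):
near_ndc = perspective_divide((1.0, 1.0, -0.5, 1.0))
far_ndc = perspective_divide((1.0, 1.0, 9.0, 10.0))
# The farther point lands closer to the center of the screen (0.1, 0.1) --
# that is the perspective effect -- and the depth test then compares the
# NDC Z' values (-0.5 vs 0.9), not W.
```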

 

The fact that the depth test is done on Z' (not Z) also means that the distribution of precision inside depth buffers is hyperbolic. The vast majority of your depth buffer precision is located close to Z values of zero.

See: https://developer.nvidia.com/content/depth-precision-visualized http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
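A quick numerical check of that hyperbolic-precision claim, using near = 0.1 and far = 100 as example values (mine, not from the thread). `ndc_depth` gives the post-divide depth Z' for a point at positive view-space distance d, derived from the standard GL projection's Z row with W = d.

```python
def ndc_depth(d, near=0.1, far=100.0):
    """Post-divide NDC depth Z' for a point at view-space distance d (d > 0)."""
    # Z row of the GL perspective matrix applied to z_view = -d, divided by w = d.
    return ((far + near) - 2.0 * far * near / d) / (far - near)

# The near plane maps to -1 and the far plane to +1, yet a point only
# 1 unit away already lands around Z' = 0.8: roughly 90% of the [-1, 1]
# range is spent on the first 1% of the viewing distance.
first_unit = ndc_depth(1.0)
```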

GL's stupid -1 to +1 Z coordinates also make the industry-standard solution to this problem (see the "reversed floating point buffer in DirectX" section) impossible to implement.

Edited by Hodgman

(Why does everyone say that the camera is at 0,0,0 then?!)

 

Saying the camera is at 0,0,0 is a bit misleading. In computer graphics, everything is a vector, and vector math requires that the origin be 0. It's no more complicated than that. The camera, which is nothing more than a matrix, is the translation and rotation from 0,0,0.
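A sketch of that point (example values are mine): the "camera" is just a matrix that re-expresses the world relative to the origin. A camera "at (0, 0, 5)" is really a view matrix that translates everything by (0, 0, -5), so the camera itself ends up at 0,0,0. Rotation works the same way but is omitted here for brevity.

```python
def view_translation(cam_pos):
    """Translation part of a view matrix for a camera at cam_pos (no rotation)."""
    x, y, z = cam_pos
    return [
        [1.0, 0.0, 0.0, -x],
        [0.0, 1.0, 0.0, -y],
        [0.0, 0.0, 1.0, -z],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, v):
    """Multiply a 4x4 matrix by a 4-component column vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

V = view_translation((0.0, 0.0, 5.0))
camera_in_view_space = transform(V, [0.0, 0.0, 5.0, 1.0])
# The camera's own position maps to the origin; every other point is now
# expressed relative to the camera -- which is why "the camera is at 0,0,0".
```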
