calioranged

Depth Test Seemingly Inverted


Posted (edited)

I have the depth test enabled and the depth function set to GL_LESS, but for some reason the depth test seems to work the opposite way around: if a fragment's z value is greater than the value currently contained in the depth buffer, it passes and becomes a pixel, and if its z value is less than the stored value, it fails the depth test.

I have attached the project files below; hopefully somebody understands why this behaviour is occurring.

Buffers.cpp

Buffers.h

Errors.cpp

Errors.h

GLFW.cpp

GLFW.h

GL.shader

Shader.cpp

Shader.h

Main.cpp

Edited by calioranged
Attached solution file rather than individual project files.


It looks like you're not using a projection matrix anywhere, or are using the identity as your projection? If so, you're working directly in normalised device coordinates. It's been a while since I had to wrestle with GL's conventions, but I think projections typically scale the Z axis by -1, because GL's NDC Z range is from -1 to +1, with the axis pointing towards the viewer -- i.e. z=-1 is further into the screen, and z=+1 is nearer to the viewer.

2 minutes ago, Hodgman said:

It looks like you're not using a projection matrix anywhere

If you look in the constructor of 'ModelViewProjection' (at the top of Shader.cpp), I call glm::ortho(left, right, bottom, top, near, far) to generate an orthographic projection matrix. Curiously, by calling glm::orthoLH(left, right, bottom, top, near, far) instead, the problem disappears. I am still trying to figure out exactly why. I heard somebody say that OpenGL uses a right-handed coordinate system with orthographic projection and a left-handed coordinate system with perspective projection (for the purposes of the depth test); I'm still trying to confirm this and understand why using a different coordinate system has such an effect on the projection matrix.

9 minutes ago, Hodgman said:

GLs NDC Z range is from -1 to +1, with the axis pointing towards the viewer -- i.e. z=-1 is further into the screen, and z=+1 is nearer to the viewer.

If that were the case then the depth test would surely be set to GL_GREATER by default rather than GL_LESS, so that fragments closer to the viewer (with a higher z value) are rendered on top of those further away (with a lesser z value).

The fact that, by default, the OpenGL depth test works the opposite way round (objects with a lesser z value are rendered on top of those with a greater value) surely means that, at least for perspective projection, OpenGL uses a left-handed coordinate system (-z coming out of the screen and +z going into the screen) rather than a right-handed one (-z going into the screen and +z coming out of the screen).


OpenGL's NDC space is left-handed, with +z away from the viewer (or at least I'm 99.9% sure that's the case - I always allow for the possibility of being wrong). So I think you've correctly surmised that part of things.

Quote

I heard somebody say that OpenGL uses a right handed coordinate system with orthographic projection and a left handed coordinate system with perspective projection (for the purposes of the depth test)

That doesn't sound right. It'd be interesting to see a source for that.

In any case, OpenGL doesn't have any fixed handedness as far as view and projection transforms go (at least not since the very earliest versions).

I'm not sure why your depth test is (or was) reversed, but I'm guessing it's due to a mismatch between your view transform and/or worldspace coordinates and your projection transform convention. If switching to a LH projection solves the problem, that suggests your view transform and/or worldspace coordinates are configured (intentionally or otherwise) with the expectation of a left-handed projection.

1 hour ago, Zakwayda said:

That doesn't sound right. It'd be interesting to see a source for that.

Here is where I heard it:

Could someone run the program on their system and see if there is any difference?

Quote

Here is where I heard it:

Hm, I didn't hear that claim made in the video. I'd probably need a timestamp or quote to know what you're referring to.

Also, although the claim is commonly made, I'd disagree with the claim made in the video that OpenGL is right-handed and DirectX is left-handed. Even if that was true at some point in the past, it hasn't been true for a very long time.

Posted (edited)
5 hours ago, Zakwayda said:

Hm, I didn't hear that claim made in the video. I'd probably need a timestamp or quote to know what you're referring to.

Ah, I linked the wrong video. Sorry for wasting your time there! Here is the correct link with a timestamp (explanation ends at around 3:40):

Specifically this part, at around 3:20:

Quote

"Only on the perspective projection, +1 is going out away from our view point, -1 is close, right up to our eye, and I'm pretty sure that only has to do with depth tests... most of the time in OpenGL we're right handed, once we do the perspective projection, we go to left handed."

 

6 hours ago, Zakwayda said:

I'm not sure why your depth test is (or was) reversed, but I'm guessing it's due to a mismatch between your view transform and/or worldspace coordinates and your projection transform convention. If switching to a LH projection solves the problem, that suggests your view transform and/or worldspace coordinates are configured (intentionally or otherwise) with the expectation of a left-handed projection.

I have set up all my coordinates with the expectation that -1 will be coming out of the screen, while +1 is going into the screen:

std::array<float, 12> RedSquareVertices =
{
     240.0F, -135.0F, -0.1F,
     240.0F,  135.0F, -0.1F,
    -240.0F,  135.0F, -0.1F,
    -240.0F, -135.0F, -0.1F
};

std::array<float, 12> BlueSquareVertices =
{
    1200.0F, 405.0F, -0.2F,
    1200.0F, 675.0F, -0.2F,
     720.0F, 675.0F, -0.2F,
     720.0F, 405.0F, -0.2F
};

So with the above example, once depth testing is enabled, I would expect OpenGL by default to render the square composed of BlueSquareVertices on top of the square composed of RedSquareVertices, since its z values are less (closer to the viewer) than its counterpart's and the depth test defaults to GL_LESS. However, this isn't what happens. See below:

When using glm::ortho(left, right, bottom, top, near, far):

[attached screenshot]

When using glm::orthoLH(left, right, bottom, top, near, far):

[attached screenshot]

Edited by calioranged


I think by 'only on the perspective projection', the video presenter just means compared to clip space and NDC. I don't think he means to draw a comparison between perspective and orthographic there.

I'm not familiar with GLM, but looking online I see mention of a default handedness. I'm guessing that you have the default handedness set to right-handed, and that therefore ortho() is building a right-handed transform while orthoLH() is building a left-handed transform.

Based on this:

Quote

I have set up all my z coordinates with the expectation that -1 will be coming out of the screen, while +1 is going into the screen:

It seems orthoLH() is indeed what you want. (Maybe you intended to include one more image in your post? If so, and if that image shows the blue quad over the red quad, that would be consistent with this conclusion.)

Posted (edited)

If you refresh this now you should see the second image (I accidentally hit 'Submit Reply' prematurely).

29 minutes ago, Zakwayda said:

I think by 'only on the perspective projection', the video presenter just means compared to clip space and NDC. I don't think he means to draw a comparison between perspective and orthographic there.

Ah yes, that sounds as if it is probably right, but I'm having trouble getting my head around the practical application of such a convention. Surely it just makes things more confusing that the depth function is set to GL_LESS rather than GL_GREATER by default. If it were set to GL_GREATER then there would be no need to think of the z coordinates the opposite way round when it comes to the depth test.

29 minutes ago, Zakwayda said:

I'm not familiar with GLM, but looking online I see mention of a default handedness. I'm guessing that you have the default handedness set to right-handed, and that therefore ortho() is building a right-handed transform while orthoLH() is building a left-handed transform.

I'm still having trouble understanding the mathematics of why the handedness actually matters when building the projection matrix. At the end of the day, the GLM functions ortho() and orthoLH() take zNear and zFar in the same order, so if I pass the same values (-1.0F and 1.0F respectively) to either function, how is it possible that they end up the opposite way round in glm::ortho() but not in glm::orthoLH()? I really can't get my head around this part especially! Any thoughts on this?

Edited by calioranged

Quote

I'm still having trouble understanding the mathematics of why the handedness actually matters when building the projection matrix. At the end of the day, the GLM functions ortho() and orthoLH() take zNear and zFar in the same order, so if I pass the same values (-1.0F and 1.0F respectively) to either function, how is it possible that they end up the opposite way round in glm::ortho() but not in glm::orthoLH()? I really can't get my head around this part especially! Any thoughts on this?

This is probably stating the obvious, but if you submit the same input values to orthoRH() and orthoLH() and look at the output matrices, you'll see that a couple elements are negatives of each other. Similarly, if you look at the code for these two functions you should see this reflected in the code as well. In short, the RH version has the effect of negating coordinates along the z axis to get them into the left-handed space that OpenGL expects. Since the LH version is already left handed, it doesn't need to apply this modification.

To put it in other words, the space OpenGL expects vertex coordinates to be delivered in is left-handed (conceptually at least). How you get the coordinates into that space is your business, but if your view space is right handed, generally speaking there'll be an extra step required to get into left-handed space that won't be required if your view space is left handed to begin with. That's why handedness matters when building a projection matrix.

It is confusing, and I'm not sure how helpful my above comments will be, but feel free to ask for further clarification if needed.

