FBO depth test issues

I'm probably missing something really simple here, but as best I can tell, my FBO code is suddenly no longer handling depth testing (or writing?) properly. I recently upgraded my coding laptop to one with a 640M, so the new GPU/drivers might be the cause as well. The code might (but shouldn't) also behave differently because I'm invoking everything from within my GUI wrapper; if the wrapper is to blame, I'd certainly like to know how or why!

In any case, I ended up going around my own codebase and writing a direct OpenGL test snippet, which behaves identically to my wrapper code: stuff is written to the color buffer if depth testing is disabled or the depth func is set to GL_ALWAYS; otherwise nothing is written. The result is attached, as are the relevant code snippets.

As I mentioned above, I'm probably missing something really simple and stupid.

PS - culling is disabled!
PPS - sorry for the shoddily formatted code - I threw it together :)



static GLuint fboId = 0;
static GLuint rboId = 0;
static GLuint textureId = 0;
if(!fboId)
{
#define TEXTURE_HEIGHT 512
#define TEXTURE_WIDTH 512
    // create a texture object to use as the color attachment
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    //glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindTexture(GL_TEXTURE_2D, 0);

    // set up the FBO - CHECK_FBO_STATUS succeeds for all calls
    CHECK_FBO_STATUS(glGenFramebuffers(1, &fboId));
    glBindFramebuffer(GL_FRAMEBUFFER, fboId);

    // create a renderbuffer for the depth buffer
    CHECK_GL_ERROR(glGenRenderbuffers(1, &rboId));
    CHECK_GL_ERROR(glBindRenderbuffer(GL_RENDERBUFFER, rboId));
    CHECK_GL_ERROR(glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, TEXTURE_WIDTH, TEXTURE_HEIGHT));
    CHECK_GL_ERROR(glBindRenderbuffer(GL_RENDERBUFFER, 0));

    // attach the texture to the FBO color attachment point...
    CHECK_FBO_STATUS(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0));
    // ...and the renderbuffer to the depth attachment point
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboId);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
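
For reference, a minimal standalone completeness check at this point would look something like the sketch below (this is just an illustration, not my actual CHECK_FBO_STATUS macro):

// run while the FBO is still bound, i.e. before the final glBindFramebuffer(GL_FRAMEBUFFER, 0) above;
// anything other than GL_FRAMEBUFFER_COMPLETE means the attachments are unusable
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE)
    printf("FBO incomplete: 0x%x\n", status);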

//render code
glBindFramebuffer(GL_FRAMEBUFFER, fboId);

glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

//do some additional setup here, like setting up the camera and such

glEnable(GL_DEPTH_TEST); // <- specifically enable depth testing...
glDepthFunc(GL_LEQUAL); // ...set up the depth comparison...
glDepthMask(true); // ...and enable depth writes


//nothing fancy - enables a shader and draws the buffered geometry; this part works properly
shdTerrain->Enable(drv);
drv->DrawVertexArray(vaHandle, GD_TRIANGLES);
shdTerrain->Disable();
glBindFramebuffer(GL_FRAMEBUFFER, 0);


And here's a screenshot taken with depth testing disabled, showing that it's not a color attachment, shader, or vertex buffer issue. As noted above, nothing is drawn when depth testing is enabled.

[attachment=12136:fbo depth.png]
I have the same exact problem and no solution for you, sorry - but I will be following this thread in hopes of a solution =)
A case where everything is rendered when depth testing is disabled but nothing when it's enabled is suggestive to me, as in Kaptein's case: the pixels are being rejected because the depth test is working. Either the depth testing setup is wrong, or you're getting the wrong output because of the wrong input or some meddling going on in between (your depth value is not what you are expecting, or is not within the range you've configured it to fall in).

I notice you haven't set your depth range... nothing wrong with that, it just means you're expecting the default of 0.0 to 1.0. Is your projection perhaps not set up to deliver a value in this range, or are you doing something else in the shader that might overwrite or otherwise corrupt the depth output so that it falls outside it?
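
As a quick sanity check (a minimal sketch, nothing specific to your code), you can query the depth range currently in effect and explicitly reset it to the default, just to rule this out:

// query the depth range currently in effect; the default is { 0.0, 1.0 }
GLfloat range[2];
glGetFloatv(GL_DEPTH_RANGE, range);

// explicitly restore the default window-space depth mapping
glDepthRange(0.0, 1.0);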

It might be a good idea to post your projection matrix code and your shaders; then people can look at how your projection is set up, confirm that it's okay, and check that nothing is meddling with the expected result in the shader.

I would also look at your depth output by writing it into the shader's RGB output so you can view it (with depth testing disabled to make sure the results aren't rejected). Make sure you're not always writing 1.0 or something, or at least that it shows a depth between 0.0 and 1.0, as you expect.

Another test you can do is force the depth output to 0.0, which, if depth testing is on, should see everything pass. This will not solve the problem, but it would at least add more weight to the idea that the depth testing itself is working even if the depth value is not what you expect. I would actually do this before the above. Don't be worried about the value of doing tests like this; sometimes you have to beat all the info to the surface before the problem/solution is obvious.

With that in mind, also have a look at http://www.mvps.org/...r_z/linearz.htm. It's not a solution to your problem, but there is a discussion in there of z and w values; some knowledge of what they are and how they relate should also allow you to force them to other specific values and undertake more specific tests, making sure that you always get the expected results. There is also some knowledge in there that might otherwise highlight an error.
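
To summarize the relationship for a standard perspective projection, here is a sketch that recovers linear eye-space depth from gl_FragCoord.z, assuming the default 0.0-1.0 depth range (the zNear/zFar uniforms are assumptions, they are not in your code):

//FRAGMENT (sketch)
uniform float zNear; // near clip plane distance
uniform float zFar;  // far clip plane distance

float linearEyeDepth()
{
    float ndcZ = gl_FragCoord.z * 2.0 - 1.0; // window depth [0,1] back to NDC [-1,1]
    // invert the perspective projection's z mapping to get positive eye-space distance
    return 2.0 * zNear * zFar / (zFar + zNear - ndcZ * (zFar - zNear));
}

Visualizing this value (scaled into 0..1) should show a smooth gradient across your geometry; a constant value would point at the projection or the shader.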
Thanks for the lengthy and insightful reply!

My depth output looks perfectly normal. The following shader demonstrates it:


//VERTEX
in vec3 in_Position;
in vec3 in_Color;
varying vec3 color;

void main()
{
    color = in_Color;
    gl_Position = gl_ModelViewProjectionMatrix * vec4(in_Position, 1.0);
}


//FRAGMENT
varying vec3 color;

void main()
{
    float a = gl_FragCoord.z / gl_FragCoord.w * 0.01;
    //gl_FragData[0] = vec4(color, 1.0);
    gl_FragData[0] = vec4(a, a, a, 1.0); // visualize depth as grayscale
}


The following code switches between the ortho and perspective projection matrices, which are the two modes I'm using:


void glSwitchOrtho(float pLeft, float pRight, float pBottom, float pTop, float pNear, float pFar)
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(pLeft, pRight, pBottom, pTop, pNear, pFar);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
}

void glSwitchPerspective()
{
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
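
Typical usage pairs the two calls so the matrix stacks stay balanced (a hypothetical example; DrawOverlay is made up):

glSwitchOrtho(0.0f, 512.0f, 512.0f, 0.0f, -1.0f, 1.0f); // push an ortho projection for 2D drawing
DrawOverlay();                                          // hypothetical 2D drawing call
glSwitchPerspective();                                  // pop back to the saved perspective matrices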



I am using perspective projection when drawing 3D geometry. I'm building all matrices myself and loading them into OpenGL via glLoadMatrix(), but I've thoroughly tested this code and it has never been the culprit before. The FBO case on the latest drivers seems to be the first time it has misbehaved (I'll try to get around to testing it on a different GPU/driver tomorrow).
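
For completeness, the loading itself is just the standard pattern (a sketch; BuildPerspectiveMatrix is a hypothetical stand-in for my actual matrix code):

// load a hand-built projection matrix; glLoadMatrixf expects column-major order
float proj[16];
BuildPerspectiveMatrix(proj, fovY, aspect, zNear, zFar); // hypothetical helper filling a column-major matrix
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(proj);
glMatrixMode(GL_MODELVIEW);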

By "forcing" depth to be zero do you mean setting the z and w components of gl_Position in the vertex shader to zero?
Okay, so I did mean the code that sets up the projection matrix - I was just looking to make sure it maps to your depth range of 0.0 to 1.0 - but if that's always worked and you're confident in it, fair enough.

I meant set gl_FragDepth to 0 in the fragment shader, but certainly you could also try fiddling with gl_Position (although with w as 1) to try and beat other oddities to the surface. Really these won't solve anything; they'll just allow you to spot inconsistencies and maybe pick up a clue.
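
Concretely, something like this in the fragment shader (a sketch along the lines of your existing shader):

//FRAGMENT - force every fragment to the nearest possible depth
varying vec3 color;

void main()
{
    gl_FragData[0] = vec4(color, 1.0);
    gl_FragDepth = 0.0; // with GL_LEQUAL and a depth buffer cleared to 1.0, this should pass everywhere
}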
