Rendering to Depth Buffer using GLSL

I'm trying to render my scene from the light's POV to a depth texture using GLSL. I have set up the depth texture as required and specified GL_NONE for the draw buffers. For testing purposes, when I use an RGBA8 texture along with a renderbuffer as the depth buffer on the FBO and render the scene, the rendered scene shows up in the RGBA8 texture properly. However, when I try binding only the depth texture, rendering the scene, and then viewing the rendered depth texture, all I get is white output. In the latter case I am using slightly different GLSL code (as the draw buffers are disabled).
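For context, a depth-only FBO setup along these lines is what I mean (a sketch with assumed names and a 24-bit depth format, not necessarily my exact code):

GLuint depthTex, fbo;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, shadowW, shadowH,
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE); // no color attachment to draw to
glReadBuffer(GL_NONE);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle the incomplete framebuffer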
Here are my GLSL shaders:
Vertex shader


uniform mat4 worldViewProjMat;
attribute vec3 vertPos;
void main(void) {
    gl_Position = worldViewProjMat * vec4(vertPos, 1.0);
}



Pixel shader


void main(void) {
    discard;
}


I am quite sure this is not how I should render to the depth buffer. Can anyone enlighten me as to what the fragment shader should do in the latter case, since I do not want any color output?


I tweaked my pixel shader a bit to do:

void main(void) {
    gl_FragDepth = 0.5;
}


This rendered the scene as expected, with the constant depth value in the depth buffer (in other words, it showed some gray pixels outlining the geometry I was rendering from the light's point of view).

What I do not understand is: if the automatic depth test and depth write should be writing those depth values to the texture anyway, why am I getting a value of 1.0 everywhere except when I explicitly output the depth value from the fragment program?
I certainly do not plan to output depth from the fragment program, nor compare the values manually, and lose all the early-Z and hierarchical-Z optimizations.
I even adjusted the near and far planes (400 units apart), but it still outputs white.


EDIT:

One of my mistakes was using 'discard', which throws the fragment away so no depth is written. So my pixel shader now looks like:

void main(void) {
    // do nothing???
}


Thanks for reading,
obhi
I added a depth bias in my fragment shader, and it kind of showed me the gray output of the depth texture.
Before that I checked that gl_FragCoord.z was within [0,1], which it was for the pixels that got drawn. That made sense, and when I added a small bias I was able to darken the pixels enough to give me the depth image.
I was hoping glPolygonOffset would do that trick for me (and hence I wouldn't need to touch gl_FragDepth), but I tried it and it failed. Will look into it further.
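For reference, the usual way to apply such an offset during the depth pass looks like this (the factor/units values are just common starting points, not something I have verified for this scene):

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f); // factor scales with depth slope, units is a constant bias
// ... render the depth pass ...
glDisable(GL_POLYGON_OFFSET_FILL);

Note that the offset it writes is on the order of the depth buffer's precision, so it won't visibly darken a visualization the way an explicit gl_FragDepth tweak does.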

Still wondering what I should do with the fragment shader: since it will do nothing, how do I go about it? Should I just not link one into the program at all?

Any help will be much appreciated.
obhi
Sounds like you want to render to a depth texture without a color buffer
http://www.opengl.org/wiki/GL_EXT_framebuffer_object#Depth_only

As for the fragment shader, you must always write to gl_FragColor.
And no, you can't call discard; that just stops fragment processing, so the depth buffer doesn't get updated.
Basically what the other guy said:

Discard will not write ANY pixel data (color or depth).

Once you draw the texture it will still be all white, because almost all the numbers in the depth buffer are 0.9 to 1.0. So take a screenshot of the white buffer, put it in Photoshop etc., and use the contrast slider; you will start to see your depth buffer.
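(An in-shader equivalent of that contrast trick, as a sketch; depthMap and texCoord are assumed names:)

uniform sampler2D depthMap;
in vec2 texCoord;
void main(void)
{
    float d = texture2D(depthMap, texCoord).r;
    gl_FragColor = vec4(vec3((d - 0.9) * 10.0), 1.0); // stretch [0.9, 1.0] out to [0, 1]
}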



Thanks for that info and the link, V-man. I figured the values were very near 1, otherwise I wouldn't be getting a white texture, so I subtracted a random value (0.3) from gl_FragCoord.z to get something as output. Anyway, that was just for testing's sake. But I did not know that gl_FragColor needs to be written to.
That link was very helpful.

Thanks again.
obhi
I have got the depth buffer rendering working, here is a screenshot:
[screenshot: the depth buffer rendered onto an on-screen quad]

To render the depth buffer (i.e. the quad with the depth texture applied), I am doing this in the fragment shader (for the quad):



uniform sampler2D depthMap;
in vec2 texCoord;
void main(void)
{
    vec4 tex = (texture2D(depthMap, texCoord) * 1000.0) - vec4(999.0);
    gl_FragColor = tex;
}


I tested with some magic numbers and came to the understanding that the depth value only varies at about the third decimal place (my zNear was 0.1, which caused this).
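A less magic-number way to visualize it is to linearize the depth first (a sketch, assuming a standard GL perspective projection; the zNear/zFar uniforms are additions of mine):

uniform sampler2D depthMap;
uniform float zNear; // e.g. 0.1
uniform float zFar;  // e.g. 400.0
in vec2 texCoord;
void main(void)
{
    float d = texture2D(depthMap, texCoord).r; // window-space depth in [0,1]
    float zEye = 2.0 * zNear * zFar / (zFar + zNear - (2.0 * d - 1.0) * (zFar - zNear));
    gl_FragColor = vec4(vec3((zEye - zNear) / (zFar - zNear)), 1.0); // normalize for display
}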
Having said that, I am not getting correct shadows when doing my light pass with the shadow map; I guess it has to do with the projection I am using.
I am doing pretty straightforward shadow mapping with a texture2D lookup plus biasing. I will do away with the bias and check (my bias is 0.0005, which might tinker with the image it is rendering).
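For reference, the kind of lookup-plus-bias I mean (a sketch with assumed names; only the 0.0005 bias is my actual value):

uniform sampler2D shadowMap;
in vec4 lightSpacePos; // vertex position transformed by the light's view-projection matrix
float shadowFactor(void)
{
    vec3 p = lightSpacePos.xyz / lightSpacePos.w; // NDC coords in [-1,1]
    p = p * 0.5 + 0.5;                            // remap to texture/depth range [0,1]
    float stored = texture2D(shadowMap, p.xy).r;  // depth the light saw at this texel
    return (p.z - 0.0005 > stored) ? 0.0 : 1.0;   // 0 = in shadow
}
void main(void)
{
    gl_FragColor = vec4(vec3(shadowFactor()), 1.0); // visualize the shadow term
}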

Thanks for all the help.
obhi
I figured out quite a few things:
1. Using a DirectX-style projection matrix in an OpenGL app will, as expected, kill some precision in the z coordinates, since D3D maps clip-space z to [0,1] while GL expects [-1,1]. Multiplying the projection matrix by a matrix like this (row major, row vectors):
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 2 0 ]
[ 0 0 -1 1 ]
(i.e. z' = 2z - w) rectifies the precision problem; see the sketch after this list.

2. I am not sure if this will work with every version of GL, but writing a fragment shader like:

void main() {
}

...does output the depth data (with depth testing) even though no color output is provided (at least with OpenGL 3.1+).

3. A near plane of 0.1 is a bad idea if rendering the depth buffer as an on-screen quad has to be taken into consideration, since almost all depth values end up crowded just below 1.0. Using 1 as the near plane helps, but it should of course depend on the scene geometry, light position, etc.
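Here is the sketch mentioned in point 1: the same z-range fix applied in place to a row-major, row-vector projection matrix P (a hypothetical helper, not lifted from my code):

void d3dToGlProjection(float P[16])
{
    int i;
    // New z column = 2 * (old z column) - (w column),
    // which remaps clip-space z from D3D's [0,1] to GL's [-1,1].
    for (i = 0; i < 4; ++i)
        P[i*4 + 2] = 2.0f * P[i*4 + 2] - P[i*4 + 3];
}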

With these modifications I am able to visualize the depth buffer without any ugly truncation of digits. However, my shadows are still not working. I am getting a shadowed patch far away from the area that should actually be shadowed, as shown:

[screenshot: a shadowed patch appearing far from the chair that should be casting it]


The light is placed behind the chair, as is visible in the depth image. However, the shadow doesn't appear to be correct. I will have to explore this further.

Any clue as to why that patch might appear (texture wrapping??) would be helpful.

Thanks,
obhi
I fixed it. It was due to not clamping the texture (I was using wrap)!!
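For anyone else hitting this, the fix is along these lines (depthTex is an assumed name for the shadow map texture):

glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);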
As a quick aside, you can view the depth texture easily via the fixed pipeline by temporarily switching off the texture compare mode, e.g.:


glBindTexture(GL_TEXTURE_2D, mTexIdShadowDepthRender);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE); // Turn off compare mode to draw.
glBegin(GL_TRIANGLE_STRIP);
glTexCoord2d(0,1); glVertex2d(x, y);
glTexCoord2d(0,0); glVertex2d(x, y+size);
glTexCoord2d(1,1); glVertex2d(x+size, y);
glTexCoord2d(1,0); glVertex2d(x+size, y+size);
glEnd();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB); // Reinstate compare mode.
