FBO depth buffer

Started by Herr_O · 10 comments, last by Herr_O 17 years, 5 months ago
Hi! I've decided to render my shadow map with the help of an FBO instead of using the window. I think I have set up my framebuffer and depth buffer correctly, but it doesn't want to work. My code looks like this:

Setup:____________________________________________________________________

// Create depth texture 
glGenTextures(1, &textureArray[0]);
glBindTexture(GL_TEXTURE_2D, textureArray[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

// Filter and clamp
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

// Create frame buffer object and depth buffer
glGenFramebuffersEXT(1, &frameBuffer);
glGenRenderbuffersEXT(1, &depthRenderBuffer);

// Initialize depth renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRenderBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, depthWidth, depthHeight);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthRenderBuffer);

checkFBOStatus();
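The checkFBOStatus() call isn't shown in the post; here is a minimal sketch of what such a check might look like, assuming it simply queries the completeness of whichever FBO is currently bound and prints the result (the body below is hypothetical):

#include <stdio.h>

// Hypothetical body for the checkFBOStatus() call above: query the
// completeness of the currently bound framebuffer and report it.
void checkFBOStatus(void)
{
    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    switch (status)
    {
    case GL_FRAMEBUFFER_COMPLETE_EXT:
        printf("FBO complete\n");
        break;
    case GL_FRAMEBUFFER_UNSUPPORTED_EXT:
        printf("FBO format unsupported by this driver\n");
        break;
    default:
        printf("FBO incomplete, status = 0x%04X\n", status);
        break;
    }
}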
Rendering:________________________________________________________________

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRenderBuffer);

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, textureArray[0], 0);

/* Render stuff to texture */

...

/* Go back to window rendering */
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
glBindRenderbufferEXT( GL_RENDERBUFFER_EXT, 0 );
I can't figure out what's wrong; I've followed examples #3 and #7 at oss.sgi.com, if I understand them correctly. I shouldn't need to copy the frame into textureArray[0] with glCopyTexSubImage2D, since I'm rendering directly to the FBO, right? I think my depth buffer ends up empty, because my scene comes out shadowed all over. Thanks in advance!

[Edited by - Herr_O on October 5, 2006 8:16:54 AM]
_____________________________
www.OddGames.com | www.OddGames.com/daniel
What video card are you using? If it's ATI, don't use a 24-bit depth format unless the card supports it; IIRC anything older than the X1000 series doesn't. Also change GL_CLAMP to GL_CLAMP_TO_EDGE.
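For reference, the suggested wrap-mode change would just swap the constants in the setup code above (a sketch of the change only, nothing else altered):

// Suggested replacement for the wrap parameters in the texture setup:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);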
1) When rendering to the depth attachment you don't actually need to re-attach the depth texture each time (unless you want to use a different depth texture), as it is shared between all the colour buffers.

2) When you just want to render to the depth buffer you need to tell OpenGL not to use the colour buffers, like so (this is Java, but you should get the idea):

GL11.glDrawBuffer(GL11.GL_NONE);
GL11.glReadBuffer(GL11.GL_NONE);
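In plain C against the code from the original post, the same advice might look like this (a sketch only; frameBuffer and the 512x512 size are taken from the post):

// Depth-only pass into the FBO: with no colour attachment, disabling the
// draw and read buffers keeps the framebuffer complete.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBuffer);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

glViewport(0, 0, 512, 512);
glClear(GL_DEPTH_BUFFER_BIT);

/* ... render occluders from the light's point of view ... */

// Restore window rendering and the default colour buffers.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDrawBuffer(GL_BACK);
glReadBuffer(GL_BACK);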
You don't need to bother setting up a depth renderbuffer, as you're using the texture for that. Also try specifying GL_DEPTH_COMPONENT rather than GL_DEPTH_COMPONENT24 (it gives your video card more flexibility in choosing a format).

Make sure you use

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);

to set up your texture.
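Putting those suggestions together, the texture creation from the original post might end up looking something like this (a sketch, untested; textureArray, depthWidth and depthHeight are the poster's own variables):

// Depth texture for shadow mapping: generic GL_DEPTH_COMPONENT internal
// format plus the depth-compare parameters suggested above.
glGenTextures(1, &textureArray[0]);
glBindTexture(GL_TEXTURE_2D, textureArray[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, depthWidth, depthHeight,
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Enable hardware depth comparison when the texture is sampled.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);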
Quote:Original post by coordz
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);

Is this line required? If so, what does it do, exactly?

Life's like a Hydra... cut off one problem just to have two more popping out.
Leader and Coder: Project Epsylon | Drag[en]gine Game Engine

If you're using OpenGL's fixed-function pipeline you need to say where to put the result of the depth comparison. By "where" I mean whereabouts in the (R,G,B,A) texture value that gets combined with the incoming fragment's value. By using

glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);

the depth comparison result is put in the RGB channels. You can also use GL_ALPHA (the result goes in the alpha channel) or GL_INTENSITY (the result goes in all channels, i.e. RGBA).
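As a quick side-by-side of the three modes (only one would actually be set; which you want depends on how your texture environment combines the result):

// Where the depth comparison result lands when the texture is sampled:
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);   // result in R, G and B
//glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA);     // result in alpha only
//glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY); // result in R, G, B and A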
imho GL_NEAREST is the one and only filter supported for depth textures...

...so change filtering from GL_LINEAR to GL_NEAREST and you *should* be fine ;)

cheers
zimerman
Quote:Original post by zimerman
imho GL_NEAREST is the one and only filter supported for depth textures...

...so change filtering from GL_LINEAR to GL_NEAREST and you *should* be fine ;)

cheers
zimerman


WHAT? GL_LINEAR on Nvidia gives you PCF for free, as it's hardware accelerated.
This is completely wrong. Linear filtering filters the stored depth values per pixel, which is the wrong thing to do for shadow maps. Imagine a lookup covering 4 texels: for 2 of them the queried point is in the light (its depth is smaller than the value stored in the shadow map) and for 2 it is in shadow (its depth is larger). If you simply linear-filter the stored depths first, you blend them into a value that suggests an occluder at a point where there is none, and the comparison gives the wrong result. PCF only works because you "test" each of the four texels first (1.0 for passed, 0.0 for failed) and then average the results. That is precise, whereas linear filtering of the depths causes artefacts.
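A tiny CPU-side sketch of the two orders of operations being argued about (hypothetical helpers; depth[] holds the four fetched shadow-map texels and fragDepth is the fragment's depth in light space):

/* PCF: compare each texel against the fragment depth, then average. */
float pcf_then_average(const float depth[4], float fragDepth)
{
    float sum = 0.0f;
    for (int i = 0; i < 4; ++i)
        sum += (fragDepth <= depth[i]) ? 1.0f : 0.0f;  /* test each texel */
    return sum * 0.25f;                                /* average the results */
}

/* Plain filtering: blend the depths first, then do a single hard test. */
float average_then_test(const float depth[4], float fragDepth)
{
    float avg = (depth[0] + depth[1] + depth[2] + depth[3]) * 0.25f;
    return (fragDepth <= avg) ? 1.0f : 0.0f;
}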

Life's like a Hydra... cut off one problem just to have two more popping out.
Leader and Coder: Project Epsylon | Drag[en]gine Game Engine

Quote:Original post by RPTD
This is completely wrong. Linear filtering filters the stored depth values per pixel, which is the wrong thing to do for shadow maps. Imagine a lookup covering 4 texels: for 2 of them the queried point is in the light (its depth is smaller than the value stored in the shadow map) and for 2 it is in shadow (its depth is larger). If you simply linear-filter the stored depths first, you blend them into a value that suggests an occluder at a point where there is none, and the comparison gives the wrong result. PCF only works because you "test" each of the four texels first (1.0 for passed, 0.0 for failed) and then average the results. That is precise, whereas linear filtering of the depths causes artefacts.


From the Nvidia website:
In contrast, hardware shadow mapping uses a bilinear filter on top of the usual PCF step, resulting in smooth grey values even if the viewer zooms in on the shadowed regions. As you zoom in, hardware shadow mapping is therefore able to deliver arbitrarily higher quality without taking additional samples and while maintaining the same frame rate.


Not sure what your point is, but Nvidia's site says bilinear PCF is done with GL_LINEAR. Now, if you're saying this just samples the 4 texels around the current pixel, then yeah, you're right, but it's still a form of PCF.

This topic is closed to new replies.
