Herr_O

FBO depth buffer


Hi! I've decided to render my shadow map with the help of an FBO instead of using the window. I think I have set up my framebuffer and depth buffer correctly, but it doesn't work. My code looks like this:

Setup:____________________________________________________________________
// Create depth texture 
glGenTextures(1, &textureArray[0]);
glBindTexture(GL_TEXTURE_2D, textureArray[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

// Filter and clamp
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

// Create frame buffer object and depth buffer
glGenFramebuffersEXT(1, &frameBuffer);
glGenRenderbuffersEXT(1, &depthRenderBuffer);

// initialize depth renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRenderBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,GL_DEPTH_COMPONENT24, depthWidth, depthHeight);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,GL_DEPTH_ATTACHMENT_EXT,GL_RENDERBUFFER_EXT, depthRenderBuffer);

checkFBOStatus();
Rendering:________________________________________________________________
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRenderBuffer);

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, textureArray[0], 0);

/* Render stuff to texture */

...

/* Go back to window rendering */
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
glBindRenderbufferEXT( GL_RENDERBUFFER_EXT, 0 );
I can't figure out what's wrong; I've used examples #3 and #7 at oss.sgi.com. If I understand this correctly, I shouldn't need to copy my frame to textureArray[0] with glCopyTexSubImage2D, since I'm rendering directly to the FBO? I think my depth buffer ends up empty, because my scene gets shadowed all over. Thanks in advance!

[Edited by - Herr_O on October 5, 2006 8:16:54 AM]

1) When rendering to the depth attachment you don't actually need to bind the depth buffer to the current context each time (unless you want to use a different depth texture), as it is shared between all the colour buffers.

2) When you just want to render to the depth buffer, you need to tell OpenGL not to use the colour buffers, like so (this is Java, but you should get the idea):

GL11.glDrawBuffer(GL11.GL_NONE);
GL11.glReadBuffer(GL11.GL_NONE);
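Translated out of LWJGL into the C the original post uses, the depth-only pass might look like the sketch below. This is an illustration only: `frameBuffer` and `textureArray[0]` are the names from the original post, and the viewport size is assumed to match the 512×512 depth texture.

```c
/* Sketch of a depth-only shadow pass, assuming the FBO and depth texture
   from the original post have already been created. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBuffer);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, textureArray[0], 0);

/* No colour attachment, so tell GL not to draw or read colour at all;
   without this the FBO is reported incomplete by many drivers. */
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

glViewport(0, 0, 512, 512);
glClear(GL_DEPTH_BUFFER_BIT);

/* ... render occluders here ... */

/* Restore window rendering. */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDrawBuffer(GL_BACK);
glReadBuffer(GL_BACK);
```

Note this needs a live GL context with EXT_framebuffer_object, so it won't run standalone.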

You don't need to bother setting up a depth renderbuffer, as you're using the texture for that. Try specifying GL_DEPTH_COMPONENT rather than GL_DEPTH_COMPONENT24 (it gives your video card more flexibility in choosing a format).

Make sure you use

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);

to set up your texture.
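Putting this advice together with the original post, the whole setup might be sketched as below. This is an assumption-laden rewrite, not the poster's actual fix: it reuses the names `textureArray`, `frameBuffer` and `checkFBOStatus` from the original post, drops the renderbuffer entirely, and swaps GL_CLAMP for GL_CLAMP_TO_EDGE (a common change to avoid border-sampling artefacts).

```c
/* Sketch: depth-texture-only FBO setup, no depth renderbuffer. */
glGenTextures(1, &textureArray[0]);
glBindTexture(GL_TEXTURE_2D, textureArray[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

/* Depth-comparison setup for shadow lookups. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);

/* The FBO only needs the depth texture attached. */
glGenFramebuffersEXT(1, &frameBuffer);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBuffer);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, textureArray[0], 0);
checkFBOStatus();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```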

Quote:
Original post by coordz
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);

Is this line required? If so what does it do exactly?

Share this post


Link to post
Share on other sites
If you're using OpenGL's fixed-function pipeline, you need to say where to put the result of the depth comparison. By "where" I mean which of the (R,G,B,A) channels of the texture value that will be combined with the input fragment's value. By using

glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);

the depth comparison result is put in the RGB channels. You can also use GL_ALPHA (the result goes in the alpha channel) or GL_INTENSITY (the result goes in all channels, i.e. RGBA).
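For the fixed-function case, the mode choice pairs with the texture environment. A minimal sketch (`shadowTex` is a hypothetical texture name, not from the thread): with GL_LUMINANCE the 0/1 comparison result lands in RGB, so GL_MODULATE darkens the fragment's colour in shadowed areas while leaving its alpha untouched; GL_INTENSITY would scale alpha as well.

```c
/* Bind the depth texture and route the comparison result into RGB. */
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);

/* Multiply the incoming fragment colour by the 0/1 shadow result. */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
```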

Quote:
Original post by zimerman
imho GL_NEAREST is the one and only filter supported for depth textures...

...so change filtering from GL_LINEAR to GL_NEAREST and you *should* be fine ;)

cheers
zimerman


WHAT? GL_LINEAR on NVIDIA gives you PCF for free, as it's hardware accelerated.

This is completely wrong. Linear filtering filters per pixel, but that is the wrong thing to do on shadow maps. Imagine a situation where 4 texels are queried: 2 of them are in the light (hence their value is smaller than the value in the shadow map) and 2 are in shadow (values larger). If you simply do a linear filter, the shadow-map values are filtered together, giving the impression there is an object at a point where there is none, and the comparison result is wrong. PCF only works because you test each of the four texels individually (1.0 for passed, 0.0 for not) and then average the results. That is precise, whereas linear filtering causes artefacts.

Quote:
Original post by RPTD
This is completely wrong. Linear filtering filters per pixel, but that is the wrong thing to do on shadow maps. Imagine a situation where 4 texels are queried: 2 of them are in the light (hence their value is smaller than the value in the shadow map) and 2 are in shadow (values larger). If you simply do a linear filter, the shadow-map values are filtered together, giving the impression there is an object at a point where there is none, and the comparison result is wrong. PCF only works because you test each of the four texels individually (1.0 for passed, 0.0 for not) and then average the results. That is precise, whereas linear filtering causes artefacts.


From the NVIDIA website:
In contrast, hardware shadow mapping uses a bilinear filter on top of the usual PCF step, resulting in smooth grey values even if the viewer zooms in on the shadowed regions. As you zoom in, hardware shadow mapping is therefore able to deliver arbitrarily higher quality without taking additional samples and while maintaining the same frame rate.


Not sure what your point is, but NVIDIA's site says bilinear PCF is done with GL_LINEAR. If you're saying this just samples the 4 texels around the current pixel, then yes, you're right, but it's still a form of PCF.

I've got this from the NVIDIA website too. They themselves say there that LINEAR does the wrong filtering and that proper PCF is required.

Lovely when one and the same entity says two different things :D

I've got the depth buffer working now; sorry for the late reply. Regarding PCF: I can turn linear filtering on and off in my app. When it's turned on, I get a strange artifact on one texel's edge. Is it supposed to look like this, or is there some work-around? Is it "bad" to use linear filtering on the shadows because of this?



[Screenshot: red arrows pointing at the problem.]

