Depth texture + FBO = problem...

Started by iNsAn1tY · 9 comments, last by zedz 17 years ago
Hi, I'm trying to attach a 16-bit depth texture to an FBO, but I keep getting GL_FRAMEBUFFER_UNSUPPORTED_EXT for the FBO status. The depth texture is created with the following parameters:

internalformat = GL_DEPTH_COMPONENT16
format = GL_DEPTH_COMPONENT
type = GL_UNSIGNED_INT

There's another texture already attached as the first colour attachment, which has the following parameters:

internalformat = GL_RGBA8
format = GL_RGBA
type = GL_UNSIGNED_BYTE

Neither of the textures causes any OpenGL errors when it's created. Am I doing something silly like using the wrong type, or is this something deeper? I'd rather not have to hack my way around this problem, so if anyone can shed any light on it, I'll be very thankful.

PS. My card is an NVIDIA GeForce 7800, with the latest 93.71 drivers.
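For reference, a minimal sketch of roughly how the setup described above might look. This is not the poster's actual code: the names colourTex, depthTex and fbo and the 1024x1024 size are illustrative, and a current GL context with EXT_framebuffer_object loaded (e.g. via GLee) is assumed.

/* illustrative sketch only - names and sizes are placeholders */
GLuint colourTex, depthTex, fbo;
const int w = 1024, h = 1024;

/* RGBA8 colour texture */
glGenTextures(1, &colourTex);
glBindTexture(GL_TEXTURE_2D, colourTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* 16-bit depth texture, with the parameters described in the post */
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, w, h, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* attach both to the FBO and check completeness */
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colourTex, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depthTex, 0);

/* this is where GL_FRAMEBUFFER_UNSUPPORTED_EXT is being reported */
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);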
A little more on this. I dug out a program I wrote last year which used FBOs. The code below, which creates a 24-bit depth render buffer, works just fine. Could it simply be that a depth texture is an unacceptable attachment?

glGenFramebuffersEXT(1, &lFbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, lFbo);
glGenRenderbuffersEXT(1, &lRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, lRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 1024, 1024);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, lRb);
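A small, hedged addition to the snippet above: checking the result after the attachment. The status values are the standard ones from EXT_framebuffer_object.

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
switch (status)
{
    case GL_FRAMEBUFFER_COMPLETE_EXT:
        /* the FBO is usable as-is */
        break;
    case GL_FRAMEBUFFER_UNSUPPORTED_EXT:
        /* the driver rejects this combination of formats */
        break;
    default:
        /* one of the GL_FRAMEBUFFER_INCOMPLETE_*_EXT values */
        break;
}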
I think it's because NVIDIA only allows the depth buffer in an FBO to have the same bit-depth as the main frame buffer at the moment. Be careful though: ATI cards, unless they've added support for 24-bit, only work with 16-bit depth in FBOs, to make it extra confusing.
You know you don't need a renderbuffer for a depth map?

What about GL_CLAMP_TO_EDGE for the textures? I seem to remember that FBOs in some cases need that set... but I could be wrong. I use GL_FLOAT for my type, but I don't think that matters either.

I would ditch the RB if you don't need it. I know that 16-bit works, because I've used it myself with an RB before.
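A sketch of that suggestion, assuming the 16-bit depth texture from the first post (depthTex is a placeholder name); the parameters would be set before the texture is attached to the FBO:

/* illustrative only - set wrap and filter state on the depth texture */
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);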
Quote:Original post by BiGCyC
I think it's because NVIDIA only allows the depth buffer in an FBO to have the same bit-depth as the main frame buffer at the moment. Be careful though: ATI cards, unless they've added support for 24-bit, only work with 16-bit depth in FBOs, to make it extra confusing.


Recent drivers seem to support the depth_stencil extension, so in theory at least 24-bit depth is supported on ATI cards now, along with 8-bit stencil...

(In practice I've not tried it yet, and I don't know of anyone who has... maybe tomorrow/later today...)
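If it helps, a rough, untested sketch of the packed depth/stencil route being referred to (EXT_packed_depth_stencil). The name rb is a placeholder, and the FBO is assumed to be bound already:

/* illustrative: one renderbuffer serving as both depth and stencil */
GLuint rb;
glGenRenderbuffersEXT(1, &rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH24_STENCIL8_EXT, 1024, 1024);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, rb);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, rb);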
Quote:Original post by MARS_999
You know you don't need a renderbuffer for a depth map?

What about GL_CLAMP_TO_EDGE for the textures? I seem to remember that FBOs in some cases need that set... but I could be wrong. I use GL_FLOAT for my type, but I don't think that matters either.

I would ditch the RB if you don't need it. I know that 16-bit works, because I've used it myself with an RB before.

Yeah, I know the render buffer works. The point I was making with my second post was that while the render buffer works, a depth texture doesn't, which is puzzling (not to mention deeply annoying).

GL_CLAMP_TO_EDGE is set when I use the texture through CgFX, but I can't see why it should affect rendering to the texture. I changed to using GL_FLOAT, which made no difference, unfortunately.

Quote:Original post by BiGCyC
I think it's because NVIDIA only allows the depth buffer in an FBO to have the same bit-depth as the main frame buffer at the moment. Be careful though: ATI cards, unless they've added support for 24-bit, only work with 16-bit depth in FBOs, to make it extra confusing.

Not sure what bit-depth of depth buffer I'm using; the system is running through a GLUT testbed application, so I guess it could be anything... The program that uses the render buffer actually acquires its RC through Win32, so I know for a fact it's using a 32-bit depth buffer there. Perhaps I'll move everything over to Win32 (which is planned anyway, and shouldn't take long) and see if that makes any difference...
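One quick way to check what the GLUT-created context actually provides (variable names are illustrative; both queries are standard GL/GLUT):

GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);            /* depth bits of the current context */
int glutDepthBits = glutGet(GLUT_WINDOW_DEPTH_SIZE); /* the same query, via GLUT */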
Quote:Original post by iNsAn1tY
Not sure what bit-depth of depth buffer I'm using; the system is running through a GLUT testbed application, so I guess it could be anything... The program that uses the render buffer actually acquires its RC through Win32, so I know for a fact it's using a 32-bit depth buffer there. Perhaps I'll move everything over to Win32 (which is planned anyway, and shouldn't take long) and see if that makes any difference...

Turns out it was this. I switched over to using Win32 with a 32-bit depth buffer, and it seems to be working now. Thanks for the replies.
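For anyone hitting the same thing, roughly the relevant part of a Win32 pixel-format setup requesting a 32-bit depth buffer. A sketch only; hdc is assumed to be a valid device context, and error checking is omitted:

/* illustrative Win32 pixel format requesting a 32-bit depth buffer */
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 32;   /* ask for a 32-bit depth buffer, as described above */
int format = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, format, &pfd);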
Quote:Original post by iNsAn1tY
Turns out it was this. I switched over to using Win32 with a 32-bit depth buffer, and it seems to be working now. Thanks for the replies.

Oh. It wasn't that. A bug was causing a different FBO problem, and when I resolved that, this one came back.

If you're rendering to a depth texture (not a render buffer, an actual texture) using an FBO on similar NVIDIA hardware (GeForce 7800 GTX), could you post here?
Hi, I'm messing with shadow mapping just now, so I have some code handy if it helps.

As you can see, I just use GL_DEPTH_COMPONENT as the texture format, so I take it that the current depth buffer setting is used (mine is 24-bit).

The code shows the essentials for creating/freeing everything and how to bind the FBO for rendering. You probably don't need all of this, but I hope some of it helps you, and others.


//-------------------------------------
// create the depth texture - only once
//-------------------------------------
glEnable(GL_TEXTURE_2D);
glGenTextures(1, @DepthTexture);
glBindTexture(GL_TEXTURE_2D, DepthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, aWidth, aHeight, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);

//-------------------------------------
// init fbo with depth texture - only once
//-------------------------------------
glGenFramebuffersEXT(1, @fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, DepthTexture, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

//-------------------------------------
// bind the fbo for rendering - every frame
//-------------------------------------
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

//-------------------------------------
// set up viewport etc. and draw stuff - every frame
//-------------------------------------
Dorender();

//-------------------------------------
// unbind the fbo - every frame
//-------------------------------------
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

//-------------------------------------
// delete the fbo and depth texture - only once
//-------------------------------------
glDeleteFramebuffersEXT(1, @fbo);
glDeleteTextures(1, @DepthTexture);
Hey Fig, you don't need to do glCopyTexImage2D when you set up an FBO or use one... just thought I would point that out.
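For later readers: the usual alternative is to allocate the depth texture with glTexImage2D and a null data pointer, so nothing is copied from the framebuffer. A C-style sketch, reusing aWidth/aHeight from the code above (GL_DEPTH_COMPONENT24 is a sized format; the unsized GL_DEPTH_COMPONENT also works):

/* defines (allocates) the depth texture without copying any pixels */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, aWidth, aHeight, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);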

