floatingwoods

GL_DEPTH24_STENCIL8_EXT not working correctly on NVidia and Intel; ATI doing fine


Hi,

 

I am having compatibility issues between graphics cards when using FBOs with stencil buffers: my code only works correctly on ATI cards; NVidia (GeForce GT 620) and Intel (HD Graphics 4000) both fail (they behave as if the stencil buffer were not enabled). I made sure I have the latest drivers. Any idea what I am doing wrong, or what is going on?

Here is my code:

glGenFramebuffersEXT(1, &_fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, _fbo);

/* packed depth-stencil renderbuffer, attached at the depth attachment point */
glGenRenderbuffersEXT(1, &_fboDepthBuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, _fboDepthBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH24_STENCIL8_EXT, resX, resY);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, _fboDepthBuffer);

/* colour renderbuffer */
glGenRenderbuffersEXT(1, &_fboPictureBuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, _fboPictureBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB, resX, resY);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, _fboPictureBuffer);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, _fbo);

The FBO functionality itself works fine on all tested graphics cards; only the stencil buffer functionality fails on the above-mentioned NVidia and Intel cards.
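A quick sanity check for a setup like this is to query framebuffer completeness once all attachments are in place (a minimal sketch using the same EXT entry points):

/* Anything other than GL_FRAMEBUFFER_COMPLETE_EXT means the FBO
   cannot be rendered to as configured. Needs <stdio.h> for printf. */
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
    printf("FBO incomplete, status 0x%04X\n", status);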

 

Thanks for any insight.


GL_EXT_framebuffer_object doesn't support packed depth-stencil formats; you need to check for GL_EXT_packed_depth_stencil before you can use them (and if you don't have that extension, you can't use them at all).

 

Any reason why you're not using GL_ARB_framebuffer_object, which does?
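Either way, a minimal extension check in a GL 2.x-style context might look like this (a sketch; note that a bare strstr() on the extension string can false-positive on longer extension names that contain the one you want):

#include <string.h>

/* Returns 1 if 'name' appears as a whole space-delimited token in the
   extension string returned by glGetString(GL_EXTENSIONS). */
static int hasExtension(const char* extensions, const char* name)
{
    size_t len = strlen(name);
    const char* p = extensions;
    while ((p = strstr(p, name)) != NULL)
    {
        if ((p == extensions || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}

/* usage, once a context is current:
     const char* ext = (const char*)glGetString(GL_EXTENSIONS);
     if (!hasExtension(ext, "GL_EXT_packed_depth_stencil"))
         ...fall back to separate depth and stencil renderbuffers...  */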


Thank you mhagain,

 

The code is quite old... I have now switched to ARB_framebuffer_object. I basically left everything untouched, except that I check for the availability of ARB_framebuffer_object and then bind the functions to the same names, minus the final EXT.
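For reference, that rebinding step looks roughly like this (a sketch assuming Windows and the PFNGL... typedefs from <GL/glext.h>; on other platforms one would use glXGetProcAddress or a loader library such as GLEW):

/* After a context is current; the ARB/GL 3.0 entry points share the
   signatures of their EXT counterparts, just without the suffix. */
PFNGLGENFRAMEBUFFERSPROC glGenFramebuffers = (PFNGLGENFRAMEBUFFERSPROC)wglGetProcAddress("glGenFramebuffers");
PFNGLBINDFRAMEBUFFERPROC glBindFramebuffer = (PFNGLBINDFRAMEBUFFERPROC)wglGetProcAddress("glBindFramebuffer");
PFNGLGENRENDERBUFFERSPROC glGenRenderbuffers = (PFNGLGENRENDERBUFFERSPROC)wglGetProcAddress("glGenRenderbuffers");
PFNGLBINDRENDERBUFFERPROC glBindRenderbuffer = (PFNGLBINDRENDERBUFFERPROC)wglGetProcAddress("glBindRenderbuffer");
PFNGLRENDERBUFFERSTORAGEPROC glRenderbufferStorage = (PFNGLRENDERBUFFERSTORAGEPROC)wglGetProcAddress("glRenderbufferStorage");
PFNGLFRAMEBUFFERRENDERBUFFERPROC glFramebufferRenderbuffer = (PFNGLFRAMEBUFFERRENDERBUFFERPROC)wglGetProcAddress("glFramebufferRenderbuffer");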

I have also checked that my card exposes EXT_packed_depth_stencil. It does, but the problem remains on the NVidia and Intel cards. What could be the reason?


Not sure why it would happen if the extension is exported; I've never actually used EXT_framebuffer_object (either ARB was already available by the time I got interested in FBOs in OpenGL, or else I needed to support hardware that had neither, so I used glCopyTexSubImage2D).


Thanks to both of you.

 

I found the problem. Changing this line:

glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, _fboDepthBuffer);

into this:

glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER_EXT, _fboDepthBuffer);

does the trick. The question now is why it worked on ATI cards with the wrong argument?!
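A side note for anyone still on the pure EXT extensions: GL_DEPTH_STENCIL_ATTACHMENT only exists in ARB_framebuffer_object / GL 3.0. Under EXT_framebuffer_object plus EXT_packed_depth_stencil, the equivalent (a sketch reusing the variable names from the code above) is to attach the one packed renderbuffer to both attachment points:

/* EXT-only equivalent: the single packed depth-stencil renderbuffer
   backs both the depth and the stencil attachment points. */
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, _fboDepthBuffer);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, _fboDepthBuffer);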


"The question now is why it worked on ATI cards with the wrong argument?!"

 

With ATI/AMD my first instinct would be to say "bad driver". It's normally the other way around (things that should work don't), but it is still a viable possibility.

That's why I asked which hardware he was testing on; with anything GCN-based it working is logical... in an odd way.

GCN hardware doesn't use packed depth-stencil targets; memory is allocated in two separate blocks for depth and stencil. So from the driver's point of view, if you allocate a D24_S8 texture it is in fact allocating D32 and S8 textures behind the scenes (meaning you get no memory saving at all, D24 vs D32), and when you bind this to a depth attachment target that is 'ok', as the driver knows how to attach things correctly. Chances are the 'completeness' rules are woolly enough here that AMD's logic is allowed and perfectly fine.

Although this does bring up the follow-up question of whether the OP sees the expected results with regards to stencil values, as they probably shouldn't see any coming out for this to make sense :)


Hello,

 

The ATI card where the original (i.e. wrong) code was working was an ATI Mobility Radeon HD 5470. The behaviour was correct: the stencil buffer did what it was used for, namely implementing mirrors in a scene.
