Rhawk187

32-bit depth buffers


So, I'm trying to get my program to use a 32-bit depth buffer. It doesn't have to be portable, so I'm okay with that; in my googling to figure out how to do this, that's the one thing people consistently mentioned. I've been testing this on an NVIDIA GeForce 280M, but I have access to a 295 if for some reason it's not supported on the 280. I'm using SDL 1.2 at the moment.

If I request anything higher than 24 bits of depth via, for instance:

SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 32 );

it gives me 24 if I check via:

SDL_GL_GetAttribute( SDL_GL_DEPTH_SIZE, &x );

or:

glGetIntegerv( GL_DEPTH_BITS, &x );

I thought maybe it had to be manually enabled somewhere else; I checked my NVIDIA control panel, but didn't see anything obvious there. I was also thinking I might have to use a 3.0 or higher context, but I'm not sure I can create one until I update SDL. In fact, I'm wondering if something more fundamental is wrong, because I really can't distinguish much of a difference between setting it to 16 and 24 either. I've tried rendering to an FBO with a 32-bit depth buffer attached, and that doesn't look much better either.

Some background on my application: I'm modeling a large scene, basically the Gulf of Mexico region (several hundred kilometers in each direction), using realistic bathymetric data. But I've also got objects on the order of tens of meters in the scene too, so there are times I can't have a large near clipping plane. I have a water effect intersecting the world at elevation = 0, and it's z-fighting with my terrain in places.

Any help would be appreciated.

That's actually the exact example I was using to try to figure out the 32-bit depth buffer attachment stuff.

They use GLUT though, but assuming SDL is setting everything up correctly, that should be the only difference there.

Maybe I'm just not doing the glDepthRange stuff right. If you are trying to use a 32-bit FBO depth buffer but you don't change the glDepthRange, will it perform exactly the same as the lower-precision ones or something? If so, that could be the problem. I wasn't entirely sure where they got 1024 from for their range value.

Also, I'd completely forgotten about the depthTest. I'll try that.

I tried GL_LESS. I don't seem to have a GL_LOWER; is that supposed to do something different?

GL_LESS didn't seem to do much. I guess the pixels that are z-fighting would pass both tests roughly equally.

Also, I'm wondering if part of the problem is that the vertices of the terrain are about 1800 units apart (the data only has a one-arc-minute resolution). So in some situations, even when I was using a large near clipping plane (10 near, 1000 far), it was still freaking out; I'm assuming because most of the vertices were beyond the clipping planes, even though there were tons of fragments on screen.

I was thinking of creating sort of a supersampled version of the terrain that inserts extra vertices just to reduce the distance between them, even though it adds no new information, but I wasn't going to bother if that wasn't really going to help anything.

