DepthTexture + ATI + FBO

Ingrater
I can't get depth textures working on ATI cards. I tried every DEPTH_COMPONENT type (16, 24, 32), but ATI cards always fall back to the software rasterizer. I tested it on an X700 Mobility, a 9800 XT and an X1800. I read in various threads that ATI only supports 16-bit depth textures, but that doesn't work either. Did anyone manage to render into a depth texture with an FBO on ATI cards? Here's my code:
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,fbo);
    glGenTextures(1,&DepthBuffer);
    glBindTexture(GL_TEXTURE_2D, DepthBuffer);   
    glTexImage2D(GL_TEXTURE_2D,0,GL_DEPTH_COMPONENT16,Width,Height,0,GL_DEPTH_COMPONENT,GL_UNSIGNED_SHORT,NULL);
    //Set Texture Parameters
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);    
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_DEPTH_ATTACHMENT_EXT,GL_TEXTURE_2D,DepthBuffer,0);
    glDrawBuffer( GL_NONE );
    glReadBuffer( GL_NONE );
The annoying part is that even though it doesn't work on ATI, the framebuffer still reports GL_FRAMEBUFFER_COMPLETE_EXT. As always, it works fine on NVIDIA cards.

As far as I know, depth textures (and buffers for that matter) with ATI's OpenGL drivers are horribly buggy. It's no surprise you haven't been able to get them to work. Unfortunately, you can't create luminance render-targets on ATI cards, so it wouldn't even be possible to "fake" a depth texture by rendering depth (unless you're willing to use RGB and waste two channels).

I'm still curious to know if anyone's actually been able to successfully create a depth texture (not renderbuffer) on ATI, of any bit depth. I've never personally tried it myself, but it appears to be the single most elusive feature to get working.

Quote:
Original post by Zipster
As far as I know, depth textures (and buffers for that matter) with ATI's OpenGL drivers are horribly buggy.

Everything about ATI's OpenGL drivers is horribly buggy... Yeah, yeah, I know, I'm repeating myself here...

Quote:
Original post by Zipster
It's no surprise you haven't been able to get them to work. Unfortunately, you can't create luminance render-targets on ATI cards, so it wouldn't even be possible to "fake" a depth texture by rendering depth (unless you're willing to use RGB and waste two channels).

The best way to get anything even remotely similar to a depth texture that actually works at a decent speed on an ATI card is to encode the 24- or 32-bit depth value into an RGBA texture.

Quote:
Original post by Zipster
I'm still curious to know if anyone's actually been able to successfully create a depth texture (not renderbuffer) on ATI, of any bit depth.

It doesn't work, period. Current ATI hardware just doesn't support anything higher than 16-bit depth render textures. Under D3D they fake it with an internal copy (AFAIK). Under OpenGL you can't even fake it yourself, since the API calls and GLSL features that would allow for a fast fake are, well, buggy... You can actually successfully create a 16-bit depth texture (yay!), as long as you follow the exact instruction order used in their demo. But as soon as you try doing anything remotely interesting with it, the driver either crashes, gives back incorrect or random depth values, or opens a dimensional passage from your room into ATI's HQ toilet area...

Ingrater: seriously, forget about it. Even if you somehow get it to work by doing something highly sophisticated, such as reordering instructions or hitting your ATI card multiple times with an iron bar, everything will probably break again as soon as you update your drivers. Your best bet is to use some form of RGB(A) encoding instead.

Quote:
Original post by Yann L
You can actually successfully create a 16-bit depth texture (yay!), as long as you follow the exact instruction order used in their demo. But as soon as you try doing anything remotely interesting with it, the driver either crashes, gives back incorrect or random depth values, or opens a dimensional passage from your room into ATI's HQ toilet area...

So if you made it 16bit and had the correct instruction order (as arcane a requirement as it is), it would work? I'm a little confused because first you said it wouldn't work period, and then you said you can successfully create one as long as you adhere to those requirements. I guess whether or not that's considered "working" is open for debate, so what I really meant was if it would be physically possible under any conditions, as crazy as they may be.

Quote:
Original post by Zipster
So if you made it 16bit and had the correct instruction order (as arcane a requirement as it is), it would work? I'm a little confused because first you said it wouldn't work period, and then you said you can successfully create one as long as you adhere to those requirements. I guess whether or not that's considered "working" is open for debate, so what I really meant was if it would be physically possible under any conditions, as crazy as they may be.

Confusing wording on my part, sorry.

It will indeed work, as long as you follow some mysterious instruction order (which changes from one driver version to the next) and stick exclusively to 16-bit depth. You can then (more or less) safely use those textures with the fixed-function pipeline. But as soon as you start using them in GLSL shaders, they become completely unpredictable. Basically, it works under 'lab conditions' (e.g. little demos), but it doesn't work in the real-world scenario of a game or a full application.

Well, that is very bad, but as always I will write a bunch of extra code just to get my demo working on ATI cards -.-

Any suggestions on how to pack the depth data into an RGBA texture?
Should I use a floating-point texture?
Or should I do some fixed-point math in my shaders, which would mean I have to write two versions of every shader that uses the depth texture?

Quote:
Original post by Ingrater
Any suggestions on how to pack the depth data into an RGBA texture?
Should I use a floating-point texture?
Or should I do some fixed-point math in my shaders, which would mean I have to write two versions of every shader that uses the depth texture?

The latter. FP16 isn't really precise enough on a single channel, and you'd waste two additional channels. The best option is to pack the depth into a 0.24 or 0.32 fixed-point representation. 0.24 is usually enough, and saves shader instructions.

But as I mentioned, it is possible to get 16-bit depth textures to work. Just don't count on it; it might break again with the next driver, or might only work on certain ATI GPUs. For a hobby project this is probably not much of a problem, and the following fixed-point workaround is a little over the top. But for a commercial product, DO NOT USE depth textures on ATI! You'll be in a world of (customer support) pain...

OK, on to the fixed-point solution. But be careful! There is another great ATI bug waiting around the corner... Here's the shader I used to encode a depth value.

Vertex shader part:

varying vec4 Position;

void main()
{
    // Compute vertex transform
    vec4 P = ftransform();
    gl_Position = P;

    // ATI's implementation of gl_FragCoord is buggy, so we
    // need to supply the clip-space position as a separate varying.
    Position = P;
}


As you can see, you have to explicitly send the clip-space position over the interpolators, because gl_FragCoord.z doesn't work due to an ATI driver bug. Note that you'll only use the z and w components in the pixel shader, so you can get away with using a vec2 varying (faster, and good if you're low on varying resources).
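For illustration, that trimmed-down variant might look something like this (just a sketch; the PositionZW name is my own and not used in the shaders of this post):

// Vertex shader: pass only the clip-space z and w components.
varying vec2 PositionZW;

void main()
{
    vec4 P = ftransform();
    gl_Position = P;
    PositionZW = P.zw;
}

// In the fragment shader, the divide then becomes:
//     float g = PositionZW.x / PositionZW.y;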

Fragment shader part:

uniform sampler2D RampTexture;

varying vec4 Position;

void main()
{
    vec3 c;

    // This is bugged on ATI (will always return 0.5)!
    // float g = gl_FragCoord.z;

    // Do the z-coord transformation manually instead
    float g = Position.z / Position.w;
    g = g * gl_DepthRange.diff / 2.0 + (gl_DepthRange.near + gl_DepthRange.far) / 2.0;

    // Convert the float value into a packed 24-bit RGB value
    c.r = texture2D(RampTexture, vec2(g, 0.5)).r;
    c.g = texture2D(RampTexture, vec2(g * 256.0, 0.5)).r;
    c.b = texture2D(RampTexture, vec2(g * 65536.0, 0.5)).r;

    gl_FragColor.rgb = c;
}


Ugly as hell, I know. But that's the only thing that would always work on all tested ATI cards. The ramp texture is simply a 256x1 pixel luminance texture with a 0 to 255 gradient (you could also use a 1D texture instead). Why do we need this? Because if you do the math directly in the shader, you'll run into heavy precision issues on some ATI chips... None at all on NVIDIA, btw, but we don't need all that crap on NV anyway.
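For reference, such a ramp texture could be set up along these lines (a sketch only; the exact internal format and filtering aren't spelled out above, so treat those as assumptions):

// 256x1 luminance ramp: texel i holds the value i (0..255).
GLubyte ramp[256];
for (int i = 0; i < 256; ++i)
    ramp[i] = (GLubyte)i;

GLuint rampTexture;
glGenTextures(1, &rampTexture);
glBindTexture(GL_TEXTURE_2D, rampTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, 256, 1, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, ramp);

// The shader relies on the texture coordinates wrapping around
// (g * 256.0, g * 65536.0), so the wrap mode must be GL_REPEAT.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);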

Decoding is the inverse. No need for weird gradient textures this time; a simple dot product works well.
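For example, the decode might look roughly like this (my reconstruction of the inverse of the encoding above, not the exact shader; the sampler and varying names are made up, and depending on how the ramp quantizes values you may need a small 255/256-style correction factor):

uniform sampler2D PackedDepthTexture;   // the RGB-encoded depth from above

varying vec2 TexCoord;                  // however you get your texture coordinate

void main()
{
    vec3 enc = texture2D(PackedDepthTexture, TexCoord).rgb;

    // Inverse of the 24-bit packing: r holds the coarsest bits,
    // g and b add successively finer fractions.
    float depth = dot(enc, vec3(1.0, 1.0 / 256.0, 1.0 / 65536.0));

    // Visualize the reconstructed depth as a grayscale value.
    gl_FragColor = vec4(vec3(depth), 1.0);
}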

Good luck!

Well, I figured out that it wasn't even the FBO code that was broken :D

A simple 128x127 texture totally destroyed everything. I changed the size to 128x128 and now it works fine. In the meantime I had written a glCopyTexImage2D fallback, which works fine on NVIDIA GPUs; the texture is also correct on ATI cards, but the texture2D call in my shaders returns complete garbage.
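For reference, that fallback boils down to something like this (a rough sketch, not my exact code; it assumes the depth texture, Width and Height from my first post, and that the depth-only pass was just rendered into the regular framebuffer):

// Copy the depth values of the current framebuffer into the depth texture.
glBindTexture(GL_TEXTURE_2D, DepthBuffer);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16,
                 0, 0, Width, Height, 0);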

Well, now it works: with an FBO and a 16-bit depth component texture that is clamped to the edges and uses linear filtering. Also, no color buffer should be attached to the FBO ^^.

Thanks for your help. But now that it works, I'm too lazy to code the shader solution.

Congratulations on getting everything working! I actually have a few questions for Yann regarding his packing code.

If I understand the code correctly, you're converting the normalized depth value back into the [N,F] range, and then packing into an RGB value. But since texture coordinates are in the range [0,1], even with the GL_WRAP texture addressing mode wouldn't your code only pack the fractional portion of the depth? How is the integral part encoded? I don't use fixed-point math that often so it's possible I'm missing something.

Ditto, congratulations ;) Don't touch it too much in the future, it might suddenly decide to break...

Quote:
Original post by Zipster
If I understand the code correctly, you're converting the normalized depth value back into the [N,F] range, and then packing into an RGB value.

Almost. The Position varying is not yet normalized; it's still in homogeneous clip space (as output by ftransform in the vertex shader). The normalization (i.e. the transformation into normalized device space) is done by the homogeneous divide in the fragment shader (z/w). In the normal OpenGL transform pipeline, this divide is implicitly performed by dedicated circuitry outside of shader control, on all three coordinates. But we're only interested in z, so we only have one divide.

Then I transform the normalized device coordinates into window coordinates. That's basically applying the state controlled by glDepthRange. Again, this is usually done implicitly by the GPU. When using the default depth range, this transformation simply maps the normalized device coords from the -1..1 range into the 0..1 range.
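In numbers: with the default glDepthRange(0, 1), the g * gl_DepthRange.diff / 2.0 + (gl_DepthRange.near + gl_DepthRange.far) / 2.0 line in the fragment shader reduces to g * 0.5 + 0.5, which is exactly that -1..1 to 0..1 mapping.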

After this step, the z coordinate is finally normalized and can be used to access the ramp texture (of course, the texture's wrap mode must be GL_REPEAT).

All these transformation steps could be skipped and simply replaced by the implicitly calculated window-coordinate z, which is also what the hardware uses for z-buffering. That coordinate is usually available through gl_FragCoord.z, unless you have an ATI...

So, I've been doing some quick testing in Vista with regard to some of the issues raised here:

- Firstly, it appears that there is now access to a 24-bit depth buffer paired with an 8-bit stencil buffer:


// Now setup the first texture to render to
glGenTextures(1, &depthTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8_EXT,
width, height, 0, GL_DEPTH_STENCIL_EXT,
GL_UNSIGNED_INT_24_8_EXT, NULL);

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);


// And attach it to the FBO so we can render to it
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depthTexture, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_2D, depthTexture, 0);

// Now set the render state correctly
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if(status != GL_FRAMEBUFFER_COMPLETE_EXT)
exit(1);



On my X1900 that doesn't exit [smile]. However, as I'm not sure of the sanity of the rest of the code (it was doing strange things before, and I've lost my good FBO-based shadow mapping code), I can't comment on the quality.

- When it comes to the gl_FragCoord.z value, well, it doesn't return 0.5 all the time, but I'm not sure what I'm seeing [grin]
I have some shader code which basically says: if gl_FragCoord.z > 0.95, make the frag color green, else make it red. This results in a small section of the rendered polys being green close to the near plane and the rest being red.
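For context, that test shader amounts to something like the following (a reconstruction of what I just described, not the exact code):

// Minimal test: visualize where gl_FragCoord.z exceeds 0.95.
void main()
{
    if (gl_FragCoord.z > 0.95)
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);   // green
    else
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   // red
}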

I'd fiddle more, but well, dissertation to complete, so I don't have the time [grin]

As noted, this is on Catalyst 7.4 on Vista with an X1900XT card.

Quote:
Original post by Yann L
Almost. The Position varying is not yet normalized; it's still in homogeneous clip space (as output by ftransform in the vertex shader). The normalization (i.e. the transformation into normalized device space) is done by the homogeneous divide in the fragment shader (z/w). In the normal OpenGL transform pipeline, this divide is implicitly performed by dedicated circuitry outside of shader control, on all three coordinates. But we're only interested in z, so we only have one divide.

Then I transform the normalized device coordinates into window coordinates. That's basically applying the state controlled by glDepthRange. Again, this is usually done implicitly by the GPU. When using the default depth range, this transformation simply maps the normalized device coords from the -1..1 range into the 0..1 range.

Ah, I already know how all the transformations work, but for some reason I was under the mistaken impression that gl_DepthRange contained the actual near/far view-space plane values. It probably should have occurred to me that it just maps [-1,1] to the window-friendly [0,1] range (which is where the packing code makes sense [smile]). After all, how would OpenGL actually know the near/far values unless it went ahead and inverted the projection matrix and plugged in -1 and 1 to find out? And of course it wouldn't make sense to go through that trouble, since you hardly ever need to know those values!

Quote:
Original post by phantom
So, I've been doing some quick testing in Vista with regard to some of the issues raised here:

Hmm, that's pretty good news. It seems ATI really is trying to keep its promise to do a better job on the Vista drivers. I wonder if AMD has anything to do with this...

Anyway, since you already have an ATI running under Vista, would you mind doing a quick test? Could you please try to read back a few pixels from a 24-bit depth FBO renderbuffer (using glReadPixels) and check whether they contain reasonable values? Thanks a lot!
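Something along these lines is what I mean (a sketch of the test, assuming the FBO with the 24-bit depth renderbuffer is bound at the time of the read):

// Read back a small block of depth values and dump them for a sanity check.
GLfloat depths[4 * 4];
glReadPixels(0, 0, 4, 4, GL_DEPTH_COMPONENT, GL_FLOAT, depths);

for (int i = 0; i < 16; ++i)
    printf("%f\n", depths[i]);   // should be plausible 0..1 depth values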
