More on SSAO

7 comments, last by Rompa 15 years, 7 months ago
I've been working on an SSAO implementation and I'm having issues with banding. I know this has been covered in previous posts; it's related to depth buffer precision. I'm wondering about OpenGL vs. DirectX for precision: is DirectX the only way to get 32F buffers? OpenGL on DX10-class hardware supports float depth, but that requires 8000+ cards. Yet other SSAO demos work on my system (GeForce 7950, no float depth buffers). Has anyone gotten SSAO to work under OpenGL? Do you have to use 32-bit buffers to implement depth-map SSAO properly?
If you're using 32-bit float depth buffers make sure to store (1-z) rather than (z). This is a simple change to the projection matrix but it'll give you a lot more depth buffer precision. May still not be enough, but it's a brainless change :)
Thanks AndyTX. I'll try it..

More head banging, and I *finally* figured it out. I'm using glCopyTexImage2D to transfer the depth buffer to a depth texture. Note to self (and others): you MUST use GL_NEAREST for the GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER of the depth texture if you want full-precision copies of depth buffers:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

Don't ask me why, but if you don't, OpenGL will take your 24-bit depth buffer, drop the low bits, and copy the high 8 bits to each of (R, G, B) in the depth texture. This happens even if you set GL_DEPTH_COMPONENT and GL_FLOAT properly as your texture format:

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

For those intending to transfer depth buffers to textures this way (I'm using FBOs for my shadows): if you see 8-bit depth textures, it _may_ be this and not the near/far precision.
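For reference, a minimal sketch of the whole copy path being described, assuming a current GL context and a 1024x1024 depth buffer (the texture name `depthTex` is hypothetical):

```c
/* Sketch of the depth-buffer-to-texture copy; assumes a current GL context.
   `depthTex` is a hypothetical texture object created elsewhere. */
glBindTexture(GL_TEXTURE_2D, depthTex);
/* NEAREST filtering is required for a full-precision copy. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* Copy the currently bound depth buffer into the texture. */
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB,
                 0, 0, 1024, 1024, 0);
```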

Here's my output (finally) :) - Shadow maps + SSAO + depth of field..



(Is this the first SSAO under opengl?)
Very pretty :D
Quote:Original post by rcz
For those intending to transfer depth buffers to textures this way (i'm using FBOs for my shadows), if you see 8-bit depth textures, it _may_ be this and not the near/far precision.
If I remember correctly, if you're already using FBOs, you should be able to bind your depth-buffer as a texture without having to copy it to a separate intermediate texture.

Quote:(Is this the first SSAO under opengl?)
I'm afraid I've seen it in Horde3D and OGRE before (though I'm not sure if the OGRE one was portable).
Quote:Original post by AndyTX
If you're using 32-bit float depth buffers make sure to store (1-z) rather than (z). This is a simple change to the projection matrix but it'll give you a lot more depth buffer precision. May still not be enough, but it's a brainless change :)


Can you explain this a bit more? I've never heard of this.
If the hardware supports a floating-point 24-bit or 32-bit depth buffer, the depth resolution is distributed more evenly than with an integer depth buffer.
The performance difference when you invert Z exists because early depth rejection does not have sufficient resolution to reject at any significant distance from the near plane.
Every 360 programmer knows this :-)

[Edited by - wolf on September 16, 2008 8:20:38 PM]
Quote:Original post by Matt Aufderheide
Can you explain this a bit more? I've never heard of this.

Check out this paper, among others.

The elevator pitch: using both a z-buffer perspective projection and a floating-point representation stacks far too much precision at the near plane (depth 0). Flipping the depth range, though, makes the two somewhat "cancel out" (not precisely, of course) and gives you more uniform precision throughout.
Ah, so: reverse the depth range with glDepthRange(1, 0), reverse the depth compare function, and set the depth clear value to 0 instead of 1.
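As a sketch, that setup in GL calls (my own fragment; note that with a fixed-point depth buffer this only reorders the values and does not by itself add precision):

```c
/* Reversed-Z setup sketch (GL 2.x era; the spec allows near > far here). */
glDepthRange(1.0, 0.0);   /* window-space depth now runs far -> near */
glDepthFunc(GL_GEQUAL);   /* reversed depth compare */
glClearDepth(0.0);        /* clear to the new "far" value */
```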
AndyTX: The PS2 had it right all along! Lol :-)

