Access to default depth buffer from fragment shader


Hi,

I am going to port some DirectX code to OpenGL and need to choose the right OpenGL version. What I want to do might not even be possible, according to what I have already read online.

As I understand it (I might be wrong, though), I can use the default depth buffer as a pixel shader resource in DirectX 10 [1]. That is useful for deferred lighting: in a first pass I would draw the meshes, filling the depth buffer, and during post-processing I could then read the depth information back. I would not have to use multiple render targets and could save some memory and bandwidth that way.

Is the only way to achieve this in any OpenGL version to use an FBO and blitting [2, 3]?

[1] http://bitwisegames.wordpress.com/2011/03/25/getting-direct-access-to-the-depthbuffer-in-directx10/

[2] http://www.gamedev.net/topic/578084-depth-buffer-and-deferred-rendering/

[3] http://www.opengl.org/discussion_boards/showthread.php/180782-Binding-to-a-different-depth-buffer


a) Create a "fullscreen" color texture (empty, screen-sized GL_TEXTURE_2D)

b) Create a depth texture (empty, GL_DEPTH_COMPONENT)
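A minimal setup sketch for these two textures could look like the following; the names colorTex, depthTex, width and height are placeholders, not from the original post:

GLuint colorTex, depthTex;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // empty, screen-sized color texture

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL); // empty depth texture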

Methods of accessing the depth buffer after rendering something:

1. Use an FBO.

Render the scene to color and depth attachments, which are GL_TEXTURE_2D and GL_DEPTH_COMPONENT textures respectively (see the sketch below).
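A sketch of the FBO setup, assuming the ARB/core entry points recommended further down and the two textures created above:

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // handle the error
}

glBindFramebuffer(GL_FRAMEBUFFER, fbo); // render the scene here
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer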

2. If you don't use the alpha component of your color, use (r, g, b, depth).

Depth is:

vec4 position = matview * vec4(vertex, 1.0);

float depth = length(position.xyz) / ZFAR; // distance from the camera; don't forget to divide by ZFAR, which you need to provide as a uniform

This will give you linear depth in proper camera space (the view-space distance to the fragment), which is wonderful. That means the corners of the screen no longer have the wrong depth.
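A minimal GLSL sketch of this (r, g, b, depth) approach, using old-style built-in attributes to stay consistent with the texture2D usage later in this post; matview, ZFAR and colormap are assumed uniform names, not from the original:

Vertex shader:

uniform mat4 matview;
uniform float ZFAR;
varying float vDepth;

void main()
{
    vec4 position = matview * gl_Vertex;
    vDepth = length(position.xyz) / ZFAR;   // linear depth in [0, 1]
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ProjectionMatrix * position;
}

Fragment shader:

uniform sampler2D colormap;
varying float vDepth;

void main()
{
    vec3 rgb = texture2D(colormap, gl_TexCoord[0].xy).rgb;
    gl_FragColor = vec4(rgb, vDepth);       // depth packed into the alpha component
}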

3. Alternatively, just copy the current main framebuffer to a texture:

glCopyTexSubImage2D(...); // it's fast enough, but not recommended.

Do this for both color and depth (a filled-in version follows). Same as 1), except older and less acceptable. ;)
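Filled in, the copy could look like this, assuming the read framebuffer is the default one and the two textures from a) and b) match its size:

glBindTexture(GL_TEXTURE_2D, colorTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height); // copy the color buffer

glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height); // copy the depth buffer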

Finally, for this part you either use another FBO, or you just clear the entire main framebuffer and turn off depth testing and all that jazz.

Render a fullscreen quad that samples the color and depth textures.

In the fullscreen pass you can access both the color of the scene and its depth values. Keep in mind that you need, or at least want, to linearize the depth.

Depth is accessed just like a regular texture: texture2D(tex, position.xy).x; as you can see, the depth is in the x component.

To linearize Z (the value sampled from the depth texture):

Z = 2.0 * ZNEAR / (ZFAR + ZNEAR - Z * (ZFAR - ZNEAR)); // range [0, 1]

If you provided linear depth as the .a component, then:

Z = color.a;
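As a sketch, the fullscreen pass could look like this; the uniform names are placeholders, and the quad is assumed to be drawn with matching texture coordinates (old-style gl_TexCoord[0], to stay consistent with the texture2D calls above):

uniform sampler2D colorTex;
uniform sampler2D depthTex;
uniform float ZNEAR;
uniform float ZFAR;

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    vec4 color = texture2D(colorTex, uv);
    float Z = texture2D(depthTex, uv).x;                    // hardware depth in [0, 1]
    Z = 2.0 * ZNEAR / (ZFAR + ZNEAR - Z * (ZFAR - ZNEAR));  // linearized
    // ... lighting / post-processing using color.rgb and Z ...
    gl_FragColor = color;
}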

Most GPUs today support FBOs, even the ARB version, which I highly suggest you use, unless you are forced to fall back to the EXT version (GL_EXT_framebuffer_object).

ARB FBOs were promoted to core in OpenGL 3.0. For reference, all modern Intel integrated GPUs support OpenGL 3.x and limited OpenGL 4.x.

I don't remember the actual coverage level, but it's more than enough.

EDIT:

The title says "access to default depth buffer," and that is impossible with OpenGL. The depth buffer on any graphics card these days is stored in a hardware-specific, often compressed format. I think even if you could access it directly in DirectX, it would be seriously slower (since you access it millions of times) than just resolving it to a depth texture.

In the case of 2), where you don't use depth textures at all, the speed gain is noticeable. Not to mention you get the correct depth, linearized.

The speed gain comes mostly from the fact that the graphics hardware doesn't need to write to a non-native format, and read from it, as it continuously renders.

