How to read/write depth buffer?



I need to draw once into a depth buffer, read it into main memory, and then use it once (or sometimes more) by copying it into a shared texture. So far I have to clear the depth buffer and re-render the geometry into it every time I use it. My idea is to render once, copy to main memory, and from then on simply reinitialize the depth buffer, replacing the clear/render passes with a single copy. All the rendering is done using FBOs.

I have googled around quite a bit and found the following so far:
- glGetTexSubImage does not allow GL_DEPTH_COMPONENT (but would be fast)
- glTexSubImage does not allow GL_DEPTH_COMPONENT (but would be fast)
- glReadPixels/glDrawPixels allow GL_DEPTH_COMPONENT (but are slow)

Is this true? Am I running into a wall here? Is there no way I can read the depth buffer into main memory? What if the depth buffer is a renderbuffer -- could I then not read from there? And is it not somehow possible to copy the depth values into a depth texture? Might it work with using RGB instead of depth? Any help appreciated.
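For reference, the one portable path listed above -- reading depth back with glReadPixels -- might look something like this (a sketch; it assumes a valid GL context with the relevant FBO bound, and the width/height are placeholders):

```c
/* Sketch: read the current depth buffer into main memory.
 * Assumes a valid GL context and that the FBO holding the depth
 * attachment is currently bound for reading. Caller frees the result. */
#include <GL/gl.h>
#include <stdlib.h>

float *read_depth(int width, int height)
{
    float *depth = malloc((size_t)width * height * sizeof *depth);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* no row padding in the output */
    glReadPixels(0, 0, width, height,
                 GL_DEPTH_COMPONENT, GL_FLOAT, depth);
    return depth;
}
```

Note that glReadPixels here blocks until all pending rendering has finished, which is the main source of its reputation for slowness.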

Graphics cards have a lot of optimizations, so why not just re-render the scene?
glReadPixels and glDrawPixels tend to be slow, but you could always benchmark.

Let me just check I understand what you want to do first. I think you just want to:
1) render geometry
2) save depth buffer "somewhere" (or does it *have* to be main memory)
3) initialize depth from saved depth
4) render some other stuff
5) goto (3)

If that's the case, here's what I'd do. Set up a depth texture and bind it to your FBO. Render your geometry into this depth texture (steps 1 and 2). For step 3 I'd use a fragment shader as follows.
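For steps 1 and 2, a minimal setup might look like this (a sketch using the EXT_framebuffer_object entry points of the era; the 640x480 size and GL_DEPTH_COMPONENT24 internal format are assumptions):

```c
/* Sketch: create a depth texture and attach it to an FBO. */
GLuint depthTex, fbo;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); /* no filtering */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 640, 480, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);
/* ...render geometry here; depth ends up in depthTex... */
```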

Bind your depth texture to a texture unit and set up depth comparisons like this
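A plausible reconstruction of that setup -- not the author's exact code -- is the classic ARB_shadow/ARB_depth_texture parameter configuration, something like:

```c
/* Sketch: make lookups on the depth texture return the stored depth
 * value rather than a 0/1 comparison result. (Assumption: this is
 * what "depth comparisons" refers to; with GL_COMPARE_R_TO_TEXTURE_ARB
 * the lookup would return a pass/fail result instead.) */
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_LUMINANCE);
```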


Render a screen-sized quad with texture coords (0,0), (0,1), (1,1), (1,0) to your screen with no mipmapping or texture filtering of any kind, to make sure the pixels are mapped one-to-one (bilinear filtering *will* screw this up). Without a fragment shader this will give you a nice image of your depth texture on the screen (sometimes useful, but anyway...). Then have your fragment shader look something like

uniform sampler2DShadow depthTex;

void main(void)
{
    gl_FragColor = vec4(0.0);                                 // colour output unused
    gl_FragDepth = shadow2D(depthTex, gl_TexCoord[0].xyz).r;  // reroute depth
}

This will reroute the depth into the depth buffer. You may need to change sampler2DShadow to sampler2D and the shadow2D() call to texture2D(depthTex, gl_TexCoord[0].xy); I'm not sure on this. I remember reading that nVidia treats sampler2DShadow and sampler2D the same internally... If you want to render the colour as well, simply add it as another texture and read it into gl_FragColor in your shader.
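Drawing the screen-sized quad described above might look like this (a sketch using fixed-function matrix setup; it assumes the depth texture is bound and the shader above is active):

```c
/* Sketch: draw a full-screen quad with one-to-one texel/pixel mapping. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);   /* unit ortho projection */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
glEnd();
```

The viewport must match the depth texture's dimensions exactly, or the one-to-one mapping is lost.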

Of course if you *need* to read the depth into main memory you're stuck with glReadPixels(). In that case look into using pixel buffer objects to avoid pipeline stalls.
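The PBO route might be sketched like this (names from ARB_pixel_buffer_object; the 640x480 size is a placeholder):

```c
/* Sketch: asynchronous depth readback through a pixel buffer object.
 * glReadPixels into a bound PACK buffer returns without waiting; the
 * data is fetched later with glMapBuffer. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER,
             640 * 480 * sizeof(GLfloat), NULL, GL_STREAM_READ);

/* With a PBO bound, the pointer argument is an offset into the buffer. */
glReadPixels(0, 0, 640, 480, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

/* ...do other work, then map when the data is actually needed... */
GLfloat *depth = (GLfloat *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (depth) {
    /* use depth[0 .. 640*480-1] */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```

The stall is avoided only if enough work happens between the glReadPixels call and the glMapBuffer call; mapping immediately blocks just like a plain glReadPixels would.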

[Edited by - coordz on July 13, 2007 10:19:02 AM]

