

This topic is now archived and is closed to further replies.


applying a depth map to a texture in video memory


Recommended Posts

Is it possible to either:

1) allocate a fixed-size texture, copy the frame buffer into it every frame, retrieve the depth map for the scene using glReadPixels() with GL_DEPTH_COMPONENT, and then filter the texture directly in video memory, without running into the massive overhead of accessing video memory pixel by pixel or resorting to creating a new texture every frame (and using glReadPixels()), or

2) set up a depth filter that retrieves only those pixels from the frame buffer that have a specific depth value (the scene is prerendered; I just want to copy certain frame buffer pixels)?

The latter solution seems both fast and logical, but I can't think of a way to do it... Suggestions?
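For reference, the selection option 2 describes amounts to the following once the color and depth buffers are on the CPU side (a sketch, assuming both buffers have already been read back with glReadPixels(); the function name `copy_at_depth` and the epsilon tolerance are my own, not a GL feature):

```c
#include <math.h>
#include <stddef.h>

/* Copy only those RGB pixels whose depth lies within `eps` of `target`;
 * all other destination pixels are left untouched. Depths are the 0..1
 * floats glReadPixels(GL_DEPTH_COMPONENT, GL_FLOAT) returns. */
void copy_at_depth(const unsigned char *src_rgb, const float *depth,
                   unsigned char *dst_rgb, size_t npixels,
                   float target, float eps)
{
    for (size_t i = 0; i < npixels; ++i) {
        if (fabsf(depth[i] - target) <= eps) {
            dst_rgb[3*i + 0] = src_rgb[3*i + 0];
            dst_rgb[3*i + 1] = src_rgb[3*i + 1];
            dst_rgb[3*i + 2] = src_rgb[3*i + 2];
        }
    }
}
```

The epsilon is there because depth-buffer values are quantized; an exact equality test against a single depth value would select almost nothing.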

If I understood it right, you want to create some sort of z-buffering where z comes from a texture.

I am quite sure it can be done quite easily by using a pbuffer (possibly with render-to-depth-texture capability) and a fragment program which KILs fragments according to the result of a comparison.

This may seem quite generic to you, but I can't tell you more than that, since I don't think I understand what you're trying to do.
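The fragment-program route described above might look roughly like this under ARB_fragment_program (a sketch only: it assumes the scene's depth is bound as a texture on unit 0 and the reference depth is passed in program.env[0].x; KIL discards the fragment when any component of its operand is negative):

```
!!ARBfp1.0
# sample the stored depth from the depth texture on unit 0
TEMP d;
TEX d, fragment.texcoord[0], texture[0], 2D;
# kill the fragment when its stored depth is below the reference depth
SUB d.x, d.x, program.env[0].x;
KIL d.x;
MOV result.color, fragment.color;
END
```

Note this needs fragment-program-capable hardware, which a TNT2 does not have.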

Okay, I'll explain more specifically. What I'm trying to do is implement depth of field for complex scenes that cannot be drawn more than once per frame (as is done with the accumulation buffer technique) because it would take too long. For this, I figured the best way would be to use a depth map:

1] glReadPixels(GL_RGB) - read the color buffer
2] glReadPixels(GL_DEPTH_COMPONENT) - read the depth data of the screen
3] mask out pixels either closer than or farther than the focal point in the image from [1] (set them to (0,0,0)), based on the depth map from [2]
4] write the resulting image into a preallocated texture slot (using glTexImage2D() (I could be getting the name wrong))
5] paint over the original frame in ortho mode with the appropriate blending combination, unfocusing the appropriate portion of the screen based on the depth mask
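Step [3] above is pure CPU work once the two buffers have been read back; a minimal sketch (the function name and the in-place far-side masking are my own choices):

```c
#include <stddef.h>

/* Black out every pixel whose depth lies on the far side of the focal
 * depth, leaving the near side untouched. Depths are the 0..1 floats
 * glReadPixels(GL_DEPTH_COMPONENT, GL_FLOAT) returns; the symmetric
 * case (masking the near side instead) just flips the comparison. */
void mask_beyond_focus(unsigned char *rgb, const float *depth,
                       size_t npixels, float focal_depth)
{
    for (size_t i = 0; i < npixels; ++i) {
        if (depth[i] > focal_depth) {
            rgb[3*i + 0] = 0;
            rgb[3*i + 1] = 0;
            rgb[3*i + 2] = 0;
        }
    }
}
```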

It works - that is, I got it working, but it is excruciatingly slow. On my TNT2 I'm getting about 10 fps, which really bothers me. I know glReadPixels() effectively halves the framerate, but writing the texture data into video memory in real time is a killer as well. I haven't had the chance to test this on better hardware, though. FYI, my CPU is more than enough for it.

What I was hoping was that I could replace step [1] with glCopyPixels(), getting rid of one glReadPixels() call (which should pump the framerate up to at least 20) and of step [4], the slowest part. By retrieving only the depth data and combining it with a texture that is already in video memory (this was my question: can I have the graphics card AND (as in binary AND) a texture with an arbitrary mask stored in RAM (e.g. [r&map_n, g&map_n, b&map_n] for each pixel)?), I should be able to push the framerate up quite a bit. Granted, I'm testing it on a simple scene that usually runs at ~60-80 fps on my system, but this method should also be effective on scenes that don't get more than 20 fps and cannot be slowed down any more.
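The per-pixel binary AND asked about above, spelled out on the CPU (a sketch; `and_mask` is a hypothetical name, not a GL call):

```c
#include <stddef.h>

/* AND each color channel with a per-pixel 8-bit mask:
 * (r, g, b) -> (r & m, g & m, b & m). A mask byte of 0xFF keeps the
 * pixel intact, 0x00 blacks it out, and intermediate values punch
 * bit patterns into the channels. */
void and_mask(unsigned char *rgb, const unsigned char *mask,
              size_t npixels)
{
    for (size_t i = 0; i < npixels; ++i) {
        rgb[3*i + 0] &= mask[i];
        rgb[3*i + 1] &= mask[i];
        rgb[3*i + 2] &= mask[i];
    }
}
```

On the GL side, the closest fixed-function analogue may be color logic ops (glEnable(GL_COLOR_LOGIC_OP) with glLogicOp(GL_AND)), which AND the incoming fragment with the framebuffer rather than with an arbitrary texture in RAM; whether that fits here depends on the hardware and driver.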

