lima

Storing the pixel depth in a texture

To sum up the question: how is depth usually stored in a buffer, and how many bits does it typically need?

I am trying to create a G-buffer that stores the normal and depth of every pixel. Looking at some online examples and some books I'm reading (GPU Gems 2), this seems to be a pretty common thing to do: store the normal in the color channels and the depth in the alpha channel. Calculating the depth and normal in the vertex shader, passing them to the pixel shader, and then writing Color = normal and Alpha = depth is easy enough, but the issue is that a standard 32-bit texture only gives you 8 bits for the alpha. 8 bits (only 256 values) seems like a very small amount to store depth information in; in fact, when I look at my scene, huge bands of it are clumped together at the same depth because everything is compressed down to 8 bits.

Is there a better way to do this? Should I be using a 64-bit texture? Should I store depth in its own texture and use all 32 bits for it? Or is there a better way to use my single texture to store depth information?

What I don't get is that a lot of the edge detection and deferred shading examples online use only the alpha channel of a texture to store pixel depth, and I can't figure out how they get away with it in a 32-bit texture. My current implementation is ps2.0 in D3D9, but I thought this question was more generic to graphics.
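
A minimal sketch of the setup described above, for reference. The identifier names and the FarPlane constant are illustrative, not from the original post; the pixel shader writes the view-space normal to RGB and a far-plane-normalized linear depth to A.

// G-buffer pass: view-space normal in RGB, linear depth in A.
float4x4 WorldView;              // illustrative constant names
float4x4 WorldViewProjection;
float    FarPlane;

struct VS_OUT
{
    float4 Position : POSITION;
    float3 NormalVS : TEXCOORD0;
    float  DepthVS  : TEXCOORD1;
};

VS_OUT VS(float4 pos : POSITION, float3 normal : NORMAL)
{
    VS_OUT o;
    o.Position = mul(pos, WorldViewProjection);
    o.NormalVS = mul(normal, (float3x3)WorldView);   // rotate normal into view space (assumes uniform scale)
    o.DepthVS  = mul(pos, WorldView).z / FarPlane;   // linear depth in [0, 1]
    return o;
}

float4 PS(float3 normalVS : TEXCOORD0, float depthVS : TEXCOORD1) : COLOR
{
    float3 n = normalize(normalVS) * 0.5f + 0.5f;    // map [-1, 1] to [0, 1]
    // In an 8-bit alpha channel this depth is quantized to 256 levels,
    // which is where the banding comes from.
    return float4(n, depthVS);
}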

I would probably just go with the 64-bit texture. It's the most direct way to accomplish what you want. I haven't been able to think of a good way to compress depth data into 8 bits, even with a floating-point format, but someone else might be better in that area.
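
For illustration, a rough sketch assuming the G-buffer render target is a 64-bit floating-point format such as D3DFMT_A16B16G16R16F: the shader can write linear depth straight into alpha, and a 16-bit float channel keeps far more precision than the 256 steps of an 8-bit one.

// Assumes a 64-bit float render target (e.g. D3DFMT_A16B16G16R16F in D3D9).
float4 PS(float3 normalVS : TEXCOORD0,   // view-space normal from the vertex shader
          float  depthVS  : TEXCOORD1)   // linear depth, already divided by the far plane
          : COLOR
{
    float3 n = normalize(normalVS) * 0.5f + 0.5f;
    // Alpha is a 16-bit float here, so no packing is needed for the depth.
    return float4(n, depthVS);
}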

A normal z-buffer is usually 16, 24, or 32 bits per pixel in an integer format. So you could compress the normal vector down to two components (for example, by converting it to spherical coordinates) and use two channels for the depth. This would give you greater precision for the depth while still using a simple 32-bit render target.
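
A sketch of that two-channel layout, assuming a unit view-space normal and a linear depth already normalized to [0, 1]. Identifiers are illustrative, and the trig-based normal packing is just one option (it may be on the heavy side for ps_2_0):

static const float PI = 3.14159265f;

// Pack a unit normal into two [0, 1] values via spherical coordinates.
float2 PackNormal(float3 n)
{
    return float2(atan2(n.y, n.x) / (2.0f * PI) + 0.5f,  // azimuth
                  acos(n.z) / PI);                        // inclination
}

float3 UnpackNormal(float2 p)
{
    float phi   = (p.x - 0.5f) * 2.0f * PI;
    float theta = p.y * PI;
    float s     = sin(theta);
    return float3(cos(phi) * s, sin(phi) * s, cos(theta));
}

// Split a [0, 1] depth across two 8-bit channels for roughly 16 bits of precision.
float2 PackDepth(float d)
{
    return float2(floor(d * 255.0f) / 255.0f,  // high byte
                  frac(d * 255.0f));           // low byte (quantized when written)
}

float UnpackDepth(float2 p)
{
    return p.x + p.y / 255.0f;
}

// G-buffer write into a plain 32-bit (8:8:8:8) target: normal in .rg, depth in .ba.
float4 PS(float3 normalVS : TEXCOORD0, float depthVS : TEXCOORD1) : COLOR
{
    return float4(PackNormal(normalize(normalVS)),
                  PackDepth(saturate(depthVS)));
}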

Or, you could switch to a floating-point format. It really shouldn't be difficult to try out, but I think you will be surprised by the performance hit from having double the bandwidth to process. It's worth a shot, though, if it saves you from implementing something like what I mentioned above.
