Storing the pixel depth in a texture

1 comment, last by Jason Z 18 years ago
To sum up the question: how is depth usually stored in a buffer, and how many bits does it typically need?

I am trying to create a g-buffer that stores the normal and depth of every pixel. Looking at some online examples and some books I'm reading (GPU Gems 2), this seems to be a pretty common thing to do: store the normal in the color channels and the depth in the alpha channel. Calculating the depth and normal in the vertex shader, passing them to the pixel shader, and then writing Color = normal and Alpha = depth is easy enough, but in a standard 32-bit texture you only have 8 bits for the alpha. 8 bits (only 256 values) seems like a very small amount to store depth information in. In fact, when looking at my scene, huge bands of it are clumped together into the same depth because of trying to compress it all down to 8 bits.

Is there a better way to do this? Should I be using a 64-bit texture? Should I be storing depth in its own texture and using all 32 bits for it? Is there a better way to use my single texture to store depth information? What I don't get is that a lot of the edge-detection and deferred-shading examples online use only the alpha channel of a texture to store pixel depth, and I can't figure out how they get away with it in a 32-bit texture.

My current implementation is ps2.0 in D3D9, but I thought this question was more generic to graphics.
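The banding described above is easy to quantify. A quick sketch (the 1000-unit far plane is a made-up example distance, not from the thread): with 8 bits, normalized depth snaps to one of 256 levels, so over a 1000-unit view range every ~3.9 units of depth collapse into the same value.

```python
# Sketch of why 8-bit depth bands: quantize normalized depth to 256 levels.
def quantize_depth_8bit(depth):
    # depth is assumed normalized to [0, 1]
    return round(depth * 255) / 255

far_plane = 1000.0  # hypothetical far-plane distance in world units

# Two surfaces one unit apart land in the same 8-bit band:
a = quantize_depth_8bit(501.0 / far_plane)
b = quantize_depth_8bit(502.0 / far_plane)
print(a == b)            # prints True
print(far_plane / 255)   # band width: ~3.92 world units per depth value
```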
I would probably just go with the 64-bit texture. It's the most direct way to accomplish what you want. I haven't been able to think of a good way to compress depth data into 8 bits, even with a floating-point format, but someone else might be better in that area.
Normal z-buffer depth is usually 16 or 32 bits in an integer format. So you could compress the normal vector down to two components (for example, by converting it to spherical coordinates) and use the other two channels for the depth. That would give you much greater depth precision while still using a simple 32-bit render target.
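The two-channel depth idea above amounts to splitting a 16-bit fixed-point depth into a high byte and a low byte. A minimal sketch of that encoding (the function names are mine, and in an actual ps2.0 shader you would do the same split with `floor`/`frac` arithmetic rather than bit operations):

```python
# Sketch: pack a normalized depth into two 8-bit channels (hi byte, lo byte),
# giving 65536 levels instead of 256 from a single alpha channel.
def pack_depth_16(depth):
    # depth assumed in [0, 1]; clamp to the top 16-bit value
    d = min(int(depth * 65535.0), 65535)
    return d >> 8, d & 0xFF          # two channel values, each 0-255

def unpack_depth_16(hi, lo):
    return ((hi << 8) | lo) / 65535.0

hi, lo = pack_depth_16(0.123456)
err = abs(unpack_depth_16(hi, lo) - 0.123456)
print(err < 1.0 / 65535.0)  # prints True: error stays under one 16-bit step
```

With the normal compressed into the other two channels, the whole g-buffer entry still fits a standard 32-bit render target.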

Or, you could switch to a floating-point format. It really shouldn't be very difficult to try out, but I think you will be surprised at the perf hit you take from having double the bandwidth to process. It's worth a shot, though, to save implementing something like what I mentioned above.

This topic is closed to new replies.
