cignox1

Depth textures and GLSL


For many effects (from depth of field to realistic soft shadows) you need to read the depth buffer. Currently I'm trying to reproduce translucent glass (where the transmitted image is blurred). I made it work by averaging the surrounding pixels for each pixel drawn (the scene was rendered to a texture in a previous pass), but the real effect should take into account the distance between the object and the glass.

I thought I would store the depth buffer of the previous pass in a texture and then use the z coordinate of the current pixel in a depth_value - gl_FragCoord.z operation. But the resulting value isn't correct.

Is the sampler2D/texture2D pair the right way to access a depth texture? How are the z coordinate and the depth buffer values stored? Are they in the same scale/space/range? Can they be compared? When I read the depth value with texture2D(), do I get the full-precision value or a clamped one? Does the resulting vector store the same value in each of the RGBA channels?

Sorry for all those questions, I hope someone can help me :-) Thank you!
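For reference, this is roughly what my fragment shader does at the moment (just a sketch, the uniform names are made up and not my actual code):

    uniform sampler2D sceneTex;   // colour of the previous pass
    uniform sampler2D depthTex;   // depth buffer of the previous pass, copied into a texture
    uniform vec2 screenSize;      // viewport size in pixels

    void main()
    {
        vec2 uv = gl_FragCoord.xy / screenSize;

        // depth of whatever was rendered behind the glass in the previous pass
        float sceneDepth = texture2D(depthTex, uv).r;

        // depth of the glass fragment currently being shaded
        float glassDepth = gl_FragCoord.z;

        // this difference is what I want to drive the blur radius with,
        // but the value I get doesn't look right
        float diff = sceneDepth - glassDepth;

        gl_FragColor = texture2D(sceneTex, uv); // the blur based on 'diff' would go here
    }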

Quote:
Original post by cignox1
But the resulting value isn't correct. Is the sampler2D/texture2D pair the right way to access a depth texture? How are the z coordinate and the depth buffer values stored? Are they in the same scale/space/range? Can they be compared? When I read the depth value with texture2D(), do I get the full-precision value or a clamped one? Does the resulting vector store the same value in each of the RGBA channels?


You need to use the shadow samplers if you are using a depth texture with depth comparison turned on. AFAIK the depth values are stored in the 0-1 range, and they aren't linear either... And yes, RGBA all hold the same value if you sample a depth map and assign the result to a vec4, e.g. vec4 color = shadow2D(...);
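Something along these lines (from memory and untested, the sampler names are made up):

    uniform sampler2DShadow shadowMap; // depth texture with GL_COMPARE_R_TO_TEXTURE enabled
    uniform sampler2D       depthMap;  // depth texture with GL_TEXTURE_COMPARE_MODE set to GL_NONE
    varying vec2 uv;                   // interpolated texture coordinate from the vertex shader

    void main()
    {
        // Comparison enabled: shadow2D() returns the result of the depth test
        // (0.0 or 1.0, possibly something in between with linear filtering),
        // not the stored depth. The third coordinate is the reference depth.
        vec4 lit = shadow2D(shadowMap, vec3(uv, gl_FragCoord.z));

        // Comparison disabled: texture2D() returns the stored window-space depth,
        // replicated across the colour channels (with the default DEPTH_TEXTURE_MODE).
        float storedDepth = texture2D(depthMap, uv).r;

        gl_FragColor = vec4(vec3(storedDepth), 1.0) * lit;
    }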

Thank you! I have two questions though: the shadow sampler takes a 3D vector as its texture coordinate, while the texture2D sampler takes a 2D vector. Does the shadow one simply return the value in the depth buffer, or does it do some extra computation/test that could invalidate the result for my purposes?

Is there any workaround for the depth value being in the range [0,1] and non-linear, while the z component is in a different space?

I could imagine rendering the position of every pixel to a texture and then passing that texture to the following pass. This would be easy, but it smells like a hack. Using the depth buffer for these effects would feel more natural.
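By "rendering the position" I mean something along these lines (just a sketch; it would also need a floating-point colour target to keep any useful precision):

    // --- vertex shader ---
    varying vec3 eyePos;

    void main()
    {
        eyePos = vec3(gl_ModelViewMatrix * gl_Vertex); // eye-space position
        gl_Position = ftransform();
    }

    // --- fragment shader ---
    varying vec3 eyePos;

    void main()
    {
        // write the eye-space position into the colour target
        gl_FragColor = vec4(eyePos, 1.0);
    }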
Any idea?
Thank you again.

EDIT: But doesn't a shadow sampler only return 0.0 or 1.0?
According to this thread, sampling is possible, but apparently only an 8-bit value is returned.
This thread provides a (hopefully working) formula to linearize the depth value (i.e. convert it back to eye space?):

a = -far / (far - near)
b = -far * near / (far - near)
Z = (a * z + b) / -z

If this formula is correct, then half of my problems are solved. Now: how do I read the actual depth value? The first thread I linked to says that this is easy in HLSL, but that the GLSL spec only allows depth comparison, not direct sampling. Indeed, it seems that the most I can get is an 8-bit value, which is too low a precision. Is that true?
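For reference, this is how I would try to put the formula to use (untested sketch; it assumes a standard glFrustum/gluPerspective projection, a depth texture bound with comparison disabled, and the near/far plane distances passed in as uniforms):

    uniform sampler2D depthTex;    // depth buffer of the previous pass
    uniform vec2  screenSize;      // viewport size in pixels
    uniform float nearPlane;       // near plane distance
    uniform float farPlane;        // far plane distance

    void main()
    {
        vec2 uv = gl_FragCoord.xy / screenSize;

        float a = -farPlane / (farPlane - nearPlane);
        float b = -farPlane * nearPlane / (farPlane - nearPlane);

        // invert the formula above: window-space depth in [0,1] becomes a
        // linear distance from the camera, ranging from nearPlane to farPlane
        float sceneDepth = texture2D(depthTex, uv).r;
        float sceneDist  = b / (sceneDepth + a);

        // same conversion for the glass fragment currently being shaded
        float glassDist  = b / (gl_FragCoord.z + a);

        // distance between the glass and the object behind it
        float thickness = sceneDist - glassDist;

        gl_FragColor = vec4(vec3(thickness), 1.0); // just visualising it for now
    }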

[Edited by - cignox1 on May 24, 2006 2:57:34 AM]
