question about oD0 in vertex shader

Hi! While implementing shadow mapping with shaders, I ran into a strange result: there are many dark lines on the surface of the object, although the shadow itself is correct.

This is the idea of the implementation:

1: Use zl/zm as the depth value in the shadow texture, where zl is the z value in light view space (viewed from the light position) and zm is the maximum z value in light space, so zl/zm lies within 0.0 to 1.0. This zl/zm is written to every component of oD0.
2: Compute the depth in light view space the same way while rendering the scene, and compare it with the value sampled from the shadow texture.

I guess the cause is the precision of oD0, but I don't know how oD0 actually works.
Can someone give me more detail about oD0? Thanks.
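To make step 1 concrete, here is a rough sketch of the idea in D3D9-style HLSL (the real shader is written in assembly, and the matrix/constant names below are only placeholders; the COLOR0 output is what becomes oD0 when targeting vs_1_1):

// Hypothetical depth-pass vertex shader -- constant names are assumptions.
float4x4 WorldLightViewProj;   // world * light view * light projection
float4x4 WorldLightView;       // world * light view
float    InvMaxZ;              // 1 / zm

struct VS_OUT
{
    float4 pos : POSITION;
    float4 dif : COLOR0;       // maps to oD0 in the vs_1_1 asm output
};

VS_OUT DepthVS(float4 posObj : POSITION)
{
    VS_OUT o;
    o.pos = mul(posObj, WorldLightViewProj);       // position in light clip space
    float zl = mul(posObj, WorldLightView).z;      // zl: depth in light view space
    o.dif = saturate(zl * InvMaxZ).xxxx;           // zl/zm replicated to every component of oD0
    return o;
}

The depth-pass pixel shader then simply writes this interpolated color into the shadow texture.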
Hello!

[...] there are many dark lines on the surface of the object, although the shadow itself is correct. [...]


This is a well-known issue called shadow acne.
It generally comes from too small a bias in the depth comparison.

Something like this should solve the problem:

shadowmap.Sample(...).x + depthBias < posLight.z ? 0.0f : 1.0f;

Nico

PS: Personally I get good results with values in the [1e-4, 1e-3] range.
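Since you are working at the register level, here is the same compare as a rough D3D9-style HLSL sketch (tex2D rather than the SM4 Sample call above); the sampler and interpolant names are only placeholders:

// Hypothetical scene-pass pixel shader -- names and bias value are assumptions.
sampler2D ShadowMap;
static const float DepthBias = 5e-4f;              // tune somewhere in the [1e-4, 1e-3] range

float4 ShadowPS(float2 shadowUV   : TEXCOORD0,     // where this pixel lands in the shadow map
                float  lightDepth : TEXCOORD1)     // this pixel's zl/zm, computed as in the depth pass
                : COLOR0
{
    float stored = tex2D(ShadowMap, shadowUV).x;   // depth written during the shadow pass
    // In shadow when the stored depth (plus bias) is still closer than this pixel's depth.
    float litFactor = (stored + DepthBias < lightDepth) ? 0.0f : 1.0f;
    return litFactor.xxxx;
}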
It's just like you said, but I'm confused about how many bits each component of oD0 has.
Thanks for your answer.
If you're manually outputting depth from a pixel shader, then you just output a [0.0, 1.0] floating-point value. It will then be converted to whatever format is being used by the depth buffer, which is typically either 24-bit fixed-point integer or 32-bit floating point.
Won't using that depth bias cause another shadow map artifact, "Peter Panning"?

Won't using that depth bias cause another shadow map artifact, "Peter Panning"?


Hello,
Yes, that's a compromise: the chosen value is the minimal one for which there is no more acne,
but adding a bias results in a small shift of the shadow, which is the Peter Panning artifact you mentioned.

If you're manually outputting depth from a pixel shader, then you just output a [0.0, 1.0] floating-point value. It will then be converted to whatever format is being used by the depth buffer, which is typically either 24-bit fixed-point integer or 32-bit floating point.

While generating the shadow texture, I move the depth value into oD0. What I know is that each component of oD0 is clamped to [0.0, 1.0], but I'm not clear about how many bits are used to store the value. In the pixel shader, I use tex to retrieve the value from the shadow texture,
so what I have done has nothing to do with the depth buffer. While rendering the scene, I store the depth in oD1 in the vertex shader and then read it from v1 in the pixel shader.

Here is what I guess, but I'm not sure:
the r# registers are 128 bits, with 32 bits per component;
the c# registers are 128 bits, with 32 bits per component;
the oD# registers are 32 bits, with 8 bits per component.

Are these correct?
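A related guess: would it help to move the depth into a texture coordinate output (oT#) instead of oD0/oD1? As far as I can tell, the texture coordinate interpolants are not limited to 8 bits per component. A rough D3D9-style HLSL sketch of what I mean (all names are placeholders):

// Hypothetical scene-pass vertex shader variant -- names are assumptions, and
// reading a TEXCOORD directly in the pixel shader assumes a ps_2_0-class target.
float4x4 WorldViewProj;    // world * camera view * camera projection
float4x4 WorldLightView;   // world * light view
float    InvMaxZ;          // 1 / zm

struct VS_OUT
{
    float4 pos        : POSITION;
    float  lightDepth : TEXCOORD1;   // carried in an oT# interpolant instead of 8-bit oD1
};

VS_OUT SceneVS(float4 posObj : POSITION)
{
    VS_OUT o;
    o.pos        = mul(posObj, WorldViewProj);                          // camera clip space
    o.lightDepth = saturate(mul(posObj, WorldLightView).z * InvMaxZ);   // same zl/zm as the depth pass
    // The shadow-map texture coordinates (TEXCOORD0) are omitted here for brevity.
    return o;
}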




