Hardware shadow mapping depth issues

Last reply by Leafteaner, 17 years, 2 months ago
I have a working shadow map system that uses a 32-bit texture as the render target, but I am trying to implement it with a depth-stencil texture to test the performance/quality differences, and I am having some problems. I've been following the hardware shadow map example in the ATI SDK, since I have an X1950 Pro. I am using (D3DFORMAT)(MAKEFOURCC('D','F','1','6')) as the format, and have tried (D3DFORMAT)(MAKEFOURCC('D','F','2','4')) with the same results.

How are depth values written to the depth buffer? Is it just the projected Z value? Here is how the sample exports the position from the vertex shader:

    outPosition = mul( WorldTransformMatrix, inPosition );
    outPosition.z *= RangeScale;

where RangeScale is 1.0/FarClipZ, and it uses this depth for the comparison when rendering the scene:

    LightSpaceDepth = mul( WorldToLightTransformMatrix, Input.Position ).z * RangeScale;

I've tried everything: following their sample, using the z values without the scaling, and using the same z/w approach I use with my color-target shadow maps. The shadow comparison always fails, and the whole scene is in shadow except at very close distances to the light...
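For what it's worth, when you render to a depth-stencil texture the rasterizer stores the post-divide depth of the output position, i.e. outPosition.z / outPosition.w, so the reference depth in the main pass has to be computed with exactly the same projection and the same divide. The ATI sample's RangeScale trick only matches because its shadow-pass output has w = 1, making the stored value a linear z/FarClipZ. A minimal sketch of the symmetric z/w approach under a normal perspective light projection (identifier names like LightViewProjMatrix, ShadowMapSampler, and DepthBias are placeholders I made up, not names from the SDK sample):

```hlsl
// Sketch only -- assumes the shadow map was rendered with the same
// LightViewProjMatrix used below.

float4x4 LightViewProjMatrix;   // world -> light clip space
sampler2D ShadowMapSampler;     // bound to the DF16/DF24 texture
float     DepthBias;            // small offset to avoid self-shadowing acne

// --- Shadow pass vertex shader ---------------------------------
// The depth buffer receives outPos.z / outPos.w after rasterization,
// i.e. the light projection's nonlinear z/w, not view-space z.
float4 ShadowVS(float4 worldPos : POSITION) : POSITION
{
    return mul(worldPos, LightViewProjMatrix);
}

// --- Main pass pixel shader ------------------------------------
// lightPos is the interpolated mul(worldPos, LightViewProjMatrix)
// passed down from the main-pass vertex shader.
float4 ScenePS(float4 lightPos : TEXCOORD0) : COLOR
{
    // Project into shadow-map texture space: [-w, w] -> [0, 1].
    float2 uv = lightPos.xy / lightPos.w * float2(0.5, -0.5) + 0.5;

    // Reference depth must mirror what the shadow pass stored: z/w.
    float receiverDepth = lightPos.z / lightPos.w;

    float storedDepth = tex2D(ShadowMapSampler, uv).r;
    float lit = (receiverDepth - DepthBias) <= storedDepth ? 1.0 : 0.0;
    return lit;
}
```

If the whole scene reads as shadowed except very near the light, that is the classic symptom of the reference depth growing faster than the stored depth, which is what happens when a linear z (or z * RangeScale) is compared against a buffer that actually holds the perspective z/w.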

This topic is closed to new replies.
