james19142

Shadow Mapping Issue


Hello people,

I'm having an issue implementing shadow mapping in DirectX 10. Certain areas of most of the meshes seem to always fail the depth test. I've tried increasing the bias, but for it to have an effect on these faces, the bias would have to be so large that all the other shadows begin floating away from their points of contact.

I've also tried adjusting the slope-scaled bias parameters of the rasterizer state, which did not help, as the problem cases do not seem to correspond to the surface normal and light direction. Since these shadow patches are not affected by moving the light source relative to their surface, I've ruled out the normals themselves: under non-shadowed lighting, the faces are lit as expected.

It seems to be a floating-point precision issue, yet after reducing the light's far plane from 1000 to 60, and then to 45, the problem persists. Whether I use an orthographic or perspective projection makes no difference either, except that with an ortho matrix the bias has to be set a lot higher, ultimately leading to the same issue.

Depth is rendered as usual, using a null pixel shader with no render target bound. The depth buffer is in format DXGI_FORMAT_D32_FLOAT and shows nothing strange. 

Below is the relevant shader code:

inline float2 uvProjectionOfPosition(in float4 position)
{
    float2 output =
    {
        position.x / position.w / 2.0f + 0.5f,
        -position.y / position.w / 2.0f + 0.5f
    };
    return output;
}

//=============================================================================================
//needed until border sampler state is debugged
bool uvInRange(uniform float2 uv)
{
    return saturate(uv.x) == uv.x && saturate(uv.y) == uv.y;
}
//=============================================================================================
inline bool pixelIsShadowed(uniform float4 positionFromLightView, uniform float2 uvToDepthMap, uniform float4 sampledDepth, uniform float bias)
{
    return !uvInRange(uvToDepthMap) || (sampledDepth.x + bias) < (positionFromLightView.z / positionFromLightView.w);
}




//Copied from within the main lighting function
//=============================================================================================
float4 positionInLightProjection = mul(float4(thisWorldPosition, 1.0f), lightProjection(ilight));
float2 projectedLightUV = uvProjectionOfPosition(positionInLightProjection);
float4 depthMapSample = lightDepthMapSample(projectedLightUV);

float bias = 0.0000009f; //<- for projection matrix & about 0.015f for ortho
if (pixelIsShadowed(positionInLightProjection, projectedLightUV, depthMapSample, bias))
{
    thisShadowFactor = 0.0f;
}
A simple trick: offset the sampling position by the normal of the surface that receives the shadow. Something like:

float bias = 0.001f;
float4 positionInLightProjection = mul(float4(thisWorldPosition + thisWorldNormal * bias, 1.0f), lightProjection(ilight));


Could it be this line? 

-position.y/position.w/ 2.0f + 0.5f

I'm not 100% sure why you are flipping this, could that be the issue?

 

Also, you can create a matrix to transform your coordinates from [-1, 1] to [0, 1] and multiply it by your light matrix to save a bit of computation when you do the shadow comparison.

 

Also, I believe you don't need to divide by w for this or for the z position. The z value in the shadow map is the projected z without the division, so doing the division is pointless, I believe.

 

This is how I sample from my shadow map for reference:

vec4 position_shadow = light_matrix * vec4(position, 1.0);
vec2 tex_coord_shadow = position_shadow.xy;
float z = texture(tex_shadow, tex_coord_shadow).x;

position is the world-space position of a fragment, and my light matrix includes the matrix to transform it to [0, 1]


A simple trick: offset the sampling position by the normal of the surface that receives the shadow. Something like:

float bias = 0.001f;
float4 positionInLightProjection = mul(float4(thisWorldPosition + thisWorldNormal * bias, 1.0f), lightProjection(ilight));

I do see how this would work, as the UV coordinate would be as if the meshes were slightly scaled in world space, evading minor depth differences when compared against the depth map. I've been experimenting with it, and it helps with the shadow patches on the tops of the meshes, but the areas covering their lower halves seem unaffected. As expected, at some point the scale difference begins to noticeably affect the shadow shape, so I'll probably have to combine this with another solution.


Could it be this line?  -position.y/position.w/ 2.0f + 0.5f I'm not 100% sure why you are flipping this, could that be the issue?

 

This is to go from the 3D coordinate system used in DirectX, where positive Y is up, to the UV coordinate system, where positive Y points downward.

 

Also I believe that you don't need to divide by w for this or the z position. The z value in the shadow map is the projected z without the division, so doing it is pointless I believe. 

 

Division of z by w accounts for the near and far planes, essentially converting z from its view-space value to its fraction of the distance from the near plane to the far plane. This ensures all z values between the two fit between 0 and 1, as they are in the depth buffer, although with a perspective projection matrix the resulting values are non-linear.

 

 

This is how I sample from my shadow map for reference:

vec4 position_shadow = light_matrix * vec4(position, 1.0);
vec2 tex_coord_shadow = position_shadow.xy;
float z = texture(tex_shadow, tex_coord_shadow).x;

position is the world-space position of a fragment, and my light matrix includes the matrix to transform it to [0, 1]

 

This looks like GLSL; I believe OpenGL and DirectX use different coordinate conventions, which may explain many of the differences between our methods.
