vaux1985

Shadow Map "Receiver" Depth Bias


I've been trying to implement "receiver plane" depth bias based on these two references:
 
 
As I understand it, the basic principle is: convert the screen-space depth derivatives into shadow-map space to work out how the receiver's depth changes as you move from texel to texel in the shadow map, then use that to compute a per-tap depth bias. Here's my GLSL code, more or less as it appears in Tuft's presentation (the Microsoft reference above, slide 52 onwards); I've also written the transform out in closed form after the code as a cross-check:
 
//  get shadow map texture coords (viewSpacePosition is extracted from the 
//  z buffer and view ray)
  vec4 shadowCoords = uShadowMatrix * vec4(viewSpacePosition, 1.0);
  shadowCoords.xyz /= shadowCoords.w;
	
//  light space -> screen space matrix
  mat2 jacobian = mat2(
    dFdx(shadowCoords.x),		dFdy(shadowCoords.x),
    dFdx(shadowCoords.y),		dFdy(shadowCoords.y)
  );
  jacobian = inverse(jacobian);
	
//  get depth ratios (how depth varies as we move in shadow map)
  vec2 texelSize = 1.0 / vec2(textureSize(uShadowTex, 0));
  vec2 right = jacobian * vec2(texelSize.x, 0.0);
  vec2 up = jacobian * vec2(0.0, texelSize.y);
  vec2 dfDepth = vec2(dFdx(linearDepth), dFdy(linearDepth)); // linearDepth = linearized depth from z buffer
	
//  get depth deltas (per-tap depth offsets for filtering)
  float rightDelta = right.x * dfDepth.x + right.y * dfDepth.y;
  float upDelta = up.x * dfDepth.x + up.y * dfDepth.y;
		
//  perform shadow map filtering (25 tap box filter)
  float shadowResult = 0.0;
  for (int i = -2; i <= 2; ++i) {
    for (int j = -2; j <= 2; ++j) {
      vec2 offset = vec2(i, j);
      float sdepth = texture(uShadowTex, shadowCoords.xy + offset * texelSize).r;
			
    //  get bias for this tap
      float sbias = offset.x * rightDelta + offset.y * upDelta;
	
    //  apply bias
      sdepth += sbias * uSlopeScale + uConstantBias;
			
      shadowResult += step(shadowCoords.z, sdepth);
    }
  }
  shadowResult /= 25.0;
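For reference, here's the same derivative transform written out in closed form (Cramer's rule applied to the 2x2 Jacobian), which is how I understand the math. The variable names here are mine; I've used shadowCoords.z (the value that actually gets compared against the map) rather than linearDepth, and writing it out this way also avoids any ambiguity from GLSL's column-major mat2 constructor:
 
//  screen-space derivatives of (shadow u, shadow v, shadow depth)
  vec3 duvdepth_dx = dFdx(shadowCoords.xyz);
  vec3 duvdepth_dy = dFdy(shadowCoords.xyz);

//  determinant of the 2x2 screen space -> shadow uv Jacobian
  float det = duvdepth_dx.x * duvdepth_dy.y - duvdepth_dx.y * duvdepth_dy.x;

//  depth gradient in shadow map uv space: (d depth / du, d depth / dv)
  vec2 depthGradient;
  depthGradient.x = duvdepth_dy.y * duvdepth_dx.z - duvdepth_dx.y * duvdepth_dy.z;
  depthGradient.y = duvdepth_dx.x * duvdepth_dy.z - duvdepth_dy.x * duvdepth_dx.z;
  depthGradient /= det;

//  per-texel depth deltas, same role as rightDelta/upDelta above
  float rightDelta = depthGradient.x * texelSize.x;
  float upDelta    = depthGradient.y * texelSize.y;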
For context: I'm rendering front faces into the shadow map. The shader reads the depth from the hardware z-buffer and recovers the view-space position by linearizing that depth and multiplying by a view ray (typical deferred rendering stuff; I've included a sketch of this reconstruction after the matrix code below). The matrix uShadowMatrix, which converts from view space to shadow-map space, is constructed as follows:
 
shadowMatrix = mat44(
  0.5f,0.0f,0.0f,0.5f,
  0.0f,0.5f,0.0f,0.5f,
  0.0f,0.0f,0.5f,0.5f,
  0.0f,0.0f,0.0f,1.0f
);
shadowMatrix *= shadowCamera.getProjectionMatrix();
shadowMatrix *= shadowCamera.getViewMatrix();
shadowMatrix *= currentCamera.getWorldMatrix(); // inverse view matrix
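For completeness, this is roughly how the view-space position is reconstructed (standard deferred approach). The names uNear, uFar, uDepthTex and vViewRay are mine; uv is the screen-space texture coordinate, and vViewRay is interpolated from the frustum corners and scaled so its z component is -1 (OpenGL conventions assumed):
 
//  reconstruct view-space position from the hardware depth buffer
  float ndcDepth    = texture(uDepthTex, uv).r * 2.0 - 1.0;  // [0,1] -> [-1,1]
  float linearDepth = 2.0 * uNear * uFar / (uFar + uNear - ndcDepth * (uFar - uNear));
  vec3  viewSpacePosition = vViewRay * linearDepth;          // vViewRay.z == -1.0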
I'm not sure whether any of this makes a difference. Also, I've adjusted uSlopeScale over the range [-2, 2] and see no difference between positive and negative values. See the screenshots below.
 
I'm completely stumped and suspect I'm missing something crucial, but I can't seem to figure out what. All insights welcome!
 
Here's what I'm currently getting. Note that the large banding artefacts are view-dependent (they slide around as the camera moves):
 
[attachment=18766:bias_issues.png]

 


Sorry to bump this, but I'm still stumped.

 

I'd be happy just to see some working code, if anyone has a working implementation they could share.
