
## Recommended Posts

I've been trying to implement "receiver plane" depth bias based on these two references:

As I understand it, the basic principle is as follows: convert the screen-space depth derivatives into shadow map space to work out how the receiver's depth changes as you move around in the shadow map, then use this information to compute a per-tap depth bias. Here's my GLSL code, more or less as it appears in Tuft's presentation (the Microsoft reference above, slide 52 onwards):

//  get shadow map texture coords (viewSpacePosition is extracted from the
//  z buffer and view ray)
vec4 shadowCoord = uShadowMatrix * vec4(viewSpacePosition, 1.0);

//  screen space -> shadow map space jacobian (mat2 is column-major: the
//  first column is the x derivative, the second column the y derivative)
mat2 jacobian = mat2(
    dFdx(shadowCoord.x), dFdx(shadowCoord.y),
    dFdy(shadowCoord.x), dFdy(shadowCoord.y)
);
jacobian = inverse(jacobian); // now maps shadow map offsets -> screen offsets

//  get depth ratios (how depth varies as we move in shadow map)
vec2 texelSize = 1.0 / vec2(textureSize(uShadowTex, 0));
vec2 right = jacobian * vec2(texelSize.x, 0.0);
vec2 up = jacobian * vec2(0.0, texelSize.y);
vec2 dfDepth = vec2(dFdx(linearDepth), dFdy(linearDepth)); // linearDepth = linearized depth from z buffer

//  get depth deltas (per-tap depth offsets for filtering)
float rightDelta = right.x * dfDepth.x + right.y * dfDepth.y;
float upDelta = up.x * dfDepth.x + up.y * dfDepth.y;

//  perform shadow map filtering (5x5 = 25 tap box filter; note the <=
//  bounds so we actually take 5 taps per axis)
float shadow = 0.0;
for (int i = -2; i <= 2; ++i) {
    for (int j = -2; j <= 2; ++j) {
        vec2 offset = vec2(i, j);

        //  get bias for this tap
        float sbias = offset.x * rightDelta + offset.y * upDelta;

        //  apply bias per tap (not accumulated across taps)
        float sdepth = shadowCoord.z + sbias * uSlopeScale + uConstantBias;
        shadow += texture(uShadowTex, shadowCoord.xy + offset * texelSize).r < sdepth ? 0.0 : 1.0;
    }
}
shadow /= 25.0;
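To convince myself the Jacobian math is doing what the slides claim, here's a small CPU-side sanity check (plain Python; all the names here are mine, not from the shader). It places the receiver on a plane whose shadow-space depth is depth(u, v) = a*u + b*v + c, picks an arbitrary screen-to-shadow-UV Jacobian, derives the screen-space depth gradient via the chain rule, then runs the same inverse-Jacobian arithmetic as the shader and compares the resulting per-texel deltas against the analytic answer:

```python
# Receiver-plane depth bias sanity check (plain Python, no GLSL).
# Receiver plane in shadow-map space: depth(u, v) = a*u + b*v + c.

a, b, c = 0.3, -0.7, 0.5          # plane slopes and offset (arbitrary)
texel = 1.0 / 1024.0              # shadow map texel size

# screen -> shadow-uv Jacobian J = [[du/dx, du/dy], [dv/dx, dv/dy]]
J = [[2.0, 0.5],
     [0.3, 1.5]]

# screen-space depth derivatives via the chain rule:
# d(depth)/dx = a*du/dx + b*dv/dx, and likewise for y
dfdx = a * J[0][0] + b * J[1][0]
dfdy = a * J[0][1] + b * J[1][1]

# invert the 2x2 Jacobian: maps a shadow-uv offset back to a screen offset
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
Jinv = [[ J[1][1] / det, -J[0][1] / det],
        [-J[1][0] / det,  J[0][0] / det]]

# screen offsets corresponding to one texel right / up in the shadow map
right = (Jinv[0][0] * texel, Jinv[1][0] * texel)
up    = (Jinv[0][1] * texel, Jinv[1][1] * texel)

# the shader's per-tap deltas: dot(screen offset, screen depth gradient)
right_delta = right[0] * dfdx + right[1] * dfdy
up_delta    = up[0]    * dfdx + up[1]    * dfdy

# analytic truth: one texel in u changes depth by a*texel, in v by b*texel
print(abs(right_delta - a * texel) < 1e-12)  # True
print(abs(up_delta    - b * texel) < 1e-12)  # True
```

Both checks pass, so the inverse-Jacobian construction itself recovers the exact per-texel depth change for a planar receiver; whatever is going wrong in my shader must be in the inputs (derivatives, spaces, or matrix), not in this arithmetic.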

The shader above executes in the following context: I'm rendering front faces into the shadow map. The shader reads the depth from the hardware z buffer and gets the view space position by linearizing this depth and multiplying by a view ray (typical deferred rendering stuff). The matrix "uShadowMatrix" (which converts from view space to shadow map space) is constructed as follows:

shadowMatrix = mat44(
    0.5f, 0.0f, 0.0f, 0.5f,
    0.0f, 0.5f, 0.0f, 0.5f,
    0.0f, 0.0f, 0.5f, 0.5f,
    0.0f, 0.0f, 0.0f, 1.0f
);
shadowMatrix *= currentCamera.getWorldMatrix(); // inverse view matrix
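As a quick check of the scale/offset part of that matrix, here's a tiny Python sketch (the `transform` helper and its row-major layout are my own, matching how the literal above is written) showing that it remaps NDC coordinates in [-1, 1] to texture-space coordinates in [0, 1], which is its whole job in the chain:

```python
# Check that the 0.5-scale / 0.5-offset "bias" matrix maps NDC [-1, 1]
# into texture space [0, 1]. Row-major layout, as in the snippet above.

bias = [
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
]

def transform(m, p):
    """Multiply row-major 4x4 matrix m by column vector p = (x, y, z, w)."""
    return [sum(m[r][c] * p[c] for c in range(4)) for r in range(4)]

# NDC cube corners land on the corners of the unit cube
print(transform(bias, [-1.0, -1.0, -1.0, 1.0]))  # [0.0, 0.0, 0.0, 1.0]
print(transform(bias, [ 1.0,  1.0,  1.0, 1.0]))  # [1.0, 1.0, 1.0, 1.0]
print(transform(bias, [ 0.0,  0.0,  0.0, 1.0]))  # [0.5, 0.5, 0.5, 1.0]
```

Note this bias matrix only handles the NDC-to-texture remap; the rest of the view-to-shadow chain (the light's view-projection between it and the camera's world matrix) has to be correct as well for uShadowMatrix to do what its name promises.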

I'm not sure that either of these things makes any difference. Also, I adjust uSlopeScale in the range [-2,2] and get no difference whether it is positive or negative. See the screenshots below.

I'm completely stumped and suspect that I'm missing something crucial, but I can't seem to figure out what - all insights welcome!

Here's what I'm currently getting; note also that the large banding artefacts are view-dependent (they slide around as the camera moves):

[attachment=18766:bias_issues.png]


Sorry to bump this, but I'm still stumped.

I'd be happy just to see some working code - does anyone have a working implementation?
