
## Recommended Posts

Since I have my shadow mapping working (sweet), I'm trying to figure out the best way to handle the "shadow bias".

It appears the "shadow bias" is different depending on the distance of the light from the object being shadowed. My initial thought is that each object gets its own "shadow bias" value found by trial and error (me keying in values until I find the best result).

Since I don't really understand what "shadow bias" is or why it's needed, I'm not really sure what to think about it.

Can a number be dynamically calculated based on the distance between the object and the light? If so, any thoughts on how that would work?

Hey!

> It appears the "shadow bias" is different depending on the distance of the light to the object being shadowed.

That is right. Shadow mapping uses depth values in the light's clip space, which are not linearly distributed. For this reason you lose precision as objects move farther away, and the artifacts become worse. Make sure that your near and far planes (when capturing the shadow map) are well chosen so that you get a good distribution of z-values. In this context the term "pancaking" comes up at times; it is an expression for: pack the near and far planes as tightly as possible around the objects that cast shadows. You will always need a little depth bias to get rid of the quantization error introduced by the non-linear depth values.

> Since I don't really understand what "shadow bias" is and why it's needed, not really sure what to think about it.

The shadow test compares two z-values per pixel:

• The first is computed via projection of the world position into the light's clip space.
• The second is fetched from a *rasterized* depth map created from the light's view.

The problem is that you compare an analytically correct z-value to a discretized z-value. The following image is from David Tuft and can be found here. You can also listen to his talk on shadows at Gamefest 2010 (audio track included).

Note that the discrete depth values (red lines) are constant per shadow-map texel and are aligned to the light's view. The two black arrows from the camera indicate two shadow tests from adjacent camera pixels. Both fetch the depth value from the same depth-map texel, but the comparison yields different results. Although this object is the closest to the light, the bottom arrow will report a shadowed area, since we compare against the wrong (simply too roughly discretized) depth value. The problem gets smaller as you increase the resolution of the shadow map, but it doesn't vanish. The green line would be the right depth bias to apply. As you can also see in this image, the artifact depends on the slope of the object in the light's view.

In DX10+ you can automatically add a depth bias dependent on the slope, called slope-scaled depth bias. The rasterizer state has the properties DepthBias (a constant value) and SlopeScaledDepthBias, but unfortunately they are only applied to operations on an actual depth buffer (DXGI_FORMAT_D…), according to a talk by John Isidoro. If you use an ordinary render target (DXGI_FORMAT_R32_FLOAT, for instance) you have to compute it yourself.

The first thing that comes to mind: all right, let's reimplement what the hardware can do with the actual depth buffer. That would be:

    ddistdx = ddx(dist);
    ddistdy = ddy(dist);
    dist += g_fSlopeBias * abs(ddistdx);
    dist += g_fSlopeBias * abs(ddistdy);
where dist is the depth in the light's projection space (the thing I previously called the analytical depth value).

Fine, that simple one was easy enough, but sadly it will fail for large filter kernels, since ddx and ddy compute derivatives with respect to screen space (from the camera). Instead, we need to know how much the analytical depth value changes with respect to a shadow-map texture coordinate. All we want is to sample in a window around a coordinate in the shadow-map texture, right? If we know how the analytical depth value changes as we move around in that texture, we can take this change into account when we make the depth comparison against the value in the shadow map.

Well, let's think about that a little. What tells us how a coordinate in shadow texcoord space changes if we move in screen space in a certain direction? Right, the Jacobian, built from the screen-space derivatives, which luckily are provided by ddx and ddy:

    J = | ddx(txShadow.x)  ddx(txShadow.y) |
        | ddy(txShadow.x)  ddy(txShadow.y) |
Is that what we need? No, not yet.
But we are close. Now we know how a coordinate in shadow texcoord space changes if we move in screen space. Lucky for us, it is a linear dependence (a 2x2 matrix)! This means the inverse of the Jacobian tells us how a coordinate in screen space changes if we move in shadow texcoord space. So we have found a way to "convert" derivatives between those two spaces. (If we know how something changes in one space, we know it for the other space, too.)

So, let's just go ahead and transform the change of the analytical depth value, ddistdx and ddistdy, from screen space to shadow texcoord space. All we need to do is multiply the vector (ddistdx, ddistdy) by the inverse of J. Let's call the result ddistdu and ddistdv.
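Putting that together, here is a rough HLSL sketch of the Jacobian inversion. txShadow and dist are the quantities defined above; the variable names and the remark about the singular case are my own additions, not code from the slides:

```hlsl
// Screen-space derivatives provided by the hardware.
float2 duvdx   = ddx(txShadow);   // d(u,v)/dx
float2 duvdy   = ddy(txShadow);   // d(u,v)/dy
float  ddistdx = ddx(dist);       // d(dist)/dx
float  ddistdy = ddy(dist);       // d(dist)/dy

// Jacobian J = d(u,v)/d(x,y), rows as written above.
float2x2 J = float2x2(duvdx, duvdy);

// Inverse of a 2x2 matrix. The determinant can approach zero at
// grazing angles, so clamping or falling back to a constant bias
// is advisable in practice.
float invDet = 1.0 / determinant(J);
float2x2 Jinv = invDet * float2x2( J._22, -J._12,
                                  -J._21,  J._11);

// Depth derivatives with respect to the shadow texcoord:
// (ddistdu, ddistdv) = Jinv * (ddistdx, ddistdy)
float2 ddistduv = mul(Jinv, float2(ddistdx, ddistdy));
float ddistdu = ddistduv.x;
float ddistdv = ddistduv.y;
```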

Now we can iterate over our filter kernel and compute the amount of depth bias to apply.
The value ddistdu is the depth bias we have to add if we move one texel to the right in the shadow map (we have to add it twice if we move two texels), and ddistdv is the depth bias we have to add if we move one texel downwards. You see, we are now assuming that the surface we are looking at is locally a plane.
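As an illustration, here is a sketch of a 3x3 PCF loop applying that per-tap bias. The names g_txShadow, g_samPoint, g_vShadowMapSize, and g_fDepthBias are hypothetical; and since txShadow here is in normalized [0,1] coordinates, ddistdu/ddistdv (derivatives per uv unit) are scaled by the texel size for each tap:

```hlsl
// 3x3 PCF with slope-derived per-tap bias (sketch, names assumed).
float2 texelSize = 1.0 / g_vShadowMapSize;
float lit = 0.0;
for (int y = -1; y <= 1; ++y)
{
    for (int x = -1; x <= 1; ++x)
    {
        float2 offset = float2(x, y) * texelSize;
        // Depth we expect at this tap if the surface is locally a plane.
        float expected = dist + offset.x * ddistdu + offset.y * ddistdv;
        float stored = g_txShadow.SampleLevel(g_samPoint,
                                              txShadow + offset, 0).r;
        // A small constant bias on top absorbs the quantization error.
        lit += (expected <= stored + g_fDepthBias) ? 1.0 : 0.0;
    }
}
lit /= 9.0;   // fraction of taps considered lit
```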

Well, and that's it. John Isidoro and David Tuft both provide some source code in their slides. I won't copy it here, since you can learn a little more by browsing through their slides yourself, I guess. ;)

Greetings!

Big thanks for all that

I had no idea what I was getting myself into when adding shadow mapping. I've read over what you wrote a few times and I appreciate your insight. I also read that web page. It's a lot to absorb, but I think I'll be able to add it in.

I'm currently listening to the audio and following along with the PowerPoint presentation. This is eye opening!

Thanks again!
