Shadow mapping depth buffer resolution problem

10 comments, last by phil_t 10 years, 6 months ago

Hi,

I'm implementing a line-of-sight calculation shader in which I have an observer which can be dragged about on a map, and the areas visible to the observer are visualized. This is done by traditional shadow mapping where I render the 3D terrain surface in 4 directions with 90 degree frustums to produce depth maps, and then render everything from above, using the four depth maps and frustums for lookup.
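For context, the depth-pass setup is roughly along these lines (a minimal C++/GLM sketch; the observer position, clip planes and struct names are placeholders rather than the exact code):

[code]
// Minimal sketch: view/projection setup for four 90-degree depth passes
// around the observer. Assumes GLM; observerPos, nearPlane, farPlane and
// the struct name are placeholders.
#include <array>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct DepthPass {
    glm::mat4 view;
    glm::mat4 proj;
};

std::array<DepthPass, 4> makeDepthPasses(const glm::vec3& observerPos,
                                         float nearPlane, float farPlane)
{
    // Four horizontal directions; together the 90-degree frustums cover 360 degrees.
    const glm::vec3 dirs[4] = {
        {  1.0f, 0.0f,  0.0f }, { -1.0f, 0.0f,  0.0f },
        {  0.0f, 0.0f,  1.0f }, {  0.0f, 0.0f, -1.0f }
    };
    const glm::vec3 up(0.0f, 1.0f, 0.0f);

    std::array<DepthPass, 4> passes;
    for (int i = 0; i < 4; ++i) {
        passes[i].view = glm::lookAt(observerPos, observerPos + dirs[i], up);
        // Square 90-degree frustum: 90-degree vertical FOV with aspect 1.
        passes[i].proj = glm::perspective(glm::radians(90.0f), 1.0f,
                                          nearPlane, farPlane);
    }
    return passes;
}
[/code]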

This works fairly well as long as there is some variation in the terrain. However, on flat surfaces, the depth map doesn't have enough precision to separate the z values as they get close to the horizon. I have tried to illustrate the problem in the following figure:

[attachment=18393:DepthMapPrecision.png]

This is one of the camera frustums as viewed from the side. When the camera C gets close to the ground, the two depth values Z1 and Z2 map to the same pixel in the depth map, and only the closest one is kept, causing the surface beyond this point to be visualized as invisible.
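To put numbers on it: for a camera at height h over flat ground, a ray at depression angle a hits the ground at d = h / tan(a), so one texel row of the depth map covers roughly h * texelAngle / sin^2(a) meters of ground, which blows up near the horizon. A small standalone sketch (all parameters are illustrative):

[code]
// Standalone sketch: approximate ground footprint of one depth-map texel
// row over flat terrain, to show why Z1 and Z2 end up in the same pixel
// near the horizon. All parameters are illustrative.
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double cameraHeight   = 5.0;    // meters above the flat surface
    const double verticalFovDeg = 90.0;   // per-frustum vertical FOV
    const int    mapRows        = 2048;   // depth-map rows

    // Angle covered by one texel row.
    const double texelAngle = (verticalFovDeg * pi / 180.0) / mapRows;

    // A ray at depression angle a hits the ground at d = h / tan(a), so one
    // texel row spans roughly |dd/da| * texelAngle = h * texelAngle / sin^2(a).
    for (double depressionDeg : { 10.0, 2.0, 0.5, 0.1 }) {
        const double a = depressionDeg * pi / 180.0;
        const double groundDistance = cameraHeight / std::tan(a);
        const double texelFootprint = cameraHeight * texelAngle /
                                      (std::sin(a) * std::sin(a));
        std::printf("%5.1f deg below horizon: ground at %9.1f m, texel covers %9.1f m\n",
                    depressionDeg, groundDistance, texelFootprint);
    }
    return 0;
}
[/code]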

I know that this method will have its limitations, and that the range can't be too far, but does anyone have any ideas about how I could reduce this problem and increase the useful range of this method?

I have tried increasing the number of frustums up to 8 frustums with 45 degree FOV, and it helps a little bit, but not very much.

Cheers


Have you considered linear depth maps? They distribute depth values linearly, giving better precision far from the camera.
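As a minimal sketch of what I mean (the value you would write out in the depth pass instead of the post-projection z/w; names are illustrative, and in practice this is a one-liner in the depth pass's fragment shader):

[code]
// Minimal sketch: writing a "linear" depth value instead of post-projection z/w.
#include <glm/glm.hpp>

float linearDepthValue(const glm::vec3& fragmentViewPos, float farPlane)
{
    // Distance from the camera, normalised to [0,1] over the frustum range,
    // so precision is spread evenly instead of piling up near the near plane.
    return glm::length(fragmentViewPos) / farPlane;
}
[/code]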

Hi and thanks for the answer :-) I don't think the z value distribution is the problem here; I'm currently using logarithmic depth values, which are supposed to increase precision for the far values.

The problem here is rather that the y values map to the same pixel in the depth map, and of course only the closest one is stored.

For your depth maps, have you tried a non-square projection matrix? If you've got 4 depth maps, then you need a 90 degree FOV on the horizontal axis, but maybe you could get away with a much tighter FOV on the vertical axis. Then using the same resolution depth map you could squeeze a lot more pixel resolution in the vertical axis.
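Something along these lines, assuming a GLM-style projection helper (the 10-degree vertical FOV is just an example value):

[code]
// Minimal sketch, assuming GLM: 90 degrees horizontally, but a much tighter
// vertical FOV. The 10-degree value is only an example.
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeWideFlatProjection(float nearPlane, float farPlane)
{
    const float horizontalFov = glm::radians(90.0f);
    const float verticalFov   = glm::radians(10.0f);

    // glm::perspective takes the vertical FOV plus an aspect ratio, so pick
    // the aspect that yields the desired horizontal FOV:
    //   tan(hFov / 2) = aspect * tan(vFov / 2)
    const float aspect = std::tan(horizontalFov * 0.5f) /
                         std::tan(verticalFov   * 0.5f);

    return glm::perspective(verticalFov, aspect, nearPlane, farPlane);
}
[/code]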

When rendering out the depth map, do you bias the depth value with a term proportional to the magnitude of the gradient (of depth)? This is commonly done with shadow mapping to reduce "surface acne" type problems.
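For reference, a minimal sketch of a slope-scaled bias applied during the depth pass via OpenGL's polygon offset (the factor/unit values are illustrative starting points; other APIs have equivalent rasterizer state):

[code]
// Minimal sketch: slope-scaled depth bias during the depth pass using
// OpenGL's polygon offset. The factor/unit values are illustrative.
#include <GL/gl.h>

void beginDepthPassWithSlopeBias()
{
    glEnable(GL_POLYGON_OFFSET_FILL);
    // The first argument scales with the polygon's depth slope, so surfaces
    // seen at grazing angles get a larger bias; the second adds a small
    // constant offset in depth-buffer units.
    glPolygonOffset(2.0f, 4.0f);
}

void endDepthPass()
{
    glDisable(GL_POLYGON_OFFSET_FILL);
}
[/code]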

The OP's problem is not with depth value precision, but more with depth map resolution.

Oh, now I understand your issue.

I don't see a simple solution.

You can increase the size of your shadow map, but it will still fail in some cases.

Cascaded Shadow Maps will help, but they are costly.

I didn't fully understand your initial problem (what are you drawing from above?). If I understand the problem correctly, then you can try this:

1. Generate regular shadow maps.

2. Generate height maps of the terrain for each frustum.

3. Use the shadow maps to approximate the distance to the objects in sight.

4. Refine the result using parallax occlusion on the height maps to find the exact hit point.
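A rough sketch of what step 4's refinement could look like, written on the CPU for readability (in practice this would live in a shader); the heightAt sampler and all parameters are assumptions:

[code]
// Rough sketch of step 4: march along the ray from the coarse shadow-map
// estimate until it drops below the terrain, and return the refined distance.
// heightAt and all parameters are assumptions.
#include <functional>
#include <glm/glm.hpp>

float refineHitDistance(const glm::vec3& origin,
                        const glm::vec3& dir,              // normalised
                        float coarseDistance,              // from the shadow map
                        float stepSize, int maxSteps,
                        const std::function<float(float, float)>& heightAt)
{
    float t = coarseDistance;
    for (int i = 0; i < maxSteps; ++i) {
        const glm::vec3 p = origin + dir * t;
        if (p.y <= heightAt(p.x, p.z))   // the ray has gone under the terrain
            return t;
        t += stepSize;
    }
    return t;   // no refined hit within the search range
}
[/code]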

I suggest you post your initial problem, perhaps there is a better solution than shadow maps.

The OP's problem is not with depth value precision, but more with depth map resolution.

Understood, but no amount of depth map resolution will fix this problem if the surface is at a sufficiently extreme angle to the observer. A gradient based bias is designed to mitigate this specific issue (not the problem of depth value precision, which is better handled with a constant bias term).

The S.T.A.L.K.E.R. guys had an interesting way of storing shadow maps for their min-max shadow map approach: they stored plane equation data and reconstructed the depth from that. Might be worth a look for you? (Sorry, can't find the paper atm.)

Hi guys, thanks again for feedback!

The actual problem I am trying to solve is to make a visibility map for line of sight. Basically you have an observer which can be dragged about on a 2D map and you want all the areas that are visible from the observer's position to be visualized.

In the following screenshot you can see one example where it works more or less as planned, with the observer on a mountain. The green areas are visible, while the red areas are not.

[attachment=18478:LineOfSight.png]

In the next shot, I have placed the observer 5 meters above the sea.

[attachment=18479:LineOfSight2.png]

Here you can see we get a cutoff at a couple of kilometers' distance. I suspect this is due to the depth map resolution as described in the original post. Another effect I don't understand is the peculiar "pillow" shape of the visibility map. This shape only appears when I account for the earth's curvature in the calculations, but I can't see why it should be shaped like this; it should be circular and extend further out from the observer. If I do not account for earth's curvature, the shape is more square.
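For scale, here is what the standard curvature approximations give for a 5 meter observer (a standalone sketch; the horizon distance sqrt(2Rh) and drop d^2/(2R) are the usual flat-plane-plus-drop formulas, and the distances are illustrative):

[code]
// Standalone sketch: the usual curvature approximations, just to show the
// magnitudes involved for a 5 m observer. Constants are the standard Earth
// radius and illustrative distances.
#include <cmath>
#include <cstdio>

int main() {
    const double earthRadius    = 6371000.0;  // meters
    const double observerHeight = 5.0;        // meters above the sea

    // Distance to the geometric horizon: sqrt(2 * R * h).
    std::printf("horizon at ~%.0f m\n",
                std::sqrt(2.0 * earthRadius * observerHeight));

    // Drop of the surface below the observer's tangent plane: d^2 / (2R).
    for (double d : { 1000.0, 2000.0, 5000.0, 8000.0 }) {
        std::printf("at %5.0f m the surface sits ~%.2f m below the tangent plane\n",
                    d, d * d / (2.0 * earthRadius));
    }
    return 0;
}
[/code]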

I have tried with non-square projection matrices. The results are much better when I reduce the vertical FOV, but I still don't get the full range, even with the FOV at 5 degrees.

I don't think cascaded shadow maps would help in this case. IIRC, in CSM you split the view frustum along the view distance and make one shadow map for each section, but in this case I would be more interested in splitting the light frustum.
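One way to read "splitting the light frustum" would be to keep the 90-degree horizontal FOV but split the vertical range into angular bands, each rendered to its own full-resolution depth map through an asymmetric frustum. A minimal GLM sketch, with an illustrative band count and angular range:

[code]
// Minimal sketch of splitting each 90-degree light frustum into vertical
// angular bands, each getting the full map height. Assumes GLM; band count
// and angular range are illustrative.
#include <cmath>
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

std::vector<glm::mat4> makeVerticalBandProjections(int bandCount,
                                                   float nearPlane, float farPlane)
{
    const float halfHorizontal = glm::radians(45.0f);   // 90 degrees total
    const float minVertical    = glm::radians(-45.0f);
    const float maxVertical    = glm::radians(45.0f);

    const float x = nearPlane * std::tan(halfHorizontal);

    std::vector<glm::mat4> projections;
    for (int i = 0; i < bandCount; ++i) {
        const float a0 = minVertical + (maxVertical - minVertical) *  i      / bandCount;
        const float a1 = minVertical + (maxVertical - minVertical) * (i + 1) / bandCount;
        // Asymmetric frustum covering only this band's slice of the vertical FOV.
        projections.push_back(glm::frustum(-x, x,
                                           nearPlane * std::tan(a0),
                                           nearPlane * std::tan(a1),
                                           nearPlane, farPlane));
    }
    return projections;
}
[/code]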

