Interpolating between directional shadow maps to reduce edge shimmer


Recommended Posts

I recently set about trying to implement something akin to sunlight using a directional light source.

As can be expected, the movement of the light source causes shimmering shadow edges with the per-frame recomputation of the light matrix. One obvious solution would seem to be to only recompute the shadow maps every so often*, let's say once per second.

Naturally this instead causes ugly shadow jumps every second when the light orientation changes. I therefore thought that I would generate two sets of shadow maps depending on the light's current position, limited to discrete steps. For example, if the current light cycle time is at 42649 milliseconds and my sample frequency is once per second, this would create one set of shadow maps for "light position" 42000 and another for 43000. In my shader I then interpolate between these shadow maps, in this particular frame with a factor of 649 / 1000 = 0.649.

When using a long light cycle (i.e. a slow-moving light) this looks relatively OK while eliminating the edge shimmering. However, if the light changes position too fast, such as at "sunset" or "sunrise" (i.e. where the shadows become significantly longer at a faster rate), the interpolation between two static shadow maps becomes very obvious and the result is much less visually appealing than just having the shimmering edges.
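The bucketing described above can be sketched like this (the struct and function names are mine, just to illustrate the scheme, not from any particular engine):

```cpp
#include <cassert>
#include <cmath>

// One way to split a continuous light-cycle time into the two discrete
// sample times that bracket it, plus the blend factor between their
// shadow map sets.
struct ShadowBlend {
    long timeA;   // discretized light time for shadow map set A
    long timeB;   // discretized light time for shadow map set B
    float factor; // 0 = only set A, 1 = only set B
};

ShadowBlend computeShadowBlend(long cycleTimeMs, long sampleIntervalMs) {
    long timeA = (cycleTimeMs / sampleIntervalMs) * sampleIntervalMs;
    long timeB = timeA + sampleIntervalMs;
    float factor = float(cycleTimeMs - timeA) / float(sampleIntervalMs);
    return { timeA, timeB, factor };
}
```

With the numbers from the post, computeShadowBlend(42649, 1000) yields the times 42000 and 43000 and a blend factor of 0.649.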

I was wondering if this kind of approach is worth exploring further? Is there perhaps some other kind of weighted interpolation that can be utilized? All I can think of is trying to predict the shadow's movement by comparing the previous and next shadow maps and going from there, but this would surely be very inefficient to do in a pixel shader? Then again my math skills are probably not that great, so perhaps there are better solutions.

Another approach could be to increase the sampling rate depending on the light's position. But this would eventually deteriorate to reintroducing the edge shimmer if the frequency gets high enough.

The additional overhead of doing this is also quite high; I use cascaded shadow mapping, so to begin with I need to render eight maps instead of four per directional light. Secondly, there is more texture sampling in the lighting shaders, plus some extra math for the interpolation. Is it ultimately even worth bothering?

I realize that there is no known perfect way of addressing this, but since the rule of thumb in game making is "fake it until it's good enough", there must surely be some ways to at least improve it a bit without wasting too much processing power. Higher-resolution shadow maps help a bit, but I'd rather not go beyond 2048x2048 since that starts to eat up significant amounts of memory. On the other hand, maybe I'm just looking too hard for artifacts and the average player wouldn't mind too much? There are some pretty high-profile games out there that also have edge shimmering, after all.

Thank you for your time and input.

* In fact they're still redrawn for every frame to account for camera movement since my shadow maps try to fit to the view frustum of the scene camera, but what I mean is that the orientation of the light source, as far as the shadow mapping is aware, only changes in discrete steps every second.

Share on other sites

Another approach could be to increase the sampling rate depending on the light's position. But this would eventually deteriorate to reintroducing the edge shimmer if the frequency gets high enough.

Have you tried making your sampling frequency depend on the light's velocity?

low velocity -> 1000 ms
high velocity -> 100 ms

It's hard to imagine this would not work, and if the light moves very fast, flickering will go unnoticed anyway.
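A velocity-to-interval mapping like the one suggested could look like this (the reference velocity is a tuning constant of my own, not from the post):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Map the light's angular velocity (radians per second) to a shadow map
// update interval, blending between the two extremes suggested above.
float updateIntervalMs(float angularVelocity) {
    const float slowIntervalMs = 1000.0f;  // used at (near) zero velocity
    const float fastIntervalMs = 100.0f;   // used at or above the reference velocity
    const float referenceVelocity = 0.5f;  // tuning constant: velocity that maps to the fast end
    float t = std::min(std::fabs(angularVelocity) / referenceVelocity, 1.0f);
    return slowIntervalMs + t * (fastIntervalMs - slowIntervalMs);
}
```

A linear ramp is the simplest choice; as JoeJ suggests later in the thread, tweaking the mapping interactively and plotting it is the way to find a better curve.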

That's interesting, because I have a similar problem in another context.
I calculate lightmaps in real time, and because I can't update every texel every frame I use an even cheaper temporal filter,
e.g. texel = texel * 0.9 + lastUpdate * 0.1

It looks OK, but currently I have no chance to test it on real game geometry.
It would be cool if you could make a video of your problem case.

Share on other sites
It is possible to "stabilize" your cascades so that camera movement doesn't result in sub-texel movement. See this article (specifically the part titled "Moving the Light in Texel-sized Increments") for more info. My shadows sample also has an implementation that you can look at (look for the code executed when AppSettings::StabilizeCascades is enabled).
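The core of that texel-snapping idea boils down to flooring the cascade's light-space origin to whole texel increments. A rough sketch of just that step (this is my simplification, not MJP's actual code, which works on the full shadow camera setup):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Snap a cascade's origin (in the light's view space) to whole shadow map
// texels, so camera movement never shifts the projection by sub-texel amounts.
// 'cascadeWorldSize' is the width the cascade covers; 'shadowMapSize' its resolution.
Vec2 snapToTexel(Vec2 lightSpaceOrigin, float cascadeWorldSize, int shadowMapSize) {
    float texelSize = cascadeWorldSize / float(shadowMapSize);
    return {
        std::floor(lightSpaceOrigin.x / texelSize) * texelSize,
        std::floor(lightSpaceOrigin.y / texelSize) * texelSize
    };
}
```

This removes shimmer from camera movement, but, as the thread goes on to discuss, not from movement of the light source itself.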

Share on other sites

Thanks MJP, I'm aware of that though and it has been implemented and is working well :-)

This pertains to when the light source is moving, not the viewing camera.

@JoeJ: Hm, that may work... I was thinking of trying to figure out when a certain mapped fragment will move at least one pixel in the X or Y direction compared to the last shadow-mapped state, but I'm not entirely sure if this would be possible to implement in a proper yet generalized way. After all, were it that easy, this approach would probably be all over the net, right?

I can try to get a movie uploaded tomorrow if you like; it really looks like what it sounds like though - the old shadow map fading out while the new one fades in, with both being overlaid on top of each other.

Share on other sites

I don't think you need to stabilize your shadowmap; heck, Skyrim has been loved and played by so many people, and their sun shadow mapping looks horrible when updating - but people are just enjoying bashing dragons etc. :D

But maybe you could use the camera stabilization for the light position also, to ensure the light position always follows texel increments?

Or maybe you could try some alternative filtering methods instead?

I implemented VSM shadows yesterday in my project, and the step from PCF shadow mapping to VSM gave me much smoother and nicer shadows (without bias artifacts) :)

Share on other sites

@JoeJ: Hm, that may work... I was thinking of trying to figure out when a certain mapped fragment will move at least one pixel in the X or Y direction compared to the last shadow mapped state, but I'm not entirely sure if this would be possible to implement in a proper, yet generalized way. After all, were it that easy this approach would probably be all over the net right?

To try this out, the following test should help:
Make the light move with constant velocity along its circle or whatever trajectory it takes.
At runtime you need to be able to tweak the sampling frequency manually (in my GUI I can right-click a number and drag the mouse to smoothly change it).
Then you see what it's worth and, if it works, start thinking about an equation for it (making a table and drawing a plot helps).
If it does not work well, make the light move back and forth along a smaller section of its path to limit the influence of the angle, and tackle that later...

I assume you can get a perfect solution on the ground plane surrounding you, but at the wall of a house the transition cycle will be a bit off.
You could downsample the depth buffer, do a least-squares fit to find the average plane of the view and calculate the frequency based on that. Hehe, just kidding.

Edit: If you're happy with a good result on the ground, the equation must be a very simple one with trigonometry, like texelsizeFactor / planeNormal.Dot(lightDir)

Share on other sites
Oops, I realize now that interpolating on position, as you said, must be the right thing.

vec d1 = lightPos1 - playerPos; // direction used for shadowmap 1
vec d2 = lightPos2 - playerPos; // direction used for shadowmap 2
vec n = d1.Cross(d2).Normalized(); // normal of the plane containing both directions
// d1 and d2 already lie in this plane, so only dC needs projecting

vec dC = currentLightPos - playerPos;
dC -= n * n.Dot(dC); // project into the plane to make the problem 2D

vec tang = dC.Cross(n); // in-plane axis perpendicular to dC, to get signed angles
float angle1 = atan2(tang.Dot(d1), dC.Dot(d1)); // signed angle from dC to d1
float angle2 = atan2(tang.Dot(d2), dC.Dot(d2)); // one angle positive, one negative

float interpolationFactor = angle1 / (angle1 - angle2); // 0 at lightPos1, 1 at lightPos2
interpolationFactor = clampOneZero(interpolationFactor); // blend depthmap results by this


This should work without the need to involve frequencies or velocities.
I was really overcomplicating things earlier.
But the manual tweaking test is still recommended.
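For anyone wanting to sanity-check the idea, here is a minimal self-contained version with a throwaway vector type (all names are mine); the factor runs from 0 at the first light position to 1 at the second:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float dot(Vec3 o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 cross(Vec3 o) const {
        return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
    }
    Vec3 normalized() const {
        float len = std::sqrt(dot(*this));
        return {x / len, y / len, z / len};
    }
};

// Blend factor between the shadow maps taken at lightPos1 and lightPos2,
// based on where currentLightPos sits angularly between them.
float shadowBlendFactor(Vec3 lightPos1, Vec3 lightPos2,
                        Vec3 currentLightPos, Vec3 playerPos) {
    Vec3 d1 = lightPos1 - playerPos;
    Vec3 d2 = lightPos2 - playerPos;
    Vec3 n = d1.cross(d2).normalized(); // plane containing both directions
    Vec3 dC = currentLightPos - playerPos;
    dC = dC - n * n.dot(dC);            // project into that plane
    Vec3 tang = dC.cross(n);            // in-plane axis perpendicular to dC
    float angle1 = std::atan2(tang.dot(d1), dC.dot(d1)); // signed angles
    float angle2 = std::atan2(tang.dot(d2), dC.dot(d2));
    float f = angle1 / (angle1 - angle2); // 0 at lightPos1, 1 at lightPos2
    return std::fmin(std::fmax(f, 0.0f), 1.0f);
}
```

With the player at the origin and the light positions 90 degrees apart, a current light position halfway between them gives a factor of 0.5, and the two endpoints give 0 and 1, as expected.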

Share on other sites
... but you still need to know the angle increments to calculate the two light positions.

vec l = (currentLightPos - playerPos).Normalized();
float d = fabs(l.Dot(worldUpVector)); // we try to optimize for the ground plane case

float angleIncrement = lightMapTexelToWorldSizeRelation / max(d, 0.05) * someTweakingConstant;
// if d is large, we can move the light in large angles; if it's small we need small angles

Something like that?
Hope it helps.

Share on other sites

I don't think you need to stabilize your shadowmap, heck Skyrim has been loved and played by so many people, and their sun shadowmapping looks horrible when updating - but people are just enjoying bashing dragons etc.

Haha, yes I was leaning towards that option as well, at least for the time being; I can always come back to try to improve the visuals later now that I at least have a working baseline system in place (and I'd rather move on to other parts for a while after being stuck with the lighting for longer than I'd care to admit). On the other hand I'm probably a bit too much of a perfectionist and just want to finish this with appealing results while I'm at it...

Or maybe you could try some alternative filtering methods instead?
I implemented VSM shadows yesterday in my project, and the step from PCF shadow mapping to VSM gave me much smoother and nicer shadows (without bias artifacts)

That is an interesting approach. It does seem a bit cumbersome though, as it seemingly requires two data channels (depth and squared depth) and thus has to be rendered to a colour render target (my current system relies a lot on depth-only passes, since most of my geometry will be opaque and as such can have its depth maps rendered without the need for a pixel shader). Would you say the improvements are great enough to warrant a change to such a system?
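For reference on what those two channels hold, the VSM math itself is small. Here is a scalar sketch of the stored moments and the Chebyshev upper bound used at lookup, written as plain C++ for illustration rather than shader code (names are mine):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// VSM stores two moments per texel instead of one depth value.
struct Moments { float m1, m2; }; // E[depth] and E[depth^2]

Moments storeMoments(float depth) {
    return { depth, depth * depth };
}

// Chebyshev upper bound on the fraction of the filtered region that is
// closer to the light than receiverDepth; this becomes the shadow term.
// minVariance avoids division issues on unfiltered texels.
float chebyshevUpperBound(Moments m, float receiverDepth, float minVariance = 1e-5f) {
    if (receiverDepth <= m.m1)
        return 1.0f; // receiver is in front of the stored surface: fully lit
    float variance = std::max(m.m2 - m.m1 * m.m1, minVariance);
    float d = receiverDepth - m.m1;
    return variance / (variance + d * d);
}
```

Because both moments can be filtered linearly (mipmapping, blurring, hardware filtering), the shadow edge comes out soft without PCF-style bias tuning, which matches the improvement reported above. The cost is exactly the concern raised: a two-channel colour target instead of a depth-only pass, plus potential light bleeding where occluders overlap.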

@JoeJ: Hm, interesting... I will see if I can dust off my linear algebra books and make sense of your calculations once my fever wears off.

Thanks!
