One of the cool things I came up with when doing the per-vertex shadows was something I called
'Averaged L Bump Mapping'. I have been working on bump mapping and per-pixel lighting since 1999,
so I knew I wanted it in the engine.
But I also knew that 7 per-pixel lights might be overkill, and would require at least 7 rendering
passes, thus making my efficient lighting scheme decidedly less so. Since I was doing the shadowing
per-vertex, I wanted a way to do the bump mapping in a per-vertex manner as well.
The most important thing in bump mapping is to give a sense of how bumpy a surface is, and also a
rough sense for where the light is coming from. I have seen several games where the bump map
lighting is coming from the wrong direction, but it's hard to tell unless you're up close. So I
felt an approximation would be in order.
One detail I've left out is that I combined the attenuation term and the occlusion term. Because my
lights didn't move, I could multiply these two together and save computation later. I could still
make lights effectively change attenuation at runtime by darkening their color. The only trick is
that when you are deciding which lights affect which vertices at level build time, you have to use
their maximum range, and not brighten the lights beyond this at runtime to avoid artifacts.
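A minimal sketch of that bake step in C++, under assumed names and a simple linear falloff (the engine's actual attenuation curve isn't given, so treat the helpers here as illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Build-time precomputation: since the lights never move, the distance
// attenuation and the occlusion (shadow) term for a vertex/light pair are
// both constant, so they can be folded into a single baked weight.
struct VertexLightWeight {
    float weight;  // attenuation * occlusion, baked at level build time
};

float attenuation(float dist, float range) {
    // Simple linear falloff clamped to [0, 1] -- an assumption, not
    // necessarily the engine's real falloff curve.
    return std::max(0.0f, 1.0f - dist / range);
}

VertexLightWeight bakeWeight(float dist, float range, float occlusion) {
    return { attenuation(dist, range) * occlusion };
}

// At runtime, darkening the light color stands in for changing attenuation;
// the baked weight itself never changes. As noted above, the intensity must
// not exceed the build-time maximum, or vertices culled at build time will
// show artifacts.
float runtimeContribution(const VertexLightWeight& w, float lightIntensity) {
    return w.weight * lightIntensity;
}
```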
The idea I settled on was to scale each light vector by its occlusion term during the per-vertex
lighting phase and sum all the L vectors up. Next, I moved the total L vector into tangent space
and added a slight bias term like < 0, 0, 0.1 > to prevent two opposite lights from producing the
zero vector for the sum. Lastly, I normalized this vector to produce the averaged L vector and
passed it down to the pixel shader.
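A C++ sketch of the averaging step, assuming the per-light L vectors have already been transformed into tangent space (the transform is linear, so scaling and summing before or after it is equivalent); the function and helper names are mine:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Sum the tangent-space L vectors, each pre-scaled by its occlusion term,
// then bias toward "straight up" (+Z in tangent space) so two opposing
// lights cannot cancel to the zero vector, and normalize the result.
Vec3 averagedL(const Vec3* tangentSpaceL, const float* occlusion, int count) {
    Vec3 sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < count; ++i)
        sum = add(sum, scale(tangentSpaceL[i], occlusion[i]));
    sum = add(sum, Vec3{ 0.0f, 0.0f, 0.1f });  // the bias term from the text
    return normalize(sum);
}
```

Note how the bias gives exactly the behavior described below: with no (or fully cancelling) lights, the averaged L degenerates to straight up in tangent space.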
Then, I performed the diffuse lighting with the L vectors already shrunk by the lighting term, and
passed the average per-vertex color down to the pixel shader.
So, in the pixel shader, I combined the per-vertex color with the bump mapping term by using it to
exaggerate or dim the per-vertex color.
This worked fairly well, and had the effect of making the strongest light at any vertex dominate the
bump mapping direction. As a light turned on, it would shift the direction of the bump mapping to
match it more. As a light turned off, the bump mapping would shift towards other lights, or to
straight up if there were no lights around.
On to the Shadow map
Once I realized that per-vertex lighting wouldn't cut it, it was time to do the shadowing per-pixel.
As some of you know, I have done some research on shadowing techniques, and independently invented
several, including using alpha test for shadow mapping ( see Kilgard's cube of spheres demo ), and
robust Object IDs ( GPG2 and ShaderX2 ), so I thought I'd start with shadow mapping.
I tried both depth shadows and Object ID shadows, using pixel shader 1.1. I didn't want to rely on
NVIDIA hardware shadow maps alone, because they are still only supported on NVIDIA cards. I knew
they would be a good choice for those cards, however.
Now, one thing you will learn if you do actual shadowing on world geometry is that most shadow demos
out there are a hack. They don't work well for world geometry. Often there is a simple ground
plane, sometimes treated specially so as not to produce artifacts. Basically, getting shadowing
working on an object casting on a terrain is not that hard. Getting the terrain casting shadows on
itself can be very challenging, especially if you have flat walls & floors, and not more curvy
shapes. This is because the curvy shapes can hide bias artifacts.
Many engines cheat, especially largely outdoor ones, because they are only trying to give you the
idea that the character is on the terrain: they can just find the nearest tris to the player and
render them with a blurry version of the character. Halo 1 & 2 and HL2 do this. And I think this is
absolutely the right thing to do for these games. If you are trying to do realistic lighting,
though, this doesn't cut it. It also only works for objects or characters, and not for large chunks
of world geometry.
One of the problems with shadowing world geometry is shadow resolution. I was using sunlight or moonlight as the main light in my game, so the entire level would need to cast shadows into the shadow map. Not all of the level is visible at once, so I could restrict it to the nearby geometry chunks to improve shadow resolution. It turned out to be very hard to tell which chunks could possibly cast a shadow in view, because I have no occlusion structure: if you simply extend each chunk's bounding box in a frustum away from a directional light, you will find that the entire level does need to be rendered into the shadow map, because shadows cast an infinite distance below the floor.
I don't think I ever solved this for real; I worked on getting the shadowing itself looking OK before tackling this one. I think the upshot of this would have been to make the lights in my game immobile, therefore making real-time shadow maps a poor choice.
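A C++ sketch of that caster test, with the view frustum approximated by a box and a finite sweep distance standing in for "infinite" (all names are mine). It also shows the failure mode described above: with a large sweep distance, nearly every chunk passes the test.

```cpp
#include <cassert>

// Axis-aligned box given by min/max corners.
struct AABB { float mn[3], mx[3]; };

// Sweep a chunk's bounds a fixed distance along the directional light's
// direction, producing the box its shadow could fall inside.
AABB sweep(AABB b, const float dir[3], float dist) {
    for (int i = 0; i < 3; ++i) {
        float d = dir[i] * dist;
        if (d < 0.0f) b.mn[i] += d; else b.mx[i] += d;
    }
    return b;
}

bool overlaps(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i)
        if (a.mx[i] < b.mn[i] || b.mx[i] < a.mn[i]) return false;
    return true;
}

// A chunk is a potential caster if its swept bounds touch the visible
// region. The larger maxShadowDist gets, the more chunks qualify -- in the
// limit, the whole level.
bool mightCastVisibleShadow(const AABB& chunk, const AABB& visibleRegion,
                            const float lightDir[3], float maxShadowDist) {
    return overlaps(sweep(chunk, lightDir, maxShadowDist), visibleRegion);
}
```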
When rendering into the shadow map, you do it from the light's perspective. When testing against the shadow map, you are still comparing depths in the light's space, but the pixels being tested come from the camera's different resolution and orientation. What this means is that a 100-pixel triangle from the viewer's POV may map to a 25-pixel triangle from the light's POV, which causes blocky shadows. Or the 100-pixel triangle may correspond to a 400-pixel triangle, which causes shadow popping & crawling. Both cause bias problems, because the depth calculated won't match between the viewer and the light.
This is due to the simple fact of interpolation:

    Light's depth values:   0.20  0.40  0.60
    Camera's depth values:  0.21  0.41  0.61
If the pixels from the camera and the light don't start on exactly the same value, they will never exactly match, even if the slopes are identical. More common is a situation like:
    Light's depth values:   0.20  0.40  0.60
    Camera's depth values:  0.20  0.300001  0.40  0.500001  0.61
Just interpolating between depth values and calculating intermediate values will produce different results due to floating point error, so there is always some bias required to prevent horrendous self-shadowing artifacts. This applies to x & y values in addition to depth values.
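The standard mitigation is to fold a small bias into the depth comparison itself; a C++ sketch using numbers like the example above (the bias value is illustrative):

```cpp
#include <cassert>

// Compare the receiver's light-space depth against the stored shadow-map
// depth with a small bias, so interpolation and floating-point differences
// between the two viewpoints don't register as self-shadowing.
bool inShadow(float storedDepth, float receiverDepth, float bias) {
    return receiverDepth > storedDepth + bias;
}
```

Too little bias brings back the self-shadowing artifacts; too much causes the light leaks discussed below, so the value is always a tuning compromise.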
This resolution mismatch is fundamentally what all of the perspective shadow map variants attempt to address. They use the camera's POV to warp the projection plane of the shadow map so that the parts near the viewer get more shadow resolution and the parts distant from the camera get less, hopefully preserving the 1:1 pixel-to-texel ratio needed to reduce these problems. Often, however, these schemes end up sacrificing z precision to improve x & y precision, which can make bias problems worse.
Bias problems fall into two main categories: shadow leaks and light leaks. Shadow leaks are not such a big deal, because they often happen in places that are dark anyway; where they are a huge problem is on a large flat surface. Light leaks can happen at a wall-floor junction that would otherwise be shadowed: you can get a nice line of light along the bottom of the wall, which is very distracting.
Here is a shot of the per-character dynamic shadows I was using a couple of months ago. These are my own variant of shadow maps for characters that works well on GF3 hardware.