I spent the last two weeks working on shadowing.
The following journal update will be quite technical, so consider yourself warned!
In graphics programming, there are tons of shadowing techniques available for real-time use, and they all - no exception - have strengths and weaknesses.
I'll start with a quick review of shadowing techniques and how they've been used in various games.
Lightmaps: a static method. In a pre-processing phase, all the geometry of the level is uniquely uv-mapped ( at a very low resolution, generally a few tens of pixels per square meter ) and lighting/shadows are baked into textures, called lightmaps. The lightmaps are later multiplied with the diffuse texture at render time. Absolutely useless for Infinity: no pre-processing is allowed in any algorithm ( shadowing or not ) due to the size of the universe. Plus, all entities are dynamic..
Lightmaps are a thing of the past, typically used in old games such as Quake 3 / Unreal. They assume the "level" is small enough that all lightmaps fit into video memory.
Stencil shadows ( also called shadow volumes ): a dynamic method, used in Doom3-style games. They work by extruding the silhouettes of objects as seen from the light, forming shadow volumes in space. The stencil buffer is then used to trace virtual rays from the camera, counting how many times each ray enters/leaves a shadow volume. The final count acts as a boolean that tells whether the pixel is shadowed or not.
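The enter/leave counting can be sketched in a few lines. This is just an illustration of the idea on the CPU with made-up inputs, not real stencil buffer code: along a virtual ray toward a pixel, crossing a shadow volume's front face increments a counter and crossing a back face decrements it; a non-zero counter at the pixel's depth means shadow.

```python
# Toy sketch of z-pass stencil counting: front_face_depths and
# back_face_depths are the (hypothetical) depths at which the view ray
# crosses shadow volume faces, pixel_depth is the depth of the surface
# being shaded.

def stencil_shadow_test(front_face_depths, back_face_depths, pixel_depth):
    counter = 0
    for d in front_face_depths:      # entering a shadow volume
        if d < pixel_depth:
            counter += 1
    for d in back_face_depths:       # leaving a shadow volume
        if d < pixel_depth:
            counter -= 1
    return counter != 0              # inside at least one volume -> shadowed

# A volume spans depths [2.0, 8.0] along the ray:
print(stencil_shadow_test([2.0], [8.0], 5.0))  # True  (inside the volume)
print(stencil_shadow_test([2.0], [8.0], 1.0))  # False (in front of it)
print(stencil_shadow_test([2.0], [8.0], 9.0))  # False (behind it, counter back to 0)
```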
The main drawback of stencil shadows is that they're not robust ( they assume the objects are closed, ie. have no holes ) ( note: yeah, I know, it's possible to make robust stencil shadows; I'm simplifying for people who aren't so technical.. ). One thing you can't do with stencil shadows is alpha masks, since the technique works on the geometry itself. The typical example is the foliage of a tree, which uses alpha-masked textures while the geometry is just a set of quads. Finally, performance is heavily dependent on the CPU, transforms and fillrate. The CPU bottleneck can be removed by implementing stencil shadows on the GPU, but the other bottlenecks are harder to get rid of. Basically, the performance of stencil shadows is very dependent on image resolution and the complexity of the geometry. That's why Doom3 doesn't have very complex 3D models..
This performance isn't so bad when your lights are point lights that affect only a small range - again what Doom3 proposes, strange coincidence, heh? :) -. When the range affected by the light ( and hence by the shadows ) grows, performance decreases quickly. The worst case is directional lights, which stress fillrate ( drawing shadow volume pixels ) a lot: for the cost of a single directional light, you could easily render tens of small-range point lights.
Since the most important lights in Infinity are the sun(s), the engine needs to support directional lights at high performance. Plus, the 3D models are quite complex geometrically.. hence, goodbye shadow volumes.
Shadow mapping: ah, the good old shadow mapping.. the easiest to understand, but maybe the hardest to implement well. Shadow mapping involves two steps: first, rendering the scene from the light's point of view into a depth (Z) buffer, also called a shadow map ; then rendering the scene normally from the camera's point of view and, for each pixel, projecting it back into the light's view and checking whether its distance to the light is less than or equal to the depth stored in the shadow map.
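The second step boils down to a single comparison per pixel. Here's a toy sketch of that test, assuming the scene point has already been projected into the light's view ( a real renderer does all of this per-pixel on the GPU; the inputs here are made up for illustration ):

```python
# shadow_map is a 2D list of depths as seen from the light; (u, v) are
# the texel coordinates of the projected point; light_depth is the
# point's distance to the light in the same units.

def shadow_map_test(shadow_map, u, v, light_depth, bias=0.0):
    """Return True if the point is shadowed: something closer to the
    light (the stored depth) occludes it."""
    stored_depth = shadow_map[v][u]
    return light_depth > stored_depth + bias

# An occluder at depth 3.0 covers texel (1, 1); the rest is empty (far depth).
shadow_map = [[10.0, 10.0],
              [10.0,  3.0]]
print(shadow_map_test(shadow_map, 1, 1, 5.0))  # True  (behind the occluder)
print(shadow_map_test(shadow_map, 1, 1, 2.0))  # False (in front of it)
```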
Shadow maps don't require extruding any geometry, ie. the rendering cost of the shadow map only depends on the amount of geometry in the light frustum; no new geometry is created ( unlike shadow volumes ). They can also use alpha masks, so no problem for vegetation foliage or other alpha-based effects ( again unlike shadow volumes ). But this time unlike shadow volumes in a bad way, they suffer from aliasing.. and that's a very, very bad thing (tm)!
Aliasing is the effect of the limited resolution of the shadow map when it's projected back onto the scene. While shadow volumes ( using the stencil buffer ) give a pixel-perfect shadow, shadow maps use the same shadow result for a whole area of pixels: a set of screen pixels that are very close to each other can end up being projected onto the same texel of the shadow map, so they all get the same shadow result. It looks aliased, it looks ugly..
As for performance, even though shadow maps are IMO a bit better than shadow volumes, they're far from free. The cost mostly depends on the area you want shadowed: rendering shadow maps for an area of 100 meters doesn't cost the same as an area of 10 km. Especially if you want good quality..
Let's imagine for a moment that your scene is roughly a square of 100 meters, and that your directional light uses a shadow map of 1024x1024 pixels ( I'll round it to 1000x1000 so that the numbers look simpler ). With uniform ( standard ) shadow maps you get a virtual resolution of 10 cm per texel. Good enough..
Now, if your scene is 10 km big, a 1000x1000 shadow map gives you a resolution of 10 meters. In Infinity, a fighter or an interceptor is around 10 meters long. The conclusion is simple: with standard shadow maps, an interceptor would cast one huge shadowed blob - one dot in the shadow map. Increasing the resolution of the shadow map to 2048x2048 would help slightly, but a resolution of 5 meters is still not enough.. and a scene of 10 km isn't so large when you consider that a Flamberge is 5 km long...
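The texel-footprint arithmetic above is just the world size of the lit area divided by the shadow map resolution:

```python
# World-space size of one shadow map texel for a uniform (standard)
# shadow map covering a square scene.

def shadow_texel_size(scene_size_m, map_resolution):
    return scene_size_m / map_resolution

print(shadow_texel_size(100, 1000))     # 0.1  -> 10 cm per texel: fine
print(shadow_texel_size(10_000, 1000))  # 10.0 -> 10 m per texel: a whole interceptor
print(shadow_texel_size(10_000, 2000))  # 5.0  -> doubling the map only halves it
```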
Another problem with shadow maps is shadow acne ( Z-fighting of shadowed areas ), especially when the light is at 90° to a polygon's normal. Fortunately it's possible to get rid of those issues and have excellent quality shadows. I'll discuss that in more detail in my next update.
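As a quick teaser: the classic remedy is to add a small depth bias to the comparison, and to scale that bias with the slope of the surface relative to the light, since grazing angles need more of it. A minimal sketch of such a slope-scaled bias ( the constants here are illustrative, not tuned engine values ):

```python
import math

# n_dot_l is the cosine of the angle between the surface normal and the
# light direction. Near 0 the light grazes the surface, which is exactly
# where acne is worst, so the bias grows with tan(angle).

def slope_scaled_bias(n_dot_l, constant_bias=0.0005, slope_bias=0.005):
    n_dot_l = max(n_dot_l, 1e-4)                     # avoid division by zero
    tan_angle = math.sqrt(1.0 - n_dot_l ** 2) / n_dot_l
    return constant_bias + slope_bias * min(tan_angle, 10.0)  # clamp the slope term

print(slope_scaled_bias(1.0))    # surface facing the light: tiny constant bias
print(slope_scaled_bias(0.05))   # grazing angle: a much larger bias
```

The bias would then be fed into the shadow map comparison, pushing the stored depth away just enough that a surface stops shadowing itself.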
Next time I'll discuss various improvements to shadow mapping: some that I have already experimented with ( per-object shadow maps, cascaded shadow maps ), some that I might experiment with ( light-space shadow maps, deformed shadow maps ), and some that I will likely not bother with ( perspective shadow maps, trapezoidal shadow maps, variance shadow maps ).
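For the curious, the core idea of cascaded shadow maps is to slice the view frustum into depth ranges and give each range its own shadow map, so the texels stay small close to the camera. A common way to choose the split distances - a sketch of the general scheme, not necessarily what this engine uses - blends uniform and logarithmic spacing:

```python
# Split the view range [near, far] into `count` cascades. Pure uniform
# spacing wastes resolution up close; pure logarithmic spacing bunches
# everything near the camera; blending the two is a common compromise.

def cascade_splits(near, far, count, blend=0.5):
    splits = []
    for i in range(1, count + 1):
        f = i / count
        uniform = near + (far - near) * f            # linear split
        logarithmic = near * (far / near) ** f       # log split
        splits.append(blend * logarithmic + (1 - blend) * uniform)
    return splits

# Four cascades over a 1 m .. 10 km view range: the near cascades stay
# short (dense texels on nearby ships), the far ones span kilometers.
print(cascade_splits(1.0, 10_000.0, 4))
```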
In the meantime, here's a screenshot of cascaded shadow maps. The skybox is copyright of the Minas Tirith project - I was too lazy to replace it with another one, fix the shaders, the lighting, etc.. It will give you an idea of the challenge of supporting objects with very different scales.
The interceptor is around 15 m, the Intrepid around 600 m, and the Flamberge in the background around 5,000 m.