Shadowing part I

Published January 12, 2007
On Shadowing:

I spent the last two weeks working on shadowing.

The following journal update will be quite technical, so be warned!

In graphics programming, there are tons of shadowing techniques available for real-time use, and they all - without exception - have good and bad points.

I'll start with a quick review of shadowing techniques and how they've been used in various games.

Lightmaps: a static method. In a pre-processing phase, all the geometry of the level is uniquely uv-mapped ( at a very low resolution, generally a few tens of pixels per square meter ) and lighting/shadows are baked into textures, called lightmaps. Lightmaps are then multiplied by the diffuse texture at render time. Absolutely useless for Infinity: no pre-processing is allowed in any algorithm ( shadowing or not ) due to the size of the universe. Plus all entities are dynamic..

Lightmaps are a thing of the past, typically used in old games such as Quake 3 / Unreal. They assume the "level" is small enough so that all lightmaps fit into video memory.

Stencil shadows ( also called shadow volumes ): a dynamic method, used in Doom3-type games. They work by extruding the silhouette of the objects as seen by the light, forming a shadow volume in space. The stencil buffer is then used to trace virtual rays from the camera, counting how many times they enter/leave a shadow volume. The final count acts as a boolean and tells whether the pixel is shadowed or not.
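The enter/leave counting described above can be sketched in a few lines. This is a deliberate simplification ( volumes reduced to depth intervals along a single ray, rather than the real per-triangle stencil increments/decrements ), just to show why a non-zero counter means "in shadow":

```python
# Sketch of the stencil-counting idea (depth-pass variant): walk a virtual
# ray from the camera toward the pixel, incrementing when it crosses a
# shadow volume's front face and decrementing on its back face.
# Volumes are simplified here to (entry_depth, exit_depth) intervals.

def in_shadow(pixel_depth, volumes):
    """Return True if the pixel lies inside at least one shadow volume."""
    counter = 0
    for entry, exit_ in volumes:
        if entry < pixel_depth:   # ray crossed this volume's front face
            counter += 1
        if exit_ < pixel_depth:   # ray crossed its back face too
            counter -= 1
    return counter != 0           # non-zero counter => pixel is shadowed

print(in_shadow(5.0, [(3.0, 8.0)]))   # True: pixel inside the volume
print(in_shadow(9.0, [(3.0, 8.0)]))   # False: ray exited before the pixel
print(in_shadow(2.0, [(3.0, 8.0)]))   # False: pixel in front of the volume
```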

The main drawback of stencil shadows is that they're not robust ( they assume the objects are closed, ie. have no holes ) ( note: yeah, i know. It's possible to make robust stencil shadows. I'm simplifying for people who aren't so technical.. ). One thing you can't do with stencil shadows is alpha masks, since they work on the geometry itself.. the typical example is the foliage of a tree, which uses alpha-masked textures while the geometry is just a set of quads. Finally, the performance is heavily dependent on the CPU, transforms and fillrate. The CPU bottleneck can be removed by implementing stencil shadows on the GPU, but the other bottlenecks are harder to get rid of. Basically, the performance of stencil shadows is very dependent on image resolution and complexity of geometry. Hence why Doom3 doesn't have very complex 3D models..

This performance isn't so bad when your lights are point lights and affect only a small range - again what Doom3 proposes, strange coincidence, heh? :) -. When the range affected by the light ( hence the shadows ) grows, performance decreases quickly. The worst are directional lights, which stress fillrate ( drawing shadow volume pixels ) a lot. For the cost of a single directional light, you could easily render tens of small-range point lights.

The most important light in Infinity being the sun(s), the engine needs to support directional lights at high performance. Plus, the 3D models are quite complex geometrically.. hence goodbye shadow volumes.

Shadow mapping: ah, the good old shadow mapping.. the easiest to understand, but maybe the hardest to implement well. Shadow mapping involves two steps: first, rendering the scene from the light's point of view into a depth ( Z ) buffer, also called a shadow map ; then rendering the scene normally from the camera's point of view and, for each pixel, checking whether the distance between the pixel and the light is less than or equal to the one stored in the shadow map, projected back onto the scene.
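The two steps can be sketched as follows. This is a toy CPU version with made-up names and a light looking straight down one axis ( so "depth from the light" is just a coordinate ), not engine code:

```python
# Minimal sketch of the two-pass shadow-map test. The light is assumed to
# look straight down -Z, so light-space depth is a single coordinate.

def build_shadow_map(occluders, size):
    """Pass 1: per texel, store the nearest occluder depth seen by the light."""
    INF = float('inf')
    shadow_map = [[INF] * size for _ in range(size)]
    for (x, y, depth) in occluders:          # occluders already in light space
        if shadow_map[y][x] > depth:
            shadow_map[y][x] = depth
    return shadow_map

def is_lit(shadow_map, x, y, depth, bias=1e-3):
    """Pass 2: lit if nothing closer to the light covers this texel."""
    return depth <= shadow_map[y][x] + bias

occluders = [(1, 1, 2.0)]                    # one blocker at light-depth 2
sm = build_shadow_map(occluders, 4)
print(is_lit(sm, 1, 1, 2.0))   # True: the blocker's own surface is lit
print(is_lit(sm, 1, 1, 5.0))   # False: a point behind it is shadowed
print(is_lit(sm, 0, 0, 5.0))   # True: empty texel, nothing occludes
```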

Shadow maps don't require extruding any geometry, ie. the rendering cost of the shadow map depends only on the amount of geometry in the light frustum; no new geometry is created ( unlike shadow volumes ). They can also use alpha masks, so no problem for vegetation foliage or other alpha-based effects ( again unlike shadow volumes ). But unlike shadow volumes, they suffer from aliasing.. and that's a very, very bad thing (tm) !

Aliasing is the effect of the limited resolution of the shadow map when it's projected back onto the scene. While shadow volumes ( using the stencil buffer ) give pixel-perfect shadows, shadow maps will use the same shadow result for an area of pixels. This is because a set of pixels on screen that are very close to each other end up being projected onto the same texel in the shadow map, so the shadow result ends up being the same. It looks aliased, it looks ugly..

As for performance, even though shadow maps are IMO a bit better than shadow volumes, they're far from coming for free. It mostly depends on the area you want to be shadowed: rendering shadow maps for an area of 100 meters doesn't have the same cost as an area of 10 km. Especially if you want good quality..

Let's imagine for a moment that your scene is roughly a square of 100 meters, and that your directional light uses a shadow map of 1024x1024 pixels ( i'll round it to 1000x1000 so that the numbers look simpler ). With uniform ( standard ) shadow maps you get a virtual resolution of 10 cm per texel. Good enough..

Now, if your scene is 10 km big, a 1000x1000 shadow map will give you a resolution of 10 meters. In Infinity, a fighter or an interceptor is around 10 meters long. The conclusion is simple: with standard shadow maps, an interceptor would be one huge shadowed blob, one dot in the shadow map. Increasing the resolution of the shadow map to 2048x2048 would slightly help, but a resolution of 5 meters is still not enough.. and a scene of 10 km isn't so large when you consider that a Flamberge is 5 km long...
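The texel-size arithmetic from the two paragraphs above is just the world extent covered by the shadow map divided by its resolution:

```python
# World-space size of one shadow-map texel for a uniform directional
# shadow map: scene extent divided by map resolution.

def texel_size(scene_extent_m, map_resolution):
    return scene_extent_m / map_resolution

print(texel_size(100, 1000))      # 0.1  -> 10 cm per texel: good enough
print(texel_size(10_000, 1000))   # 10.0 -> a whole interceptor per texel
print(texel_size(10_000, 2000))   # 5.0  -> doubling resolution barely helps
```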

Another problem with shadow maps is shadow acne ( Z-fighting of shadowed areas ), especially when the light is at 90° to a polygon's normal. Fortunately it's possible to get rid of those issues and have excellent quality shadows. I'll discuss that in more detail in my next update.
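One common remedy for acne ( not necessarily the one the author has in mind for the next update ) is a depth bias that grows with the slope of the surface relative to the light, exactly because grazing angles are the worst case. A sketch, with illustrative constants:

```python
import math

# Slope-scaled depth bias: the more grazing the light (small N.L), the
# larger the offset added before the shadow-map depth comparison.
# The constant/slope factors below are illustrative, not tuned values.

def slope_scaled_bias(n_dot_l, constant=0.0005, slope=0.01):
    """n_dot_l: cosine between surface normal and light direction."""
    n_dot_l = max(n_dot_l, 1e-4)                    # avoid division by zero
    tan_angle = math.sqrt(1.0 - n_dot_l ** 2) / n_dot_l
    return constant + slope * min(tan_angle, 10.0)  # clamp extreme slopes

print(slope_scaled_bias(1.0))    # light head-on: bias stays ~ constant
print(slope_scaled_bias(0.1))    # grazing light: bias grows a lot
```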

Next time i'll discuss various improvements to shadow mapping: some that i have already experimented with ( per-object shadow maps, cascaded shadow maps ), some that i might experiment with ( light-space shadow maps, deformed shadow maps ), and some that i will likely not bother with ( perspective shadow maps, trapezoidal shadow maps, variance shadow maps ).
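For readers unfamiliar with cascaded shadow maps: the view range is sliced into depth intervals, each with its own shadow map, so near objects get fine texels and far objects coarse ones. A common split scheme blends uniform and logarithmic spacing; the lambda weight and ranges below are illustrative, not the values used in Infinity:

```python
# Sketch of cascaded shadow map splits: a blend of uniform and logarithmic
# spacing (the "practical split scheme"), plus cascade selection by depth.

def cascade_splits(near, far, count, lam=0.75):
    """Far plane of each cascade, from camera near to far."""
    splits = []
    for i in range(1, count + 1):
        f = i / count
        uniform = near + (far - near) * f
        logarithmic = near * (far / near) ** f
        splits.append(lam * logarithmic + (1 - lam) * uniform)
    return splits

def pick_cascade(depth, splits):
    """Index of the first cascade whose far plane covers this depth."""
    for i, split in enumerate(splits):
        if depth <= split:
            return i
    return len(splits) - 1

splits = cascade_splits(1.0, 10_000.0, 4)
print(pick_cascade(5.0, splits))      # nearby interceptor -> cascade 0
print(pick_cascade(9000.0, splits))   # distant capital ship -> last cascade
```

Logarithmic spacing keeps the ratio of texel size to distance roughly constant, which is why it dominates the blend for scenes with huge depth ranges.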

While waiting, here's a screenshot of cascaded shadow maps. The skybox is copyright of the Minas Tirith project - i was too lazy to replace it with another one, fix the shaders, the lighting, etc.. It will give you an idea of the challenge of supporting objects with very different scales.

The interceptor is around 15m, the intrepid around 600m, and the Flamberge in the background, around 5000m.


Comments

swiftcoder
And I suppose there are also planetary shadows, adding a whole other magnitude of scale...
Or will those be done with simpler methods (such as a coarse-grained per-frame occlusion test)?
January 13, 2007 08:15 PM
LordHavoc
A few notes on shadowing technologies I'd like to add:
1. Robust shadow volumes are trivial: just project the volume from only front or back faces, not both; this avoids any need to seal the model. Also be sure to use an infinite far-clip matrix to avoid problems with far-clipping of rear caps when using Carmack's Reverse shadowing.

I have a routine for shadow volume generation in my upcoming open source game DarkWar using triangle neighbors (much faster than edge lists):
http://svn.icculus.org/darkwar/trunk/r_main.c
Look for R_ConstructShadowVolume; I should change it to use vec4 vertices sometime so that SSE optimizations can apply to it, though.

The performance is quite decent considering it's CPU-based (I haven't taken the time to benchmark it, but my rough analysis indicates that it is over 20 Mtris/s on an Athlon 64 3200+, socket 754).

Note that shadow volumes cull pixels quite effectively, which reduces lighting cost (and a good lighting shader can be much more expensive than the shadow volumes in terms of fillrate).



2. as you noted, shadowmaps have really bad resolution distribution, they are quite pathetic as a means of 'broadcast shadowing' for a light source, however UnrealEngine3 points to a better approach:

For each model/light interaction render a shadowmap, this can use a very tight frustum describing only the space between the light and the model, this gives great resolution on even far away models, and low fillrate consumption as well as using the least possible polygon processing.

Obviously if the light source is close to or even inside the model then this frustum can exceed 90 degrees and warrant multiple cube faces being rendered (which is very costly), but that is a rare case in practice (depending on the nature of the models), and I suspect anything up to nearly 180 degrees would work ok (considering that people often play FPS games with fov 120 or higher).

A fallback to shadow volumes in this case may suffice for some engines (use both stencil and shadowmap on each light, rendering some shadows using each technique according to needs).

This doesn't handle being on a planet very well, but it may be possible to clip the planet/sun frustum in some way to only affect areas on the screen (at least when the sun is not behind the viewer).



3. svbsp culling can work very well to determine triangles worthy of casting shadows (for any algorithm), this is a specialized kind of bsp tree that is built quickly, it is basically a 3 dimensional sphere cut up in pie slices by the polygon edges (a little hard to visualize until you understand it, but there are plenty of papers to explain).

I won't explain svbsp here, but any polygons that survive svbsp testing will by definition have at least some part directly visible to the light source, and thus be worth including in the shadow casting process.

Intriguingly, for the purpose of shadow volumes this can be applied to darkfaces (ones facing away from the light source) instead of litfaces (ones facing toward the light source) to produce fewer shadow volumes than litface casting does, because only the trailing side of a mountain will produce a shadow rather than the entire mountain, and if that trailing side is outside a light radius or otherwise culled (you can determine this on a chunk-by-chunk basis) you can skip that geometry.

This is far more useful indoors than outdoors though.
January 13, 2007 09:43 PM
Dirge
Looks nice, but I'm curious, how many CSM tiles do you plan to use, or are you planning to work up some kind of adaptive system? How many do you use in those shots?
January 16, 2007 09:06 AM