Hmm, you're right. It looks like they're using cascaded shadow maps for both the static and dynamic geometry, which is interesting. I assume they bake only the indirect lighting and then just add in the direct lighting on the fly. If nothing else, it's probably easier to implement than storing the contribution of direct light onto static geometry.
Guys, I understand the part with shadows. It's not interesting if they are using static shadow maps for static level geometry. I don't think they just bake the indirect lighting and leave it at that — the actors and other objects moving through the level receive indirect lighting as well. I have a feeling they have some sort of lightmap on static level geometry, and also some "fill lights" placed here and there to simulate bounced light and to illuminate dynamic objects that move around.
You're right that I got sidetracked.
I think what you're suggesting ("fill lights") probably most closely resembles "virtual point lights" which is something that is sometimes used in radiosity/deferred lighting systems. I've toyed with it a bit and found that it is probably best suited to non-realtime/"interactive" rendering, as it takes a fairly large number of VPLs as well as some kind of shadowing/occlusion to make it work well. Like I said, I've only played with this a bit, so there might well be some interesting optimizations/approximations that I'm not aware of.
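To make the VPL idea concrete, here's a rough sketch in Python of what a one-bounce VPL accumulation might look like. All of the names and the data layout here are mine, and I've left out the occlusion/shadow test entirely, which (as I said) is the part that makes VPLs expensive in practice:

```python
import math

def make_vpl(hit_pos, hit_normal, surface_albedo, light_color):
    """Spawn a virtual point light where direct light hits a surface.
    The VPL re-emits the light tinted by the surface it bounced off."""
    return {
        "pos": hit_pos,
        "normal": hit_normal,
        "color": tuple(a * c for a, c in zip(surface_albedo, light_color)),
    }

def shade_with_vpls(point, normal, vpls, bias=0.1):
    """Accumulate one-bounce indirect light at `point` from every VPL.
    No visibility/occlusion test here -- that's the expensive part."""
    total = [0.0, 0.0, 0.0]
    for vpl in vpls:
        to_vpl = [a - b for a, b in zip(vpl["pos"], point)]
        dist2 = sum(d * d for d in to_vpl)
        dist = math.sqrt(dist2)
        wi = [d / dist for d in to_vpl]
        # Cosine terms at the receiver and at the VPL's surface
        cos_r = max(0.0, sum(n * w for n, w in zip(normal, wi)))
        cos_v = max(0.0, -sum(n * w for n, w in zip(vpl["normal"], wi)))
        # Clamp the falloff to avoid the singularity near the VPL
        atten = 1.0 / max(dist2, bias)
        for i in range(3):
            total[i] += vpl["color"][i] * cos_r * cos_v * atten
    return tuple(total)
```

The clamp on the inverse-square falloff is the usual hack to hide the bright splotches you get when a shaded point sits very close to a VPL.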
That said, MJP suggested baking light probes, which is a fairly similar idea. I'm not an expert on the subject, but I'll fill in the details to the best of my ability. I think the three most common ways of achieving this are probes that store just a color (no directional information), spherical harmonic probes, or irradiance maps.
They all typically involve interpolating between the probes based on the position of the dynamic actor that is being rendered.
In the first case (color only), we have stored the amount/color of light that has reached a given probe, so that the probes make something that resembles a point cloud of colors, or a (very coarse) 3D texture. When we want to light a dynamic object, we just interpolate between the colors (based on position) and use that color as the indirect term.
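If the probes are laid out on a regular grid, that interpolation is just a trilinear lookup. Here's a sketch (the grid layout and function names are my own invention):

```python
def sample_probe_grid(grid, origin, spacing, pos):
    """Trilinearly interpolate probe colors stored on a regular 3D grid.
    `grid[x][y][z]` is an (r, g, b) tuple; layout is illustrative only."""
    # Convert the world position into grid coordinates
    u = [(p - o) / spacing for p, o in zip(pos, origin)]
    i = [int(c) for c in u]              # lower-corner cell indices
    f = [c - ic for c, ic in zip(u, i)]  # fractional part in [0, 1)
    result = [0.0, 0.0, 0.0]
    # Blend the eight probes surrounding the position
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                c = grid[i[0] + dx][i[1] + dy][i[2] + dz]
                for k in range(3):
                    result[k] += w * c[k]
    return tuple(result)
```

In a real engine the probes usually aren't on a uniform grid (you want more of them where the lighting changes quickly), but the blend-the-nearest-probes idea is the same.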
In the second and third cases (spherical harmonic or irradiance), the probes also define a way of looking up the light color based on the normal of the dynamic object we are rendering. Aside from that, the idea is the same: we interpolate between the probes based on the dynamic object's position, and then do the look-up with the normal as input.
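For the spherical harmonic case, the normal-based lookup is just an evaluation of the SH basis. A minimal sketch using only the first two SH bands (the constants are the standard ones from Ramamoorthi and Hanrahan's irradiance-map paper; the probe format is assumed):

```python
def eval_sh_probe(sh, normal):
    """Look up irradiance from a 4-coefficient (band 0 and 1) SH probe.
    `sh` is a list of 4 RGB tuples: [L00, L1-1, L10, L11].
    The constants fold together the SH basis functions and the
    clamped-cosine convolution, per Ramamoorthi & Hanrahan."""
    x, y, z = normal
    c0 = 0.886227   # pi * 0.282095 (band 0)
    c1 = 1.023328   # (2*pi/3) * 0.488603 (band 1)
    basis = (c0, c1 * y, c1 * z, c1 * x)
    return tuple(
        sum(b * coeff[ch] for b, coeff in zip(basis, sh))
        for ch in range(3)
    )
```

One nice property of SH probes is that the coefficients interpolate linearly, so you can blend the coefficients of nearby probes first (e.g. trilinearly, as in the color-only case) and then do a single evaluation with the normal.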
These links might help you understand how to compute the probes for those cases. They're not explicitly geared toward pre-computing multiple probes (but rather computing irradiance for a single point efficiently) but they might help to point you in the right direction: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter10.html and http://codeflow.org/entries/2011/apr/18/advanced-webgl-part-3-irradiance-environment-map/
I invite anyone to correct my imperfect understanding of the subject.