That's how I used to do the in-game shadows: make a frustum from the shadow caster out to the light's max range, gather all triangles inside that frustum, and then render the projective texture only on those triangles. This was way too CPU-intensive when the player got near a geometrically dense object, so I switched to a simple frustum-cull check against each level chunk's per-material AA boxes. That can draw a bit more than needed, but it's way cheaper CPU-wise, and doesn't suffer slowdown in dense areas.
Here are a couple of shots of the decals. The first is in the editor, with a blast mark and some neon-red blood. The next is the same scene in the game.
Next up is to fix the character lighting. You can probably see that in the recent in-game shots, the characters are all black. That's because I was using the old cell-based navigation to cache lighting information as well. The characters don't receive per-pixel shadows from the environment (although I could add that for large creatures if I needed to); instead, they use a per-object shadowing term.
Originally, this was calculated via a set of raycasts from the player's bounding sphere towards the lights he might be influenced by. I still may go back to this approach, but I thought since lights don't move in my engine, I could instead cache the lit areas in the 2d grid of cells used for the navigation. This was faster, but suffered from occasional artifacts and had to go when I threw out the 2d grid navigation approach.
One approach I've been toying with is to make a per-light data structure that contains volumetric information about where the light is shadowed and where it isn't. The most accurate way to do it would be static shadow volumes, stored in a solid/empty BSP tree. But I intend to do soft shadows on them, so any approach I use can be rather approximate, because I will do ~9 shadow tests and average the results. Since I never really tested my BSP code, and I'm afraid of all of the geometric clipping issues that would arise, I'm more inclined to use a bounding volume hierarchy or an octree of voxels. Each voxel would be fully in shadow, out of shadow, or subdivided down to a certain level.
This way I would only have to find the character's position in the octree to know if he were in shadow or not. I could store only the shadowed areas, or only the lit areas, and if a cell were missing, that would imply the opposite case.
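To make that concrete, here's a rough sketch of what the point query could look like. This isn't my actual engine code; the node layout and names are just illustrative. It stores only shadowed volumes, so a missing child implies "lit":

```cpp
#include <array>
#include <cassert>
#include <memory>

// Hypothetical sketch: a sparse octree over a light's range where each node
// is fully shadowed, fully lit, or subdivided into eight children.
struct ShadowOctree {
    enum State { Lit, Shadowed, Mixed };
    State state = Lit;
    std::array<std::unique_ptr<ShadowOctree>, 8> kids; // used only when Mixed

    // Point query: descend until we hit a leaf. An absent child implies the
    // opposite case of what we store -- here, "lit".
    bool inShadow(float x, float y, float z,
                  float cx, float cy, float cz, float half) const {
        if (state != Mixed) return state == Shadowed;
        int i = (x >= cx ? 1 : 0) | (y >= cy ? 2 : 0) | (z >= cz ? 4 : 0);
        if (!kids[i]) return false; // missing cell => lit
        float h = half * 0.5f;
        return kids[i]->inShadow(x, y, z,
                                 cx + (x >= cx ? h : -h),
                                 cy + (y >= cy ? h : -h),
                                 cz + (z >= cz ? h : -h), h);
    }
};
```

The query is just a walk from the root to a leaf, so it's O(depth) per shadow test, which matters when averaging ~9 tests per character per light.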
OK, so what is the simplest thing that could possibly work? Well for a point light, a simple 3d voxel grid around the light would be the simplest thing. If stored at a 1x1x1 meter resolution, it wouldn't be too expensive memory wise.
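A minimal sketch of that simplest thing, assuming a dense grid anchored at the light's bounding-box corner with one bit per 1m³ voxel (the class and its layout are my illustration, not the engine's code):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a dense 1m^3-resolution voxel grid around a point
// light, one bit per voxel (1 = shadowed). Anything outside the grid is lit.
class LightShadowGrid {
public:
    LightShadowGrid(int dim, float originX, float originY, float originZ)
        : dim(dim), ox(originX), oy(originY), oz(originZ),
          bits((size_t(dim) * dim * dim + 31) / 32, 0) {}

    void setShadowed(int x, int y, int z) {
        size_t i = index(x, y, z);
        bits[i >> 5] |= 1u << (i & 31);
    }

    // World-space query, snapped to 1 meter resolution.
    bool inShadow(float wx, float wy, float wz) const {
        int x = (int)std::floor(wx - ox);
        int y = (int)std::floor(wy - oy);
        int z = (int)std::floor(wz - oz);
        if (x < 0 || y < 0 || z < 0 || x >= dim || y >= dim || z >= dim)
            return false; // outside the light's range => lit
        size_t i = index(x, y, z);
        return (bits[i >> 5] >> (i & 31)) & 1u;
    }

private:
    size_t index(int x, int y, int z) const {
        return (size_t(z) * dim + y) * dim + x;
    }
    int dim;
    float ox, oy, oz;
    std::vector<uint32_t> bits;
};
```

For a light with a 16m radius, that's a 32x32x32 grid: 32,768 bits, or 4 KB per light.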
For a directional light like the sun or moon, a full voxel grid would be too expensive. I could cut down on memory with a sparse voxel grid: only store nodes near the level geometry itself, and only store the shadowed areas.
A sparse octree would further compress a directional light's information, and would apply just as well to a point light.
Now let's see if an octree or bounding volume tree would be better. I have more faith in my BVH, and have been using it more & more lately. A BVH is simply a binary tree of bounding volumes, where each parent contains two potentially overlapping children. My BVH always uses axis-aligned bounding boxes, but one could build them from other shapes, or a mix of shapes.
My BVH build step uses the surface area heuristic used for KD tree building, which is designed to optimize ray casts by making the volumes tight and have low surface area. I suppose most of the time the character would be lit by a light that he is in range of, so I should optimize for that case. The BVH building code tries to make a raycast skip empty space as much as possible, so for a point test, I should make the tree store shadowed volumes.
Hmmmm, this is sounding quite complicated, especially when all I need is a simple in/out test, a single bit per cubic meter. That can't be too much memory, can it?
Ok, a 128x128x32 level would require 128x128x32 bits / 8 = 128x128x4 bytes = 64k. I could store a 4x4x2 volume in a single dword for cache coherence, and I think we have a winner. An equivalent BVH would most likely take up way more memory, at least for the sunlight, because each bbox alone is 24 bytes.
Hopefully my next entry will contain properly lit characters again. I'll shoot for this evening.