I'd like your advice on whether the following is possible/feasible/sensible, and whether anyone has done it before:
Rather than rendering the whole scene to a shadow map for each light, we instead rasterise the scene once, to a large voxel texture stored in projection space.
Then for each fragment, we march into the voxel texture to find the first intersection (since the volume is in projection space, this is a simple linear walk). From that intersection point we raymarch towards the position of each light (which needs to be transformed into projection space), stopping if we encounter a filled voxel.
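To make the second marching step concrete, here's a minimal CPU-side sketch of the occlusion test I have in mind. This is purely illustrative: `shadow_march`, the boolean occupancy grid, and the fixed-step walk are my own stand-ins (a real shader would sample the voxel texture in projection space, and would probably use a DDA rather than a fixed step count):

```python
import numpy as np

def shadow_march(voxels, start, light, steps=64):
    """March from `start` towards `light` through a boolean occupancy grid.
    All coordinates are in normalised volume space (0..1 per axis).
    Returns True if a filled voxel blocks the path (point is shadowed)."""
    res = np.array(voxels.shape)
    start = np.asarray(start, dtype=float)
    light = np.asarray(light, dtype=float)
    # Skip t = 0 so the surface point's own voxel doesn't self-occlude.
    for t in np.linspace(0.0, 1.0, steps, endpoint=False)[1:]:
        p = start + t * (light - start)
        if np.any(p < 0.0) or np.any(p >= 1.0):
            return False  # left the volume: assume unoccluded
        i, j, k = (p * res).astype(int)
        if voxels[i, j, k]:
            return True
    return False

# Tiny demo: an occluding slab between the surface point and one light.
grid = np.zeros((16, 16, 16), dtype=bool)
grid[:, :, 8] = True  # wall at z ~ 0.5
print(shadow_march(grid, (0.5, 0.5, 0.1), (0.5, 0.5, 0.9)))  # wall in the way
print(shadow_march(grid, (0.5, 0.5, 0.1), (0.5, 0.5, 0.4)))  # clear path
```

The fixed step count mirrors what a fragment shader loop would do; the trade-off between `steps` and the voxel resolution is exactly where the aliasing concern below comes from.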
Pros:
- Cuts the rendering cost down from geometry × lights to geometry + lights (much like deferred lighting).
- We can use the voxel representation to compute an ambient occlusion term.
Cons:
- Requires a lot of GPU memory for the voxel texture.
- To allow for shadow casters outside the clip volume, we have to extend the voxel volume beyond the clip volume, which costs us valuable resolution.
- The low resolution, coupled with the volume not being aligned with light space, may mean a lot of aliasing.