I've been reading about using spherical harmonics to create irradiance environment maps and I want to give it a shot. However, one question I have, which I can't seem to find an answer to in the resources I'm reading, is how the original cube map is formed. Obviously in a static scene you can just load a cube cross from disk, and voilà. But for real-time rendering, you are theoretically going to need to regenerate this environment map on every single frame. So the way I see it, these are my rendering passes:
- Render the current environment 6 times, once per cube face, into a texture
- Compute my irradiance environment cubemap, send it to the video card
- Render the environment again, this time from the POV of the camera
- Render all of my objects with my irradiance map
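For context on what step 2 involves, here's a rough sketch of how I understand the irradiance projection works: loop over every texel of the captured cubemap, weight it by its solid angle, and accumulate the first 9 spherical-harmonic coefficients. This is just a CPU-side illustration in Python with made-up conventions (scalar radiance instead of RGB, a `faces` list standing in for the readback of the 6 render targets), not a real implementation:

```python
import math

# Hypothetical sketch of projecting a cubemap into the first 9 real SH
# coefficients. 'faces' is assumed to be 6 flat lists of size*size scalar
# radiance values; a real renderer would read RGB texels back from the GPU
# (or do the reduction in a compute shader).

# Direction of the texel at face-plane coords (u, v) in [-1, 1] for each
# cube face, in the usual +X, -X, +Y, -Y, +Z, -Z order.
FACE_DIRS = [
    lambda u, v: (1.0, -v, -u),   # +X
    lambda u, v: (-1.0, -v, u),   # -X
    lambda u, v: (u, 1.0, v),     # +Y
    lambda u, v: (u, -1.0, -v),   # -Y
    lambda u, v: (u, -v, 1.0),    # +Z
    lambda u, v: (-u, -v, -1.0),  # -Z
]

def sh_basis(x, y, z):
    """First 9 real SH basis functions evaluated at a unit direction."""
    return [
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ]

def project_cubemap_to_sh(faces, size):
    """Accumulate sum over texels of radiance * Y_i(dir) * solid_angle."""
    coeffs = [0.0] * 9
    texel = 2.0 / size
    for face_idx, face in enumerate(faces):
        for j in range(size):
            v = -1.0 + (j + 0.5) * texel
            for i in range(size):
                u = -1.0 + (i + 0.5) * texel
                dx, dy, dz = FACE_DIRS[face_idx](u, v)
                inv_len = 1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
                # Solid angle of this texel: du*dv / (u^2 + v^2 + 1)^(3/2)
                d_omega = texel * texel / (u * u + v * v + 1.0) ** 1.5
                radiance = face[j * size + i]
                basis = sh_basis(dx * inv_len, dy * inv_len, dz * inv_len)
                for k, y_k in enumerate(basis):
                    coeffs[k] += radiance * y_k * d_omega
    return coeffs
```

As a sanity check, projecting a constant-white cubemap should leave only the DC coefficient nonzero (roughly 0.282095 * 4π). The upside of this representation is that step 2 collapses to just 27 floats (9 coefficients × RGB) sent to the GPU, so the projection cost is what worries me, not the upload.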
Now maybe I'm underestimating my GPU, but this seems like a lot of processing. It seems nearly impossible that I'm going to be able to do all of this in 16 ms and still have time to spare to update the rest of my game world. Is there something I'm missing here, or is this generally the strategy used for real-time irradiance environment maps?