Guys, thanks for the explanations.
I have a few more questions, though...
Let's say I have built the world-space grid of spherical harmonics (SH) probes. I understand the part about rendering a cube map at every probe position and projecting each cube map into a set of SH coefficients for storage and bandwidth reduction.
What I don't really get is how I'm supposed to render level geometry with that grid of SH probes. The papers I've read only explain how to render movable objects and characters.
I can understand that part - they find the closest probe to the object and shade it with that probe. But they seem to do it per object, because movable objects and characters are small relative to the grid spacing.
But I want to render level geometry and beautifully illuminate indoor scenes. In my test room I have many light probes - the room is big, and I don't think searching for the single closest probe to the whole room makes sense - after all, they are all contained inside the room.
I guess I need to do it per vertex. Am I supposed to find the closest probe to every vertex and calculate that vertex's lighting from that probe's SH coefficients? But I render the whole room with one draw call - how do I pass so many ambient probes to the vertex shader, and how do I find the closest probe for every vertex inside the shader?
And what if I want to do it per pixel? There is definitely something I'm missing here...
Thanks in advance.