Am I on the right track with this? From what I can see in other demos, they compute spherical harmonics per vertex? I'd rather do it per pixel in my deferred lighting pass.
How do I sample a spherical harmonics "matrix" (a set of float values) using my direction normal to get the illumination value at that point? Right now I sample my cube maps with my direction normals, but I want to use spherical harmonics to store the data more efficiently.
I'm not an expert with SH, but here are some pointers that might be helpful.
Spherical harmonics, like a standard sphere map or a cube map, are an approximation of the incoming light from each direction. A cube map is really high quality given a correspondingly high resolution; SH, on the other hand, is a form of compression. A very simple compression would be to store a direction vector and an intensity describing the direction from which most of the light is coming, i.e. a 4-component vector (dirX, dirY, dirZ, intensity). This is, more or less (not 100% correct), a very simple SH-like representation, which is often referred to as an AO term (though often only the intensity, without the direction, is stored as a single component).
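To make that toy 4-component representation concrete, here is a minimal Python sketch (the function name and the clamped-cosine evaluation are my own assumptions for illustration, not something the post specifies):

```python
def eval_directional_ambient(term, normal):
    # term = (dirX, dirY, dirZ, intensity): the toy 4-component "compression"
    dx, dy, dz, intensity = term
    # clamped cosine between the surface normal and the dominant light direction
    ndotl = normal[0] * dx + normal[1] * dy + normal[2] * dz
    return intensity * max(0.0, ndotl)
```

Dropping the three direction components and keeping only the intensity gives the single-float "AO term" variant mentioned above.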
Now to expand the idea: SH improves the resolution of the approximation by adding more components to the vector, so a 9-component SH has better resolution than a 4-component SH. Some games (Halo?) store an SH term per texel in a lightmap (not only per vertex) to simulate pre-computed global illumination. But it should be clear that you will need huge amounts of memory to store this term at a proper resolution (9 floats per pixel are not exactly lightweight).
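To show what those 9 components correspond to, here is a sketch (mine, not from the post) of evaluating the nine order-2 real SH basis functions for a unit direction; the constants are the standard real SH normalization factors:

```python
def sh_basis_order2(d):
    """Evaluate the 9 real SH basis functions (order 2) for unit direction d."""
    x, y, z = d
    return [
        0.282095,                        # Y_0^0  (constant band)
        0.488603 * y,                    # Y_1^-1
        0.488603 * z,                    # Y_1^0
        0.488603 * x,                    # Y_1^1
        1.092548 * x * y,                # Y_2^-2
        1.092548 * y * z,                # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2^0
        1.092548 * x * z,                # Y_2^1
        0.546274 * (x * x - y * y),      # Y_2^2
    ]
```

This also answers the original question about "sampling" stored SH data with a direction normal: evaluate this basis at the normal, multiply each value by the matching stored coefficient, and sum.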
Hate to come off like a dick, but, uh, I'm not sure what it is you're describing. It actually sounds more like spherical radial basis function projection (which, to be fair, is pretty closely related) rather than SH, but for clarity...
Spherical harmonics are actually polynomials. This, by itself, isn't too interesting/useful. What is cool, however, is when we use a branch of mathematics called Fourier analysis with said polynomials as a basis. Intuitively, we're trying to describe how accurately a given spherical harmonic function mimics the source lighting information (tl;dr: multiply the SH function by the lighting information, scale by coverage on the sphere, add into the total), then store the result as a single weight. Repeat for three color channels, and holy crap, we have a complete representation of light for all angles in 3n² floats (where 'n' is the SH order, and is basically a 'detail/accuracy' slider). The seminal paper on the subject (that I am aware of) has some wonderful pictures on the last page that make visualizing the process really, really easy.

The final piece is calculating how light from all these angles contributes in the eye direction we care about. This boils down to projecting the BRDF into SH (remember to rotate into the same frames of reference!), then doing a dot product of those two coefficient vectors.
As an aside, the whole 'dot product' notation here is actually mathematically correct, but really confusing for beginners since most people tend to associate it with directions and angles. There are actually no 'directions' involved in SH, since wah waaah wahh wahh wahh frequency domain. You're just multiplying the coefficients and adding the results as you go. Pick apart a dot product, and, hey, there's the same operations.
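To make that point concrete, the SH "dot product" really is just pairwise multiply-and-accumulate over two coefficient vectors; the coefficient values below are made up purely for illustration:

```python
def sh_dot(a, b):
    # no angles or directions anywhere: multiply matching coefficients, accumulate
    return sum(x * y for x, y in zip(a, b))

# hypothetical 9-coefficient vectors (invented numbers, not real projected data)
light_coeffs = [0.8, 0.1, 0.3, 0.0, 0.0, 0.0, 0.05, 0.0, 0.0]
brdf_coeffs  = [0.5, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0,  0.0, 0.0]
radiance = sh_dot(light_coeffs, brdf_coeffs)
```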
EDIT: I really should write an article on this.