Spherical harmonics in pixel shader

Started by solenoidz
4 comments, last by solenoidz 11 years ago

Hello guys.

Recently I started looking into adding more realism to my scenes (mostly indoor). I have a deferred renderer and I'm happy with it, but every time I look at globally illuminated scenes I realize how much proper lighting contributes to realism. So I started looking at adding some form of real-time global illumination to my scenes and chose to play around with precomputed radiance transfer and light probes.

I put a camera in a corner and made it render six times (once per cube face) to fill a cubemap and capture the surrounding illumination.

Now I want to sample that cubemap in my deferred lighting pass, using each pixel's normal to find out what color is near that point and bleed it onto the surface. That part works as expected, but during my research I read about compressing the low-frequency illumination over the sphere using spherical harmonics. Luckily, D3DX has nice utility functions to compute spherical harmonics from a cube map. So far I have two questions:

Am I on the right track with this? From what I can see, other demos compute the spherical harmonic lighting per vertex; I intend to do it per pixel in my deferred lighting pass.

How do I sample a spherical harmonics "matrix" (a set of float values) using my direction normal to get the illumination value at that point? Right now I sample my cube maps with my direction normals, but I want to use spherical harmonics to store the data more efficiently.
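For reference, the cubemap lookup I do right now in the lighting pass is roughly this (a simplified sketch; the sampler and function names are just illustrative, not my actual code):

samplerCUBE ProbeEnvMap;   // illustrative name for the captured probe cubemap

// Deferred lighting pass (D3D9-style HLSL): fetch the probe colour in the
// direction of the G-buffer normal and treat it as low-frequency bounce light.
float3 SampleProbe(float3 worldNormal)
{
    return texCUBE(ProbeEnvMap, worldNormal).rgb;
}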

Thanks in advance.



I'm not an expert with SH, but here are some directions which might be helpful.

Spherical harmonics, like a standard sphere map or a cube map, are an approximation of the incoming light from each direction. A cube map is really high quality, given a suitable resolution; SH, on the other hand, is a compression. A very simple compression would be a direction vector plus an intensity describing from which direction most of the light is coming, i.e. a 4-component vector (dirX, dirY, dirZ, intensity). This is basically, more or less (not 100% correct?), a simple SH, which is often referred to as an AO term (though often only the intensity, without the direction, is saved as a single component).

Now to expand the idea: SH improves the resolution of the approximation by adding more components to the vector, so a 9-component SH has better resolution than a 4-component SH. Some games (Halo??) use an X-component SH term and save it as a lightmap (not only per vertex) to simulate pre-computed global illumination. But it should be clear that you will need huge amounts of memory to save this term at a proper resolution (9 floats per pixel is not exactly lightweight).
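To make the "9 components" concrete: these are the nine basis functions of an order-2 (3-band) SH, evaluated for a unit direction. This is only a sketch; the normalization constants are the standard ones, but sign conventions differ between references, so keep projection and evaluation consistent.

// Evaluate the 9 real SH basis functions (bands l = 0..2) at unit direction n.
void EvalSH9(float3 n, out float sh[9])
{
    sh[0] = 0.282095;                               // l=0, m= 0
    sh[1] = 0.488603 * n.y;                         // l=1, m=-1
    sh[2] = 0.488603 * n.z;                         // l=1, m= 0
    sh[3] = 0.488603 * n.x;                         // l=1, m= 1
    sh[4] = 1.092548 * n.x * n.y;                   // l=2, m=-2
    sh[5] = 1.092548 * n.y * n.z;                   // l=2, m=-1
    sh[6] = 0.315392 * (3.0 * n.z * n.z - 1.0);     // l=2, m= 0
    sh[7] = 1.092548 * n.x * n.z;                   // l=2, m= 1
    sh[8] = 0.546274 * (n.x * n.x - n.y * n.y);     // l=2, m= 2
}

Projecting the probe onto these nine functions gives nine weights per colour channel; summing weight * basis for a direction reconstructs a blurry version of the cubemap in that direction.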

Yes, you are on the right track. If you want to use normal maps, then you need to sample in the pixel shader. You can find code for evaluating irradiance for a given normal in the DXSDK samples (Samples\C++\Direct3D\IrradianceVolume\SHIrradianceEnvMap.fx).
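In case the SDK isn't handy, the per-pixel evaluation boils down to something like the sketch below. It uses the irradiance formula from Ramamoorthi & Hanrahan ("An Efficient Representation for Irradiance Environment Maps"), with the clamped-cosine convolution folded into the constants; if I remember right, the SDK shader does essentially the same thing with the coefficients pre-packed into a few float4 constants on the CPU. Coefficient ordering and signs here are assumptions and must match whatever produced them (e.g. D3DXSHProjectCubeMap), so verify against your projection code.

// sh[0..8]: the 9 SH coefficients of the probe, one float3 (RGB) per
// coefficient, e.g. repacked from the three per-channel arrays that
// D3DXSHProjectCubeMap fills.  n: world-space normal from the G-buffer.
float3 SHIrradiance(float3 sh[9], float3 n)
{
    const float c1 = 0.429043;
    const float c2 = 0.511664;
    const float c3 = 0.743125;
    const float c4 = 0.886227;
    const float c5 = 0.247708;

    return  c4 * sh[0]                                                   // band 0
          + 2.0 * c2 * (sh[3] * n.x + sh[1] * n.y + sh[2] * n.z)         // band 1
          + 2.0 * c1 * (sh[4] * n.x * n.y + sh[5] * n.y * n.z + sh[7] * n.x * n.z)
          + c3 * sh[6] * n.z * n.z - c5 * sh[6]
          + c1 * sh[8] * (n.x * n.x - n.y * n.y);                        // band 2
}
// Diffuse bounce light is then roughly albedo * SHIrradiance(sh, n) / PI.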


Hate to come off like a dick, but, uh, I'm not sure what it is Ashaman73 is describing. It actually sounds more like spherical radial basis function projection (which, to be fair, is pretty closely related) rather than SH, but for clarity...

Spherical harmonics are actually polynomials. This, by itself, isn't too interesting or useful. What is cool, however, is when we use a branch of mathematics called Fourier analysis with said polynomials as a basis. Intuitively, we're describing how closely a given spherical harmonic function mimics the source lighting (tl;dr: multiply the SH function by the lighting, scale by coverage on the sphere, add into the total), then storing the result as a single weight. Repeat for three color channels, and holy crap, we have a complete representation of light for all angles in 3n^2 floats (where n is the SH order, and is basically a 'detail/accuracy' slider). The seminal paper on the subject (that I am aware of) has some wonderful pictures on the last page that make visualizing the process really easy.

The final piece is calculating how light from all these angles contributes in the eye direction we care about. This boils down to projecting the BRDF into SH (remember to rotate into the same frame of reference!), then doing a dot product of those two coefficient vectors.

As an aside, the whole 'dot product' notation here is actually mathematically correct, but really confusing for beginners since most people tend to associate it with directions and angles. There are actually no 'directions' involved in SH, since wah waaah wahh wahh wahh frequency domain. You're just multiplying the coefficients and adding the results as you go. Pick apart a dot product, and, hey, there's the same operations.
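As a concrete illustration of that dot product (a sketch only: lightSH holds the environment's 9 coefficients per colour channel, brdfSH holds the BRDF, e.g. a cosine lobe oriented along the normal, projected into the same 9-term basis; both names are made up for this example):

// Integrating (lighting x BRDF) over the sphere collapses to a per-coefficient
// multiply-add when both functions are expressed in the same orthonormal SH
// basis -- literally a dot product of the two coefficient vectors.
float3 SHDot(float3 lightSH[9], float brdfSH[9])
{
    float3 total = 0;
    [unroll]
    for (int i = 0; i < 9; ++i)
        total += lightSH[i] * brdfSH[i];
    return total;
}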

EDIT: I really should write an article on this.

clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.


Regarding there being no 'directions' involved: the basis functions are parameterized in spherical coordinates, and those parameters represent a direction.

Personally, I like to think along the same lines that Ashaman73 does, as it gives me an easy mental framework for reasoning about SH.

I think it's very intuitive to think of SH as a set of lobes in a fixed set of directions. Each basis function adds another set of directions that you can project your signal onto. The directions are the input angles at which the basis function reaches its global maxima, and the spacing between those maxima represents the resolution of the signal you can represent.

-= Dave

Graphics Programmer - Ready At Dawn Studios

Thank you guys. I'll delve into it some more. I kinda like Ashaman73's explanation. Points++

