# Implementing spherical harmonics


## Recommended Posts

Hi all, I've read a lot about spherical harmonics and mostly understand the concepts and maths behind them, but I still have trouble imagining how to use them in a 3D world. Say we have some static 3D objects, and we define light sources at certain positions, each with a direction. Do we do an environment mapping for each object (the centre of the sphere map or cube map being the centre of the 3D object), where we represent the light intensities on the model, and then compress this sphere map using spherical harmonics? Then how do we index our model vertices into the environment map? Do we just precalculate where each vertex's light index is on the sphere map, so that given this position and the SH coefficients we can decompress the value? Lights can be rotated, but can they also be translated? I'd really like to get this straight, so if someone could explain the precomputation and rendering steps for SH clearly, I'd be really thankful. Cheers, Jeff.

##### Share on other sites
Have a look at "An efficient representation for irradiance environment maps". The lighting is distant (and there is no self-shadowing), so the irradiance on your model depends only on the surface normal. The end result is that you can take the spherical harmonic coefficients of the distant light (generated from your cube map, or however you like) and the components of the surface normal at a model vertex, and calculate the irradiance at that vertex.
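The end result of that paper fits in a few lines. A hedged Python sketch, not the paper's code: `L` is assumed to hold nine SH lighting coefficients of the environment for one colour channel (bands 0..2, usual ordering), and the `A_l` values are the clamped-cosine convolution constants the paper derives:

```python
def irradiance(L, n):
    """Irradiance for unit normal n = (x, y, z), given the environment's
    nine SH lighting coefficients L (bands 0..2).
    A0..A2 are the clamped-cosine convolution constants per band."""
    x, y, z = n
    A0, A1, A2 = 3.141593, 2.094395, 0.785398
    # standard real SH basis values for bands 0..2, evaluated at n
    Y = [0.282095,
         0.488603 * y, 0.488603 * z, 0.488603 * x,
         1.092548 * x * y, 1.092548 * y * z,
         0.315392 * (3.0 * z * z - 1.0),
         1.092548 * x * z, 0.546274 * (x * x - y * y)]
    A = [A0, A1, A1, A1, A2, A2, A2, A2, A2]
    return sum(a * l * yv for a, l, yv in zip(A, L, Y))
```

In practice you would evaluate this per channel, either per vertex or in a shader; the nine `L` coefficients are all that needs to change when the environment changes.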

Going to inter-reflections and self-shadowing involves more work, which is what Sloan's PRT papers talk about (and what's done in the DirectX 9 D3DX library). This is where you start having to store SH transfer coefficients at each vertex (or even texel), which determine how the distant light is transferred by the surrounding environment and the object itself (e.g. self-shadowing), and this usually requires a transport simulation (see "Spherical harmonic lighting: the gritty details" for an excellent explanation of all this; it's a must read). This assumes a static environment where only the distant lighting changes.

##### Share on other sites
STEP 1 - Precomputation
-----------------------

1. Generate a large number of samples that cover the full domain of the SH functions, i.e. the entire surface of a sphere, and evaluate the SH basis functions at each sample.

2. Loop through all the vertices in your model and, for each vertex, loop through each of the samples you generated. Each sample has a direction (i.e. theta, phi) which you can easily convert to a cartesian vector, and this can be used to construct a ray for each sample: the ray originates at (or just off) the vertex you are checking, and its direction is the direction of the sample.
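As a rough Python sketch of the sample-generation step (the nine band-0..2 basis constants are the standard real SH values; `generate_samples` and `sh_basis9` are names I've made up):

```python
import math
import random

def sh_basis9(x, y, z):
    """Real SH basis values for bands 0..2 (nine coefficients),
    evaluated in the unit direction (x, y, z)."""
    return [
        0.282095,                        # Y(0, 0)
        0.488603 * y,                    # Y(1,-1)
        0.488603 * z,                    # Y(1, 0)
        0.488603 * x,                    # Y(1, 1)
        1.092548 * x * y,                # Y(2,-2)
        1.092548 * y * z,                # Y(2,-1)
        0.315392 * (3.0 * z * z - 1.0),  # Y(2, 0)
        1.092548 * x * z,                # Y(2, 1)
        0.546274 * (x * x - y * y),      # Y(2, 2)
    ]

def generate_samples(sqrt_n, rng=None):
    """Jittered stratified sampling of the whole sphere; each sample is
    a (direction, basis values in that direction) pair."""
    rng = rng or random.Random(0)
    samples = []
    for i in range(sqrt_n):
        for j in range(sqrt_n):
            u = (i + rng.random()) / sqrt_n
            v = (j + rng.random()) / sqrt_n
            # this mapping gives cos(theta) uniform in [-1, 1],
            # i.e. uniform density over the sphere's surface
            theta = 2.0 * math.acos(math.sqrt(1.0 - u))
            phi = 2.0 * math.pi * v
            d = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            samples.append((d, sh_basis9(*d)))
    return samples
```

Stratifying the samples (rather than drawing them independently) noticeably reduces the variance of the Monte Carlo estimate for the same sample count.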

3. Now you have a choice about what to do with the ray, and this depends entirely on the type of radiance transfer you want to model. If you just want to model shadowed transfer, all you need to do is test the ray against all the triangles in the scene (or just the triangles of that particular object, for self-shadowing only). If an intersection is found, that ray does not contribute any light, and the sample's values should not be added to TOTAL_SH_COEFFICIENTS_FOR_THIS_VERTEX (forgive the overly verbose name!). If the ray is not blocked, the sample's contribution should be scaled by the cosine of the angle between the surface normal and the ray direction (the cosine term in the rendering equation) and then added to the coefficients for the vertex.
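A hedged sketch of the shadowed-transfer accumulation for one vertex, assuming samples of the (direction, basis values) shape described above and a hypothetical `visible(origin, direction)` ray query; the 4*pi / N Monte Carlo weight mentioned below is folded in at the end:

```python
import math

def shadowed_transfer(position, normal, samples, visible):
    """Projects one vertex's shadowed, diffuse transfer function into
    nine SH coefficients. `visible(origin, direction)` is a hypothetical
    ray-vs-scene query returning True when nothing blocks the ray."""
    coeffs = [0.0] * 9
    for direction, basis in samples:
        cosine = (normal[0] * direction[0] + normal[1] * direction[1]
                  + normal[2] * direction[2])
        if cosine <= 0.0:
            continue  # light from below the surface contributes nothing
        if not visible(position, direction):
            continue  # blocked ray: no contribution
        for k in range(9):
            coeffs[k] += basis[k] * cosine
    # Monte Carlo weight: the sphere's solid angle over the sample count
    weight = 4.0 * math.pi / len(samples)
    return [c * weight for c in coeffs]
```

In a real tool `visible` would be a ray-triangle intersection test against the scene (or just the object itself, for self-shadowing only), and you would usually also fold the surface albedo into the weight.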

For inter-reflection things get a little more complicated: the solution needs to be computed recursively, so the light is added in passes. First you calculate the shadowed light for each vertex; then you repeat the process, but whenever a ray hits a triangle, the coefficient vectors at its three vertices are interpolated to get the light reflected from that point on the triangle. You must also remember to scale these coefficients by the cosine term (i.e. the cosine of the angle between the normal at the vertex you are gathering for and the ray direction); the result can then be added to the total coefficients for that vertex.
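One bounce of the inter-reflection gather might look like the sketch below; `intersect(origin, direction)` is a hypothetical ray query returning the hit triangle's vertex indices plus barycentric weights, or None on a miss, and `prev_coeffs` holds each vertex's coefficients from the previous pass:

```python
import math

def bounce_pass(vertices, samples, prev_coeffs, intersect):
    """One inter-reflection gather pass. `vertices` is a list of
    (position, normal) pairs; the returned per-vertex coefficient
    vectors are meant to be added to the running totals."""
    bounced = []
    for position, normal in vertices:
        coeffs = [0.0] * 9
        for direction, _basis in samples:
            hit = intersect(position, direction)
            if hit is None:
                continue  # unblocked rays were handled in the shadowed pass
            cosine = (normal[0] * direction[0] + normal[1] * direction[1]
                      + normal[2] * direction[2])
            if cosine <= 0.0:
                continue
            (ia, ib, ic), (a, b, c) = hit
            for k in range(9):
                # interpolate the previous pass's coefficients at the hit point
                interp = (a * prev_coeffs[ia][k] + b * prev_coeffs[ib][k]
                          + c * prev_coeffs[ic][k])
                coeffs[k] += interp * cosine
        weight = 4.0 * math.pi / len(samples)
        bounced.append([v * weight for v in coeffs])
    return bounced
```

Each pass takes the previous pass's output as `prev_coeffs`, and the bounces converge quickly because each one is scaled down by the surface reflectance.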

You must remember to scale each of the coefficients by 4*pi / number_of_samples (the Monte Carlo integration weight). This yields a vector of coefficients for each vertex, which can be saved to a file for use in a run-time application.

STEP 2 - Run-Time
-----------------

1. The first step at run time is to project the incident lighting function into SH coefficients. This is done by again generating a large number of SH samples and looping through them. For each sample, take the theta and phi values and plug them into your lighting function; if you are using environment maps, use theta and phi to calculate the texture coordinates for the lookup. This gives you the incident light arriving from that direction; multiply it by each of the sample's basis function values and accumulate, yielding a new vector of coefficients for the lighting function.
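The projection step as a sketch, with `light(direction)` standing in for whatever your lighting function is (an analytic light, or a cube-map lookup converted from theta/phi), and samples of the (direction, basis values) shape used in the precomputation:

```python
import math

def project_light(light, samples):
    """Projects a spherical lighting function into nine SH coefficients.
    `light(direction)` is a hypothetical callable returning the incoming
    radiance from that direction, for one colour channel."""
    coeffs = [0.0] * 9
    for direction, basis in samples:
        radiance = light(direction)
        for k in range(9):
            coeffs[k] += radiance * basis[k]
    # same Monte Carlo weight as in the precomputation
    weight = 4.0 * math.pi / len(samples)
    return [c * weight for c in coeffs]
```

Since the number of samples can be large, this is usually the most expensive per-frame step; a common trick is to project once and then rotate the resulting coefficients instead of re-projecting every frame.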

2. Orthonormality of the basis functions means that to integrate the product of two functions projected into SH, all you need to do is take the dot product of the two functions' coefficient vectors, i.e.

L = (I1 * T1) + (I2 * T2) + ... + (In * Tn)

Here L is the reflected light, I is the incident light's vector of coefficients, and T is the transfer function's vector of coefficients. This dot product is performed against each vertex's transfer coefficients to yield a per-vertex colour.
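The reconstruction itself is just a dot product per colour channel, e.g.:

```python
def shade_vertex(light_coeffs, transfer_coeffs):
    """Reflected light at one vertex for one colour channel: the dot
    product of the light's and the vertex transfer function's SH
    coefficient vectors."""
    return sum(i * t for i, t in zip(light_coeffs, transfer_coeffs))
```

For colour you keep three sets of light coefficients (one per channel) against the same transfer vector, which is why this step is cheap enough to run per vertex every frame.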

Another point to note: although the method described above does not handle local lighting, glossy transfer or neighbourhood transfer, Sloan's paper does outline these, although so far I've only looked at neighbourhood transfer (for shadowing and inter-reflection of dynamic objects). So yes, it is possible to translate the lights, but not using the method above.
