Spherical Harmonics: Need Help :-(


Hi all, I've just started reading the classical papers on SH (the Robin Green and Peter-Pike Sloan papers, to be precise).

Our renderer currently uses reflective shadow maps (RSM) to generate VPLs (virtual point lights) in an attempt to create a global illumination solution. The problem is that a naive RSM approach just can't be used in real-time applications, so I've started looking at more advanced techniques like LPV and Radiance Hints.

So far I understand that SH are the foundation of those techniques. I'll try to explain my objectives:

Goal 1: We have a 3D scene with n lights and a 64x64x64 grid over the whole scene. Every cell is a light probe, and I want to use SH to store the incoming illumination from the n lights.

Goal 2: Use the 64x64x64 probe texture to interpolate GI for the scene in a final (deferred) render pass.

Problem: I'm really struggling to understand how SH work and how they can help me store incoming radiance from the scene lights.

Can someone help me with that and point me in the right direction?

Thanks in advance


Spherical harmonics can be used to store a function on the surface of a unit sphere. In simpler terms, this means that if you have a point in space, you can pick a normalized direction from that point and use SH to get back a value for that direction. It's basically like having a cubemap, where you also use a direction to look up a value. The catch with spherical harmonics is that they can usually only store a low-frequency (very blurry) version of your function. The amount of detail is tied to the order of the SH coefficients: higher order means more detail, but it also means storing more coefficients (the number of coefficients is Order^2). In general, 3rd-order is considered sufficient for storing diffuse lighting, since diffuse is very smooth at a given point.
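To make that concrete, here's a minimal sketch of the 3rd-order (9-coefficient) real SH basis and the directional lookup, in HLSL. The constants are the standard normalization factors you'll find in the Green and Sloan papers; SHEvalBasis9 and SHLookup are just illustrative names.

void SHEvalBasis9(float3 d, out float basis[9])
{
	// band l = 0
	basis[0] = 0.282095;
	// band l = 1
	basis[1] = 0.488603 * d.y;
	basis[2] = 0.488603 * d.z;
	basis[3] = 0.488603 * d.x;
	// band l = 2
	basis[4] = 1.092548 * d.x * d.y;
	basis[5] = 1.092548 * d.y * d.z;
	basis[6] = 0.315392 * (3.0 * d.z * d.z - 1.0);
	basis[7] = 1.092548 * d.x * d.z;
	basis[8] = 0.546274 * (d.x * d.x - d.y * d.y);
}

// "Looking up" the stored function in direction d is just a dot product
// between the 9 stored coefficients and the 9 basis values.
float3 SHLookup(float3 coeffs[9], float3 d)
{
	float basis[9];
	SHEvalBasis9(d, basis);
	float3 result = 0.0;
	for (int i = 0; i < 9; ++i)
		result += coeffs[i] * basis[i];
	return result;
}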

To compute SH coefficients, you have to be able to integrate your function. In your case the function is the radiance (incoming lighting) at a given point for a given direction. The common way to do this is to first render a cubemap at the given point, and then integrate the values in the cubemap texels against the SH basis functions. Rendering the cubemap is easy: just render in the 6 primary directions and apply lighting and shaders to your geometry the way that you normally would. For performing the integration, there's some pseudocode in Peter-Pike Sloan's "Stupid SH Tricks" paper. Another way to do this would be to use a ray tracer: shoot random rays out from the sample point and perform Monte Carlo integration.
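For the ray-tracing variant, the Monte Carlo projection looks something like the sketch below. UniformSphereSample and SampleRadiance are hypothetical placeholders for your sampling scheme and your scene's incoming-radiance evaluation; the actual SH content is just the accumulate-and-scale pattern.

void ProjectOntoSH9(int numSamples, out float3 coeffs[9])
{
	for (int i = 0; i < 9; ++i)
		coeffs[i] = 0.0;

	for (int s = 0; s < numSamples; ++s)
	{
		float3 dir = UniformSphereSample(s, numSamples); // hypothetical: uniform direction on the sphere
		float3 radiance = SampleRadiance(dir);           // hypothetical: incoming light from that direction
		float basis[9];
		SHEvalBasis9(dir, basis);
		for (int i = 0; i < 9; ++i)
			coeffs[i] += radiance * basis[i];
	}

	// Each uniform sample has probability density 1/(4*pi), so the Monte Carlo
	// estimate of the integral is (sum / numSamples) * 4*pi.
	const float fourPi = 12.566370;
	for (int i = 0; i < 9; ++i)
		coeffs[i] *= fourPi / numSamples;
}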

Once you have the SH coefficients at all of your sample points, you can interpolate them based on the position of the surface being shaded. Then you can "look up" the diffuse lighting from the coefficients using the normal direction of the surface.
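For the interpolation step, with a probe grid you can either rely on hardware trilinear filtering of 3D textures holding the coefficients, or do it by hand. A minimal by-hand sketch (LoadProbeCoeffs is a hypothetical fetch from whatever probe storage you choose, and gridPos is assumed to lie inside the grid):

void InterpolateProbes(float3 gridPos, out float3 coeffs[9]) // gridPos in cell units
{
	int3 c0 = (int3)floor(gridPos);
	float3 t = gridPos - (float3)c0; // fractional position inside the cell

	for (int i = 0; i < 9; ++i)
		coeffs[i] = 0.0;

	// accumulate the 8 surrounding probes with trilinear weights
	for (int corner = 0; corner < 8; ++corner)
	{
		int3 o = int3(corner & 1, (corner >> 1) & 1, (corner >> 2) & 1);
		float w = (o.x ? t.x : 1.0 - t.x)
		        * (o.y ? t.y : 1.0 - t.y)
		        * (o.z ? t.z : 1.0 - t.z);
		float3 probe[9];
		LoadProbeCoeffs(c0 + o, probe); // hypothetical: fetch one probe's 9 coefficients
		for (int i = 0; i < 9; ++i)
			coeffs[i] += w * probe[i];
	}
}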


Thanks for the quick reply, MJP. After another night reading the relevant papers (and your response), here is where I stand:
Assumption #1
In my mind a "light probe" is a point in space X where I want to store incoming radiance from a set of VPLs (virtual point lights).
To store this incoming radiance, the rendering equation L(X, ω) must be solved somehow; evaluating this equation tells us how much radiance is arriving at point X from direction ω.
The rendering equation is complex to solve, and one way to solve it is Monte Carlo integration.
Assumption #2
Spherical harmonics can be used to store the incoming radiance at X and approximate the rendering equation.
What I don't get :-(
"The common way to do this is to first render a cubemap at the given point, and then integrate the values in the cubemap texels against the SH basis functions. [...]"
This is what I don't understand from your reply: is it necessary to render the cube maps? I understand that SH will help me "compress" the cube maps (because storing them for, say, a 32x32x32 grid of probes would require too much memory) and then look them up using SH, but wouldn't it be too expensive to create those cube maps in real time? I'm using RSM to store the virtual point lights of the scene. Is it possible to store incoming radiance into SH without using cube maps?
Thanks for any future replies
EDIT:
"Another way to do this would be to use a ray tracer: shoot random rays out from the sample point and perform Monte Carlo integration."
This is close to my goal: I have a probe point in space and I have n VPLs. I want to shoot rays over the sphere (hemisphere) around the point and compute the form factor between the point and the n VPLs. But storing that information requires a lot of memory, so this is where SH come to the rescue.

"Is it necessary to render the cube maps? [...] Wouldn't it be too expensive to create those cube maps in real time? [...] Is it possible to store incoming radiance into SH without using cube maps?"

Most video cards can render the scene into 6 128x128 textures pretty damn quick, given that most video cards can render complex scenes at resolutions of 1280x720 and up without struggling. You would then also perform the integration of the cubemap in a shader, instead of pulling the rendered textures down from the GPU and performing it on the CPU. This yields a table of colours in a very small texture (usually a 9-pixel-wide texture) which can then be used in your shaders at runtime. You don't need to recreate your SH probes every frame either, so the cost is quite small.
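For reference, the integration itself boils down to a weighted sum over every cubemap texel, where each texel is weighted by the solid angle it subtends. A sketch, assuming a face resolution of N and hypothetical helpers FaceTexelDir (face/texel to world direction) and LoadCubeTexel (radiance fetch):

void IntegrateCubemapToSH9(int N, out float3 coeffs[9])
{
	for (int i = 0; i < 9; ++i)
		coeffs[i] = 0.0;

	for (int face = 0; face < 6; ++face)
	for (int ty = 0; ty < N; ++ty)
	for (int tx = 0; tx < N; ++tx)
	{
		// texel center in [-1, 1] on the face plane
		float x = 2.0 * (tx + 0.5) / N - 1.0;
		float y = 2.0 * (ty + 0.5) / N - 1.0;

		// solid angle of this texel: texel area * cos(theta) / distance^2,
		// which works out to (2/N)^2 / (x^2 + y^2 + 1)^(3/2)
		float w = 4.0 / (N * N) / pow(x * x + y * y + 1.0, 1.5);

		float3 dir = FaceTexelDir(face, x, y);         // hypothetical
		float3 radiance = LoadCubeTexel(face, tx, ty); // hypothetical

		float basis[9];
		SHEvalBasis9(dir, basis);
		for (int i = 0; i < 9; ++i)
			coeffs[i] += radiance * basis[i] * w;
	}
}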

The Stupid SH Tricks paper also has functions for projecting directional, point and spot lights directly onto the SH coefficient table; however, the cubemap integration is preferable when you can afford it, as it lets you trivially incorporate area lights and sky lights.

"integrate the values in the cubemap texels against the SH basis functions" This means, in unfortunately vague language, the standard procedure for function transforms with a set of orthogonal basis functions: integrate the product of the original function you want to approximate (in this case incoming light from every direction or the like, represented as a cubemap) and the conjugate of each basis function (in this case one of the low-frequency SH functions you want to include in your approximation) over the whole domain (in this case a sphere) to get the coefficient of that basis function (which should be normalized to reconstruct original function values). Where did you get the impression that a "cubemap" is a 3D object? It consists of 6 square panels forming the faces of a cube-shaped box, and as MJP suggests it is another representation for functions that are defined on a sphere. Technically, cubemap faces are regular 2D textures. OpenGL offers a family of special texture types and samplers and filtering to tie together the 6 faces and interpolate across cube edges, but these features are obviously an alternative to spherical harmonics. (You might find them useful to handle cube maps in precomputation phases).

Omae Wa Mou Shindeiru

Directional, spot and point lights are ideal lights. From the point of view of a probe, they emit radiance within an infinitesimal solid angle; in signal processing terms, you would call them impulses. You can only describe them analytically, not with a cubemap, unless you approximate them with an area light somehow. The way to go is to project them onto the SH basis with analytical formulas, as discussed in the Stupid SH Tricks paper. So no, you don't need to render cubemaps in this case.
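A minimal sketch of that projection, treating the light as radiance concentrated in a single direction (note that D3DXSHEvalDirectionalLight applies its own scaling convention on top of this, so check its documentation if you want to match it exactly):

void SHAddDirectionalLight9(float3 lightDir, float3 color, inout float3 coeffs[9])
{
	// projecting a delta-like signal just samples the basis in the light's direction
	float basis[9];
	SHEvalBasis9(lightDir, basis); // lightDir: unit vector pointing toward the light
	for (int i = 0; i < 9; ++i)
		coeffs[i] += color * basis[i];
}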



Thanks for clarification...very helpful!

"...Where did you get the impression that a "cubemap" is a 3D object? ..."

Really, mate, I never thought about it that way. But I do think that cubemaps are not so real-time friendly. I hope I'm wrong. :)

And also, most important: can they be used to store irradiance for indoor scenes, or are they better suited to outdoor scenes?

Thx

"The way to go is to project them onto the SH basis with analytical formulas, as discussed in the Stupid SH Tricks paper. So no, you don't need to render cubemaps in this case."

OK, I need to clarify this with some background information:

My first attempt at GI in the engine was using RSM (reflective shadow maps); as you know, the basic idea is:

1. Render an RSM for every light in the scene

2. Gather VPLs (virtual point lights) from every RSM and use those VPLs as secondary light sources


// Computes how much flux from one VPL reaches a receiver point: essentially a
// point-to-point form factor weighted by the VPL's flux.
//   surfacePos, normal      : receiver position and normal (world space)
//   worldSpace, worldNormal : VPL position and normal (from the RSM)
//   Flux                    : VPL flux
float3 Bounce2(float3 surfacePos, float3 normal, float3 worldSpace, float3 worldNormal, float3 Flux)
{
	float distBias = 0.1f; // avoids the singularity when the two points are very close

	float3 R = worldSpace - surfacePos; // receiver -> VPL

	float l2 = dot(R, R); // squared distance
	R *= rsqrt(l2);       // normalize R

	float lR = 1.0f / (distBias + l2 * 3.141592);     // 1 / (pi * r^2) falloff, biased
	float cosThetaI = max(0.1, dot(worldNormal, -R)); // emission angle at the VPL
	float cosThetaJ = max(0.0, dot(normal, R));       // incidence angle at the receiver

	float Fij = cosThetaI * cosThetaJ * lR; // geometric (form factor) term

	return Flux * Fij;
}

The above code is used to compute how much light reaches a world-space point from the (world-space) secondary sources.

Clearly this approach has many drawbacks; above all it is slow, slow, slow (even when using interpolation methods).

This leads me towards the SH approach, where my goal is to use the Bounce function above to collect incoming light from the secondary sources and store it in the "probes" using SH.

So my question is: how do I store incoming light from the VPLs into the sample probes using SH?

I am not familiar with RSMs but, judging from the code you posted, I believe SH suit your needs.

Remember that SH represent a signal defined over a spherical domain. In this case the signal is the total incoming light from all the secondary sources. An SH probe has an associated position in world space. For a <light, probe> pair, the signal is (cosThetaI * lR * Flux) in the direction defined by R (normalized). The cosThetaJ term depends on the receiver's normal and will be taken into account later, at runtime, when shading a surface.

Some pseudo-code:

for each probe
	SH = 0
	for each light
		signal = R / |R| * (cosThetaI * lR * Flux)
		SH += signalToSH(signal)

signalToSH is a function that encodes a directional light in the SH basis. See D3DXSHEvalDirectionalLight for an implementation.
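To make that concrete in the same HLSL style as the Bounce function, an accumulation sketch might look like this. LoadVPL is a hypothetical fetch from your RSM-derived VPL list, and SHAddDirectionalLight9 is the delta-projection sketch from earlier in the thread:

void AccumulateVPLsIntoProbe(float3 probePos, int numVPLs, inout float3 coeffs[9])
{
	const float distBias = 0.1;
	for (int v = 0; v < numVPLs; ++v)
	{
		float3 vplPos, vplNormal, vplFlux;
		LoadVPL(v, vplPos, vplNormal, vplFlux); // hypothetical

		float3 R = vplPos - probePos; // probe -> VPL, mirroring the Bounce function
		float l2 = dot(R, R);
		R *= rsqrt(l2);

		float lR = 1.0 / (distBias + l2 * 3.141592);
		float cosThetaI = max(0.0, dot(vplNormal, -R)); // the VPL's emission toward the probe

		// the probe receives (cosThetaI * lR * Flux) from direction R
		SHAddDirectionalLight9(R, vplFlux * cosThetaI * lR, coeffs);
	}
}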

At runtime, when shading a pixel, pick the probe spatially closest to it. The pixel has an associated normal. Assuming that the BRDF is purely diffuse, you have to convolve the cosine lobe around the pixel's normal with the signal encoded in the probe, achieved with an SH rotation and a dot product. See the paper http://www.cs.berkeley.edu/~ravir/papers/envmap/envmap.pdf
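In the formulation of that paper, the rotation is folded into evaluating the basis at the pixel's normal, and the cosine convolution reduces to one constant per SH band (pi, 2*pi/3, pi/4). A minimal sketch:

float3 EvalDiffuseFromSH9(float3 coeffs[9], float3 N)
{
	// per-band convolution constants for the clamped-cosine lobe
	const float A0 = 3.141593; // pi
	const float A1 = 2.094395; // 2*pi/3
	const float A2 = 0.785398; // pi/4

	float basis[9];
	SHEvalBasis9(N, basis);

	float3 E = A0 * coeffs[0] * basis[0]; // band 0
	for (int i = 1; i < 4; ++i)
		E += A1 * coeffs[i] * basis[i];   // band 1
	for (int i = 4; i < 9; ++i)
		E += A2 * coeffs[i] * basis[i];   // band 2

	return E / 3.141593; // divide irradiance by pi for a Lambertian BRDF
}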

I suggest you read more material on SH in order to have a better understanding of their properties and use cases. As for myself, I will read the RSM paper asap and edit my reply if I've made wrong assumptions.

Please let me know if you have more questions.

Stefano

