
Spherical Harmonics: Need Help :-(



#1 mauro78   Members   -  Reputation: 187


Posted 23 October 2013 - 02:41 PM

Hi All, I've just started reading the classical papers on SH (the Green and Peter-Pike Sloan papers).

 

Our renderer currently just uses "Reflective Shadow Maps" to evaluate VPLs and attempt a global illumination solution. The problem is that "naive" RSM just can't be used in real-time applications, so I've started looking at more advanced techniques like LPV and Radiance Hints.

 

So far I understand that SH are the basis of those techniques. I'll try to explain my objectives:

 

Goal 1: We have a 3D scene with n lights and a 64x64x64 grid over the whole scene. Every "cell" is a light probe, and I want to use SH to store the incoming illumination from the n lights.

 

Goal 2: Use the 64x64x64 "probes" texture to interpolate GI for the scene in a final (deferred) render pass.

 

Problem: I'm really, really struggling to understand how SH work and how they can help me store the incoming radiance from the scene lights.

 

Can someone help me with that and point me in the right direction?

 

Thanks in advance




#2 MJP   Moderators   -  Reputation: 10928


Posted 23 October 2013 - 03:15 PM

Spherical harmonics can be used to store a function on the surface of a unit sphere. In simpler terms, this means that if you have a point in space, then you can pick a normalized direction from that point and use SH to get back a value for that direction. It's basically like having a cubemap, where you also use a direction to look up a value. The catch with spherical harmonics is that they can usually only store a low-frequency (very blurry) version of your function. The amount of detail is tied to the order of the SH coefficients: higher order means more detail, but it also means storing more coefficients (the number of coefficients is Order^2). In general, 3rd-order is considered sufficient for storing diffuse lighting, since diffuse is very smooth at a given point.

To compute SH coefficients, you have to be able to integrate your function. In your case the function is the radiance (incoming lighting) at a given point for a given direction. The common way to do this is to first render a cubemap at the given point, and then integrate the values in the cubemap texels against the SH basis functions. Rendering the cubemap is easy: just render in the 6 primary directions and apply lighting and shaders to your geometry the way that you normally would. For performing the integration, there's some pseudo-code in Peter-Pike Sloan's "Stupid SH Tricks" paper. Another way to do this would be to use a ray tracer, and shoot random rays out from the sample point and perform Monte Carlo integration.

Once you have the SH coefficients at all of your sample points, you can interpolate them based on the position of the surface being shaded. Then you can "look up" the diffuse lighting from the coefficients using the normal direction of the surface.
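A small C++ sketch of the Monte Carlo route (an illustration only, not MJP's code: the radiance callback is a hypothetical stand-in for however you sample incoming light). It projects a spherical function onto 3rd-order (9-coefficient) SH by averaging samples of the function against the real SH basis:

#include <array>
#include <cmath>
#include <random>

// The 9 real SH basis functions (bands 0..2) for a unit direction.
// Constants are the usual real SH normalization factors; sign conventions
// (Condon-Shortley) vary between papers, so be consistent throughout.
std::array<float, 9> shBasis(float x, float y, float z) {
    return {
         0.282095f,                          // Y_0,0
        -0.488603f * y,                      // Y_1,-1
         0.488603f * z,                      // Y_1,0
        -0.488603f * x,                      // Y_1,1
         1.092548f * x * y,                  // Y_2,-2
        -1.092548f * y * z,                  // Y_2,-1
         0.315392f * (3.0f * z * z - 1.0f),  // Y_2,0
        -1.092548f * x * z,                  // Y_2,1
         0.546274f * (x * x - y * y)         // Y_2,2
    };
}

// Monte Carlo projection: c_i ~= (4*pi / N) * sum_j f(w_j) * Y_i(w_j),
// with the directions w_j uniformly distributed on the sphere.
template <typename RadianceFn>   // RadianceFn: float(float x, float y, float z)
std::array<float, 9> projectSH(RadianceFn&& radiance, int numSamples) {
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    std::array<float, 9> coeffs{};
    for (int j = 0; j < numSamples; ++j) {
        float z = 2.0f * uni(rng) - 1.0f;                 // cos(theta) in [-1, 1]
        float phi = 6.2831853f * uni(rng);
        float s = std::sqrt(1.0f - z * z);
        float x = s * std::cos(phi), y = s * std::sin(phi);
        float L = radiance(x, y, z);                      // hypothetical callback
        auto Y = shBasis(x, y, z);
        for (int i = 0; i < 9; ++i) coeffs[i] += L * Y[i];
    }
    for (int i = 0; i < 9; ++i) coeffs[i] *= 4.0f * 3.14159265f / numSamples;
    return coeffs;
}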



#3 mauro78   Members   -  Reputation: 187


Posted 24 October 2013 - 01:46 PM

MJP wrote: "Spherical harmonics can be used to store a function on the surface of a unit sphere. [...]"

 

Thanks for the quick reply MJP. After another night reading specific papers (and your response), here is where I stand:
 
Assumption #1
In my mind a "light probe" is a point X in space where I want to store the incoming radiance from a set of VPLs (virtual point lights).
To store this incoming radiance, the rendering equation L(X,W) must be solved somehow; evaluating this equation tells us how much radiance arrives at point X from direction W.
The rendering equation is complex to solve, and one way to solve it is Monte Carlo integration.
 
Assumption #2
Spherical harmonics can be used to store the incoming radiance at X and approximate the rendering equation.
 
What I don't get :-(
MJP wrote: "The common way to do this is to first render a cubemap at the given point, and then integrate the values in the cubemap texels against the SH basis functions. [...]"
 
This is what I don't understand from your reply: is it necessary to render the cubemaps? I mean, I understand that SH will help me "compress" the cubemaps (because storing them for, let's say, a 32x32x32 grid of probes would require too much memory) and then look them up using SH... but wouldn't it be too expensive to create those cubemaps in real time? I'm using RSM to store the virtual point lights of the scene... is it possible to store incoming radiance into SH without using cubemaps?
 
Thanks in advance for any reply
 
EDIT:
MJP wrote: "Another way to do this would be to use a ray tracer, and shoot random rays out from the sample point and perform Monte Carlo integration."
 
This is close to my goal: I have a probe point in space and I have n VPLs. I want to shoot rays over the sphere (hemisphere) around the point and compute the form factor between the point and the n VPLs. But storing that information requires a lot of memory... so SH come to the rescue...

Edited by mauro78, 24 October 2013 - 02:00 PM.


#4 Digitalfragment   Members   -  Reputation: 814


Posted 24 October 2013 - 04:58 PM


 

mauro78 wrote: "...is it possible to store incoming radiance into SH without using cubemaps?"

Most video cards can render the scene into 6 128x128 textures pretty damn quickly, given that most video cards can render complex scenes at resolutions of 1280x720 and up without struggling. You would then also perform the integration of the cubemap in a shader, instead of pulling the rendered textures down from the GPU and performing it on the CPU. This yields a table of colours in a very small texture (usually a 9-pixel-wide texture) which can then be used in your shaders at runtime. You don't need to recreate your SH probes every frame either, so the cost is quite small.

The Stupid SH Tricks paper also has functions for projecting directional, point and spot lights directly onto the SH coefficient table; however, the cubemap integration is preferable as you can trivially incorporate area and sky lights.
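For reference, a CPU-side sketch of that integration step (on the GPU the same sum would run in a shader). This is an illustration under stated assumptions: `fetch` is a hypothetical stand-in for reading a rendered face texel, the face-to-direction mapping follows one common (+X,-X,+Y,-Y,+Z,-Z) convention and may differ from your API's, and each texel is weighted by its solid angle so the sum approximates the spherical integral. shBasis is the 9-term real SH evaluation from the sketch in the previous reply.

#include <array>
#include <cmath>

std::array<float, 9> shBasis(float x, float y, float z);  // 9-term basis, see earlier sketch

// Integrate a rendered cubemap (6 faces of N x N texels) against the SH basis.
// Fetch: float(int face, int x, int y), returning the radiance in a texel (hypothetical).
template <typename Fetch>
std::array<float, 9> integrateCubemap(Fetch&& fetch, int N) {
    std::array<float, 9> coeffs{};
    for (int face = 0; face < 6; ++face)
    for (int y = 0; y < N; ++y)
    for (int x = 0; x < N; ++x) {
        // Texel center in [-1, 1] on the face plane.
        float u = 2.0f * (x + 0.5f) / N - 1.0f;
        float v = 2.0f * (y + 0.5f) / N - 1.0f;
        // Solid angle subtended by the texel: dA * cos(theta) / r^2 on the unit cube.
        float r2 = u * u + v * v + 1.0f;
        float dw = (4.0f / (N * N)) / (r2 * std::sqrt(r2));
        // Direction through the texel (convention-dependent; check your API).
        float d[3];
        switch (face) {
            case 0: d[0] =  1.0f; d[1] = -v;    d[2] = -u;    break;
            case 1: d[0] = -1.0f; d[1] = -v;    d[2] =  u;    break;
            case 2: d[0] =  u;    d[1] =  1.0f; d[2] =  v;    break;
            case 3: d[0] =  u;    d[1] = -1.0f; d[2] = -v;    break;
            case 4: d[0] =  u;    d[1] = -v;    d[2] =  1.0f; break;
            case 5: d[0] = -u;    d[1] = -v;    d[2] = -1.0f; break;
        }
        float inv = 1.0f / std::sqrt(r2);
        auto Y = shBasis(d[0] * inv, d[1] * inv, d[2] * inv);
        float L = fetch(face, x, y);
        for (int i = 0; i < 9; ++i) coeffs[i] += L * Y[i] * dw;
    }
    return coeffs;
}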



#5 LorenzoGatti   Crossbones+   -  Reputation: 2663


Posted 25 October 2013 - 02:07 AM

"integrate the values in the cubemap texels against the SH basis functions" This means, in unfortunately vague language, the standard procedure for function transforms with a set of orthogonal basis functions: integrate the product of the original function you want to approximate (in this case incoming light from every direction or the like, represented as a cubemap) and the conjugate of each basis function (in this case one of the low-frequency SH functions you want to include in your approximation) over the whole domain (in this case a sphere) to get the coefficient of that basis function (which should be normalized to reconstruct original function values). Where did you get the impression that a "cubemap" is a 3D object? It consists of 6 square panels forming the faces of a cube-shaped box, and as MJP suggests it is another representation for functions that are defined on a sphere. Technically, cubemap faces are regular 2D textures. OpenGL offers a family of special texture types and samplers and filtering to tie together the 6 faces and interpolate across cube edges, but these features are obviously an alternative to spherical harmonics. (You might find them useful to handle cube maps in precomputation phases).

#6 Reitano   Members   -  Reputation: 501


Posted 25 October 2013 - 10:18 AM

Directional, spot and point lights are ideal lights. From the point of view of a probe, they emit radiance within an infinitesimal solid angle; in signal processing you would call them impulses. You can only describe them analytically, not with a cubemap, unless you somehow approximate them with an area light. The way to go is to project them onto the SH basis with analytical formulas, as discussed in the Stupid SH Tricks paper. So no, you don't need to render cubemaps in this case.
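A sketch of what "projecting analytically" can look like for one ideal light seen from a probe (an illustration only: the exact normalization constant depends on the convention you pick; the Stupid SH Tricks paper and the D3DX helpers pin one down):

#include <array>

// 2-band (4-coefficient) real SH basis for a unit direction,
// with the Condon-Shortley signs used in the Sloan paper.
std::array<float, 4> sh2(float x, float y, float z) {
    const float k0 = 0.282095f;   // 1 / (2*sqrt(pi))
    const float k1 = 0.488603f;   // sqrt(3 / (4*pi))
    return { k0, -k1 * y, k1 * z, -k1 * x };
}

// An impulse of radiance L arriving at the probe from unit direction (dx, dy, dz)
// contributes L * y_i(d) to each coefficient; accumulate one such term per light.
void addImpulse(std::array<float, 4>& probe, float dx, float dy, float dz, float L) {
    auto Y = sh2(dx, dy, dz);
    for (int i = 0; i < 4; ++i) probe[i] += L * Y[i];
}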



#7 mauro78   Members   -  Reputation: 187


Posted 28 October 2013 - 12:58 PM

 


 

Digitalfragment wrote: "Most video cards can render the scene into 6 128x128 textures pretty damn quickly [...] You don't need to recreate your SH probes every frame either, so the cost is quite small."

 

 

Thanks for clarification...very helpful!


Edited by mauro78, 28 October 2013 - 01:01 PM.


#8 mauro78   Members   -  Reputation: 187


Posted 28 October 2013 - 01:11 PM

"...Where did you get the impression that a "cubemap" is a 3D object? ..."

 

Really, mate... I never thought about it that way. But I do think that cubemaps are not so real-time friendly... I hope I'm wrong.

 

And also, most important: can they be used to store irradiance for indoor scenes, or are they better suited to outdoor scenes?

 

Thx


Edited by mauro78, 28 October 2013 - 02:47 PM.


#9 mauro78   Members   -  Reputation: 187


Posted 28 October 2013 - 01:24 PM

Reitano wrote: "Directional, spot and point lights are ideal lights. [...] The way to go is to project them onto the SH basis with analytical formulas, as discussed in the Stupid SH Tricks paper."

 

OK, I need to clarify this with some background information:

 

my first attempt at GI in the engine used RSM (reflective shadow maps); as you know, the basic idea is:

 

1. Render an RSM for every light in the scene

2. Gather VPLs (virtual point lights) from every RSM and use those VPLs as secondary light sources

// Computes how much flux from one VPL (at worldSpace, with normal worldNormal)
// reaches the receiver point surfacePos (with normal 'normal').
float3 Bounce2(float3 surfacePos, float3 normal, float3 worldSpace, float3 worldNormal, float3 Flux)
{
	float distBias = 0.1f;               // avoids the singularity when the points coincide

	float3 R = worldSpace - surfacePos;  // receiver -> VPL
	float l2 = dot(R, R);                // squared distance
	R *= rsqrt(l2);                      // normalize

	float lR = 1.0f / (distBias + l2 * 3.141592);     // distance falloff, ~1 / (pi * r^2), biased
	float cosThetaI = max(0.1, dot(worldNormal, -R)); // emitter cosine (clamped away from zero)
	float cosThetaJ = max(0.0, dot(normal, R));       // receiver cosine

	float Fij = cosThetaI * cosThetaJ * lR;           // point-to-point form factor
	return Flux * Fij;
}

The above code computes how much light reaches a world-space point from the (world-space) secondary sources.

Clearly this approach has many drawbacks... it's slow, slow, slow (even with interpolation methods).

 

This leans towards the SH approach, where my goal is to use the above Bounce function to collect incoming light from the secondary sources and store it in the "probes" using SH.

 

So my question is: how do I store the incoming light from the VPLs into the sample probes using SH?



#10 Reitano   Members   -  Reputation: 501


Posted 30 October 2013 - 08:57 AM

I am not familiar with RSMs but, judging from the code you posted, I believe SH suit your needs.

 

Remember that SH represent a signal defined over a spherical domain. In this case the signal is the total incoming light from all the secondary sources. An SH probe has an associated position in world space. For a <light, probe> pair, the signal is (cosThetaI * lR * Flux) in the direction defined by R (normalized). The cosThetaJ term depends on the receiver's normal and will be taken into account later, at runtime, when shading a surface.

Some pseudo-code:

 

for each probe
    SH = 0
    for each light
        signal = R / |R| * (cosThetaI * lR * Flux)
        SH += signalToSH(signal)

 

signalToSH is a function that encodes a directional light in the SH basis. See D3DXSHEvalDirectionalLight for an implementation.
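A possible C++ reading of that pseudo-code, reusing the distance falloff from the Bounce2 snippet above (a sketch, not Reitano's implementation: the Probe and VPL structs are hypothetical, flux is a single channel here, and RGB would simply triple the coefficient arrays):

#include <algorithm>
#include <array>
#include <cmath>

struct VPL   { float px, py, pz; float nx, ny, nz; float flux; };  // hypothetical layout
struct Probe { float px, py, pz; std::array<float, 4> sh{}; };     // 2-band coefficients

void projectVPLs(Probe& probe, const VPL* lights, int count) {
    const float pi = 3.14159265f, distBias = 0.1f;
    probe.sh = {};
    for (int i = 0; i < count; ++i) {
        const VPL& L = lights[i];
        // R: direction from the probe towards the light (normalized below).
        float rx = L.px - probe.px, ry = L.py - probe.py, rz = L.pz - probe.pz;
        float l2 = rx * rx + ry * ry + rz * rz;
        float inv = 1.0f / std::sqrt(l2);
        rx *= inv; ry *= inv; rz *= inv;
        float lR = 1.0f / (distBias + l2 * pi);                                  // falloff, as in Bounce2
        float cosThetaI = std::max(0.0f, -(L.nx * rx + L.ny * ry + L.nz * rz)); // emitter cosine
        float mag = cosThetaI * lR * L.flux;
        // signalToSH: encode an impulse from direction R with magnitude mag.
        std::array<float, 4> Y = { 0.282095f, -0.488603f * ry, 0.488603f * rz, -0.488603f * rx };
        for (int k = 0; k < 4; ++k) probe.sh[k] += mag * Y[k];
    }
}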

 

 

At runtime, when shading a pixel, pick the probe spatially closest to it. The pixel has an associated normal. Assuming that the BRDF is purely diffuse, you have to convolve the cosine lobe around the pixel's normal with the signal encoded in the probe, which is achieved with an SH rotation and a dot product. See the paper http://www.cs.berkeley.edu/~ravir/papers/envmap/envmap.pdf
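For the diffuse case, that convolution has a well-known closed form (from the Ramamoorthi/Hanrahan paper linked above): the irradiance in direction n is the radiance coefficients scaled band by band by the clamped cosine's zonal coefficients,

E(n) \approx \sum_{l,m} A_l\, L_{lm}\, Y_{lm}(n),
\qquad A_0 = \pi, \quad A_1 = \tfrac{2\pi}{3}, \quad A_2 = \tfrac{\pi}{4}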

 

I suggest you read more material on SH in order to have a better understanding of their properties and use cases. As for myself, I will read the RSM paper asap and edit my reply if I've made wrong assumptions.

 

Please let me know if you have more questions.

 

Stefano

 

 



#11 mauro78   Members   -  Reputation: 187


Posted 30 October 2013 - 04:07 PM

Reitano wrote: "Remember that SH represent a signal defined over a spherical domain. [...] signalToSH is a function that encodes a directional light in the SH basis. See D3DXSHEvalDirectionalLight for an implementation."

 

OK, first of all thanks for the response, and yes, I need to dig into the SH realm better... but let's make it simple, Stefano (well, not so simple): given a normal vector n = (x, y, z), I can get second-order SH coefficients c = (c0, c1, c2, c3)

 

 

where: 

 

c0 = sqrt(pi) / 2

c1 = -sqrt(pi/3) * y

c2 = sqrt(pi/3) * z

c3 = - sqrt(pi/3) * x

 

So for every VPL (virtual point light) I'll have an n vector and associated SH coefficients c.

 

The question is: how can I go back and compute the reflected radiance from those second-order SH coefficients?

 

Thanks

 

Mauro


Edited by mauro78, 30 October 2013 - 04:33 PM.


#12 Reitano   Members   -  Reputation: 501


Posted 31 October 2013 - 06:21 AM

Mauro, can you explain the meaning of that normal vector n? Is it the normal of the receiving or the emitting surface? Or is it the vector connecting the emitter and the receiver? And how did you derive the four SH coefficients?



#13 mauro78   Members   -  Reputation: 187


Posted 01 November 2013 - 06:33 AM

Reitano wrote: "Can you explain the meaning of that normal vector n? [...] And how did you derive the four SH coefficients?"

N is the emitting normal.
The SH derivation is an order-2 truncation of the polynomial form...

#14 mauro78   Members   -  Reputation: 187


Posted 04 November 2013 - 02:39 PM

I've read more on SH over the weekend and I'm even more convinced that it is the right solution for my goal. To summarize, right now with RSM (and the associated virtual lights) I'm doing this:
 
for every shaded pixel
    compute the "irradiance" between the pixel and the i-th virtual light
 
This is very slow, of course... so a potential solution using SH is:
 
take some probes in the scene, and for every probe compute the incoming radiance and store it using SH.
 
Then at runtime the shading pass will be:

for every shaded pixel
    compute the interpolated irradiance by reading the nearest probe, using the pixel normal as the index into the SH
 
BTW, I'm still struggling with storing the irradiance into SH and reading it back :-(
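For the reading-back part, a minimal runtime sketch under the same assumptions as the earlier snippets (2-band probes, single channel): by orthogonality, the integral of probe radiance times the clamped cosine around the pixel normal reduces to a dot product between the probe coefficients and the cosine-lobe coefficients, and the latter are exactly the c0..c3 formulas posted earlier in this thread (post #11).

#include <array>
#include <cmath>

// Diffuse irradiance from a 2-band SH probe: dot the stored radiance coefficients
// with the clamped-cosine lobe projected around the pixel normal.
float shIrradiance(const std::array<float, 4>& probeSH, float nx, float ny, float nz) {
    const float pi = 3.14159265f;
    const std::array<float, 4> cosLobe = {
         std::sqrt(pi) / 2.0f,
        -std::sqrt(pi / 3.0f) * ny,
         std::sqrt(pi / 3.0f) * nz,
        -std::sqrt(pi / 3.0f) * nx
    };
    float E = 0.0f;
    for (int i = 0; i < 4; ++i) E += probeSH[i] * cosLobe[i];
    return E;   // multiply by albedo / pi (or your pipeline's diffuse term) afterwards
}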

Edited by mauro78, 04 November 2013 - 02:48 PM.


#15 Reitano   Members   -  Reputation: 501


Posted 05 November 2013 - 06:05 AM

Mauro, that sounds like the correct approach. About storing the irradiance with SH: what is unclear about the description I wrote in my previous post? To recap, given a probe and a light, the radiance from the light to the probe has an analytical formula, and you can directly calculate the corresponding SH coefficients.



#16 mauro78   Members   -  Reputation: 187


Posted 05 November 2013 - 03:23 PM

Reitano wrote: "To recap, given a probe and a light, the radiance from the light to the probe has an analytical formula, and you can directly calculate the corresponding SH coefficients."

Well, almost clear... working on this... hope to post results soon... also exploring "shooting" rays from the probe to get the incoming radiance...

 

I got the key point after reading the "Stupid SH Tricks" paper: for my goal, the best solution would be to compute the incoming radiance from the VPLs over all possible directions on the sphere. This is just an integral over the sphere of directions. Of course, I was aware that storing this directly would be impossible for a real-time solution.
 
So, reading the paper, I had a sort of enlightenment: SH will allow me to express the above integral using the SH basis and samples of the original function. The SH basis simply expresses the sampling direction... and of course we have some degree of detail here: second-order SH is just a rough solution, while higher-order SH can better approximate the spherical function...
 
This will allow me to reconstruct the original (approximated) function later, in the shading phase... very similar to a Fourier series in the 1D case.
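In symbols, the sampling view described above: with N directions ω_j drawn uniformly over the sphere, each coefficient is estimated from samples of the original function against the corresponding basis function,

c_i \approx \frac{4\pi}{N} \sum_{j=1}^{N} f(\omega_j)\, y_i(\omega_j)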

 

Back to work now...

 

EDIT: the bad news is that I'll probably not go with Light Propagation Volumes but stick with Irradiance Volumes, mostly because LPV requires DX11 hardware to be efficiently implemented (even if volume textures can be handled in DX9 using some tricky code).


Edited by mauro78, 05 November 2013 - 04:17 PM.


#17 graveman   Members   -  Reputation: 110


Posted 07 November 2013 - 12:42 AM

mauro78, man, you need to study integrals, series and differential expressions to understand SH. I have been studying this since March 2013. Some lighting integrals decompose over orthogonal functions (a function can be represented as a multidimensional vector, and a dot product then corresponds to the integral).
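In symbols: functions on the sphere form a vector space with the inner product below, and the SH basis is orthonormal under it, so projecting onto the basis works exactly like taking dot products against basis vectors,

\langle f, g \rangle = \int_{S^2} f(\omega)\, g(\omega)\, d\omega,
\qquad
\langle y_i, y_j \rangle = \delta_{ij}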



