
# Spherical Area Lights

8 replies to this topic

### #1Hodgman  Moderators   -  Reputation: 19337


Posted 22 October 2013 - 10:11 PM

I'm reading this presentation at the moment, specifically the Area Lights / Representative Point / Sphere Lights section.

I've been using this "representative point" technique in my engine, where instead of using an L vector to the center of the light, you use an L vector that touches the point of the light's shape (sphere, rect, etc.) that is closest to the reflection vector (reflect(-V,N)).
So far so good: rectangles show up as a rectangular highlight on a glossy surface, and spheres show up as an elliptical highlight on a glossy surface.
The problem is that the energy conservation is now waaaay off. This is explained in the above presentation (Figure 11: Visualization of the widening effect explained with Equation 13).
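For concreteness, the representative-point construction described above can be sketched outside shader code. This Python version implements the usual "closest point on the reflection ray, clamped back to the sphere" idea (roughly the construction in Karis's course notes); the function and variable names here are mine, not from the presentation:

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def representative_L(center, radius, R):
    """New L vector for a sphere light via the representative-point trick.

    center: vector from the shaded point to the sphere's center.
    R:      normalized reflection vector, reflect(-V, N).
    """
    # project the center onto the reflection ray, then step from the
    # center toward the ray, clamped to the sphere's surface
    d = sum(a * b for a, b in zip(center, R))
    center_to_ray = [d * r - c for r, c in zip(R, center)]
    dist = math.sqrt(sum(c * c for c in center_to_ray))
    t = min(1.0, radius / dist) if dist > 0.0 else 0.0
    closest = [c + t * v for c, v in zip(center, center_to_ray)]
    return _normalize(closest)
```

When the reflection ray passes through the sphere, L snaps onto the ray itself; as the radius shrinks to zero this degenerates to the ordinary direction-to-center vector, which is the behaviour you'd want for the point-light limit.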

I'm trying to follow Epic's explanation of it here, but they're not very verbose...
They quickly mention that "For GGX the normalization factor is 1/(Pi*a^2)". Is this the general normalization factor for GGX? When I integrate GGX over the hemisphere, I get a normalization factor of just 1/Pi, completely independent of the roughness parameter. Am I using a version of GGX that someone's already pre-normalized to some degree (with respect to roughness)?
The formula I'm using is:
D = ( roughness / (dot(N,H)^2*(roughness^2-1)+1) )^2

[edit] I forgot that I moved the division by Pi to somewhere else in my code as an optimization. My actual GGX formula is this, which is equivalent to the one on Brian Karis' blog:
D = ( roughness / (dot(N,H)^2*(roughness^2-1)+1) )^2 / Pi
When I integrate this over the hemisphere, it's perfectly normalized -- the normalization term is 1.0! So what does Brian mean when he says that the normalization term is 1/(Pi*a2)?
[/edit]
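The normalization claim in the edit is easy to reproduce numerically. A quick Python sketch (midpoint rule; `a` here is the roughness/alpha parameter from the formulas above) shows that this D, with the 1/Pi included, integrates to 1 against the cosine term, regardless of roughness:

```python
import math

def ggx_D(n_dot_h, a):
    # GGX / Trowbridge-Reitz with the 1/Pi included
    d = n_dot_h * n_dot_h * (a * a - 1.0) + 1.0
    return (a * a) / (math.pi * d * d)

def hemisphere_integral(a, n=50000):
    # 2*Pi * integral over t in [0, Pi/2] of D(cos t) * cos t * sin t dt
    dt = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += ggx_D(math.cos(t), a) * math.cos(t) * math.sin(t) * dt
    return 2.0 * math.pi * total
```

One possible reading of the slide: the peak value of this D at N=H is D(1) = 1/(Pi*a^2), so the quoted "normalization factor" may refer to the distribution's maximum rather than its hemisphere integral.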
To determine the correct normalization term, you have to know the solid angle subtended by the light (the green striped area in their diagram), then integrate this specific distribution. I guess they're not doing this per pixel...? But ideally, you'd like a point light and a sphere with 0.00001 radius to shade almost exactly the same, and if you use one term for point lights and one for sphere lights, this won't be the case... Do you think they lerp from the sphere normalization term to the point normalization term as the radius / solid angle decreases (to estimate the newNormalization term)?

They then divide out the old normalization factor and multiply in the new one... but then for some unexplained reason this is squared.
SphereNormalization = ( newNormalization / oldNormalization )^2
Why is that squared?

Also, when I integrate my GGX over the hemisphere, but replace the N.H term with a constant 1 (as if the light source is a dome covering the entire hemisphere), I end up with a normalization factor of a^2/Pi. So by my reasoning, the normalization factor for an area light that takes up some percentage of the hemisphere between 0% and 100% is somewhere in between 1/Pi and a^2/Pi...
If I integrate this same formula (with 1 instead of N.H) over just a fraction w of the hemisphere instead of the whole thing, then instead of a^2/Pi, I get a normalization term of a^2 / (sin(Pi*w/2)^2 * Pi).
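That partial-hemisphere result checks out numerically if w is read as the cap angle expressed as a fraction of the hemisphere (i.e. theta_max = w*Pi/2). A Python sketch, with names of my own choosing:

```python
import math

def cap_integral(a, w, n=20000):
    # D with the N.H in its denominator replaced by 1 collapses to the
    # constant 1/(Pi*a^2); integrate it times cos(theta) over the cap
    # theta in [0, w*Pi/2]
    theta_max = w * math.pi / 2.0
    D = 1.0 / (math.pi * a * a)
    dt = theta_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += D * math.cos(t) * math.sin(t) * dt
    return 2.0 * math.pi * total
```

The integral comes out to sin(Pi*w/2)^2 / a^2, so multiplying by a^2 / sin(Pi*w/2)^2 renormalizes it to 1; with the no-1/Pi convention for D, that multiplier is exactly the a^2 / (sin(Pi*w/2)^2 * Pi) term quoted above, and at w = 1 (the whole hemisphere) it reduces to a^2 / Pi.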

I was already calculating the solid angle subtended by the sphere-light in my experiments, so I used this to quickly get the w value and calculate that normalization term per pixel:
float solidAngle = 2*Pi * (1 - sqrt(1 - radius*radius/(distance*distance)));
float angle = acos(1 - solidAngle/(2*Pi)); // this is w
//float f = sin(Pi*angle/2); // equivalent to the rewrite below
float f = sin(Pi/2 * asin(saturate(radius/distance)));
float norm = saturate( roughness*roughness / (f*f) );

//The rest of the normalization term (1/Pi) is multiplied in later

This is completely different from the Epic presentation, but it actually looks like it's working correctly, and it also converges to 1.0 as the radius shrinks, which means small spheres act exactly the same as point lights. I need to sort out some ground-truth renders to compare against though... Not quite sure if I've accidentally created something wrong that just happens to look OK. Also, it involves two trig operations per pixel per light, which scares me...
Gifs of unnormalized sphere light vs this solution here: http://imgur.com/a/uuGoq
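As a sanity check on that convergence claim, the normalization term above transcribes directly into Python (saturate becomes a clamp; the names are mine):

```python
import math

def sphere_norm(roughness, radius, distance):
    # per-pixel normalization from the snippet above;
    # the remaining 1/Pi factor is applied elsewhere
    f = math.sin(math.pi / 2.0 * math.asin(min(1.0, radius / distance)))
    if f == 0.0:
        return 1.0  # point-light limit
    return min(1.0, roughness * roughness / (f * f))
```

For a tiny radius the ratio blows up and the clamp pins the result at exactly 1, matching the point-light case; as the sphere grows relative to its distance, the term falls below 1 and darkens the widened highlight.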

Can anyone help with my confusion above, or suggest any other references in creating cheap, physically plausible area lights?

Edited by Hodgman, 23 October 2013 - 11:32 PM.

### #2Frenetic Pony  Members   -  Reputation: 863


Posted 23 October 2013 - 01:09 AM

### #3Tasty Texel  Members   -  Reputation: 908


Posted 23 October 2013 - 05:39 AM

Your GGX D does not appear to be right in general: http://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf

### #4Hodgman  Moderators   -  Reputation: 19337


Posted 23 October 2013 - 06:59 AM

The formula given in the paper needs to be converted from angles to vectors before being used in shader code, and it can be rearranged quite a bit too, which is why it often looks different.

I have:
D = ( roughness / (dot(N,H)^2*(roughness^2-1)+1) )^2
...and later: Lighting /= Pi;
But Brian Karis (author of the above presentation) has:
D = roughness^2 / ( Pi * (dot(N,H)^2*(roughness^2-1)+1)^2 )

I just had a moment of panic thinking that my D term was completely different to his, but they're both exactly equivalent. If you chuck either of them into Wolfram Alpha, it spits out the alternate form at the bottom.

Disney use this version, which is also exactly equivalent to the above versions.

Thanks!

Their normalization term seems to be completely arbitrary, being connected to an arbitrary tweakable variable, and commented with //not sure at all about this??

The version that it's derived from (https://www.shadertoy.com/view/ldfGWs) uses this normalization term:

norm = saturate( radius / (distance * 2.0) + roughness^2 )^2

...but this doesn't seem to work for me (energy still isn't conserved), and I have no idea how they derived this formula from Brian's presentation.

The different shaped area lights are awesome in those demos though!

Here's some Gifs comparing my normalization term compared to the above formula:

http://imgur.com/a/bluzs

Edited by Hodgman, 23 October 2013 - 08:23 PM.

### #5Tasty Texel  Members   -  Reputation: 908


Posted 23 October 2013 - 09:49 AM

> The formula given in the paper needs to be converted from angles to vectors before being used in shader code, and it can be rearranged quite a bit too, which is why it often looks different.

Ok, but the last time I checked whether GGX was normalized, I actually got 1. Have you forgotten to include the cosine term, perhaps?

### #6Hodgman  Moderators   -  Reputation: 19337


Posted 23 October 2013 - 08:33 PM

>> The formula given in the paper needs to be converted from angles to vectors before being used in shader code, and it can be rearranged quite a bit too, which is why it often looks different.
>
> Ok, but the last time I checked whether GGX was normalized, I actually got 1. Have you forgotten to include the cosine term, perhaps?

Yeah I confused myself, because when I originally switched over to GGX, I rearranged the equation (as above), to pull out the division by Pi. My diffuse light also has to be divided by Pi, so I moved this operation right to the end, after all the lights have been summed.

If I undo that optimization and put the Pi back into the GGX function, then it does integrate to 1 over the hemisphere, instead of Pi (so the normalization term is 1, not 1/Pi).
Edited my first post.

### #7Hodgman  Moderators   -  Reputation: 19337


Posted 23 October 2013 - 10:53 PM

I was plotting some values and noticed that the sin^2 term looked just like the smoothstep function, and it turns out that it's coincidentally very close.
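The resemblance is easy to quantify; across [0,1] the two curves differ by only about 0.01 at worst. A quick check, assuming the sin^2 term means sin(Pi*x/2)^2:

```python
import math

def smoothstep01(x):
    # Hermite polynomial with the input assumed already in [0,1]
    return x * x * (3.0 - 2.0 * x)

# largest gap between sin(Pi*x/2)^2 and smoothstep over [0,1]
max_err = max(abs(math.sin(math.pi * x / 2.0) ** 2 - smoothstep01(x))
              for x in (i / 1000.0 for i in range(1001)))
```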

So I fitted the asin term to a polynomial, and ended up with a trig-less approximation, which looks almost identical:

float poly = 0.72216935842*n*n + 0.57534745225*n + 0.04360349982*saturate(n*10);
float norm = saturate( roughness*roughness / (3*poly*poly - 2*poly*poly*poly) ); // denominator = smoothstep(poly)

I don't know how expensive trig functions are in shaders these days, so it's a completely premature optimization, but it's nice to know I can remove those trig functions if I need to.

Edited by Hodgman, 23 October 2013 - 10:55 PM.

### #8Hodgman  Moderators   -  Reputation: 19337


Posted 24 October 2013 - 02:44 AM

> The version that it's derived from (https://www.shadertoy.com/view/ldfGWs) uses this normalization term:
> norm = saturate( radius / (distance * 2.0) + roughness^2 )^2
> ...but this doesn't seem to work for me (energy still isn't conserved), and I have no idea how they derived this formula from Brian's presentation.

Actually, that formula is taken straight from the presentation, from the Specular D Modification section. Now I feel a bit stupid.
Except the above isn't right; according to the presentation, it should be:
norm = ( roughness / saturate(radius/(distance*2) + roughness) )^2

Ok... well, the results of that formula are looking better than my formula!

I'll look at the math some more and post some more Gifs later
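Transcribed into Python for comparison (assumed names), Epic's corrected term behaves the same way at the limits: it is exactly 1 for a zero-radius light and falls off as the sphere grows:

```python
def epic_sphere_norm(roughness, radius, distance):
    # alpha' = saturate(alpha + radius/(2*distance)); norm = (alpha/alpha')^2
    widened = min(1.0, roughness + radius / (2.0 * distance))
    return (roughness / widened) ** 2
```

The squaring may simply fall out of the normalization factor itself: if the factor is 1/(Pi*alpha^2), then the ratio of the new factor to the old one is (alpha/alpha')^2, which could be the answer to the "why squared?" question earlier in the thread.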

### #9Tasty Texel  Members   -  Reputation: 908


Posted 24 October 2013 - 05:54 AM

> I was plotting some values and noticed that the sin^2 term looked just like the smoothstep function, and it turns out that it's coincidentally very close.

That's basically how the vegetation's wind animation in Crysis has been optimized: https://developer.nvidia.com/content/gpu-gems-3-chapter-16-vegetation-procedural-animation-and-shading-crysis

I always use x * x * ( 3.0 - 2.0 * x ) instead of smoothstep, which I think is often more efficient, since smoothstep first clamps the input to the [0..1] range even when it is already in that range.
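A minimal illustration of the difference (the full reference version remaps and clamps first; the short form assumes the input is already in range):

```python
def smoothstep(edge0, edge1, x):
    # reference smoothstep: remap, clamp, then the Hermite polynomial
    t = min(1.0, max(0.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def smoothstep01(x):
    # clamp-free variant, valid when x is already in [0,1]
    return x * x * (3.0 - 2.0 * x)
```

The two agree everywhere on [0,1], so the clamp only matters for out-of-range inputs.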
