Spherical Area Lights


I'm reading this presentation at the moment, specifically the Area Lights / Representative Point / Sphere Lights section.
http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf

I've been using this "representative point" technique in my engine: instead of using an L vector to the center of the light, you use an L vector pointing to the point on the light's shape (sphere, rect, etc.) that is closest to the reflection vector (reflect(-V,N)).
So far so good: rectangles show up as a rectangular highlight on a glossy surface, and spheres show up as an elliptical highlight on a glossy surface.
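For reference, here's roughly what my sphere version looks like in HLSL (a sketch; lightPos, worldPos, V, N and radius come from elsewhere in my shader):

// Representative point for a sphere light: move L to the point
// on the sphere that lies closest to the reflection ray.
float3 r = reflect(-V, N);               // reflection vector
float3 L = lightPos - worldPos;          // vector to the light's center
float3 centerToRay = dot(L, r) * r - L;  // from the center to the closest point on the ray
float3 closestPoint = L + centerToRay * saturate(radius / length(centerToRay));
float3 specL = normalize(closestPoint);  // used in place of normalize(L) for specular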
The problem is that energy conservation is now waaaay off. This is explained in the above presentation (Figure 11: Visualization of the widening effect explained with Equation 13).

I'm trying to follow Epic's explanation of it here, but they're not very verbose...
They quickly mention that "For GGX the normalization factor is 1/(Pi*a^2)". Is this the general normalization factor for GGX? When I integrate GGX over the hemisphere, I get a normalization factor of just 1/Pi, completely independent of the roughness parameter. Am I using a version of GGX that someone's already pre-normalized to some degree (with respect to roughness)?
The formula I'm using is:
D = ( roughness / (dot(N,H)^2 * (roughness^2 - 1) + 1) )^2
[edit]
I forgot that I moved the division by Pi to somewhere else in my code as an optimization. My actual GGX formula is this, which is equivalent to the one on Brian Karis' blog:
D = ( roughness / (dot(N,H)^2 * (roughness^2 - 1) + 1) )^2 / Pi
When I integrate this over the hemisphere, it's perfectly normalized -- the normalization term is 1.0! So what does Brian mean when he says that the normalization term is 1/(Pi*a^2)?
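For reference, the integral I'm computing (in spherical coordinates, with the cosine term included) is:

$$\int_{\Omega} D(h)\,(N \cdot H)\,d\omega \;=\; 2\pi \int_0^{\pi/2} \frac{a^2}{\pi\,\big(\cos^2\theta\,(a^2-1)+1\big)^2}\,\cos\theta\,\sin\theta\,d\theta \;=\; 1,$$

which comes out to exactly 1 for any roughness a (substituting u = cos^2(theta)*(a^2-1)+1 collapses it).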
[/edit]
To determine the correct normalization term, you have to know the solid angle subtended by the light (the green striped area in their diagram), then integrate this specific distribution. I guess they're not doing this per pixel...? Ideally, though, you'd like a point light and a sphere with a 0.00001 radius to shade almost exactly the same, and if you use one term for point lights and another for sphere lights, this won't be the case... Do you think they lerp from the sphere normalization term to the point normalization term as the radius / solid angle decreases (to estimate the newNormalization term)?

They then divide out the old normalization factor and multiply in the new one... but then for some unexplained reason this is squared.
SphereNormalization = ( newNormalization / oldNormalization )^2
Why is that squared?

Also, when I integrate my GGX over the hemisphere, but replace the N.H term with a constant 1 (as if the light source is a dome covering the entire hemisphere), I end up with a normalization factor of a^2/Pi. So by my reasoning, the normalization factor for an area light that takes up some percentage of the hemisphere between 0% and 100% is somewhere in between 1/Pi and a^2/Pi...
If I integrate this same formula (with 1 instead of N.H) over just a fraction w of the hemisphere (a cap of half-angle Pi*w/2) instead of the whole thing, then instead of a^2/Pi, I get a normalization term of a^2 / (sin(Pi*w/2)^2 * Pi).
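To spell out that last step: with N.H fixed at 1, the un-divided-by-Pi version of D collapses to the constant 1/a^2, and integrating it against the cosine over a cap of half-angle theta_w = Pi*w/2 gives

$$2\pi \int_0^{\theta_w} \frac{1}{a^2}\,\cos\theta\,\sin\theta\,d\theta \;=\; \frac{\pi \sin^2\theta_w}{a^2},$$

whose reciprocal is the a^2 / (sin(Pi*w/2)^2 * Pi) term above.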

I was already calculating the solid angle subtended by the sphere light in my experiments, so I used this to quickly get the w value and compute that normalization term per pixel:

float solidAngle = 2.0*Pi * (1.0 - sqrt(1.0 - (radius*radius)/(distance*distance)));
float angle = acos(1.0 - solidAngle/(2.0*Pi)); // this is w
float f = sin(Pi*angle/2.0);
//float f = sin(Pi/2.0 * asin(saturate(radius/distance))); // the three lines above collapse to this
float norm = saturate( (roughness*roughness) / (f*f) );
//The rest of the normalization term (1/Pi) is multiplied in later
This is completely different from the Epic presentation, but it actually looks like it's working correctly, and it also converges to 1.0 as the radius shrinks, which means small spheres act exactly the same as point lights. I need to sort out some ground-truth renders to compare against, though... I'm not quite sure whether I've accidentally created something wrong that just happens to look OK. Also, it involves two trig operations per pixel per light, which scares me...
Gifs of unnormalized sphere light vs this solution here: http://imgur.com/a/uuGoq

Can anyone help with my confusion above, or suggest any other references on creating cheap, physically plausible area lights?


https://www.shadertoy.com/view/4ss3Ws

Your GGX D does not appear to be right in general: http://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf

The formula given in the paper needs to be converted from angles to vectors before being used in shader code, and it can be rearranged quite a bit too, which is why it often looks different.

I have:
D = ( roughness / ( dot(N,H)^2 * (roughness^2 - 1) + 1 ) )^2
...and later: Lighting /= Pi;
But Brian Karis (author of the above presentation) has:
D = roughness^2 / ( Pi * ( dot(N,H)^2 * (roughness^2 - 1) + 1 )^2 )

I just had a moment of panic thinking that my D term was completely different from his, but they're both exactly equivalent. If you chuck either of them into Wolfram Alpha, it spits out the alternate form at the bottom:
[image: Wolfram Alpha's alternate form of the GGX D term]
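Spelled out, the equivalence is just the square being pushed through the fraction:

$$\frac{1}{\pi}\left(\frac{a}{(N \cdot H)^2\,(a^2-1)+1}\right)^{2} \;=\; \frac{a^2}{\pi\,\big((N \cdot H)^2\,(a^2-1)+1\big)^{2}}$$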
[edit] Disney use this version, which is also exactly equivalent to the above versions:
[image: Disney's form of the GGX distribution]

> https://www.shadertoy.com/view/4ss3Ws

Thanks!

Their normalization term seems to be completely arbitrary. It's connected to an arbitrary tweakable variable, and commented with "//not sure at all about this??"

The version that that's derived from (https://www.shadertoy.com/view/ldfGWs) uses this normalization term:

norm = saturate( radius / (distance * 2.0) + roughness^2 )^2

...but this doesn't seem to work for me (energy still isn't conserved), and I have no idea how they derived this formula from Brian's presentation.

The different shaped area lights are awesome in those demos though!

[edit] Here are some GIFs comparing my normalization term to the above formula:

http://imgur.com/a/bluzs

> The formula given in the paper needs to be converted from angles to vectors before being used in shader code, and it can be rearranged quite a bit too, which is why it often looks different.

OK, but the last time I checked GGX for normalization, I actually got 1. Have you perhaps forgotten to include the cosine term?


Yeah, I confused myself: when I originally switched over to GGX, I rearranged the equation (as above) to pull out the division by Pi. My diffuse light also has to be divided by Pi, so I moved that operation right to the end, after all the lights have been summed.

If I undo that optimization and put the Pi back into the GGX function, then it does integrate to 1 over the hemisphere, instead of Pi (so the normalization term is 1, not 1/Pi).
Edited my first post.
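For anyone following along, here's that un-optimized version written out as an HLSL function (a sketch matching the formula above; Pi is assumed to be defined elsewhere):

// GGX / Trowbridge-Reitz NDF, with the 1/Pi kept inside, so that
// integrating D(h)*dot(N,H) over the hemisphere gives exactly 1.
float D_GGX(float NoH, float roughness)
{
    float a = roughness;
    float d = NoH*NoH * (a*a - 1.0) + 1.0;
    return (a*a) / (Pi * d*d);
}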

I was plotting some values and noticed that the sin^2 term looked just like the smoothstep function, and it turns out that it's coincidentally very close:

[image: plot of the sin^2 term overlaid on smoothstep]

So I fitted the asin term to a polynomial, and ended up with a trig-free approximation, which looks almost identical:

float n = saturate(radius/distance);
float poly = 0.72216935842*n*n + 0.57534745225*n + 0.04360349982*saturate(n*10.0);
float norm = saturate( (roughness*roughness) / (3.0*poly*poly - 2.0*poly*poly*poly) ); // smoothstep

[image: comparison of the trig version and the polynomial approximation]

I don't know how expensive trig functions are in shaders these days, so it's a completely premature optimization, but it's nice to know I can remove those trig functions if I need to.

> The version that that's derived from (https://www.shadertoy.com/view/ldfGWs) uses this normalization term:
> norm = saturate( radius / (distance * 2.0) + roughness^2 )^2
> ...but this doesn't seem to work for me (energy still isn't conserved), and I have no idea how they derived this formula from Brian's presentation.

Actually, that formula is taken straight from the presentation, from the Specular D Modification section. Now I feel a bit stupid.
Except the above isn't quite right; according to the presentation, it should be:
norm = ( roughness / saturate( radius/(distance*2.0) + roughness ) )^2
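In code form, the way I read that section is (alphaPrime is my own name for the widened roughness):

// Widen the roughness by the light's angular radius, then scale the
// specular result down by the squared ratio of old to new roughness.
float alphaPrime = saturate(roughness + radius / (2.0 * distance));
float sphereNorm = (roughness / alphaPrime) * (roughness / alphaPrime);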

Ok... well, the results of that formula are looking better than my formula!

I'll look at the math some more and post some more GIFs later.

> I was plotting some values and noticed that the sin^2 term looked just like the smoothstep function, and it turns out that it's coincidentally very close

That's basically how the vegetation wind animation in Crysis was optimized: https://developer.nvidia.com/content/gpu-gems-3-chapter-16-vegetation-procedural-animation-and-shading-crysis

I always use x * x * (3.0 - 2.0 * x) instead of smoothstep, which I think is often more optimal, since smoothstep first clamps the input to the [0..1] range even if it's already in that range.
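i.e. something like this (smoothstepNoClamp is just my own name):

// smoothstep(0, 1, x) first does t = saturate(x), then returns t*t*(3.0 - 2.0*t).
// If x is already guaranteed to be in [0,1], the clamp can be skipped:
float smoothstepNoClamp(float x)
{
    return x * x * (3.0 - 2.0 * x);
}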

