SH directional lights, what am I missing?

We have a game here using a straightforward deferred lighting approach, but we'd like to get some lighting on our translucent objects. In an attempt to avoid recreating the horrible explosion of shader variants for every light combination, I've been trying to implement something similar to the technique Bungie described in their presentation on Destiny's lighting.

The idea is to collapse the light environment at various probe points into a spherical harmonic representation that the shader then uses to compute lighting. Currently it's all done on the CPU, but I've run into what seems to be a fundamental issue with projecting a directional light into SH.

After digging through all of the fundamental papers, everything seems to agree that the way to project a directional light into SH, convolved with the clamped cosine response, is:


// 'pi' is assumed defined elsewhere; SH holds four RGB coefficients
// (one for band 0, three for band 1).
void project_directional( float3* SH, float3 color, float3 dir )
{
   // band 0 convolved with the clamped cosine carries a weight of pi,
   // band 1 a weight of 2*pi/3
   SH[0] = 0.282095f * color * pi;
   SH[1] = -0.488603f * color * dir.y * (pi * 2/3);
   SH[2] = 0.488603f * color * dir.z * (pi * 2/3);
   SH[3] = -0.488603f * color * dir.x * (pi * 2/3);
}
 
float3 eval_normal( float3* SH, float3 dir )
{
   float3 result = 0;
 
   result = SH[0] * 0.282095f;
   result += SH[1] * -0.488603f * dir.y;
   result += SH[2] * 0.488603f * dir.z;
   result += SH[3] * -0.488603f * dir.x;
   return result;
}
 
// result is then scaled by diffuse

There's a normalization term or two I've left out, but the problem I keep running into, and that I haven't seen any decent way to avoid, is that ambient term in SH[0]. If I plug in a simple light pointing down Z, normals pointing directly at it or directly away from it behave reasonably, but a normal pointing down, say, the X axis will always be lit by at least 1/4 of the light color. It's produced a directional light that generates significant amounts of light at 90 degrees off-axis.
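To make that concrete, here's the arithmetic behind the 1/4 (a quick check assuming a unit white light along +Z and a normal along +X, so every band-1 term drops out):

#include <cstdio>

int main()
{
    const float pi = 3.14159265f;
    // Band 0: projected as 0.282095 * pi, then multiplied by the basis
    // constant 0.282095 again at evaluation time.
    float band0 = (0.282095f * pi) * 0.282095f;
    // Band 1: SH[1] and SH[3] are zero for a +Z light, and SH[2] is
    // evaluated against normal.z, which is zero for a +X normal.
    float band1 = 0.0f;
    printf("off-axis response = %f\n", band0 + band1); // prints ~0.25
    return 0;
}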

I'm not seeing how this could ever behave differently. I can get vaguely reasonable results if I ignore the ambient term while merging diffuse lights in, but that breaks down the moment I try summing two lights pointing in opposite directions. Expanding out to the 9-term quadratic form doesn't help much either.

I get the feeling I've missed some fundamental thing to trim down the off-axis directional light response, but I'll be damned if I can see where it would come from. Is this just a basic artifact of using a single light as a test case? Is this likely to behave better by keeping the main directional lights out, and just using the SH set to collapse point lights in as sphere lights or attenuated directionals? Have I just royally screwed up my understanding of how to project a directional light into SH?

The usual pile of papers and articles from SCEE, Tom Forsyth, Sebastien Lagarde, etc. has not helped. Someone had a random Shadertoy that looked like it worked better in posted screenshots, but actually running it produces results more like what I've seen.

You should read the recent thread that I posted (and the link in the first post), which deals with how to do this properly. My guess is that your light samples are not well distributed on the sphere, which causes weird behavior in the places where you don't have samples.

To get a better projection, you need to average the projection over many light samples (add them up, then divide by the number of samples). In my system I do this using simple Monte Carlo integration, generating uniform random samples that interpolate (via barycentric coordinates) between the nearest 3 original samples.

There are more sophisticated ways that might be faster (e.g. using Chebyshev integration points on the sphere).
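Here's a minimal sketch of that averaging step, assuming uniform random sphere samples and the 2-band basis from the first post; radiance() is a stand-in for however you sample your light environment:

#include <cstdlib>
#include <cmath>

struct float3 { float x, y, z; };

// Monte Carlo projection of a light environment into 2-band SH.
// 'radiance' is a hypothetical callback returning the incoming light
// (a scalar here for brevity) from a given unit direction.
void project_environment(float SH[4], float (*radiance)(float3), int num_samples)
{
    const float pi = 3.14159265f;
    for (int i = 0; i < 4; ++i)
        SH[i] = 0.0f;

    for (int n = 0; n < num_samples; ++n)
    {
        // Uniform random direction on the sphere.
        float z   = 1.0f - 2.0f * (rand() / (float)RAND_MAX);
        float phi = 2.0f * pi * (rand() / (float)RAND_MAX);
        float r   = sqrtf(fmaxf(0.0f, 1.0f - z * z));
        float3 dir = { r * cosf(phi), r * sinf(phi), z };

        float L = radiance(dir);
        SH[0] +=  0.282095f * L;
        SH[1] += -0.488603f * dir.y * L;
        SH[2] +=  0.488603f * dir.z * L;
        SH[3] += -0.488603f * dir.x * L;
    }

    // The uniform sphere pdf is 1/(4*pi): average the samples, then
    // multiply by the sphere's area.
    float weight = (4.0f * pi) / (float)num_samples;
    for (int i = 0; i < 4; ++i)
        SH[i] *= weight;

    // To match the cosine-convolved coefficients in the first post,
    // scale SH[0] by pi and SH[1..3] by 2*pi/3 after projecting.
}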

I'm by no means an expert with regard to SH, but I think what you are doing is correct. A directional light is basically described by a delta function, which you cannot reproduce with a finite number of SH bands. Intuitively speaking, the best you can do with just two bands is to set the 'vector' part (2nd band, index 1) to the direction of the delta function and use the constant SH term (1st band, index 0) to account for the clamping in the negative direction. This is basically the same thing you do to represent a clamped cosine lobe with SH (I'm not sure whether the weights are exactly the same, though). All you can therefore do to increase the quality is to increase the number of SH bands used. I think.

Tasty Texel is pretty much correct: SH can't exactly represent the signal you're trying to approximate, which in this case is a single clamped cosine lobe oriented in the direction of your directional light. If you go to page 8 of this paper you can see a plot of the 3rd/5th-order approximation vs. an actual clamped cosine lobe, and you'll see that even for the 3rd- and 5th-order case there can be significant error. For 2nd order the error will be worse, and that error can result in rather extreme artifacts for lights with very high intensities. An even worse problem with SH (in my opinion) is that you end up with a negative contribution in the direction opposite your directional light. This means that adding very bright directional lights can essentially "suck the light" out of the opposite side of the sphere, which can look really bad in practice.
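To put a number on that negative lobe, here's a quick check using the 2-band projection from the first post: a unit light along +Z, evaluated at a normal pointing along -Z:

#include <cstdio>

int main()
{
    const float pi = 3.14159265f;
    // Band 0 contributes the same +0.25 everywhere on the sphere.
    float band0 = (0.282095f * pi) * 0.282095f;
    // SH[2] = 0.488603 * (2*pi/3) for dir.z = 1, evaluated with n.z = -1.
    float band1 = (0.488603f * (2.0f * pi / 3.0f)) * (0.488603f * -1.0f);
    printf("back-pole irradiance = %f\n", band0 + band1); // prints ~-0.25
    return 0;
}

So a single unit-intensity light already drives the opposite pole to -0.25 of its color, which is exactly the light-sucking behavior described above.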

By the way, the code you've listed will actually give you the irradiance from your light source. If you're trying to use properly balanced diffuse and specular BRDFs, then you'll want to make sure that you multiply your irradiance by the Lambertian diffuse BRDF, which is DiffuseAlbedo / Pi. A lot of people will just multiply by the diffuse albedo, and end up with diffuse that's too bright by a factor of Pi.
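In code it's just one extra divide (a minimal sketch, assuming a float3 irradiance out of eval_normal and an albedo from the material):

struct float3 {
    float x, y, z;
    float3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float3 operator*(float3 b) const { return {x * b.x, y * b.y, z * b.z}; }
};

// Lambertian diffuse from SH irradiance: the 1/pi keeps diffuse
// correctly balanced against a normalized specular term.
float3 shade_diffuse(float3 irradiance, float3 albedo)
{
    const float pi = 3.14159265f;
    return irradiance * albedo * (1.0f / pi);
}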

EDIT: forgot the link!

I was afraid of that.

The divide by pi is in there on the real code side, I left out some of the normalization to get down to just the SH bits. The lighting model for this project is ridiculously ad-hoc, as we didn't get a real PBS approach set up in the engine until a few months into production. Another project is using a much more well behaved setup, but it has the advantage of still being in preproduction.

For this project the scenes are sparse space-scapes, with a strong directional light, and an absurd number of relatively small radius point lights for effects, and only about three layers of objects (ships, foreground, and background). I suppose a brute force iteration over the light list might do the job well enough, as there might not be enough of these around to justify a fancy approach.
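Something like this is what I have in mind for that brute-force pass (just a sketch; PointLight and the linear falloff are stand-ins for whatever the engine actually does):

#include <cmath>

struct float3 {
    float x, y, z;
    float3 operator-(float3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    float3 operator*(float s) const { return {x * s, y * s, z * s}; }
    void operator+=(float3 b) { x += b.x; y += b.y; z += b.z; }
};

// Hypothetical light description; substitute the engine's own.
struct PointLight { float3 position; float3 color; float radius; };

// Treat each in-range point light as an attenuated directional from the
// probe toward the light and sum the projections (same 2-band,
// cosine-convolved basis as in the first post).
void accumulate_point_lights(float3 SH[4], float3 probe_pos,
                             const PointLight* lights, int count)
{
    const float pi = 3.14159265f;
    for (int i = 0; i < count; ++i)
    {
        float3 d = lights[i].position - probe_pos;
        float dist = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
        if (dist <= 0.0f || dist >= lights[i].radius)
            continue;
        float3 dir = d * (1.0f / dist);
        float atten = 1.0f - dist / lights[i].radius; // placeholder falloff
        float3 c = lights[i].color * atten;
        SH[0] += c * (0.282095f * pi);
        SH[1] += c * (-0.488603f * dir.y * (2.0f * pi / 3.0f));
        SH[2] += c * ( 0.488603f * dir.z * (2.0f * pi / 3.0f));
        SH[3] += c * (-0.488603f * dir.x * (2.0f * pi / 3.0f));
    }
}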

Can you explain those layers better? Is it 2D?

It's a 3D scene, but with the view direction restricted to slightly off-axis, and camera motion restricted to a 2D plane.

The main area of play is about 400 units in front of the camera, with some near-field objects about 200 units beyond that which can accept shadows. Tons and tons of background objects lie far beyond that, and the far plane is set to around 100,000. It isn't particularly ideal.

That soup gets thrown at a deferred lighting renderer, which is all fine and great up until it needs to light things that don't write depth.
