"Rough" material that is compatible with color-only lightmaps?

12 comments, last by cowsarenotevil 10 years, 9 months ago

So let's say I am storing the diffuse lighting in my scene with lightmaps, which store the amount of light that enters a given fragment, but no directional information.

Using only this information, I'd really like to be able to implement a convincing "rough" shader, such as Oren-Nayar, for which the apparent luminosity depends on the viewing angle.

However, a cursory look at implementations of Oren-Nayar reflectance with a single point light shows that they all seem to have terms involving the light direction. Obviously I'd expect to see something like "dot(normal, lightdirection)" (which would be perfectly fine as the lightmap already stores the sum of all such terms); the problem is that I don't have access to the light direction (or light directions, as the case may be) on its own.

My question, then, is whether it is possible to implement Oren-Nayar (or, if not any such similar material like Minnaert, etc.) in a way that does not require this directional information. Can anyone provide any tips? My fallback solution is just to play with the shaders to see if I can get the term I don't like to go away, but it seems like this might be a common enough question that someone has already done this.

Thanks much!

-~-The Cow of Darkness-~-

No, it's not possible to remove the light direction from a BRDF that requires it without breaking its appearance.

In these cases, a lot of engines use lighting baked under multiple basis vectors, such as RNM or SH maps; there is also directional lightmapping, where you bake the direction to the most dominant light per pixel.


No, it's not possible to remove the light direction from a BRDF that requires it without breaking its appearance.

Yeah, I realize that. My question, again, is which BRDFs those are, and whether there are any that have the qualitative properties I'm looking for, where the "amount" of light received (represented for a single white directional light as dot(normal, lightdirection)) is sufficient.

It's obvious that there are materials where this is insufficient, but I still suspect (hope) that there is a material that meets my needs, because a) the materials I'm trying to mimic don't "look shiny" and b) the key qualitative difference between them and ideal Lambert shading seems to be that the amount of light emitted toward the viewer is reduced when the surface is angled toward the viewer and increased when it is angled away.

I think maybe you're saying that Oren-Nayar, in particular, cannot be implemented without having the light direction. Is that what you mean, and if so, what about the various "Minnaert" shaders, etc.?

You might also be saying that the effect I'm looking for will never be possible (maybe because of conservation of energy, etc.), at least in a reasonably physically-accurate way. If that's the case, can you provide some more insight so that I can assess exactly what is and isn't possible? On a basic level, it seems that I could approximate what I'm looking for with some kind of "rim light" shader that is blended (multiplied) with the diffuse value.


In these cases, a lot of engines use lighting baked under multiple basis vectors, such as RNM or SH maps; there is also directional lightmapping, where you bake the direction to the most dominant light per pixel.

I'm aware of that also, but that's also not what I'm looking for at this time.

-~-The Cow of Darkness-~-

Short answer: No such BRDF exists.

The effect you're looking for has more in common with the Phong model specular term than its diffuse one. The information you're storing is for the diffuse term (sum of N dot L). The usefulness of that information hinges on the assumption that diffuse reflectance is equal in all directions (Lambertian surface). Rough surface models do away with that assumption.

This isn't to say you can't try to fake the effect: Treat the stored value as a maximum and modulate it based on view direction, but even then the modulation requires some light direction to compare with the view direction, or else you're back at a Lambertian model. This means you either assume all light comes from one place (might work outdoors, will not work elsewhere), or you store a table of prominent light directions for regions of your scene. You don't seem to want to do that, however, so I'm thinking you're out of luck.

In summary: To model a change in reflectance as viewing angle changes, an incident light direction is required to compare to the viewing direction.
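If you did go the directional-lightmapping route mentioned earlier, the fake could be sketched like this, in the same HLSL-style pseudocode as the rest of the thread. All sampler and helper names here are hypothetical, and this is a sketch under those assumptions, not a definitive implementation:

```hlsl
// Assumes an extra bake (e.g. a directional lightmap) that stores, per texel,
// an approximate direction to the dominant light. SampleLightmap,
// SampleDirectionalMap, and OrenNayarTerm are made-up helpers.
float3 bakedLight  = SampleLightmap(uv);        // color-only bake: sum of N.L terms
float3 dominantDir = SampleDirectionalMap(uv);  // approximate incident light direction

// Treat the baked value as a maximum and attenuate it with a view-dependent
// factor computed against the dominant direction (in [0, 1]).
float  viewFactor = OrenNayarTerm(normal, dominantDir, viewDir, roughness);
float3 diffuse    = albedo * bakedLight * viewFactor;
```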

I think this post is going to come across as stubborn and needlessly probing, but that's really only because I'm still a bit confused as to what exactly is wrong with what I'm trying to do. I do understand that what I'm hoping exists probably just doesn't, but I still feel like I could gain some additional insight into how close I can expect to come.

Treat the stored value as a maximum and modulate it based on view direction, but even then the modulation requires some light direction to compare with the view direction, or else you're back at a Lambertian model.

(...)

In summary: To model a change in reflectance as viewing angle changes, an incident light direction is required to compare to the viewing direction.

I'm not really sure why this is (and what do you mean by "or else you're back at a Lambertian model"). It seems to me that a Lambertian, ideally-diffuse surface implicitly assumes that a) the apparent luminance is the same from all viewing angles (that is, light leaves the surface according to the cosine law with respect to the viewing angle) and b) that this luminance is determined according to the cosine law with respect to the light angle. There's not really anything explicitly preventing me from violating a) without taking into account the light angle.

The assertion that "no such BRDF exists" is actually kind of trivial to refute: like I said before, I can make a BRDF that just assumes there is some kind of rim light in the background that moves with the camera. It wouldn't be physically plausible, and the result wouldn't vary based on all of the information input into the BRDF (but so what? A constant function is still a function), but it would still be a BRDF. My question, I guess, is why am I doomed to, at best, "try and fake" what I'm looking for (even just successfully faking it is all I'm hoping for)?

On that note, can you elaborate on what you mean by "has more in common with the Phong model specular term than its diffuse one"? The Wikipedia page on Oren-Nayar has a table at the end that contrasts it with Torrance-Sparrow specular, which seems to suggest otherwise, and, like I mentioned, it doesn't "look specular" (which I know is a stupid thing to say).

The reason I was inspired to ask this question in the first place was that when I was playing with the Oren-Nayar shader in Blender, I initially assumed it didn't take the viewing angle into account at all -- that is, that light left the surface according to the cosine law, but that the amount of light that left obeyed a non-Lambertian law (which presumably makes no more sense than what I'm hoping to find). I fairly quickly realized that that wasn't the case (and also that it was a lot more similar to Blender's Minnaert shader than I had thought, which I'd in turn thought was the elusive dependent-on-viewing-angle-but-not-"specular" thing that I'm now seeking and which probably doesn't really exist). That led to my confusion: if it depends on both the viewing angle and the light angle (which, as you suggest, makes it more like a non-mirror "specular" term), why is it almost invariably referred to as "diffuse," in contrast to "rough specular"?

-~-The Cow of Darkness-~-

I'm not really sure why this is (and what do you mean by "or else you're back at a Lambertian model"). It seems to me that a Lambertian, ideally-diffuse surface implicitly assumes that a) the apparent luminance is the same from all viewing angles (that is, light leaves the surface according to the cosine law with respect to the viewing angle) and b) that this luminance is determined according to the cosine law with respect to the light angle. There's not really anything explicitly preventing me from violating a) without taking into account the light angle.


Perhaps I can answer this one. If you don't make the distribution of light leaving the surface uniform, how do you determine what directions get more light than others?

I guess you could send more light along the surface normal direction and less in more tangential directions, or the other way around: But this would probably just look weird and unphysical.

What you could try is computing a Fresnel term from the normal and view direction and multiplying the baked light by the inverse of that value. This does not look good at all for rough materials, so you damp it using roughness.


float3 bakedLight = ...

float  ambientFresnelTerm = pow(1.0 - nDotV, 5.0);
float  damp = 1.0 - roughness; // the ambient Fresnel term needs to be toned down for rough materials; maybe not the best damping factor
float3 ambientFresnel = specularColor + (1.0 - specularColor) * ambientFresnelTerm * damp;
float3 ambientDiffuse = (1.0 - ambientFresnel) * albedo * bakedLight;

Maybe you can get more ideas from seblagarde's blog, where he talks about physically based shading with ambient cube maps, which also needs some hacks to look right. http://seblagarde.wordpress.com/2011/08/17/hello-world/

I'm not really sure why this is (and what do you mean by "or else you're back at a Lambertian model"). It seems to me that a Lambertian, ideally-diffuse surface implicitly assumes that a) the apparent luminance is the same from all viewing angles (that is, light leaves the surface according to the cosine law with respect to the viewing angle) and b) that this luminance is determined according to the cosine law with respect to the light angle. There's not really anything explicitly preventing me from violating a) without taking into account the light angle.


Perhaps I can answer this one. If you don't make the distribution of light leaving the surface uniform, how do you determine what directions get more light than others?

I guess you could send more light along the surface normal direction and less in more tangential directions, or the other way around: But this would probably just look weird and unphysical.

Actually, "send [less] light along the surface normal direction and [more] in tangential directions" is exactly what I'm looking for, and it is also something that's present in both the Oren-Nayar and Minnaert materials that I'm trying to emulate.

If you don't believe me and you have Blender, open it up and play with a Minnaert material; in fact, with the "darkness" parameter set to 2, the result looks pretty similar to taking the normal Lambertian result, and then multiplying it with 1-dot(normal, viewdirection) and it definitely doesn't resemble "specular" light at least in the colloquial sense.
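For reference, here is a hedged sketch of a Minnaert-style diffuse term in the same HLSL-style pseudocode used elsewhere in the thread. The variable names are made up, and Blender's exact implementation may differ from this:

```hlsl
// Hedged sketch of a Minnaert-style diffuse term. For darkness > 1,
// Blender appears to compute something roughly like
//   NdotL * pow(1 - NdotV, darkness - 1),
// which with darkness = 2 reduces to the Lambert * (1 - dot(N, V))
// product described above: darker facing the viewer, brighter at
// grazing angles, with no conventional "specular" highlight.
float NdotL = saturate(dot(normal, lightDir));
float NdotV = saturate(dot(normal, viewDir));

float minnaertTerm = NdotL * pow(1.0001 - NdotV, darkness - 1.0);
float3 diffuse = albedo * lightColor * minnaertTerm;
```

Note that this still needs the light direction for the NdotL factor, which is the crux of the whole thread; only the (1 - NdotV) part is view-dependent.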

-~-The Cow of Darkness-~-

What you could try is computing a Fresnel term from the normal and view direction and multiplying the baked light by the inverse of that value. This does not look good at all for rough materials, so you damp it using roughness.


float3 bakedLight = ...

float  ambientFresnelTerm = pow(1.0 - nDotV, 5.0);
float  damp = 1.0 - roughness; // the ambient Fresnel term needs to be toned down for rough materials; maybe not the best damping factor
float3 ambientFresnel = specularColor + (1.0 - specularColor) * ambientFresnelTerm * damp;
float3 ambientDiffuse = (1.0 - ambientFresnel) * albedo * bakedLight;

Maybe you can get more ideas from seblagarde's blog, where he talks about physically based shading with ambient cube maps, which also needs some hacks to look right. http://seblagarde.wordpress.com/2011/08/17/hello-world/

Yeah, this seems to be moving in the direction that I'm looking for (in fact I coincidentally even wrote "multiplying it with 1-dot(normal, viewdirection)" before I saw your post). Thanks!

And thanks to everyone else who has replied so far, even if I'm still a bit confused.

-~-The Cow of Darkness-~-

One thing you could do (and I would recommend this if it is feasible) would be to calculate direct lighting dynamically (that way you could use an Oren-Nayar BRDF) and then, for indirect lighting, use low-resolution light maps that contain indirect light only. Alternatively, you could use directional light maps or something like H-basis coefficients to get better indirect lighting. My intuition is that for the indirect light, Oren-Nayar wouldn't make too big of a difference.
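That hybrid could be sketched like this, using the widely known Oren-Nayar qualitative approximation for the dynamic direct light and a plain Lambertian term for the baked indirect light. All names (SampleIndirectLightmap in particular) are hypothetical, and this is a sketch under those assumptions, not a definitive implementation:

```hlsl
// Oren-Nayar qualitative approximation: the roughness sigma enters
// through the standard A and B coefficients.
float sigma2 = roughness * roughness;
float A = 1.0 - 0.5 * sigma2 / (sigma2 + 0.33);
float B = 0.45 * sigma2 / (sigma2 + 0.09);

float NdotL  = saturate(dot(normal, lightDir));
float NdotV  = saturate(dot(normal, viewDir));
float thetaI = acos(NdotL);
float thetaR = acos(NdotV);
float alpha  = max(thetaI, thetaR);
float beta   = min(thetaI, thetaR);

// Azimuthal term: cosine of the angle between light and view directions
// projected onto the tangent plane (unguarded against zero-length
// projections, since this is only a sketch).
float3 lightProj  = normalize(lightDir - normal * NdotL);
float3 viewProj   = normalize(viewDir  - normal * NdotV);
float  cosPhiDiff = max(0.0, dot(lightProj, viewProj));

float orenNayar = NdotL * (A + B * cosPhiDiff * sin(alpha) * tan(beta));

// Direct term uses the full BRDF; the indirect-only lightmap is treated
// as Lambertian, per the intuition that Oren-Nayar matters less there.
float3 direct   = albedo * lightColor * orenNayar;
float3 indirect = albedo * SampleIndirectLightmap(uv);
float3 total    = direct + indirect;
```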

