Your preferred or desired BRDF?


If you are interested in a good overview of the semi-standard lighting models, take a look in the Lighting section of Programming Vertex...

Sorry for intruding on this thread. I have a question related to the "cook_torrance" shader shown at that link.


float NdotH = saturate( dot( normal, half_vector ) );

...
if( ROUGHNESS_LOOK_UP == roughness_mode )
{
    // texture coordinate is:
    float2 tc = { NdotH, roughness_value };

    // Remap the NdotH value to be 0.0-1.0
    // instead of -1.0..+1.0
    tc.x += 1.0f;
    tc.x /= 2.0f;

    // look up the coefficient from the texture:
    roughness = texRoughness.Sample( sampRoughness, tc );
}

See the author's comments in the code. Is this a bug? saturate already clamps the value to the 0.0 - 1.0 range, so the remap looks redundant.

This is indeed unnecessary, and it wouldn't be the first time I've seen a mistake or oversight on gpwiki. In any case, I think you can probably do a lot better than a Beckmann lookup texture: the Beckmann distribution is not expensive to calculate, and modern GPUs are limited by memory bandwidth, not instruction throughput. Lookup textures only make sense if you can use them to kill a lot of expensive instructions.
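For reference, here's a minimal GLSL-style sketch of evaluating the Beckmann distribution directly instead of reading it from a texture (the function and parameter names are mine, not from the article):

// Beckmann normal distribution function, evaluated directly.
// m is the Beckmann roughness (the article's roughness_value) and NdotH is
// the saturated dot product of the normal and the half vector.
float BeckmannD(float NdotH, float m)
{
    float PI = 3.14159265;
    float NdotH2 = NdotH * NdotH;
    float m2 = m * m;
    // exp(-tan^2(theta_h) / m^2) / (pi * m^2 * cos^4(theta_h))
    return exp((NdotH2 - 1.0) / (m2 * NdotH2)) / (PI * m2 * NdotH2 * NdotH2);
}

A handful of ALU instructions like this will usually beat a dependent texture fetch on current hardware.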

The article was actually part of a book project that got hosted here on GameDev.net initially. After some time and some significant stability issues with the server it was hosted on, the decision was made to move it to the gpwiki site. So please don't put the blame for one of our mistakes on the gpwiki guys!


I might be getting a bit off topic now... forgive me.

I went to bed last night with Helmholtz reciprocity on my mind -- apparently our physically based BRDFs should all obey this law: if you swap a light source and a camera, you'll measure the same ray of light in either configuration, or, in the case of our BRDFs, swapping L and V has no effect.

The thought experiment that cost me sleep was an optically-flat Lambertian diffuse plane (i.e. all microfacets are aligned with the normal, and all refracted light is uniformly dispersed over the upper hemisphere), with the two observation/lighting angles being directly overhead (0º from the normal) and nearly grazing (~90º from the normal).

When lit from above and viewed from the side, the majority of the light will be refracted into the surface and then diffused -- no matter where the camera is in the hemisphere, the surface will appear the same. The camera will receive a small percentage of the diffused light (which is the majority of the input light).

When viewed from above and lit from the side though, the majority of the light will reflect right off the surface, according to Fresnel! Only a very small fraction will be refracted, which is then diffused as above. The overhead camera won't receive any of the reflected light (which is the majority of the input), and instead only receives a small percentage of the diffused light (which itself is a small percentage of the input).
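To put rough numbers on that, here's a quick GLSL-style sketch of Schlick's approximation for a dielectric (the F0 value of roughly 0.04 is my assumption for a typical non-metal, not a number from this thread):

// Schlick's approximation to the Fresnel reflectance.
// cosTheta is the cosine of the angle between the light and the surface normal.
float FresnelSchlick(float cosTheta, float F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}

// Lit from overhead (cosTheta ~= 1): F ~= 0.04, so ~96% of the light is refracted and diffused.
// Lit from a near-grazing angle (cosTheta -> 0): F -> 1, so almost everything is reflected
// and only a sliver is refracted -- which is the asymmetry described above.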

Have I thought about this all wrong? Or does reciprocity really break down when diffusers and Fresnel's laws are combined?

Actually, it's a lot easier to convert it to anisotropic than that.

Wow, thanks! I notice that the math for that distribution is exactly equal to your previous GGX distribution when the two roughness parameters are equal too... does the original GGX paper define this aniso version?

I'm still going to need some kind of retro-reflection hack (or, alternative physical BRDF) in my game so I can boost the effect right up for certain bits of paint and signage and... actual retro-reflector devices (like you put on your bicycle). You're right that there is a bit inherently in this BRDF, but it's mostly only at a grazing angle which is lost to N.L.
A macro-scale retro-reflector like you put on your bike -- a collection of 45º angled "V" shaped mirrored facets -- will direct almost all of the incoming light back towards the incident ray when lit from overhead, but performs worse at glancing angles, and it's this kind of behaviour that I'd ideally like to be able to model.

Also, Chris_F, be careful of this:

On that note, BRDF Explorer's files spell "Ashikhmin" as "Ashikhman", and it's infecting me.

If there were absolutely no limits I would like to evaluate spatially-varying and measured bidirectional texture functions. They show up a lot in my inverse rendering research and the results can look very realistic. Storing them in wavelet format makes them somewhat tractable and convenient to work with, but the system requirements add up rather quickly.

I think BTFs are only ever necessary for acquiring a material's response. After you've acquired the data, you can cluster the material, decompose it into constituent parts with uniform responses, and create a texture mask to make it spatially varying. This has been done quite a bit when fitting spatially varying BRDFs to acquired materials.


Wow, thanks! I notice that the math for that distribution is exactly equal to your previous GGX distribution when the two roughness parameters are equal too... does the original GGX paper define this aniso version?

I found it here: http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf

I'm still going to need some kind of retro-reflection hack (or, alternative physical BRDF) in my game so I can boost the effect right up for certain bits of paint and signage and... actual retro-reflector devices (like you put on your bicycle). You're right that there is a bit inherently in this BRDF, but it's mostly only at a grazing angle which is lost to N.L.
A macro-scale retro-reflector like you put on your bike -- a collection of 45º angled "V" shaped mirrored facets -- will direct almost all of the incoming light back towards the incident ray when lit from overhead, but performs worse at glancing angles, and it's this kind of behaviour that I'd ideally like to be able to model.

Here is my own hack. I think it works similarly to yours. The assumption is that the retroreflectiveness decreases at glancing angles.


analytic

::begin parameters
color Diffuse 1 0 0
color Specular 1 1 1
float DiffuseScale 0 1 0.5
float SpecularScale 0 0.999 .028
float RoughnessX 0.005 2 0.2
float RoughnessY 0.005 2 0.2
float RetroReflection 0 1 0
bool isotropic 1
::end parameters

::begin shader

float saturate(float x) { return clamp(x,0,1); }

vec3 BRDF( vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y )
{
    float PI = 3.1415926535897932;
    vec3 Kd = Diffuse * DiffuseScale;
    vec3 Ks = Specular * SpecularScale;

    // Use the same roughness for both tangent directions when isotropic.
    float ax = RoughnessX;
    float ay = (isotropic) ? RoughnessX : RoughnessY;

    vec3 H = normalize(L + V);
    float NdotL = saturate(dot(N, L));
    float NdotV = dot(N, V);
    float NdotH = dot(N, H);
    float LdotH = dot(L, H);
    float LdotV = dot(L, V);
    float HdotX = dot(H, X);
    float HdotY = dot(H, Y);

    float ax_2 = ax * ax;
    float ay_2 = ay * ay;
    float a_2 = (ax_2 + ay_2) / 2; // average squared roughness, used by the isotropic terms
    float NdotL_2 = NdotL * NdotL;
    float NdotV_2 = NdotV * NdotV;
    float NdotH_2 = NdotH * NdotH;
    float LdotV_2 = LdotV * LdotV;
    float HdotX_2 = HdotX * HdotX;
    float HdotY_2 = HdotY * HdotY;
    float OneMinusNdotL_2 = 1.0 - NdotL_2;
    float OneMinusNdotV_2 = 1.0 - NdotV_2;

    // Energy left for the diffuse term after specular reflection.
    vec3 Fd = 1.0 - Ks;

    // Qualitative Oren-Nayar diffuse term.
    float gamma = saturate(dot(V - N * NdotV, L - N * NdotL));
    float A = 1.0 - 0.5 * (a_2 / (a_2 + 0.33));
    float B = 0.45 * (a_2 / (a_2 + 0.09));
    float C = sqrt(OneMinusNdotL_2 * OneMinusNdotV_2) / max(NdotL, NdotV);
    float OrenNayar = A + B * gamma * C;

    vec3 Rd = (Kd / PI) * Fd * OrenNayar;

    // Anisotropic GGX distribution for the normal (forward) specular lobe.
    float GGX_forward = 1.0 / (PI * ax * ay * pow(HdotX_2 / ax_2 + HdotY_2 / ay_2 + NdotH_2, 2.0));
    // Isotropic GGX-shaped lobe centred on the retro-reflection direction (maximal when V lines up with L).
    float GGX_retro = a_2 / (PI * pow(LdotV_2 * (a_2 - 1.0) + 1.0, 2.0));

    // Smith shadowing-masking for the forward lobe.
    float G1_1 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotL_2 / NdotL_2)));
    float G1_2 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotV_2 / NdotV_2)));
    float G_Smith = G1_1 * G1_2;

    // Ad-hoc attenuation that fades the retro lobe towards glancing angles.
    float G_Retro = NdotV_2 * NdotL;

    // Blend between the forward and retro-reflective lobes.
    float DG = mix(GGX_forward * G_Smith, GGX_retro * G_Retro, RetroReflection);

    // Fresnel with an exponential falloff instead of Schlick's (1 - LdotH)^5.
    vec3 Fs = Ks + Fd * exp(-6 * LdotH);

    vec3 Rs = (DG * Fs) / (4 * NdotV * NdotL);

    return Rd + Rs;
}

::end shader

I hope to eventually figure out how to model retroreflection in a more physically accurate way, and to explore whether the Smith G can be tailored to the anisotropic version of the distribution.

Have I thought about this all wrong? Or does reciprocity really break down when diffusers and Fresnel's laws are combined?

I think Helmholtz reciprocity doesn't apply to diffuse light at all, because diffuse light is really the same thing as subsurface scattering, just at such a small scale that one can approximate it by evaluating it at the entrance point. Diffuse light is the light scattering inside the surface, which is simply specular reflection happening thousands of times inside the material. Helmholtz reciprocity isn't supposed to be correct for this process, because it's not just a single reflection. But it works for all the little specular reflections inside the material and for the "macro" specular reflection on the surface.

And it works for Cook-Torrance:

The halfway vector is the same whether you calculate it from (L+V)/length(L+V) or (V+L)/length(V+L), and thus the microfacet distribution function returns the same value, since it relies on NDotH. Fresnel relies on LDotH, but that's the same as VDotH. And the geometric term is the product of a "sub" geometric term evaluated once for NDotV and once for NDotL, and since scalar multiplication is commutative, the result is the same whether you swap NDotV and NDotL or not. The same applies to the NDotL * NDotV in the denominator of Cook-Torrance.
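Writing that out with a separable, Smith-style G (a sketch of the argument, not a formula quoted from any particular paper), every factor is symmetric in $l$ and $v$ because $h = (l+v)/\lVert l+v \rVert$ is:

$$ f(l, v) = \frac{D(n \cdot h)\, F(l \cdot h)\, G_1(n \cdot l)\, G_1(n \cdot v)}{4\,(n \cdot l)(n \cdot v)} = f(v, l). $$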

I think Helmholtz reciprocity doesn't apply to diffuse light at all

That's where I get confused, because I've read in many sources (Wikipedia is the easiest to cite) that a physically plausible BRDF must obey reciprocity...
My Lambertian diffuse surface is physically plausible by this definition, until I try to maintain energy conservation by splitting the energy between diffuse/specular using Fresnel's law. This article points out the same thing -- maintaining energy conservation (making the diffuse darker when the spec is brighter) ruins the reciprocity.

So either everyone teaching that physically plausible BRDFs have to obey Helmholtz is wrong, or (Occam says: more likely) my method of conserving energy is just a rough approximation...
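For illustration, this is the kind of energy-conservation hack being described -- my own GLSL sketch, not code from this thread. The diffuse term is fed only the energy that the Fresnel reflection at the light angle leaves behind, and because that scale depends on NdotL but not NdotV, swapping L and V changes the result:

// Hypothetical "energy-conserving" Lambert: scale the diffuse term by whatever
// energy Fresnel (evaluated at the light angle) did not reflect away.
// This depends on NdotL but not NdotV, so it is not reciprocal.
vec3 DiffuseEnergyConserving(vec3 albedo, float NdotL, float F0)
{
    float F = F0 + (1.0 - F0) * pow(1.0 - NdotL, 5.0); // Schlick
    return albedo * (1.0 - F) / 3.14159265;
}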

The halfway vector is the same whether you calculate it from (L+V)/length(L+V) or (V+L)/length(V+L), and thus the microfacet distribution function returns the same value, since it relies on NDotH. Fresnel relies on LDotH...

In the case of my perfectly flat surface, the distribution term will always be 0 for all cases except where H==N, in which case the distribution will be 100%. I think this example makes a few edge cases more visible.

In all cases where N!=H, the fresnel term calculated from LDotH is meaningless as it ends up being multiplied by 0; these microfacets don't exist. But, say they did exist, this fresnel term tells us how much energy is reflected and refracted (then diffused/re-emitted) for the sub-set of the total microfacets that are oriented towards H. I guess this means that to find out the total amount of refracted energy (energy available to the diffuse term), we'd have to evaluate the fresnel term for every possible microfacet orientation weighted by probability.

In my example case, the flat plane, this is simple; there is only one microfacet orientation, so we don't have to bother doing any integration! We just use the fresnel term for LDotN, as 100% of the microfacets are oriented towards N. Suddenly, we've got a part of the BRDF that relies on L but not V... The calculation to find the total reflected vs refracted energy balance only depends on L, N and F(0º) and the surface roughness. Hence my dilemma -- how do we implement physically correct energy conservation in any BRDF without upsetting Helmholtz? Or, is Helmholtz more of a guideline than a rule in the realm of BRDFs? Or is there some important detail elsewhere in the rendering equation, outside of the BRDF, that I'm missing?

[edit]

Helmholtz reciprocity isn't supposed to be correct for this process, because it's not just a single reflection

Are you saying it only breaks down because we're dealing with a composition of many different waves/particles instead of a single one? e.g. if we could track the path of one photon, it would obey the law, but when we end up with multiple overlapping probability distributions, we're no longer tracking individual rays so reciprocity has become irrelevant?

I think Helmholtz reciprocity doesn't apply to diffuse light at all, because diffuse light is really the same thing as subsurface scattering, just at such a small scale that one can approximate it by evaluating it at the entrance point. Diffuse light is the light scattering inside the surface, which is simply specular reflection happening thousands of times inside the material. Helmholtz reciprocity isn't supposed to be correct for this process, because it's not just a single reflection. But it works for all the little specular reflections inside the material and for the "macro" specular reflection on the surface.

No. All non-magnetic, non-optically active, linear (i.e. ordinary) light-matter interaction must obey Helmholtz reciprocity, no matter how many reflections and scatterings the light undergoes. It also applies in any ordinary participating medium (e.g. subsurface scattering) but that is generally approximated as well.


we just use the fresnel term for LDotN

Not quite. If you assume that your surface is perfectly flat, your microfacet distribution, as you mentioned, returns 1 where N=H and 0 where N!=H, and thus your BRDF only returns a value other than 0, when N=H. But H is still calculated from L and V, so LDotH and VDotH are still the same (that's always the case, since H is the halfway vector). Your BRDF requires H to be N to return a non-zero value, and if that's the case, LDotH is the same as LDotN and VDotH is the same as VDotN. Since LDotH and VDotH are the same, LDotN and VDotN are the same as well, and thus Helmholtz reciprocity still applies in this case.

No. All non-magnetic, non-optically active, linear (i.e. ordinary) light-matter interaction must obey Helmholtz reciprocity, no matter how many reflections and scatterings the light undergoes. It also applies in any ordinary participating medium (e.g. subsurface scattering) but that is generally approximated as well.

Yeah, I don't know. Something is really weird in the situation Hodgman pointed out :(

Actually, I can kind of resolve my "flat plane paradox" with a bit of a reinterpretation of the law...
Let's say for simplicity that:
* when V is glancing and L is overhead: 0% of the light is reflected, meaning 100% is diffused. When it's diffused, 1% reaches the camera.
* when L is glancing and V is overhead: 99% of the light is reflected, meaning 1% is diffused. When it's diffused, 1% of that 1% reaches the camera.

The law only talks about particular rays of light, but when a ray hits a surface it splits in two! I can interpret the law as if "a ray" is one particular sequence of rays, picking one arbitrary exit ray at every surface interaction.

So, if I only track the light that takes one particular path at the boundary, the refracted path only:
* when L is glancing and V is overhead: 0.01% reaches the camera, but 99% of the input energy was invalidated as it took the wrong path. If I divide the measured light by the amount that took the "valid path", then I get 0.0001 / 0.01 == 1%, which is the same as when L and V are swapped.

I don't know if I'm tired and bending the rules to make garbage make sense, or if this is the way I should be interpreting reciprocity...
If this is true, then the specular term of a BRDF should obey the rule in isolation, and also the diffuse term in isolation... but when you add the two together, it's possible to break the rule, which maybe is OK, because you're adding output rays that followed different paths? If that's the case, then the real rule is that each particular path within a BRDF should obey reciprocity?

[edit] No, don't listen to me, this isn't how physics works. The fact that reciprocity doesn't hold for my above thought experiment shows that the thought experiment is based on flawed assumptions... [/edit]

Not quite. If you assume that your surface is perfectly flat, your microfacet distribution, as you mentioned, returns 1 where N=H and 0 where N!=H, and thus your BRDF only returns a value other than 0, when N=H.

Only the specular part of the BRDF has that behaviour -- the Lambertian part is just a constant number regardless of V/L/H (unless we try to make it energy conserving).
To try and make the Lambertian part energy conserving, I've got to find the amount of light that's refracted. Any physics textbook will tell you that you can use Fresnel's law for this, and it won't include the viewer's location at all! Only the light source and the normal are considered (in this example, our macro-normal and all our microfacet normals are equal, so we can ignore microfacet distributions).
The physically correct amount of energy to input into the Lambertian term in this example is based on the Fresnel term for NdotL.

Any physics textbook will tell you that you can use Fresnel's law for this, and it won't include the viewer's location at all!

I guess that's the problem. The thing is that it should include the view direction as well, since Fresnel's law also applies when light scatters back out of the surface towards the viewer, not just when it scatters into the surface.

Just take a look at section 5.3 in http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf

Their diffuse model applies their modified Fresnel factor twice: once for the view direction and once for the light direction.
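For reference, here's a minimal GLSL sketch of that diffuse model as I read section 5.3 of those notes (the function and parameter names are mine). Because the two Fresnel-like factors are evaluated symmetrically for L and V, swapping them leaves the result unchanged, so the term is reciprocal:

// Disney diffuse from the linked course notes (my transcription).
// roughness is the perceptual roughness; LdotH is cos(theta_d), the angle
// between the light direction and the half vector.
vec3 DisneyDiffuse(vec3 baseColor, float NdotL, float NdotV, float LdotH, float roughness)
{
    float FD90 = 0.5 + 2.0 * roughness * LdotH * LdotH;
    float FL = 1.0 + (FD90 - 1.0) * pow(1.0 - NdotL, 5.0);
    float FV = 1.0 + (FD90 - 1.0) * pow(1.0 - NdotV, 5.0);
    return (baseColor / 3.14159265) * FL * FV; // symmetric in NdotL and NdotV
}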

