

Your preferred or desired BRDF?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

51 replies to this topic

#21 MJP   Moderators   -  Reputation: 11790


Posted 24 February 2013 - 09:15 PM

Shaders do need NdotL.  Division by PI should be precalculated and sent to the shader; i.e., the light values the shader receives should already have been divided by PI.

 

I don't really agree with that as general-case advice; it only makes sense for BRDFs that have a 1/pi term in them, which isn't always the case.
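For the Lambertian case the trick is algebraically exact, which is easy to check numerically; a minimal sketch (function names are illustrative, not from any engine):

```python
import math

# Lambert's BRDF is albedo / pi, so the 1/pi can be folded into the
# light value on the CPU -- but only because the 1/pi term exists at all.
def shade_explicit(albedo, n_dot_l, light):
    # shader evaluates the full BRDF, including the 1/pi
    return (albedo / math.pi) * n_dot_l * light

def shade_prefolded(albedo, n_dot_l, prefolded_light):
    # shader never sees the 1/pi; it arrived baked into the light value
    return albedo * n_dot_l * prefolded_light

albedo, n_dot_l, light = 0.8, 0.7, 3.0
assert abs(shade_explicit(albedo, n_dot_l, light)
           - shade_prefolded(albedo, n_dot_l, light / math.pi)) < 1e-12
```

As MJP notes, a BRDF without a 1/pi factor gains nothing from this, and pre-dividing the light would then simply darken it incorrectly.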


Edited by MJP, 24 February 2013 - 09:20 PM.



#22 Hodgman   Moderators   -  Reputation: 31938


Posted 24 February 2013 - 10:42 PM

The features that I think I need so far are: Non-lambertian diffuse, IOR/F(0º)/spec-mask, anisotropic roughness, metal/non-metal, retro-reflectiveness and translucency.

I took Chris_F's BRDF containing Cook-Torrance/Schlick/GGX/Smith and Oren-Nayar, and re-implemented it with hacked support for anisotropy (based roughly on Ashikhmin-Shirley) and retroreflectivity.

If both roughness factors are equal (or if the isotropic bool is true), the distribution is the same as GGX; otherwise it behaves a bit like Ashikhmin-Shirley. Note that the distribution is no longer properly normalized when using anisotropic roughness.

The retro-reflectivity is a complete hack and won't be energy conserving. When the retro-reflectivity factor is set to 0.5, you get two specular lobes -- a regular reflected one, and one reflected back at the light source -- without any attempt to split the energy between them. At 0 you just get the regular specular lobe, and at 1 you only get the retro-reflected one.
 
BRDF Explorer file for anyone interested: http://pastebin.com/6ZpQGgpP
 
Thanks again for sending me on a weekend BRDF exploration quest, Chris and Promit!

Edited by Hodgman, 25 February 2013 - 12:49 AM.


#23 godmodder   Members   -  Reputation: 712


Posted 25 February 2013 - 04:09 AM

If there were absolutely no limits I would like to evaluate spatially-varying and measured bidirectional texture functions. They show up a lot in my inverse rendering research and the results can look very realistic. Storing them in wavelet format makes them somewhat tractable and convenient to work with, but the system requirements add up rather quickly.



#24 belfegor   Crossbones+   -  Reputation: 2723


Posted 25 February 2013 - 09:10 AM

If you are interested in a good overview of the semi-standard lighting models, take a look in the Lighting section of Programming Vertex...

 

 

Sorry for the intrusion on this thread. I have a question related to the "cook_torrance" shader shown at that link.

 

float NdotH = saturate( dot( normal, half_vector ) );
...
if( ROUGHNESS_LOOK_UP == roughness_mode )
{
    // texture coordinate is:
    float2 tc = { NdotH, roughness_value };

    // Remap the NdotH value to be 0.0-1.0
    // instead of -1.0..+1.0
    tc.x += 1.0f;
    tc.x /= 2.0f;

    // look up the coefficient from the texture:
    roughness = texRoughness.Sample( sampRoughness, tc );
}

 

See the author's comments in the code. Is this a bug? Saturate already clamps the value to the 0.0 to 1.0 range, so the remap seems redundant.
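A quick numeric sketch of the issue: after `saturate`, NdotH is already in [0, 1], so the "-1..+1 remap" squeezes all lookups into the top half of the texture.

```python
def saturate(x):
    # clamp to [0, 1], as in HLSL
    return min(max(x, 0.0), 1.0)

n_dot_h = saturate(-0.4)        # a back-facing half-vector clamps to 0
tc_x = (n_dot_h + 1.0) / 2.0    # the article's remap
assert n_dot_h == 0.0
assert tc_x == 0.5              # only [0.5, 1.0] of the lookup axis is ever addressed
```

So it is not a crash-level bug, but half the lookup texture is wasted and the roughness samples are skewed unless the texture was baked with the same remap in mind.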



#25 Chris_F   Members   -  Reputation: 2467


Posted 25 February 2013 - 12:29 PM

The features that I think I need so far are: Non-lambertian diffuse, IOR/F(0º)/spec-mask, anisotropic roughness, metal/non-metal, retro-reflectiveness and translucency.

I took Chris_F's BRDF containing Cook-Torrance/Schlick/GGX/Smith and Oren-Nayar, and re-implemented it with hacked support for anisotropy (based roughly on Ashikhmin-Shirley) and retroreflectivity.

If both roughness factors are equal (or if the isotropic bool is true), the distribution is the same as GGX; otherwise it behaves a bit like Ashikhmin-Shirley. Note that the distribution is no longer properly normalized when using anisotropic roughness.

The retro-reflectivity is a complete hack and won't be energy conserving. When the retro-reflectivity factor is set to 0.5, you get two specular lobes -- a regular reflected one, and one reflected back at the light source -- without any attempt to split the energy between them. At 0 you just get the regular specular lobe, and at 1 you only get the retro-reflected one.
 
BRDF Explorer file for anyone interested: http://pastebin.com/6ZpQGgpP
 
Thanks again for sending me on a weekend BRDF exploration quest, Chris and Promit!

 

Actually, it's a lot easier to convert it to anisotropic than that.

 

 

analytic

::begin parameters
color Diffuse 1 0 0
color Specular 1 1 1
float DiffuseScale 0 1 0.5
float SpecularScale 0 0.999 .028
float RoughnessX 0.005 2 0.2
float RoughnessY 0.005 2 0.2
bool isotropic 1
::end parameters

::begin shader

float saturate(float x) { return clamp(x,0,1); }

vec3 BRDF( vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y )
{
    float PI = 3.1415926535897932;
    vec3 Kd = Diffuse * DiffuseScale;
    vec3 Ks = Specular * SpecularScale;

    float ax = RoughnessX;
    float ay = (isotropic) ? RoughnessX : RoughnessY;

    vec3 H = normalize(L + V);
    float NdotL = saturate(dot(N, L));
    float NdotV = dot(N, V);
    float NdotH = dot(N, H);
    float LdotH = dot(L, H);
    float HdotX = dot(H, X);
    float HdotY = dot(H, Y);
    
    float ax_2 = ax * ax;
    float ay_2 = ay * ay;
    float a_2 = (ax_2 + ay_2) / 2;
    float NdotL_2 = NdotL * NdotL;
    float NdotV_2 = NdotV * NdotV;
    float NdotH_2 = NdotH * NdotH;
    float HdotX_2 = HdotX * HdotX;
    float HdotY_2 = HdotY * HdotY;
    float OneMinusNdotL_2 = 1.0 - NdotL_2;
    float OneMinusNdotV_2 = 1.0 - NdotV_2;

    vec3 Fd = 1.0 - Ks;

    float gamma = saturate(dot(V - N * NdotV, L - N * NdotL));
    float A = 1.0 - 0.5 * (a_2 / (a_2 + 0.33));
    float B = 0.45 * (a_2 / (a_2 + 0.09));
    float C = sqrt(OneMinusNdotL_2 * OneMinusNdotV_2) / max(NdotL, NdotV);
    float OrenNayar = A + B * gamma * C;

    vec3 Rd = (Kd / PI) * Fd * OrenNayar;

    float D = 1.0 / (PI * ax * ay * pow(HdotX_2 / ax_2 + HdotY_2 / ay_2 + NdotH_2, 2.0));

    vec3 Fs = Ks + Fd * exp(-6 * LdotH);

    float G1_1 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotL_2 / NdotL_2)));
    float G1_2 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotV_2 / NdotV_2)));
    float G = G1_1 * G1_2;

    vec3 Rs = (D * Fs * G) / (4 * NdotV * NdotL);

    return Rd + Rs;
}

::end shader

 

I left out the retro-reflection hack because this BRDF actually already exhibits a lot of retro-reflection. If you go to Image Slice in BRDF Explorer and look at the bottom edge, that is the retro part. This is probably a lot more physically plausible as far as retro-reflections go.
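The claim that this distribution collapses to plain GGX when the two roughness values match can be verified numerically. Since H is a unit vector and (X, Y, N) is an orthonormal frame, HdotX² + HdotY² + NdotH² = 1, and the anisotropic denominator reduces to the isotropic one. A sketch (function names are illustrative):

```python
import math

def ggx_iso(n_dot_h, a):
    # standard isotropic GGX normal distribution
    a2 = a * a
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def ggx_aniso(h_dot_x, h_dot_y, n_dot_h, ax, ay):
    # the anisotropic form used in the shader above
    d = (h_dot_x ** 2) / (ax * ax) + (h_dot_y ** 2) / (ay * ay) + n_dot_h ** 2
    return 1.0 / (math.pi * ax * ay * d * d)

# H expressed in the orthonormal tangent frame: squared components sum to 1
n_dot_h, h_dot_x = 0.9, 0.3
h_dot_y = math.sqrt(1.0 - n_dot_h ** 2 - h_dot_x ** 2)
a = 0.4
assert abs(ggx_iso(n_dot_h, a) - ggx_aniso(h_dot_x, h_dot_y, n_dot_h, a, a)) < 1e-9
```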


Edited by Chris_F, 25 February 2013 - 04:23 PM.


#26 Chris_F   Members   -  Reputation: 2467


Posted 25 February 2013 - 12:50 PM

If you are interested in a good overview of the semi-standard lighting models, take a look in the Lighting section of Programming Vertex...

 

 

Sorry for the intrusion on this thread. I have a question related to the "cook_torrance" shader shown at that link.

 

 

float NdotH = saturate( dot( normal, half_vector ) );
...
if( ROUGHNESS_LOOK_UP == roughness_mode )
{
    // texture coordinate is:
    float2 tc = { NdotH, roughness_value };

    // Remap the NdotH value to be 0.0-1.0
    // instead of -1.0..+1.0
    tc.x += 1.0f;
    tc.x /= 2.0f;

    // look up the coefficient from the texture:
    roughness = texRoughness.Sample( sampRoughness, tc );
}

 

See the author's comments in the code. Is this a bug? Saturate already clamps the value to the 0.0 to 1.0 range, so the remap seems redundant.

 

This is indeed unnecessary, and it wouldn't be the first time I saw a mistake or oversight on gpwiki. In any case, I think you can probably do a lot better than a Beckmann lookup texture. The Beckmann distribution is not expensive to calculate and modern GPUs are limited by memory bandwidth, not instruction throughput. Lookup textures only make sense if you can use them to kill a lot of expensive instructions.
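For reference, the Beckmann NDF is only a handful of ALU operations when evaluated directly, which is why a dependent texture fetch rarely pays off; a sketch (function name is illustrative):

```python
import math

def beckmann_d(n_dot_h, m):
    # Beckmann normal distribution, evaluated directly:
    # exp((NdotH^2 - 1) / (m^2 * NdotH^2)) / (pi * m^2 * NdotH^4)
    nh2 = n_dot_h * n_dot_h
    m2 = m * m
    return math.exp((nh2 - 1.0) / (m2 * nh2)) / (math.pi * m2 * nh2 * nh2)

# at NdotH = 1 the exponential vanishes, leaving 1 / (pi * m^2)
assert abs(beckmann_d(1.0, 0.5) - 1.0 / (math.pi * 0.25)) < 1e-9
```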



#27 belfegor   Crossbones+   -  Reputation: 2723


Posted 25 February 2013 - 01:26 PM

I didn't know about this article/book before (btw, thank you Jason Z), so over the past few days I started to experiment a little.

Long story short, I don't use Beckmann; I modified the lookup texture generation to use a Gaussian model (since it looked better to me), moved the Fresnel term into it, and set roughness_value to be constant for the current slice of the 3D/volume texture...



#28 L. Spiro   Crossbones+   -  Reputation: 14415


Posted 25 February 2013 - 04:16 PM

He was talking about dividing by NdotL.
BRDF Explorer will multiply by NdotL outside of the BRDF, so if you've included NdotL inside your BRDF (as we usually do in games), then you need to divide by NdotL at the end to cancel it out.

Oops. Misread.


Does anyone have a database of more BRDFs we can use with BRDF Explorer?

Also, Chris_F, be careful of this:


 

vec3 Rd = (Kd / PI) * Fd * OrenNayer;
 

 


Should be:

 

vec3 Rd = (Kd / PI) * Fd * OrenNayar;
 

 



L. Spiro


Edited by L. Spiro, 25 February 2013 - 04:21 PM.


#29 Chris_F   Members   -  Reputation: 2467


Posted 25 February 2013 - 04:23 PM

Also, Chris_F, be careful of this:

 

Actually, I fixed that in my version already; I forgot to edit my post. I misspell Oren-Nayar about 50% of the time.



#30 Jason Z   Crossbones+   -  Reputation: 5412


Posted 25 February 2013 - 05:15 PM

If you are interested in a good overview of the semi-standard lighting models, take a look in the Lighting section of Programming Vertex...

 

 

Sorry for the intrusion on this thread. I have a question related to the "cook_torrance" shader shown at that link.

 

 

float NdotH = saturate( dot( normal, half_vector ) );
...
if( ROUGHNESS_LOOK_UP == roughness_mode )
{
    // texture coordinate is:
    float2 tc = { NdotH, roughness_value };

    // Remap the NdotH value to be 0.0-1.0
    // instead of -1.0..+1.0
    tc.x += 1.0f;
    tc.x /= 2.0f;

    // look up the coefficient from the texture:
    roughness = texRoughness.Sample( sampRoughness, tc );
}

 

See the author's comments in the code. Is this a bug? Saturate already clamps the value to the 0.0 to 1.0 range, so the remap seems redundant.

Good catch.  I think it is safe to say that the shader code is more for instructional use than highly optimized, plus it is getting a bit old (Jack wrote those articles quite some time ago now)...  Even so, I still find myself loading up the page every now and then to brush up on a concept that I don't use too often.



#31 Jason Z   Crossbones+   -  Reputation: 5412


Posted 25 February 2013 - 05:17 PM

 

If you are interested in a good overview of the semi-standard lighting models, take a look in the Lighting section of Programming Vertex...

 

 

Sorry for the intrusion on this thread. I have a question related to the "cook_torrance" shader shown at that link.

 

 

float NdotH = saturate( dot( normal, half_vector ) );
...
if( ROUGHNESS_LOOK_UP == roughness_mode )
{
    // texture coordinate is:
    float2 tc = { NdotH, roughness_value };

    // Remap the NdotH value to be 0.0-1.0
    // instead of -1.0..+1.0
    tc.x += 1.0f;
    tc.x /= 2.0f;

    // look up the coefficient from the texture:
    roughness = texRoughness.Sample( sampRoughness, tc );
}

 

See the author's comments in the code. Is this a bug? Saturate already clamps the value to the 0.0 to 1.0 range, so the remap seems redundant.

 

This is indeed unnecessary, and it wouldn't be the first time I saw a mistake or oversight on gpwiki. In any case, I think you can probably do a lot better than a Beckmann lookup texture. The Beckmann distribution is not expensive to calculate and modern GPUs are limited by memory bandwidth, not instruction throughput. Lookup textures only make sense if you can use them to kill a lot of expensive instructions.

The article was actually part of a book project that got hosted here on GameDev.net initially.  After some time and some significant stability issues with the server it was hosted on, the decision was made to move it to the gpwiki site.  So please don't put the blame for one of our mistakes on the gpwiki guys!



#32 Hodgman   Moderators   -  Reputation: 31938


Posted 25 February 2013 - 07:21 PM

I might be getting a bit off topic now... forgive me!

I went to bed last night with Helmholtz reciprocity on my mind -- apparently our physically based BRDFs should all obey this law: if you swap a light source and a camera, you'll measure the same ray of light in either configuration -- or, in the case of our BRDFs, swapping L and V has no effect.

The thought experiment that cost me sleep was an optically flat Lambertian diffuse plane (i.e. all microfacets aligned with the normal, and all refracted light uniformly dispersed over the upper hemisphere), with the two observation/lighting angles being directly overhead (0º from the normal) and very nearly grazing (~90º).

 

When lit from above and viewed from the side, the majority of the light will be refracted into the surface and then diffused -- no matter where the camera is in the hemisphere, the surface will appear the same. The camera will receive a small percentage of the diffused light (which is the majority of the input light).

 

When viewed from above and lit from the side though, the majority of the light will reflect right off the surface, according to Fresnel! Only a very small fraction will be refracted, which is then diffused as above. The overhead camera won't receive any of the reflected light (which is the majority of the input), and instead only receives a small percentage of the diffused light (which itself is a small percentage of the input).

 

Have I thought about this all wrong? Or does reciprocity really break down when diffusers and Fresnel's laws are combined?

 

Actually, it's a lot easier to convert it to anisotropic than that.

Wow, thanks! I notice that the math for that distribution is exactly equal to your previous GGX distribution when the two roughness parameters are equal too... does the original GGX paper define this aniso version?
 
I'm still going to need some kind of retro-reflection hack (or, alternative physical BRDF) in my game so I can boost the effect right up for certain bits of paint and signage and... actual retro-reflector devices (like you put on your bicycle). You're right that there is a bit inherently in this BRDF, but it's mostly only at a grazing angle which is lost to N.L.
A macro-scale retro-reflector like you put on your bike -- a collection of 45º angled "V" shaped mirrored facets -- will direct almost all of the incoming light back towards the incident ray when lit from overhead, but performs worse at glancing angles, and it's this kind of behaviour that I'd ideally like to be able to model.
 

Also, Chris_F, be careful of this:

On that note, BRDF Explorer's files spell "Ashikhmin" as "Ashikhman", and it's infecting me.


Edited by Hodgman, 25 February 2013 - 07:36 PM.


#33 David Neubelt   Members   -  Reputation: 794


Posted 25 February 2013 - 08:47 PM

If there were absolutely no limits I would like to evaluate spatially-varying and measured bidirectional texture functions. They show up a lot in my inverse rendering research and the results can look very realistic. Storing them in wavelet format makes them somewhat tractable and convenient to work with, but the system requirements add up rather quickly.

 

I think BTFs are only ever necessary for acquiring a material's response. After you've acquired the data, you can cluster the material and decompose it into its constituent parts of uniform responses, then create a texture mask to make it spatially varying. This has been done quite a bit when fitting spatially varying BRDFs to acquired materials.
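The decomposition described above can be sketched in a few lines: a spatially varying response approximated as per-texel blend weights over a small set of uniform "basis" BRDF responses (all values here are hypothetical, purely for illustration):

```python
# uniform basis responses recovered by clustering the acquired data
basis_brdf_values = [0.2, 0.7, 1.1]

# texture-mask blend weights for one texel (sum to 1)
texel_weights = [0.5, 0.25, 0.25]

# the spatially varying value is just the weighted sum of the bases
svbrdf_value = sum(w * b for w, b in zip(texel_weights, basis_brdf_values))
assert abs(svbrdf_value - 0.55) < 1e-12
```

In practice each basis entry would be a full parametric BRDF lobe rather than a scalar, but the per-texel mixing works the same way.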


Graphics Programmer - Ready At Dawn Studios

#34 Chris_F   Members   -  Reputation: 2467


Posted 25 February 2013 - 10:58 PM

Wow, thanks! I notice that the math for that distribution is exactly equal to your previous GGX distribution when the two roughness parameters are equal too... does the original GGX paper define this aniso version?

 

I found it here: http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf

 

I'm still going to need some kind of retro-reflection hack (or, alternative physical BRDF) in my game so I can boost the effect right up for certain bits of paint and signage and... actual retro-reflector devices (like you put on your bicycle). You're right that there is a bit inherently in this BRDF, but it's mostly only at a grazing angle which is lost to N.L.
A macro-scale retro-reflector like you put on your bike -- a collection of 45º angled "V" shaped mirrored facets -- will direct almost all of the incoming light back towards the incident ray when lit from overhead, but performs worse at glancing angles, and it's this kind of behaviour that I'd ideally like to be able to model.

 

Here is my own hack. I think it works similarly to yours. The assumption is that the retroreflectiveness decreases at glancing angles.

 

analytic

::begin parameters
color Diffuse 1 0 0
color Specular 1 1 1
float DiffuseScale 0 1 0.5
float SpecularScale 0 0.999 .028
float RoughnessX 0.005 2 0.2
float RoughnessY 0.005 2 0.2
float RetroReflection 0 1 0
bool isotropic 1
::end parameters

::begin shader

float saturate(float x) { return clamp(x,0,1); }

vec3 BRDF( vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y )
{
    float PI = 3.1415926535897932;
    vec3 Kd = Diffuse * DiffuseScale;
    vec3 Ks = Specular * SpecularScale;

    float ax = RoughnessX;
    float ay = (isotropic) ? RoughnessX : RoughnessY;

    vec3 H = normalize(L + V);
    float NdotL = saturate(dot(N, L));
    float NdotV = dot(N, V);
    float NdotH = dot(N, H);
    float LdotH = dot(L, H);
    float LdotV = dot(L, V);
    float HdotX = dot(H, X);
    float HdotY = dot(H, Y);
    
    float ax_2 = ax * ax;
    float ay_2 = ay * ay;
    float a_2 = (ax_2 + ay_2) / 2;
    float NdotL_2 = NdotL * NdotL;
    float NdotV_2 = NdotV * NdotV;
    float NdotH_2 = NdotH * NdotH;
    float LdotV_2 = LdotV * LdotV;
    float HdotX_2 = HdotX * HdotX;
    float HdotY_2 = HdotY * HdotY;
    float OneMinusNdotL_2 = 1.0 - NdotL_2;
    float OneMinusNdotV_2 = 1.0 - NdotV_2;

    vec3 Fd = 1.0 - Ks;

    float gamma = saturate(dot(V - N * NdotV, L - N * NdotL));
    float A = 1.0 - 0.5 * (a_2 / (a_2 + 0.33));
    float B = 0.45 * (a_2 / (a_2 + 0.09));
    float C = sqrt(OneMinusNdotL_2 * OneMinusNdotV_2) / max(NdotL, NdotV);
    float OrenNayar = A + B * gamma * C;

    vec3 Rd = (Kd / PI) * Fd * OrenNayar;

    float GGX_forward = 1.0 / (PI * ax * ay * pow(HdotX_2 / ax_2 + HdotY_2 / ay_2 + NdotH_2, 2.0));
    float GGX_retro = a_2 / (PI * pow(LdotV_2 * (a_2 - 1.0) + 1.0, 2.0));

    float G1_1 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotL_2 / NdotL_2)));
    float G1_2 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotV_2 / NdotV_2)));
    float G_Smith = G1_1 * G1_2;

    float G_Retro = NdotV_2 * NdotL;

    float DG = mix(GGX_forward * G_Smith, GGX_retro * G_Retro, RetroReflection);

    vec3 Fs = Ks + Fd * exp(-6 * LdotH);

    vec3 Rs = (DG * Fs) / (4 * NdotV * NdotL);

    return Rd + Rs;
}

::end shader

 

I hope to figure out how to model retroreflection in a more physically accurate way, and to explore whether the Smith G term can be tailored to the anisotropic version of the distribution.



#35 CryZe   Members   -  Reputation: 768


Posted 26 February 2013 - 06:29 AM

Have I thought about this all wrong? Or does reciprocity really break down when diffusers and Fresnel's laws are combined?

I think Helmholtz reciprocity doesn't apply to diffuse light at all, because diffuse light is really subsurface scattering on such a small scale that one can approximate it by evaluating it at the entrance point. Diffuse light is light scattering inside the surface, which is simply specular reflection happening thousands of times within the material. Helmholtz reciprocity isn't supposed to hold for this process, because it's not just a single reflection. But it does hold for each of the little specular reflections inside the material and for the "macro" specular reflection at the surface.

 

And it works for Cook-Torrance:

 

The halfway vector is the same whether you calculate it as (L+V)/length(L+V) or (V+L)/length(V+L), so the microfacet distribution function returns the same value, since it relies only on NdotH. Fresnel relies on LdotH, but that's the same as VdotH. And the geometric term is the product of a "sub geometric term" evaluated once for NdotV and once for NdotL; since scalar multiplication is commutative, the result is the same whether you swap NdotV and NdotL or not. The same applies to the NdotL * NdotV in the denominator of Cook-Torrance.


Edited by CryZe, 26 February 2013 - 06:37 AM.


#36 Hodgman   Moderators   -  Reputation: 31938


Posted 26 February 2013 - 07:24 AM

I think Helmholtz reciprocity doesn't apply to diffuse light at all

That's where I get confused, because I've read in many sources (Wikipedia is the easiest to cite) that a physically plausible BRDF must obey reciprocity...
My Lambertian diffuse surface is physically plausible by this definition, until I try to maintain energy conservation by splitting the energy between diffuse/specular using Fresnel's law. This article points out the same thing -- by maintaining energy conservation (making the diffuse darker when the spec is brighter), you ruin the reciprocity.

So either everyone teaching that physically plausible BRDFs have to obey Helmholtz is wrong, or (Occam says: more likely) my method of conserving energy is just a rough approximation...
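The asymmetry described above is easy to exhibit numerically: scaling a Lambertian term by the transmitted fraction on the light side only makes the value change when L and V are swapped. A sketch (Schlick approximation assumed; the numbers are illustrative):

```python
import math

def fresnel_schlick(cos_theta, f0=0.04):
    # Schlick approximation to the Fresnel reflectance
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def diffuse(n_dot_l, n_dot_v, albedo=0.8):
    # "energy conservation" applied only on the light side:
    # scale the Lambert term by the fraction refracted at the light's angle.
    # n_dot_v is accepted but unused -- which is exactly the problem.
    return (albedo / math.pi) * (1.0 - fresnel_schlick(n_dot_l))

# Swapping the light and view cosines changes the result,
# so this diffuse term violates Helmholtz reciprocity:
assert diffuse(0.9, 0.1) != diffuse(0.1, 0.9)
```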

The halfway vector is the same whether you calculate it from (L+V)/length(L+V) or (V+L)/length(V+L), and thus the microfacet distribution function returns the same value, since it relies on NDotH. Fresnel relies on LDotH...

In the case of my perfectly flat surface, the distribution term will always be 0 for all cases except where H==N, in which case the distribution will be 100%. I think this example makes a few edge cases more visible.
 
In all cases where N != H, the Fresnel term calculated from LdotH is meaningless as it ends up being multiplied by 0; those microfacets don't exist. But say they did exist: this Fresnel term tells us how much energy is reflected and refracted (then diffused/re-emitted) for the subset of the total microfacets that are oriented towards H. I guess this means that to find the total amount of refracted energy (energy available to the diffuse term), we'd have to evaluate the Fresnel term for every possible microfacet orientation, weighted by probability.
 
In my example case, the flat plane, this is simple; there is only one microfacet orientation, so we don't have to do any integration! We just use the Fresnel term for LdotN, as 100% of the microfacets are oriented towards N. Suddenly, we've got a part of the BRDF that relies on L but not V... The calculation to find the total reflected vs refracted energy balance depends only on L, N, F(0º) and the surface roughness. Hence my dilemma -- how do we implement physically correct energy conservation in any BRDF without upsetting Helmholtz? Or is Helmholtz more of a guideline than a rule in the realm of BRDFs? Or is there some important detail elsewhere in the rendering equation, outside of the BRDF, that I'm missing?
 
[edit]
 

Helmholtz reciprocity isn't supposed to be correct for this process, because it's not just a single reflection

Are you saying it only breaks down because we're dealing with a composition of many different waves/particles instead of a single one? e.g. if we could track the path of one photon, it would obey the law, but when we end up with multiple overlapping probability distributions, we're no longer tracking individual rays so reciprocity has become irrelevant?

Edited by Hodgman, 26 February 2013 - 07:48 AM.


#37 Bacterius   Crossbones+   -  Reputation: 9300


Posted 26 February 2013 - 07:34 AM

I think Helmholtz reciprocity doesn't apply to diffuse light at all, because diffuse light is really subsurface scattering on such a small scale that one can approximate it by evaluating it at the entrance point. Diffuse light is light scattering inside the surface, which is simply specular reflection happening thousands of times within the material. Helmholtz reciprocity isn't supposed to hold for this process, because it's not just a single reflection. But it does hold for each of the little specular reflections inside the material and for the "macro" specular reflection at the surface.

 

No. All non-magnetic, non-optically active, linear (i.e. ordinary) light-matter interaction must obey Helmholtz reciprocity, no matter how many reflections and scatterings the light undergoes. It also applies in any ordinary participating medium (e.g. subsurface scattering) but that is generally approximated as well.




#38 CryZe   Members   -  Reputation: 768


Posted 26 February 2013 - 07:43 AM

we just use the fresnel term for LDotN

Not quite. If you assume that your surface is perfectly flat, your microfacet distribution, as you mentioned, returns 1 where N = H and 0 where N != H, and thus your BRDF only returns a non-zero value when N = H. But H is still calculated from L and V, so LdotH and VdotH are still equal (that's always the case, since H is the halfway vector). And because your BRDF requires H = N to return a non-zero value, in that case LdotH equals LdotN and VdotH equals VdotN. Since LdotH and VdotH are equal, LdotN and VdotN are equal as well, and thus Helmholtz reciprocity still applies in this case.

 

 

No. All non-magnetic, non-optically active, linear (i.e. ordinary) light-matter interaction must obey Helmholtz reciprocity, no matter how many reflections and scatterings the light undergoes. It also applies in any ordinary participating medium (e.g. subsurface scattering) but that is generally approximated as well.

Yeah, I don't know. Something is really weird in the situation Hodgman pointed out :(


Edited by CryZe, 26 February 2013 - 07:49 AM.


#39 Hodgman   Moderators   -  Reputation: 31938


Posted 26 February 2013 - 07:53 AM

Actually, I can kind of resolve my "flat plane paradox" with a bit of a reinterpretation of the law...
Let's say for simplicity that:
* when V is glancing and L is overhead: 0% of the light is reflected, meaning 100% is diffused. When it's diffused, 1% reaches the camera.
* when L is glancing and V is overhead: 99% of the light is reflected, meaning 1% is diffused. When it's diffused, 1% of that 1% reaches the camera.

 

The law only talks about particular rays of light, but when a ray hits a surface it splits in two! I can interpret the law as if "a ray" is one particular sequence of rays, picking one arbitrary exit ray at every surface interaction.

So, if I only track the light that takes one particular path at the boundary, the refracted path only:
* when L is glancing and V is overhead: 0.01% reaches the camera, but 99% of the input energy was invalidated as it took the wrong path. If I divide the measured light by the amount that took the "valid path", then I get 0.0001 / 0.01 == 1%, which is the same as when L and V are swapped.
 
I don't know if I'm tired and bending the rules to make garbage make sense, or if this is the way I should be interpreting reciprocity...
If this is true, then the specular term of a BRDF should obey the rule in isolation, and also the diffuse term in isolation... but when you add the two together, it's possible to break the rule, which maybe is OK, because you're adding output rays that followed different paths? If that's the case, then the real rule is that each particular path within a BRDF should obey reciprocity?

 

[edit] No, don't listen to me, this isn't how physics works. The fact that reciprocity doesn't hold for my above thought experiment shows that the thought experiment is based on flawed assumptions... [/edit]

 

Not quite. If you assume that your surface is perfectly flat, your microfacet distribution, as you mentioned, returns 1 where N = H and 0 where N != H, and thus your BRDF only returns a non-zero value when N = H.

Only the specular part of the BRDF has that behaviour -- the Lambertian part is just a constant number regardless of V/L/H (unless we try to make it energy conserving).
To try and make the Lambertian part energy conserving, I've got to find the amount of light that's refracted. Any physics textbook will tell you that you can use Fresnel's law for this, and it won't include the viewer's location at all! Only the light source and the normal are considered (in this example, our macro-normal and all our microfacet normals are equal, so we can ignore microfacet distributions).
The physically correct amount of energy to input into the Lambertian term in this example, is based on the Fresnel term for NdotL.


Edited by Hodgman, 27 February 2013 - 07:13 PM.


#40 CryZe   Members   -  Reputation: 768

Like
0Likes
Like

Posted 26 February 2013 - 08:02 AM

Any physics textbook will tell you that you can use Fresnel's law for this, and it won't include the viewer's location at all!

I guess that's the problem. It should include the view direction as well, since Fresnel's law applies when light scatters out of the surface toward the viewer, not just when light scatters into the surface.

 

Just take a look at section 5.3 in http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf

Their diffuse model applies their modified Fresnel term twice: once for the view direction and once for the light direction.
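A sketch of that idea, paraphrasing section 5.3 of the linked notes (the exact constants there may differ; this is an illustrative reconstruction): evaluating the Schlick-style factor once per direction makes the diffuse term symmetric in L and V again.

```python
import math

def disney_like_diffuse(n_dot_l, n_dot_v, l_dot_h, roughness, albedo=0.8):
    # Disney-style diffuse: a Schlick factor evaluated for both the
    # light and view cosines, so swapping them leaves the value unchanged.
    fd90 = 0.5 + 2.0 * roughness * l_dot_h * l_dot_h
    def fd(cos_theta):
        return 1.0 + (fd90 - 1.0) * (1.0 - cos_theta) ** 5
    return (albedo / math.pi) * fd(n_dot_l) * fd(n_dot_v)

# LdotH is already symmetric, so swapping NdotL and NdotV changes nothing:
a = disney_like_diffuse(0.9, 0.1, 0.5, 0.4)
b = disney_like_diffuse(0.1, 0.9, 0.5, 0.4)
assert abs(a - b) < 1e-12
```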





