
Normalizing the Fresnel Equation



#1 allingm   Members   -  Reputation: 417


Posted 05 April 2013 - 04:58 PM

So, you are supposed to normalize a BRDF so that it integrates to 1 over the hemisphere. It seems that everybody goes out of their way to normalize the NDF portion; however, I was wondering about the other portions. Wouldn't it make sense to normalize the whole equation, including the Fresnel term, for example? I did some Google searching and came across this:

 

http://seblagarde.wordpress.com/2011/08/17/hello-world/

"When working with microfacet BRDFs, normalize only microfacet normal distribution function (NDF)"

 

…but then I ask myself: why? The writer doesn't seem to give any explanation. Does anybody know?
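
As a toy illustration of what "integrates to 1" means here, a minimal sketch (my own, not from the linked post): a Lambertian BRDF of albedo/pi, weighted by cos(theta) and integrated over the hemisphere, gives back exactly the albedo, so a white surface (albedo 1) reflects all of its incident energy and no more.

```python
import numpy as np

def hemisphere_integral(g, n=100_000):
    # Midpoint quadrature of 2*pi * integral_0^{pi/2} g(theta) * sin(theta) dtheta,
    # i.e. the integral of g over the hemisphere's solid angle.
    dtheta = (np.pi / 2) / n
    theta = (np.arange(n) + 0.5) * dtheta
    return 2.0 * np.pi * np.sum(g(theta) * np.sin(theta)) * dtheta

albedo = 0.8
# Lambertian BRDF is albedo / pi; weighting by cos(theta) and integrating gives back the albedo
print(hemisphere_integral(lambda t: (albedo / np.pi) * np.cos(t)))   # ~0.8
```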




#2 Chris_F   Members   -  Reputation: 1938


Posted 05 April 2013 - 07:18 PM

Fresnel doesn't add or remove energy, it simply controls the ratio of refraction to reflection.



#3 Bacterius   Crossbones+   -  Reputation: 8157


Posted 05 April 2013 - 08:57 PM

Fresnel doesn't add or remove energy, it simply controls the ratio of refraction to reflection.

 

Though when considering opaque materials, the refracted part is typically considered absorbed (and it is, to a first-order approximation). The BRDF doesn't have to integrate to exactly 1 over the hemisphere; it just can't integrate to more than 1 (or less than zero, obviously). The Fresnel equations are already normalized, as Chris_F notes, being physically based and all, so you don't need to worry about it.
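
To see concretely that Fresnel only splits energy rather than adding any, here is a minimal sketch (my own, not from this post) that evaluates the dielectric Fresnel equations for an air-to-glass interface and checks that reflectance plus transmittance is 1 at every angle; the function name and the indices of refraction are just assumptions for illustration:

```python
import numpy as np

def fresnel_dielectric(theta_i, n1=1.0, n2=1.5):
    """Unpolarized reflectance R and transmittance T at a dielectric boundary."""
    cos_i = np.cos(theta_i)
    sin_t = n1 / n2 * np.sin(theta_i)          # Snell's law (no TIR since n1 < n2)
    cos_t = np.sqrt(1.0 - sin_t * sin_t)

    # Amplitude reflection coefficients for s- and p-polarization
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    R = 0.5 * (r_s * r_s + r_p * r_p)

    # Transmittance accounts for the change in beam cross-section and medium
    t_s = 2.0 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)
    t_p = 2.0 * n1 * cos_i / (n2 * cos_i + n1 * cos_t)
    T = 0.5 * (n2 * cos_t) / (n1 * cos_i) * (t_s * t_s + t_p * t_p)
    return R, T

for deg in (0, 30, 60, 85):
    R, T = fresnel_dielectric(np.deg2rad(deg))
    print(f"{deg:2d} deg: R = {R:.4f}, T = {T:.4f}, R + T = {R + T:.4f}")
```

Schlick's approximation inherits this property trivially, since it only ever outputs a reflectance between F(0) and 1.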


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#4 MJP   Moderators   -  Reputation: 10226


Posted 06 April 2013 - 12:21 AM

It's not something you need to normalize on its own; however, if you're using a Fresnel term then you should definitely include it in your integral when verifying that your full BRDF is energy-conserving.



#5 allingm   Members   -  Reputation: 417


Posted 06 April 2013 - 03:44 PM

Well, if I normalize Schlick's approximation I get:

 

http://www.wolframalpha.com/input/?i=solve%281+%3D+c+*+integrate%28s+%2B+%281-s%29%281-cos%28x%29%29^5*sin%28x%29%2C+x%2C+0%2C+pi%2F2%2C+y%2C+0%2C+2*pi%29%2C+c%29

 

and combining with Schlick's approximation I get:

 

http://www.wolframalpha.com/input/?i=3%2F%283+*+pi^2*s+-+pi+*+s+%2B+pi%29+*+%28s+%2B+%281-s%29*%281+-+cos%28x%29%29^5%29

 

but maybe this doesn't make sense. I'm thinking it doesn't have to equal 1 exactly, but if I integrate it over the hemisphere I get:

 

http://www.wolframalpha.com/input/?i=integrate%282+*pi+*%28s+%2B+%281-s%29%281-cos%28x%29%29^5*sin%28x%29%29%2C+x%2C+0%2C+pi%2F2%29

 

and if F(0) is 0 we get 1, and if F(0) is 1 we get:

 

http://www.wolframalpha.com/input/?i=pi^2+-+1%2F3+*+pi++%2B+1

 

Currently I've been looking at GGX, and the GGX term itself is already normalized, but the GGX geometry term may or may not be normalized. I have no way to verify this without shelling out money for Mathematica. So, I was hoping to trust the geometry term and normalize the Fresnel.
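
A free way to sanity-check this kind of integral, instead of Mathematica, is a simple numerical quadrature over the hemisphere. Below is a minimal sketch (my own code; the function names and roughness values are assumptions) that verifies the GGX NDF normalization, i.e. that D(h) weighted by (n·h) integrates to 1 over the hemisphere; the same harness can be pointed at the Schlick or geometry terms from the links above:

```python
import numpy as np

def ggx_d(cos_h, alpha):
    # GGX / Trowbridge-Reitz normal distribution function
    a2 = alpha * alpha
    d = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * d * d)

def hemisphere_integral(g, n=200_000):
    # Midpoint quadrature of 2*pi * integral_0^{pi/2} g(theta) * sin(theta) dtheta,
    # i.e. the integral of g over the hemisphere's solid angle.
    dtheta = (np.pi / 2) / n
    theta = (np.arange(n) + 0.5) * dtheta
    return 2.0 * np.pi * np.sum(g(theta) * np.sin(theta)) * dtheta

# NDF normalization check: integral of D(h) * (n.h) over the hemisphere should be 1
for alpha in (0.1, 0.3, 0.8):
    total = hemisphere_integral(lambda t: ggx_d(np.cos(t), alpha) * np.cos(t))
    print(f"alpha = {alpha}: {total:.6f}")
```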



#6 MJP   Moderators   -  Reputation: 10226


Posted 06 April 2013 - 04:56 PM

The reason you have to make sure that the NDF is normalized is that it tells you the fraction of microfacets that are "active" for a given incident angle. In that respect it's basically a probability density function, and if you know how a PDF works then you should know why a PDF needs to integrate to 1. However this is not the case for the Fresnel and geometry terms, nor does it need to be. Instead you just need to ensure that the entire combined BRDF integrates to <= 1 when integrated over all possible exitant (eye) directions on the hemisphere, which is how you ensure energy conservation. Typically the geometry term is a critical part of this and needs to be matched to the NDF in order to ensure energy conservation, and this is the case with the Smith visibility function provided in the GGX paper. You can try integrating it if you want for various incident light directions and F(0) values (an easy way to do it is to integrate numerically using Monte Carlo); you'll find that it will sum to <= 1.0 if you avoid any precision issues. The only way you'll break energy conservation is if you change the BRDF, or if you add another diffuse or specular term that isn't properly balanced.
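
For reference, here is a rough Monte Carlo sketch of that check (my own code, not from this thread): for a fixed light direction it integrates a GGX NDF with a separable Smith masking-shadowing term and Schlick's Fresnel, times cos(theta_v), over uniformly sampled view directions. All function names, the roughness alpha, and the sampling scheme are assumptions for illustration; per the post above, the estimate should come out at or below 1 for any F(0) in [0, 1].

```python
import numpy as np

def ggx_d(n_dot_h, alpha):
    # GGX normal distribution function
    a2 = alpha * alpha
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * d * d)

def smith_g1(n_dot_x, alpha):
    # Smith masking term for GGX, one direction
    a2 = alpha * alpha
    return 2.0 * n_dot_x / (n_dot_x + np.sqrt(a2 + (1.0 - a2) * n_dot_x * n_dot_x))

def schlick_f(v_dot_h, f0):
    # Schlick's approximation of Fresnel reflectance
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def directional_albedo(theta_l, alpha, f0, samples=500_000, seed=0):
    """Monte Carlo estimate of the hemisphere integral of f(l, v) * cos(theta_v)."""
    rng = np.random.default_rng(seed)
    l = np.array([np.sin(theta_l), 0.0, np.cos(theta_l)])   # normal is +z

    # Uniform hemisphere sampling of the view direction v, pdf = 1 / (2*pi)
    cos_tv = rng.random(samples)
    sin_tv = np.sqrt(1.0 - cos_tv * cos_tv)
    phi = 2.0 * np.pi * rng.random(samples)
    v = np.stack([sin_tv * np.cos(phi), sin_tv * np.sin(phi), cos_tv], axis=1)

    h = l + v
    h /= np.linalg.norm(h, axis=1, keepdims=True)
    n_dot_l, n_dot_v, n_dot_h = l[2], v[:, 2], h[:, 2]
    v_dot_h = np.sum(v * h, axis=1)

    # f * cos(theta_v): the (n.v) in the BRDF denominator cancels against the cosine factor
    integrand = (ggx_d(n_dot_h, alpha) * smith_g1(n_dot_l, alpha) * smith_g1(n_dot_v, alpha)
                 * schlick_f(v_dot_h, f0)) / (4.0 * n_dot_l)
    return np.mean(integrand) * 2.0 * np.pi   # divide by the pdf, i.e. multiply by 2*pi

for f0 in (0.04, 0.5, 1.0):
    print(f0, directional_albedo(np.deg2rad(45.0), alpha=0.3, f0=f0))
```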





