Finally nailing the Torrance-Sparrow shader once and for all.

10 comments, last by mmikkelsen 13 years, 2 months ago
The Torrance-Sparrow model is a theoretical model.
Many game developers fail to get their implementation to behave well because the model contains some nasty
divisions by terms that tend toward zero. In the theoretical model this is not supposed to happen,
but in computer graphics it does, for various reasons. For instance, the GPU will back-face cull a primitive
in a way that gives results similar to using the face normal for culling; however, we don't shade using the face normal.
This means v_dot_n in the denominator will in practice become zero when using normal maps (and even
interpolated vertex normals). Simply checking whether the term is close to zero and then setting it to something else isn't
going to work either, because that makes the lighting behave in a discontinuous fashion, which is also bad.
So care must be taken to get the right limit value.
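To make that concrete, here is a minimal Python sketch of the idea (hypothetical, not the actual VisibDiv from illum.h): instead of computing a geometry term G and then dividing by n_dot_v afterwards, divide each branch of the min through by n_dot_v first, so the quotient stays bounded and continuous as n_dot_v approaches zero:

```python
def visib_div(n_dot_l: float, n_dot_v: float,
              n_dot_h: float, v_dot_h: float) -> float:
    # Hypothetical sketch of a Cook-Torrance-style geometry term
    # already divided by the problematic n_dot_v denominator.
    # With G = min(1, 2(n.h)(n.v)/(v.h), 2(n.h)(n.l)/(v.h)),
    # dividing each branch by n_dot_v BEFORE taking the min keeps
    # the quotient bounded by 2(n.h)/(v.h) as n_dot_v -> 0,
    # with no discontinuity from an ad-hoc clamp.
    eps = 1e-7  # numeric guard only; the min is what bounds the result
    b1 = 1.0 / max(n_dot_v, eps)            # 1/(n.v): can be huge...
    b2 = 2.0 * n_dot_h / max(v_dot_h, eps)  # ...but this branch caps it
    b3 = 2.0 * n_dot_h * n_dot_l / (max(v_dot_h, eps) * max(n_dot_v, eps))
    return min(b1, b2, b3)
```

Note how the result for a grazing view (n_dot_v near zero) converges to 2(n.h)/(v.h) instead of exploding or jumping.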

I have taken it upon myself to make a proposal for a reference implementation which, according to my own tests,
behaves extremely well.

http://jbit.net/~spa...cademic/illum.h

In case anyone is interested I derive the Torrance-Sparrow model from scratch in my academic paper:

http://jbit.net/~spa...mic/mm_brdf.pdf

I am hoping some people here might accept the task of
testing this and giving me some feedback on it.

There are 3 variants in the file:

normalized phong, Beckmann, and normalized phong with a tilt.

For each of the three there are two versions. For instance, for nphong:

1. BRDF_ts_nphong - default shading one normal
2. BRDF2_ts_nphong - optional modification (used in games)

The second one takes, in addition to the shading normal, the normalized interpolated vertex normal.
This is used to disable the effect of bump mapping at the silhouette (self-shadowing).
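As a sketch of what such a variant can do with the extra vertex normal (hypothetical; the actual BRDF2_* code in illum.h may fade differently), one common trick is to attenuate the bumped specular by the clamped vertex-normal n_dot_l, so bumps cannot light up past the geometric terminator:

```python
def silhouette_fade(spec: float, vn_dot_l: float) -> float:
    # vn_dot_l: dot(normalized interpolated vertex normal, light dir).
    # Clamp to zero, then attenuate: specular from the bumped normal
    # is forced to zero wherever the underlying geometry faces away
    # from the light (self-shadowing at the silhouette).
    return spec * max(vn_dot_l, 0.0)
```

For example, a bright specular from the bumped normal is killed entirely once the vertex normal faces away from the light.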

I recommend testing these two normalized phong variants primarily.
The beckmann is mainly there as a reference, since the beckmann distribution parameter can
be mapped to an nphong.
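For reference, the "normalized" in normalized phong means the distribution integrates to one against n_dot_h over the hemisphere. A quick numerical check in Python, using the standard (n+2)/(2*pi) * cos^n form (assumed here; the exact constant in illum.h is not verified):

```python
import math

def nphong_D(cos_th: float, n: float) -> float:
    # Standard normalized-Phong surface distribution (assumed form;
    # the exact constant used in illum.h may differ).
    return (n + 2.0) / (2.0 * math.pi) * cos_th ** n

def hemisphere_integral(n: float, steps: int = 20000) -> float:
    # Midpoint-rule integral over the hemisphere of D(h) * (n . h) dw,
    # with dw = sin(th) dth dphi; the phi integral contributes 2*pi.
    dth = (math.pi / 2.0) / steps
    total = 0.0
    for i in range(steps):
        th = (i + 0.5) * dth
        total += (nphong_D(math.cos(th), n) * math.cos(th)
                  * math.sin(th) * dth * 2.0 * math.pi)
    return total

print(hemisphere_integral(32.0))  # close to 1.0
```

The same check applied to an unnormalized cos^n lobe shows why the usual ad-hoc Blinn-Phong loses energy as n grows.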

IMPORTANT! Do not multiply these by n_dot_l in your shader. This term is already built into all the implementations here
and so is the diffuse term.

Additionally, you might experiment with disabling:

vN = FixNormal(vN, vV);

It appears it's not needed after all. Please let me know what you think.

Thanks to all!

Morten.
Forgot to mention. Don't multiply this by n_dot_l in your shader. This term is already built into the implementations.
Watch out for HdotN == 0 in BRDF_ts_nphong(), because pow(HdotN, n) will be INF (or is it -INF?), since pow implementations are typically expressed as exp(n * log(HdotN)), where log(HdotN) is undefined at zero.
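A quick IEEE-754 sanity check in Python (Python's math.log raises at zero, so the infinities are written out directly): a strictly IEEE-conforming exp/log chain actually returns 0 at HdotN == 0, but a fast approximate log that misbehaves at zero cannot be rescued by a zero multiplier afterwards, because 0 * INF is NaN:

```python
import math

# IEEE-754: log(0) is -inf, and exp(-inf) is 0, so a *correct*
# exp/log implementation of pow(0.0, n) for n > 0 yields 0:
assert math.exp(float("-inf")) == 0.0

# But if an approximate log produces INF/NaN garbage at 0, a later
# multiply by a zero visibility term does NOT fix it:
assert math.isnan(0.0 * float("inf"))
```

So whether pow(0, n) is safe depends entirely on the platform's log approximation, which is the crux of the console issue discussed below in the thread.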
I looked at your paper; it really looks good. What would be a good scenario in which you would want to use the Torrance-Sparrow model in a game?
I tried the Normalized Phong and Beckmann versions, and they were really easy to integrate into Unity. Both are very well-behaved both in their parameter ranges and at the angles you said were problematic in other formulations of the BRDF.

Removing the call to FixNormal() didn't seem to have any negative effect on either one.

I really like the look of the Beckmann distribution when m>2/3. The specular peak becomes a valley, which gives a velvety look.

In the Phong distribution, n is obviously supposed to be in [0, ∞). What about the Beckmann parameter? Is it intended to go above 2/3?

I wasn't able to try the Phong Tilt version because of the while loop, since Unity only supports SM3.
> Watch out for HdotN == 0 in BRDF_ts_nphong() because pow(HdotN, n) will be INF

Are you really sure about that? Almost any game-related shader out there currently uses
pow(HdotN, n) for regular blinn-phong specular. Either way, in the case where HdotN is 0.0f,
the VisibDiv term will multiply by zero.

> I looked at your paper; it really looks good. What would be a good scenario in which you would want to use the Torrance-Sparrow model in a game?

Thank you very much :) The whole point of the original Torrance-Sparrow paper was modeling off-specular peaks. Previous authors had attributed
this effect to Fresnel, and the Torrance-Sparrow paper points out that Fresnel alone cannot account for the off-specular peaks observed
in real life, since metals also exhibit them, and Fresnel reflectance, at a given wavelength, is near constant for metals.
That's what the VisibDiv term does in this shader: even if you choose to set the Fresnel factor to a constant,
the VisibDiv term will still provide off-specular peaks according to the Torrance-Sparrow model.
The short answer is I am thinking of using it always, but with Fresnel set to a constant for metals.

> I tried the Normalized Phong and Beckmann versions, and they were really easy to integrate into Unity. Both are...

Thanks! This is excellent feedback.

> Removing the call to FixNormal() didn't seem to have any negative effect on either one.

Yeah, I am strongly considering commenting these out unless someone argues otherwise.

> In the Phong distribution, n is obviously supposed to be in [0, ∞). What about the Beckmann parameter? Is it intended to go above 2/3?

The parameter m is essentially the variance in a Gaussian distribution over the slopes of the microfacets. Numerically the slope can
be anything in the range [0, ∞), which means so can m. That being said, as a distribution function I don't think it models anything
particularly useful beyond a certain level. I suppose you can think of it as having slopes so extreme that the BRDF behaves almost like
a surface covered with micro circular cones. These will all reflect the light at a certain tilt relative to the macrosurface normal n.

The nphong behaves in a more consistent way (bell-shaped). Did you try remapping Beckmann parameters to nphong using
float toNPhong(const float m)? They appear very similar to me for m \in ]0; sqrt(0.2)], and of course nphong is cheaper.
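For reference, a commonly used Beckmann-to-Phong correspondence (hypothetical here; the actual toNPhong() in illum.h may use a slightly different formula) matches the two distributions near the normal, giving n = 2/m² − 2:

```python
def to_nphong(m: float) -> float:
    # Map a Beckmann roughness m to a normalized-Phong exponent n
    # via the commonly quoted n = 2/m^2 - 2 correspondence.
    # (Assumed mapping -- the actual toNPhong() in illum.h may differ.)
    return 2.0 / (m * m) - 2.0
```

At the upper end of the "visually identical" range quoted in the thread, m = sqrt(0.2), this gives n ≈ 8, i.e. a very broad lobe.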

Setting the record straight:
During the analysis in the paper it is shown how the Torrance-Sparrow formulation really is a general formulation for most
of the BRDFs out there in CG land. You can plug in any distribution function you wish. Subsequently, many authors
have had an urge to do so and name a "new" model after themselves (easy credit).
For instance, at the end of section 2.4 in my paper it is shown that the cook-torrance model really is simply a torrance-sparrow
model with a couple of errors in the constants and with a specific choice of distribution function (namely the beckmann).
Beckmann is good for analysis but, for shading, the normalized phong is faster and gives visually identical results
for m \in [0; sqrt(0.2)] (when mapped to n). Though the introduction of the beckmann distribution function is relevant to
computer graphics, I would still argue that there never was a "cook-torrance model". Just a specific implementation of
the torrance-sparrow model with a few minor bugs in it.


Daniel,

Thanks a lot for trying it out! Feel free to post any other feedback here.

> > Watch out for HdotN == 0 in BRDF_ts_nphong() because pow(HdotN, n) will be INF
>
> Are you really sure about that? Almost any game related shader out there is currently using
> pow(HdotN, n) for regular blinn-phong specular. Either way in the case where HdotN is 0.0f
> the VisibDiv term will multiply by zero.



I can assure you that a certain console GPU approximates pow this way 100% of the time (unless you write your own pow implementation). Also I'm pretty sure that in most cases 0.0 * INF will give you NaN.
Okay, I decided to comment out the instances of vN = FixNormal(vN, vV), since VisibDiv really manages fine on its own in dealing with the division by zero.

I updated the online version and called it 1.1, but nothing else is changed in it, so if you already DLed it you can just keep the one you have
and change it yourself if you want to.

> I can assure you that a certain console GPU approximates pow this way 100% of the time (unless you write your own pow implementation). Also I'm pretty sure that in most cases 0.0 * INF will give you NaN.

Ah, I think I know which platform you're referring to :) On every other platform, if you feed in 0.0 (fed from a buffer so the compiler doesn't know the value) and apply pow to it,
you will actually get zero. The problem you're referring to is a known one when using the default pow emitted by the compiler for that platform,
and ironically it even hits people doing a completely basic blinn-phong (on that platform).
To my knowledge pow(fedValueZero, n) works everywhere else. But still relevant. Thanks for the heads-up.
This is fairly interesting, and I'll have to check it out in more detail later. In the meantime, how does this compare to the (pretty slick, in my humble experience) Kelemen Szirmay-Kalos BRDF? The latter's quite fast and appears to solve many of the same deficiencies that yours looks to combat (namely, the possible divisions by zero).

> This is fairly interesting, and I'll have to check it out in more detail later. In the mean time, how does this compare to the (pretty slick, in my humble experience) Kelemen Szirmay-Kalos BRDF? The latter's quite fast and appears to solve many of the same deficiencies that yours looks to combat (namely, the possible divisions by zero).

Their approximation, eq. 2, is actually not a very good one: it unconditionally predicts a peak as the angle between V and L grows.
If you look at my figure 3b on page 17 you'll see how, as the angle between L and V grows, the angle between N and H has to be smaller to reach the peak.
This basically means that to reach the peak, N and H must be more and more closely aligned the greater the angle between L and V is.
This is not captured by their approximation. However, you have to put it into context: they are not using that approximation because
they are hoping for a speed-up, since the visibility term is fairly cheap to begin with. They are doing it because they want to "importance sample",
so they rely on the ability to express as much of the BRDF as possible as an analytical function (smooth etc.).
The problem doesn't really apply in "our" context.

Another interesting observation about their paper is that their torrance-sparrow formulation, eq. (1), has a couple of bugs in it.
The D term is a "surface distribution function" and not a "probability distribution function" p(), though the distinction is subtle.
The D term obeys that D * n_dot_h is normalized over the half-sphere; the pdf p(), on the other hand, is by itself normalized over the half-sphere.
In other words, to perform their substitution they are missing a division by n_dot_h in the denominator, since
D = p / n_dot_h.
Another error is that the beckmann surface distribution D (and also the p()) is missing a \pi factor in the denominator; otherwise it's not correctly
normalized (D must obey eq. (14) given in my paper).
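Written out explicitly (with ω_h the half-vector direction), the two normalizations being contrasted are:

```latex
\int_{\Omega} D(\omega_h)\,(\mathbf{n}\cdot\omega_h)\; d\omega_h = 1
\qquad \text{whereas} \qquad
\int_{\Omega} p(\omega_h)\; d\omega_h = 1 ,
```

so matching the integrands term by term gives p(ω_h) = D(ω_h) (n·ω_h), i.e. D = p / n_dot_h as stated above.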

Hope this answers what you're asking?

Morten.
