Your preferred or desired BRDF?

Started by Promit - 51 posts in this topic

If you are interested in a good overview of the semi-standard lighting models, take a look in the Lighting section of Programming Vertex...

 

 

Sorry for intruding on this thread. I have a question about the "cook_torrance" shader shown at that link.

 

 

float NdotH = saturate( dot( normal, half_vector ) );
 
...
if( ROUGHNESS_LOOK_UP == roughness_mode )
{
// texture coordinate is:
float2 tc = { NdotH, roughness_value };
 
// Remap the NdotH value to be 0.0-1.0
// instead of -1.0..+1.0
tc.x += 1.0f;
tc.x /= 2.0f;
 
// look up the coefficient from the texture:
roughness = texRoughness.Sample( sampRoughness, tc );
}

 

See the author's comments in the code. Is this a bug? saturate() already clamps the value to the 0.0-1.0 range, doesn't it?

 

This is indeed unnecessary, and it wouldn't be the first time I've seen a mistake or oversight on gpwiki. Worse than unnecessary, actually: since saturate() has already clamped NdotH to 0.0-1.0, the remap compresses it into 0.5-1.0, so half of the lookup texture is never sampled. In any case, I think you can probably do a lot better than a Beckmann lookup texture. The Beckmann distribution is not expensive to calculate, and modern GPUs are limited by memory bandwidth, not instruction throughput. Lookup textures only make sense if you can use them to kill a lot of expensive instructions.
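For comparison, here's a minimal sketch of evaluating Beckmann directly in the shader instead of through a lookup (the function name and the use of m as an RMS-slope roughness are my own assumptions, not the article's code):

// Beckmann microfacet distribution, evaluated per-pixel.
// NdotH: cosine between the normal and half-vector (assumed > 0).
// m:     RMS-slope roughness.
float BeckmannD(float NdotH, float m)
{
    float PI = 3.1415926535897932;
    float NdotH_2 = NdotH * NdotH;
    float m_2 = m * m;
    float tan_2 = (1.0 - NdotH_2) / NdotH_2; // tan^2 of the half-angle
    return exp(-tan_2 / m_2) / (PI * m_2 * NdotH_2 * NdotH_2);
}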


I didn't know about this article/book before (btw. thank you Jason Z), so over the last few days I started experimenting a little.

Long story short, I don't use Beckmann. I modified the lookup texture generation to use the Gaussian model (it looked better to me), moved the Fresnel term into it, and made roughness_value constant per slice of a 3D/volume texture...
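(For anyone curious, the per-texel value such a lookup might store could look something like this; the exact Gaussian form and the names are my assumption, not the article's code. Folding Fresnel in would need its own axis, e.g. LdotH.)

// Hypothetical generator for one texel of the lookup.
// x axis: NdotH in [0,1]; one slice per roughness value m.
float GaussianLookupTexel(float NdotH, float m)
{
    float alpha = acos(NdotH);              // angle between N and H
    return exp(-(alpha * alpha) / (m * m)); // Gaussian falloff with facet angle
}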


He was talking about dividing by NdotL.
BRDF explorer will multiply by NdotL outside of the BRDF, so if you've included NdotL inside your BRDF (as we usually do in games), then you need to divide by NdotL at the end to cancel it out.

Oops. Misread.
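(For anyone following along, the convention being described works like this -- a sketch, with the Lambert term just as a placeholder:)

// BRDF Explorer multiplies the plotted value by NdotL itself, so a
// game-style function that already folded NdotL in must divide it out:
vec3 BRDF(vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y)
{
    float NdotL = max(dot(N, L), 0.0);
    vec3 gameBrdf = vec3(0.18 / 3.14159265) * NdotL; // e.g. Lambert with NdotL baked in
    return gameBrdf / max(NdotL, 1e-5);              // cancel NdotL for the tool
}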


Anyone have a database of more BRDFs we can use with BRDF Explorer?

Also, Chris_F, be careful of this:

vec3 Rd = (Kd / PI) * Fd * OrenNayer;

Should be:

vec3 Rd = (Kd / PI) * Fd * OrenNayar;

L. Spiro

Edited by L. Spiro

Also, Chris_F, be careful of this:

 

Actually, I fixed that in my version already, forgot to edit my post. I misspell Oren-Nayar about 50% of the time.


See the author's comments in the code. Is this a bug? saturate() already clamps the value to the 0.0-1.0 range, doesn't it?

Good catch. I think it is safe to say that the shader code is more for instructional use than highly optimized, and it is getting a bit old (Jack wrote those articles quite some time ago now)... Even so, I still find myself loading up the page every now and then to brush up on a concept that I don't use too often.


 

This is indeed unnecessary, and it wouldn't be the first time I've seen a mistake or oversight on gpwiki...

The article was actually part of a book project that got hosted here on GameDev.net initially.  After some time and some significant stability issues with the server it was hosted on, the decision was made to move it to the gpwiki site.  So please don't put the blame for one of our mistakes on the gpwiki guys!


I might be getting a bit off topic now... forgive me.

I went to bed last night with Helmholtz reciprocity on my mind -- apparently our physically based BRDFs should all obey this law: if you swap a light source and a camera, you'll measure the same ray of light in either configuration, or, in BRDF terms, swapping L and V has no effect.

The thought experiment that cost me sleep was an optically-flat Lambertian diffuse plane (i.e. all microfacets aligned with the normal, all refracted light uniformly dispersed over the upper hemisphere), with the two observation/lighting angles being directly overhead (0° from the normal) and very nearly perpendicular (~90°).

 

When lit from above and viewed from the side, the majority of the light will be refracted into the surface and then diffused -- no matter where the camera is in the hemisphere, the surface will appear the same. The camera will receive a small percentage of the diffused light (which is the majority of the input light).

 

When viewed from above and lit from the side though, the majority of the light will reflect right off the surface, according to Fresnel! Only a very small fraction will be refracted, which is then diffused as above. The overhead camera won't receive any of the reflected light (which is the majority of the input), and instead only receives a small percentage of the diffused light (which itself is a small percentage of the input).

 

Have I thought about this all wrong? Or does reciprocity really break down when diffusers and Fresnel's laws are combined?

 

Actually, it's a lot easier to convert it to anisotropic than that.

Wow, thanks! I notice that the math for that distribution is exactly equal to your previous GGX distribution when the two roughness parameters are equal too... does the original GGX paper define this aniso version?
 
I'm still going to need some kind of retro-reflection hack (or alternative physical BRDF) in my game so I can boost the effect right up for certain bits of paint and signage and... actual retro-reflector devices (like you put on your bicycle). You're right that there's a bit of it inherent in this BRDF, but it's mostly at grazing angles, where it's lost to N.L.
A macro-scale retro-reflector like you put on your bike -- a collection of 45° angled "V" shaped mirrored facets -- will direct almost all of the incoming light back towards the incident ray when lit from overhead, but performs worse at glancing angles, and it's this kind of behaviour that I'd ideally like to be able to model.
 

Also, Chris_F, be careful of this:

On that note, BRDF Explorer's files spell "Ashikhmin" as "Ashikhman", and it's infecting me.

Edited by Hodgman

If there were absolutely no limits I would like to evaluate spatially-varying and measured bidirectional texture functions. They show up a lot in my inverse rendering research and the results can look very realistic. Storing them in wavelet format makes them somewhat tractable and convenient to work with, but the system requirements add up rather quickly.

 

I think BTFs are only ever necessary for acquiring a material's response. After you've acquired the data, you can cluster the material, decompose it into constituent parts with uniform responses, and create a texture mask to make it spatially varying. This has been done quite a bit when fitting spatially varying BRDFs from acquired materials.


Wow, thanks! I notice that the math for that distribution is exactly equal to your previous GGX distribution when the two roughness parameters are equal too... does the original GGX paper define this aniso version?

 

I found it here: http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf
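For what it's worth, the equivalence you noticed can be checked directly (my own quick derivation). Since X, Y, N are orthonormal, HdotX^2 + HdotY^2 + NdotH^2 = 1, so with ax = ay = a:

D = 1 / (PI * ax * ay * (HdotX^2/ax^2 + HdotY^2/ay^2 + NdotH^2)^2)
  = 1 / (PI * a^2 * ((1 - NdotH^2)/a^2 + NdotH^2)^2)
  = a^2 / (PI * (NdotH^2 * (a^2 - 1) + 1)^2)

which is exactly the isotropic GGX form (the same shape as the retro lobe in the code below, with NdotH in place of LdotV).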

 

I'm still going to need some kind of retro-reflection hack (or alternative physical BRDF) in my game... it's this kind of behaviour that I'd ideally like to be able to model.

 

Here is my own hack. I think it works similarly to yours. The assumption is that the retroreflectiveness decreases at glancing angles.

 

analytic

::begin parameters
color Diffuse 1 0 0
color Specular 1 1 1
float DiffuseScale 0 1 0.5
float SpecularScale 0 0.999 .028
float RoughnessX 0.005 2 0.2
float RoughnessY 0.005 2 0.2
float RetroReflection 0 1 0
bool isotropic 1
::end parameters

::begin shader

float saturate(float x) { return clamp(x, 0.0, 1.0); }

vec3 BRDF( vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y )
{
    float PI = 3.1415926535897932;
    vec3 Kd = Diffuse * DiffuseScale;
    vec3 Ks = Specular * SpecularScale;

    // Anisotropic roughness along the tangent (X) and bitangent (Y).
    float ax = RoughnessX;
    float ay = (isotropic) ? RoughnessX : RoughnessY;

    vec3 H = normalize(L + V);
    float NdotL = saturate(dot(N, L));
    float NdotV = dot(N, V);
    float NdotH = dot(N, H);
    float LdotH = dot(L, H);
    float LdotV = dot(L, V);
    float HdotX = dot(H, X);
    float HdotY = dot(H, Y);

    float ax_2 = ax * ax;
    float ay_2 = ay * ay;
    float a_2 = (ax_2 + ay_2) / 2.0;
    float NdotL_2 = NdotL * NdotL;
    float NdotV_2 = NdotV * NdotV;
    float NdotH_2 = NdotH * NdotH;
    float LdotV_2 = LdotV * LdotV;
    float HdotX_2 = HdotX * HdotX;
    float HdotY_2 = HdotY * HdotY;
    float OneMinusNdotL_2 = 1.0 - NdotL_2;
    float OneMinusNdotV_2 = 1.0 - NdotV_2;

    // Whatever isn't reflected specularly is available for diffuse.
    vec3 Fd = 1.0 - Ks;

    // Oren-Nayar diffuse term.
    float gamma = saturate(dot(V - N * NdotV, L - N * NdotL));
    float A = 1.0 - 0.5 * (a_2 / (a_2 + 0.33));
    float B = 0.45 * (a_2 / (a_2 + 0.09));
    float C = sqrt(OneMinusNdotL_2 * OneMinusNdotV_2) / max(NdotL, NdotV);
    float OrenNayar = A + B * gamma * C;

    vec3 Rd = (Kd / PI) * Fd * OrenNayar;

    // Anisotropic GGX for the forward lobe; an isotropic GGX-shaped lobe
    // around the incident direction (via LdotV) for the retro lobe.
    float GGX_forward = 1.0 / (PI * ax * ay * pow(HdotX_2 / ax_2 + HdotY_2 / ay_2 + NdotH_2, 2.0));
    float GGX_retro = a_2 / (PI * pow(LdotV_2 * (a_2 - 1.0) + 1.0, 2.0));

    // Smith geometry term for the forward lobe.
    float G1_1 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotL_2 / NdotL_2)));
    float G1_2 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotV_2 / NdotV_2)));
    float G_Smith = G1_1 * G1_2;

    // Ad-hoc geometry term for the retro lobe.
    float G_Retro = NdotV_2 * NdotL;

    // Blend between the forward and retro lobes.
    float DG = mix(GGX_forward * G_Smith, GGX_retro * G_Retro, RetroReflection);

    // Schlick-style Fresnel, approximated with an exponential falloff.
    vec3 Fs = Ks + Fd * exp(-6.0 * LdotH);

    vec3 Rs = (DG * Fs) / (4.0 * NdotV * NdotL);

    return Rd + Rs;
}

::end shader

 

I hope to figure out how to model retroreflection in a more physically accurate way, and to explore whether the Smith G can be tailored to the anisotropic version of the distribution.


Have I thought about this all wrong? Or does reciprocity really break down when diffusers and Fresnel's laws are combined?

I think Helmholtz reciprocity doesn't apply to diffuse light at all, because diffuse light is really subsurface scattering at such a small scale that we can approximate it by evaluating it at the entrance point. Diffuse light is light scattering inside the surface, which is simply specular reflection happening thousands of times within the material. Helmholtz reciprocity isn't supposed to hold for that aggregate process, because it's not just a single reflection. But it does hold for each of the little specular reflections inside the material and for the "macro" specular reflection on the surface.

 

And it works for Cook-Torrance:

 

The halfway vector is the same whether you calculate it as (L+V)/length(L+V) or (V+L)/length(V+L), so the microfacet distribution function returns the same value, since it relies on NdotH. Fresnel relies on LdotH, but that's the same as VdotH. The geometric term is the product of a "sub geometric term" evaluated once for NdotV and once for NdotL, and since scalar multiplication is commutative, the result is the same whether you swap NdotV and NdotL or not. The same applies to the NdotL * NdotV in the denominator of Cook-Torrance.
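(A quick way to convince yourself, sketched against the BRDF Explorer signature used earlier in the thread; CookTorrance here is a hypothetical stand-in for any microfacet BRDF of that form:)

// Every input is symmetric in L and V: H = normalize(L + V) is
// unchanged under a swap, LdotH == VdotH, and the geometry and
// denominator terms are products that commute.
vec3 a = CookTorrance(L, V, N);
vec3 b = CookTorrance(V, L, N);
// A reciprocal BRDF gives a == b, up to floating-point error.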

Edited by CryZe

I think Helmholtz reciprocity doesn't apply to diffuse light at all

That's where I get confused, because I've read in many sources (Wikipedia is the easiest to cite) that a physically plausible BRDF must obey reciprocity...
My Lambertian diffuse surface is physically plausible by this definition, until I try to maintain energy conservation by splitting the energy between diffuse/specular using Fresnel's law. This article points out the same thing -- maintaining energy conservation (making the diffuse darker when the spec is brighter) ruins the reciprocity.

So either everyone teaching that physically plausible BRDFs have to obey Helmholtz is wrong, or (Occam says: more likely) my method of conserving energy is just a rough approximation...

The halfway vector is the same whether you calculate it from (L+V)/length(L+V) or (V+L)/length(V+L), and thus the microfacet distribution function returns the same value, since it relies on NDotH. Fresnel relies on LDotH...

In the case of my perfectly flat surface, the distribution term will always be 0 except where H==N, in which case the distribution will be 100%. I think this example makes a few edge cases more visible.

In all cases where N!=H, the Fresnel term calculated from LdotH is meaningless, as it ends up being multiplied by 0; those microfacets don't exist. But say they did exist: this Fresnel term tells us how much energy is reflected and refracted (then diffused/re-emitted) for the subset of the total microfacets that are oriented towards H. I guess this means that to find the total amount of refracted energy (the energy available to the diffuse term), we'd have to evaluate the Fresnel term for every possible microfacet orientation, weighted by probability.

In my example case, the flat plane, this is simple; there is only one microfacet orientation, so we don't have to bother doing any integration! We just use the Fresnel term for LdotN, as 100% of the microfacets are oriented towards N. Suddenly, we've got a part of the BRDF that relies on L but not V... The calculation to find the total reflected vs refracted energy balance depends only on L, N, F(0°) and the surface roughness. Hence my dilemma -- how do we implement physically correct energy conservation in any BRDF without upsetting Helmholtz? Or is Helmholtz more of a guideline than a rule in the realm of BRDFs? Or is there some important detail elsewhere in the rendering equation, outside of the BRDF, that I'm missing?
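(To make the dilemma concrete -- my own sketch of the naive energy split, using Schlick's approximation with a placeholder F0:)

// Naive energy conservation: give the diffuse term whatever Fresnel
// says was refracted at the light's angle of incidence.
float F0 = 0.04; // placeholder dielectric reflectance
float F_in = F0 + (1.0 - F0) * pow(1.0 - dot(N, L), 5.0); // depends on L only
vec3 Rd = (Kd / PI) * (1.0 - F_in);
// Swapping L and V changes dot(N, L) and nothing compensates,
// so f(L, V) != f(V, L): reciprocity is broken by construction.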
 
[edit]
 

Helmholtz reciprocity isn't supposed to be correct for this process, because it's not just a single reflection

Are you saying it only breaks down because we're dealing with a composition of many different waves/particles instead of a single one? E.g. if we could track the path of one photon, it would obey the law, but once we end up with multiple overlapping probability distributions, we're no longer tracking individual rays, so reciprocity has become irrelevant?

Edited by Hodgman

I think Helmholtz reciprocity doesn't apply to diffuse light at all, because diffuse light actually is the same as subsurface scattering...

 

No. All non-magnetic, non-optically active, linear (i.e. ordinary) light-matter interaction must obey Helmholtz reciprocity, no matter how many reflections and scatterings the light undergoes. It also applies in any ordinary participating medium (e.g. subsurface scattering) but that is generally approximated as well.


We just use the Fresnel term for LdotN

Not quite. If you assume your surface is perfectly flat, your microfacet distribution, as you mentioned, returns 1 where N=H and 0 where N!=H, so your BRDF only returns a non-zero value when N=H. But H is still calculated from L and V, and LdotH and VdotH are always equal (that's what makes H the halfway vector). When the BRDF requires H=N to be non-zero, LdotH is the same as LdotN and VdotH is the same as VdotN; and since LdotH and VdotH are equal, LdotN and VdotN are equal as well. Thus, Helmholtz reciprocity still applies in this case.

 

 

No. All non-magnetic, non-optically active, linear (i.e. ordinary) light-matter interaction must obey Helmholtz reciprocity, no matter how many reflections and scatterings the light undergoes. It also applies in any ordinary participating medium (e.g. subsurface scattering) but that is generally approximated as well.

Yeah, I don't know. Something is really weird in the situation Hodgman pointed out :(

Edited by CryZe

Actually, I can kind of resolve my "flat plane paradox" with a bit of a reinterpretation of the law...
Let's say for simplicity that:
* when V is glancing and L is overhead: 0% of the light is reflected, meaning 100% is diffused. When it's diffused, 1% reaches the camera.
* when L is glancing and V is overhead: 99% of the light is reflected, meaning 1% is diffused. When it's diffused, 1% of that 1% reaches the camera.
[diagram of the two lighting/viewing configurations]

 

The law only talks about particular rays of light, but when a ray hits a surface it splits in two! I can interpret the law as if "a ray" is one particular sequence of rays, picking one arbitrary exit ray at every surface interaction.

So, if I only track the light that takes one particular path at the boundary, the refracted path only:
* when L is glancing and V is overhead: 0.01% reaches the camera, but 99% of the input energy was invalidated as it took the wrong path. If I divide the measured light by the amount that took the "valid path", then I get 0.0001 / 0.01 == 1%, which is the same as when L and V are swapped.
 
I don't know if I'm tired and bending the rules to make garbage make sense, or if this really is the way I should be interpreting reciprocity...
If this is true, then the specular term of a BRDF should obey the rule in isolation, and so should the diffuse term... but when you add the two together it's possible to break the rule, which maybe is OK, because you're adding output rays that followed different paths? If that's the case, then the real rule is that each particular path within a BRDF should obey reciprocity?

 

[edit] No, don't listen to me, this isn't how physics works. The fact that reciprocity doesn't hold for my above thought experiment shows that the thought experiment is based on flawed assumptions... [/edit]

 

Not quite. If you assume that your surface is perfectly flat, your microfacet distribution, as you mentioned, returns 1 where N=H and 0 where N!=H, and thus your BRDF only returns a value other than 0, when N=H.

Only the specular part of the BRDF has that behaviour -- the Lambertian part is just a constant number regardless of V/L/H (unless we try to make it energy conserving).
To try and make the Lambertian part energy conserving, I've got to find the amount of light that's refracted. Any physics textbook will tell you that you can use Fresnel's law for this, and it won't include the viewer's location at all! Only the light source and the normal are considered (in this example, our macro-normal and all our microfacet normals are equal, so we can ignore microfacet distributions).
The physically correct amount of energy to input into the Lambertian term in this example, is based on the Fresnel term for NdotL.

Edited by Hodgman

Any physics textbook will tell you that you can use Fresnel's law for this, and it won't include the viewer's location at all!

I guess that's the problem. The thing is that it should include the view direction as well: Fresnel's law also applies when light scatters out of the surface towards the viewer, not just when it scatters into the surface.

 

Just take a look at section 5.3 in http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf

Their diffuse model applies their modified Fresnel term twice, once for the view direction and once for the light direction.
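From my reading of those notes (section 5.3), the shape of their diffuse model is roughly the following -- a sketch, not their exact shading code:

float SchlickWeight(float u) { return pow(1.0 - u, 5.0); }

// Disney diffuse: a Schlick-style factor at both the light and view
// directions, with grazing retro-reflection that grows with roughness.
float DisneyDiffuse(float NdotL, float NdotV, float LdotH, float roughness)
{
    float PI = 3.1415926535897932;
    float Fd90 = 0.5 + 2.0 * roughness * LdotH * LdotH;
    float lightScatter = 1.0 + (Fd90 - 1.0) * SchlickWeight(NdotL);
    float viewScatter  = 1.0 + (Fd90 - 1.0) * SchlickWeight(NdotV);
    return (1.0 / PI) * lightScatter * viewScatter; // multiply by baseColor outside
}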


The thing is that it should include the view direction as well: Fresnel's law also applies when light scatters out of the surface towards the viewer, not just when it scatters into the surface.

When the light leaves the surface back into the air, yes, some will reflect off the boundary back into the surface, but seeing as it's an infinitely thick plane with zero absorption (and, for simplicity, the same IOR as air), it will eventually all make it back out. The only effect this has is to bias the 'diffuse' distribution slightly towards the normal (at every 'attempt' to escape, that direction has the maximal chance). Light scatters out of the surface in every direction, not just the viewer's direction, so the viewer isn't special.
There's still no reason to take the viewer's position into account when measuring the amount of light that refracts into the plane -- where I'm standing relative to a prism has no bearing on how much light refracts upon striking its surface.

[edit] d'oh. You need to take the viewer's position into account in the exit Fresnel equation to measure the amount of diffuse light emitted in the direction of the viewer, because that's what I'm measuring. That's pretty damn obvious once I say it out loud... [/edit]

Edited by Hodgman

Casting my vote for Kelemen Szirmay-Kalos with normalized Blinn-Phong microfacets. Numerically it's pretty well-behaved, importance samples pretty well and supports some very nifty analytical antialiasing in the form of Toksvig-filtered normal maps. It's also extremely cheap and includes a view-dependent diffuse BRDF for extra credit. I'd like to putz around with the latter, actually; Mark Olano had some interesting ideas on antialiasing diffuse shading and I'd like to try and apply them to a non-Lambertian BRDF if possible. If not, texture space shading is looking increasingly good, especially as triangles get smaller and smaller.
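For anyone who hasn't seen it, a sketch of the Kelemen/Szirmay-Kalos specular term as I understand it (from memory, so double-check against the paper; the key trick is replacing the Cook-Torrance geometry term and denominator with the squared length of the unnormalized half-vector):

// Kelemen/Szirmay-Kalos specular: D * F / |L + V|^2, where
// 1 / dot(Hu, Hu) stands in for G / (4 * NdotL * NdotV).
vec3 KSKSpecular(vec3 L, vec3 V, vec3 N, vec3 F0, float specPower)
{
    float PI = 3.1415926535897932;
    vec3 Hu = L + V;                  // unnormalized half-vector
    vec3 H = normalize(Hu);
    float NdotH = max(dot(N, H), 0.0);
    float LdotH = max(dot(L, H), 0.0);

    // Normalized Blinn-Phong distribution.
    float D = (specPower + 2.0) / (2.0 * PI) * pow(NdotH, specPower);

    // Schlick Fresnel.
    vec3 F = F0 + (1.0 - F0) * pow(1.0 - LdotH, 5.0);

    return D * F / max(dot(Hu, Hu), 1e-5);
}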


Casting my vote for Kelemen Szirmay-Kalos with normalized Blinn-Phong microfacets... If not, texture space shading is looking increasingly good, especially as triangles get smaller and smaller.

Texture space shading could work alright if you can get megatexture tile sizes down enough. What did Carmack use as a tile? I can't remember, but it would be far too big for these high-instruction-count BRDFs. If you could get it down to something like a 16x16 tile, maaaybe.

 

I still just don't see it as likely though, not with something like Toksvig or LEAN mapping as an alternative, at least not with the new consoles' admittedly limited compute power versus Moore's-law expectations. There are, to me, more valuable things to spend those resources on.

 

Also, this is a great thread and I'm learning a lot. Off topic, but here's a thank you for having it.


Casting my vote for Kelemen Szirmay-Kalos with normalized Blinn-Phong microfacets... texture space shading is looking increasingly good, especially as triangles get smaller and smaller.

Texture space shading could work alright if you can get megatexture tile sizes down enough... If you could get it down to something like a 16x16 tile, maaaybe.

 

Tile size merely controls granularity; virtual texturing is entirely agnostic to resolution. Theoretically you just want texel:pixel density to sit near 1, and you can subdivide all you want until you hit that magic number. In practice you'll actually want more than that so you can take advantage of the fancypants texture filtering algorithms your GPU provides (or just EWA That S**t(tm) and call it a night), but details ;)

 

Timothy Lottes also pointed this out, but concepts like LEAN/Toksvig completely stop being useful when triangles get smaller than a pixel onscreen and everything turns into a shimmery mess. In that case, attacking the problem with texture-based methods won't do anything; you're actually getting killed by edge/triangle coverage aliasing. Movies currently solve this by taking something like 16-64 samples per pixel, and we'd essentially need MSAA just to get something that doesn't look like crap.

 

EDIT: Thanks for the GGX tip, will definitely look into this some more.

Edited by InvalidPointer

 

Tile size merely controls granularity; virtual texturing is entirely agnostic to resolution...

Timothy Lottes also pointed this out, but concepts like LEAN/Toksvig completely stop being useful when triangles get smaller than a pixel onscreen...

Sure, virtual texture tile size doesn't matter in theory; in practice you need to strike a balance between constantly searching for new texture tiles, your buffer size, etc.

 

Plus, the point was texture space shading, which essentially means shading an entire tile of a virtual texture so you can filter and not get any aliasing -- which is essentially just supersampling -- and my point was that this sounds far too expensive for most games of this (upcoming? needs a name) generation. Heck, move to 1080p with the same shaders as a "current" gen game and you've already doubled your shading costs; going to, say, 16 times that for texture space shading is just too costly.

 

Thanks for pointing out that Toksvig and the like aren't going to work on sub-pixel triangles; I hadn't really considered that, and I suppose there are going to be cases of that problem now, especially with faces and other ultra-high-detail things. Maybe, as Chris_F suggested, one could selectively supersample screen-space regions with sub-pixel triangles? It still sounds expensive, especially since this thread is a discussion of BRDFs with much higher costs than your typical Blinn/Phong. But if you're getting rid of aliasing anyway, having it pop up on what is presumably going to be the focus of attention isn't going to do you any good. Maybe you could just go back to vertex shading, since the triangles are smaller than a pixel anyway?

Edited by Frenetic Pony

But if you're getting rid of aliasing anyway, having it pop up on what is presumably going to be the focus of attention isn't going to do you any good. Maybe you could just go back to vertex shading, since the triangles are smaller than a pixel anyway?

 

Reyes uses vertex shading combined with micropolygons, but that would be suicide on today's GPUs. If your triangles are covering less than 16 pixels, you aren't fully utilizing the rasterizer and you are overshading.


What about selective supersampling to combat specular aliasing?


Supersampling helps reduce aliasing by increasing your sampling rate, but the problem is that the required sampling rate per pixel will shoot up to unrealistic levels as a surface gets further from the camera and/or becomes oblique to the camera. So you won't really solve the aliasing, you'll just make it a little less objectionable.


Thanks to this thread, I've found something approaching my desired BRDF.

The code is similar to the ones that Chris_F and I posted earlier in the thread:

http://pastebin.com/m7NLvtWk [edit] updated [/edit]
 
Following Disney's example, I've got a minimal number of parameters with sensible ranges:
Color -- Used as the diffuse color for non-metals, or the specular color (F0) for metals.
Specular [0-1] -- The F0 param. 0.0 to 0.5 are non-metals, with 0.5 being around diamond-level shininess; 0.5 to 1.0 runs from impure metal to pure metal.
Roughness X/Y [0-1] -- 0 is perfectly flat, 0.5 is a Lambertian level of roughness, and 0.5 to 1.0 are super-rough surfaces that appear quite flat and get some diffuse rim highlights.
FacetAngle [0-2Pi] -- rotates the tangent space / roughness X/Y axis. Even though it's an angle parameter, it can be baked into two [-1 to 1] sin(a)/cos(a) parameters.
Retroreflective [0-1] -- bends the specular lobe back towards the light source.
Isotropic (bool) -- if true, sets roughness Y equal to roughness X as an optimization.

These are all lit by a spotlight behind the camera:
[image of the test materials]

The diffuse model is based loosely on a very cheap approximation of Oren-Nayar, mixed with the energy conservation for flat surfaces shown below.

The Retroreflective param works with anisotropy, but it changed very slightly with the isotropic bool on/off, and also changed very slightly under Helmholtz reciprocity... I'd obviously made some small error [edit] fixed the bug; everything obeys reciprocity now, and the output matches the optimized isotropic code-path when both roughnesses are equal [/edit].

It's pretty expensive though, so I might go with something simpler like what Chris posted earlier (lerping from one distribution to another), unless I find a really good use-case for anisotropic retroreflectors.


I've been having an issue with different Cook-Torrance geometry terms though -- most of these BRDFs are modulated by NdotV in some way, on the assumption that it can't be negative or the fragment wouldn't be visible. In actual game scenes, this assumption doesn't hold! Simply picture a cube whose normals were generated without hard edges / with every face in the same smoothing group (or alternatively, picture a sphere that's been LOD'd super-aggressively into a cube -- same thing). There's a huge number of fragments where NdotV will be negative, but simply cutting off the lighting for those fragments looks really unnatural.
To get around these unnatural cut-offs in my game scenes, I've simply scaled/biased NdotV (and NdotL, to maintain reciprocity) into the 0-1 range right before the specular calculations, which produces a "wrapped" geometry term instead of one that becomes zero at the horizon, as sketched below...
Has anyone else dealt with this issue?
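The wrap itself is just a scale/bias before the specular math (the code here is my own illustration of the idea, not the exact shader):

// Remap [-1,1] to [0,1] so interpolated normals that face slightly
// away from the eye don't produce a hard lighting cutoff. NdotL gets
// the same treatment to keep the BRDF reciprocal.
float NdotV_w = dot(N, V) * 0.5 + 0.5;
float NdotL_w = dot(N, L) * 0.5 + 0.5;
// ...then feed NdotV_w / NdotL_w into the specular geometry term
// instead of the raw clamped dot products.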
 

If your triangles are covering less than 16 pixels, you aren't fully utilizing the rasterizer and you are overshading.

Yeah, the way current GPU's work, sub-pixel sized triangles really should be avoided. On my last game, implementing mesh LODs (which reduced the vertex/triangle count with distance) gave us a huge performance boost in the pixel shader, due to larger triangles being rasterized/shaded more efficiently.
This performance issue can be somewhat mitigated with deferred rendering, as most of your shading is done in screen-space, not directly after rasterization, but you've still got the shimmering quality issue anyway.
 
I'm guessing we'll need some new hardware support, or have to wait for everyone to start re-implementing the rasterizer in compute shaders, before we see another giant leap in anti-aliasing, closer to something like what REYES does.
 
 

Any physics textbook will tell you that you can use Fresnel's law for this, and it won't include the viewer's location at all!

I guess that's the problem. The thing is that it should include the view direction as well... Just take a look at section 5.3 in http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf

Thanks, this was the key to my failure to understand reciprocity...
Once the light has refracted into the perfectly smooth Lambertian diffuser (according to NdotL), it's evenly spread over the hemisphere, but the amount of that re-emitted energy that actually escapes towards the viewer does depend on NdotV.
Including both of these factors does allow you to apply the Fresnel term to a perfectly smooth Lambertian surface and maintain reciprocity...
For anyone interested, a perfectly flat Lambertian surface is quite a bit darker at glancing angles than the normal NdotL falloff alone would suggest.

//NdotL not included (it's applied outside the BRDF):
    float lambert = 1.0 / PI;
    float refractedIn  = 1.0 - pow(1.0 - dot(N, L), 5.0); // Schlick's approximation: fraction refracted into the surface
    float refractedOut = 1.0 - pow(1.0 - dot(N, V), 5.0); // fraction of the diffused light that escapes towards the viewer
    return vec3( lambert * refractedIn * refractedOut );

However, this model assumes that any of the about-to-be-re-emitted energy that reflects off the surface/air boundary is absorbed into the surface and lost, meaning the properties of the surface have changed from my original thought experiment. At this point, though, I'm happy enough to imagine that there is probably a formula for how much of this internally-reflected energy there is, and what its extra contribution to the BRDF should be, such that reciprocity is maintained.

Edited by Hodgman

I've been having an issue with different Cook-Torrance geometry terms though... Has anyone else dealt with this issue?

Kelemen Szirmay-Kalos! Kelemen Szirmay-Kalos! Kelemen Szirmay-Kalos!

 

Okay, okay, I'll try and add some useful content later on. But seriously, it's designed to be a Cook-Torrance geometry term that doesn't suck. It succeeds.

Edited by InvalidPointer
