Your preferred or desired BRDF?



#41 Hodgman   Moderators   -  Reputation: 29497

Posted 26 February 2013 - 08:49 AM

The thing is that it should include the view direction as well, since Fresnel's law also applies when the light is scattering out of the surface towards the viewer, not just when light is scattering into the surface.

When the light leaves the surface back into the air, yes some will reflect off the boundary back into the surface, but seeing as it's an infinitely thick plane with zero absorption (and for simplicity, let's say the same IOR as air), it will eventually all make it back out. The only effect that this will have is to bias the 'diffuse' distribution slightly more towards the normal (as at every 'attempt' for light to escape, this direction has the maximal chance). Light is scattering out of the surface in every direction, not just the viewer's direction, so it's not special.
There's still no reason to take the viewer's position into account when measuring the amount of light that refracts into the plane -- where I'm standing relative to a prism has no bearing on the amount of light that's refracted upon striking its surface.

[edit] D'oh. You need to take the viewer's position into account in the exit Fresnel equation to measure the amount of diffuse light that is emitted in the direction of the viewer, because that's what I'm measuring. That's pretty damn obvious once I say it out loud... [/edit]


Edited by Hodgman, 01 March 2013 - 06:52 AM.


#42 InvalidPointer   Members   -  Reputation: 1422

Posted 27 February 2013 - 09:56 PM

Casting my vote for Kelemen Szirmay-Kalos with normalized Blinn-Phong microfacets. Numerically it's pretty well-behaved, importance samples pretty well and supports some very nifty analytical antialiasing in the form of Toksvig-filtered normal maps. It's also extremely cheap and includes a view-dependent diffuse BRDF for extra credit. I'd like to putz around with the latter, actually; Mark Olano had some interesting ideas on antialiasing diffuse shading and I'd like to try and apply them to a non-Lambertian BRDF if possible. If not, texture space shading is looking increasingly good, especially as triangles get smaller and smaller.
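
For anyone who hasn't seen the combination, here's a rough GLSL-style sketch of it (my own naming and structure, not InvalidPointer's actual code):

    const float PI = 3.14159265;

    // Kelemen/Szirmay-Kalos specular with a normalized Blinn-Phong NDF. All vectors unit length.
    vec3 kskSpecularBlinnPhong(vec3 N, vec3 L, vec3 V, float specPower, vec3 F0)
    {
        vec3 h = L + V;                                                    // unnormalized half-vector
        vec3 H = normalize(h);
        float NdotH = max(dot(N, H), 0.0);
        float D = (specPower + 2.0) / (2.0 * PI) * pow(NdotH, specPower);  // normalized Blinn-Phong NDF
        vec3  F = F0 + (1.0 - F0) * pow(1.0 - max(dot(V, H), 0.0), 5.0);   // Schlick's Fresnel approximation
        // Kelemen/Szirmay-Kalos: 1/dot(h,h) stands in for G / (4 * NdotL * NdotV).
        return D * F / dot(h, h);  // caller multiplies by saturate(dot(N, L)) and the light colour
    }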


clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.

#43 Frenetic Pony   Members   -  Reputation: 1271

Posted 28 February 2013 - 12:12 AM

Casting my vote for Kelemen Szirmay-Kalos with normalized Blinn-Phong microfacets. Numerically it's pretty well-behaved, importance samples pretty well and supports some very nifty analytical antialiasing in the form of Toksvig-filtered normal maps. It's also extremely cheap and includes a view-dependent diffuse BRDF for extra credit. I'd like to putz around with the latter, actually; Mark Olano had some interesting ideas on antialiasing diffuse shading and I'd like to try and apply them to a non-Lambertian BRDF if possible. If not, texture space shading is looking increasingly good, especially as triangles get smaller and smaller.

Texture space shading could work all right if you can get the megatexture tile size down enough. What did Carmack use as a tile? I can't remember, but it would be far too big for these high-instruction-count BRDFs. If you could get it down enough, to something like a 16x16 tile, maaaybe.

 

I still just don't see it as likely though, not with something like Toksvig or LEAN mapping as an alternative, at least not given the new consoles' admittedly limited compute power versus Moore's-law expectations. There are, to me, more valuable things to spend those resources on.

 

Also, this is a great thread; I'm learning a lot. Off topic, but here's a thank-you for having it.



#44 InvalidPointer   Members   -  Reputation: 1422

Posted 28 February 2013 - 09:52 AM

Casting my vote for Kelemen Szirmay-Kalos with normalized Blinn-Phong microfacets. Numerically it's pretty well-behaved, importance samples pretty well and supports some very nifty analytical antialiasing in the form of Toksvig-filtered normal maps. It's also extremely cheap and includes a view-dependent diffuse BRDF for extra credit. I'd like to putz around with the latter, actually; Mark Olano had some interesting ideas on antialiasing diffuse shading and I'd like to try and apply them to a non-Lambertian BRDF if possible. If not, texture space shading is looking increasingly good, especially as triangles get smaller and smaller.

Texture space shading could work all right if you can get the megatexture tile size down enough. What did Carmack use as a tile? I can't remember, but it would be far too big for these high-instruction-count BRDFs. If you could get it down enough, to something like a 16x16 tile, maaaybe.

 

I still just don't see it as likely though, not with something like Toksvig or LEAN mapping as an alternative, at least not given the new consoles' admittedly limited compute power versus Moore's-law expectations. There are, to me, more valuable things to spend those resources on.

 

Also, this is a great thread; I'm learning a lot. Off topic, but here's a thank-you for having it.

 

Tile size merely controls granularity; virtual texturing is entirely agnostic to resolution. Theoretically you just want texel:pixel density to sit near 1, and you can subdivide all you want until you hit that magic number. In practice you'll actually want more than that so you can take advantage of the fancypants texture filtering algorithms your GPU provides (or just EWA That S**t™ and call it a night), but details ;)
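
To make that density target concrete, here's a rough GLSL-style sketch of how a feedback pass might pick the mip/tile level where texel:pixel sits near 1 ('virtualUV' in texels of the full virtual texture is an assumption on my part, not anybody's shipping code):

    vec2 dx = dFdx(virtualUV);
    vec2 dy = dFdy(virtualUV);
    // 0.5 * log2(squared texel footprint of one pixel) = mip level where texel:pixel density is ~1.
    float mip = max(0.0, 0.5 * log2(max(dot(dx, dx), dot(dy, dy))));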

 

Timothy Lottes also pointed this out, but concepts like LEAN/Toksvig completely stop being useful when triangles start getting smaller than a pixel onscreen and everything turns into a shimmery mess. In this case, attacking the problem from the perspective of texture-based methods won't do anything; you're actually getting killed by edge/triangle coverage aliasing. Movies currently solve this by taking something like 16-64 samples per pixel, and we'd essentially need to use MSAA just to get something that doesn't look like crap.

 

EDIT: Thanks for the GGX tip, will definitely look into this some more.


Edited by InvalidPointer, 28 February 2013 - 09:56 AM.

clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.

#45 Chris_F   Members   -  Reputation: 2237

Posted 28 February 2013 - 05:08 PM

What about selective supersampling to combat specular aliasing?



#46 Frenetic Pony   Members   -  Reputation: 1271

Posted 28 February 2013 - 06:57 PM

 

Casting my vote for Kelemen Szirmay-Kalos with normalized Blinn-Phong microfacets. Numerically it's pretty well-behaved, importance samples pretty well and supports some very nifty analytical antialiasing in the form of Toksvig-filtered normal maps. It's also extremely cheap and includes a view-dependent diffuse BRDF for extra credit. I'd like to putz around with the latter, actually; Mark Olano had some interesting ideas on antialiasing diffuse shading and I'd like to try and apply them to a non-Lambertian BRDF if possible. If not, texture space shading is looking increasingly good, especially as triangles get smaller and smaller.

Texture space shading could work all right if you can get the megatexture tile size down enough. What did Carmack use as a tile? I can't remember, but it would be far too big for these high-instruction-count BRDFs. If you could get it down enough, to something like a 16x16 tile, maaaybe.

 

I still just don't see it as likely though, not with something like Toksvig or LEAN mapping as an alternative, at least not given the new consoles' admittedly limited compute power versus Moore's-law expectations. There are, to me, more valuable things to spend those resources on.

 

Also, this is a great thread; I'm learning a lot. Off topic, but here's a thank-you for having it.

 

Tile size merely controls granularity; virtual texturing is entirely agnostic to resolution. Theoretically you just want texel:pixel density to sit near 1, and you can subdivide all you want until you hit that magic number. In practice you'll actually want more than that so you can take advantage of the fancypants texture filtering algorithms your GPU provides (or just EWA That S**t™ and call it a night), but details ;)

 

Timothy Lottes also pointed this out, but concepts like LEAN/Toksvig completely stop being useful when triangles start getting smaller than a pixel onscreen and everything turns into a shimmery mess. In this case, attacking the problem from the perspective of texture-based methods won't do anything; you're actually getting killed by edge/triangle coverage aliasing. Movies currently solve this by taking something like 16-64 samples per pixel, and we'd essentially need to use MSAA just to get something that doesn't look like crap.

 

EDIT: Thanks for the GGX tip, will definitely look into this some more.

Sure, virtual texture tile size doesn't matter in theory; in practice you need to strike a good balance between searching for new texture tiles all the time, your buffer size, etc.

 

Plus, the point was texture space shading, which essentially means shading an entire tile of a virtual texture so you can filter it and not get any aliasing, which is essentially just supersampling. My point was that this sounds far too expensive for most games in the upcoming (does it have a name yet?) generation. Heck, going to 1080p with the same shaders as a "current" gen game already doubles your shading costs; going to, say, 16 times that in texture space to get texture space shading is just too costly.

 

Thanks for pointing out that Toksvig etc. aren't going to work on sub-pixel triangles; I hadn't really considered that, and I suppose there are going to be cases of that problem now, especially with faces and other ultra-high-detail things. Maybe, as Chris F suggested, one could selectively supersample screenspace regions with sub-pixel triangles? It still sounds expensive, especially since this thread is a discussion of BRDFs with much higher costs than your typical Blinn/Phong. But if you're getting rid of aliasing anyway, having it pop up on what is presumably going to be the focus of attention isn't going to do you any good. Maybe you could just go back to vertex shading, since they're smaller than a pixel anyway? :)


Edited by Frenetic Pony, 28 February 2013 - 06:58 PM.


#47 Chris_F   Members   -  Reputation: 2237

Posted 28 February 2013 - 08:06 PM

But if you're getting rid of aliasing anyway, having it pop up on what is presumably going to be the focus of attention isn't going to do you any good. Maybe you could just go back to vertex shading, since they're smaller than a pixel anyway?

 

Reyes uses vertex shading combined with micropolygons, but that would be suicide on today's GPUs. If your triangles are covering less than 16 pixels, you aren't fully utilizing the rasterizer and you are overshading.



#48 MJP   Moderators   -  Reputation: 10928

Posted 28 February 2013 - 08:55 PM

What about selective supersampling to combat specular aliasing?


Supersampling helps reduce aliasing by increasing your sampling rate, but the problem is that the required sampling rate per pixel will shoot up to unrealistic levels as a surface gets further from the camera and/or becomes oblique to the camera. So you won't really solve the aliasing; you'll just make it a little less objectionable.



#49 Hodgman   Moderators   -  Reputation: 29497

Posted 28 February 2013 - 10:56 PM

Thanks to this thread, I've found something approaching my desired BRDF :D

The code is similar to the ones that Chris_F and I posted earlier in the thread:

http://pastebin.com/m7NLvtWk [edit] updated [/edit]
 
Following Disney's example, I've got a minimal number of parameters with sensible ranges (a rough sketch of how they might be packed follows the list):
Color -- Used as diffuse color for non-metals, or specular color (F0) for metals.
Specular [0-1] -- The F0 param. From 0.0 to 0.5 are non-metals, with 0.5 being around diamond level shininess. 0.5 to 1.0 are impure metal to pure metal.
Roughness X/Y [0-1] -- 0 is perfectly flat, 0.5 is a Lambertian level of roughness, 0.5 to 1.0 are super rough surfaces that appear quite flat and get some diffuse rim highlights.
FacetAngle [0-2Pi] -- rotates the tangent space / roughness X/Y axis. Even though it's an angle parameter, this can be baked into two [-1 to 1] sin(a)/cos(a) parameters.
Retroreflective [0-1] -- bends the specular lobe back towards the light source.
Isotropic (bool) -- if true, sets roughness Y to equal roughness X as an optimization.
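
For illustration, here's one way those parameters might be packed for the shader; the names and layout are a guess, not what's in the pastebin:

    struct SurfaceParams
    {
        vec3  color;           // diffuse colour for non-metals, specular colour (F0) for metals
        float specular;        // [0,1]: 0.0-0.5 non-metals (~0.5 is diamond), 0.5-1.0 impure to pure metal
        vec2  roughness;       // [0,1] per tangent-space axis (X/Y)
        vec2  facetRotation;   // sin(FacetAngle), cos(FacetAngle), each in [-1,1]
        float retroreflective; // [0,1]: bends the specular lobe back towards the light source
        bool  isotropic;       // if true, roughness.y is forced equal to roughness.x as an optimization
    };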

These are all lit by a spotlight behind the camera:
[image: 33WaAnM.png]

The diffuse model is based loosely on a very cheap approximation of Oren-Nayar, mixed with the energy conservation for flat surfaces shown below.

The Retroreflective param works with anisotropy, but it does change very slightly with the isotropic bool on/off, and it also very slightly violates Helmholtz reciprocity... I've obviously made some small error with it. [edit] Fixed the bug: everything obeys reciprocity now, and the output matches the optimized isotropic code path when both roughnesses are equal. [/edit]

It's pretty expensive though, so I might go with something simpler like what Chris posted earlier (lerping from one distribution to another), unless I find a really good use-case for anisotropic retroreflectors.


I've been having an issue with different Cook-Torrance geometry terms though - most of these BRDFs are modulated by NdotV in some way, with the assumption that this can't be negative, else the fragment wouldn't be visible. However, in actual game scenes, this assumption doesn't hold! Simply picture a cube that's had normals generated without hard edges / with every face in the same smoothing group (or alternatively, picture a sphere that's been LOD'ed super-aggressively into a cube - same thing). In this case there are a huge number of fragments where NdotV will be negative, but simply cutting off the lighting for these fragments looks really unnatural.
To get around these unnatural cut-offs in my game scenes, I've simply scaled/biased NdotV (and NdotL to maintain reciprocity) into the 0-1 range right before doing the specular calculations, which produces a "wrapped geometry term", instead of one that becomes zero at the horizon...
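For reference, a minimal sketch of that scale/bias (variable names are mine, not from the pastebin):

    // Wrap N.V (and N.L, to keep reciprocity) from [-1,1] into [0,1] before the
    // specular geometry/visibility term, instead of letting it hit zero at the horizon.
    float NdotV_g = dot(N, V) * 0.5 + 0.5;
    float NdotL_g = dot(N, L) * 0.5 + 0.5;
    // ...use NdotV_g / NdotL_g inside the geometry term only; the light's final
    // contribution is still multiplied by the real saturate(dot(N, L)).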
Has anyone else dealt with this issue?
 

If your triangles are covering less than 16 pixels, you aren't fully utilizing the rasterizer and you are overshading.

Yeah, the way current GPUs work, sub-pixel-sized triangles really should be avoided. On my last game, implementing mesh LODs (which reduced the vertex/triangle count with distance) gave us a huge performance boost in the pixel shader, due to larger triangles being rasterized/shaded more efficiently.
This performance issue can be somewhat mitigated with deferred rendering, as most of your shading is done in screen-space, not directly after rasterization, but you've still got the shimmering quality issue anyway.
 
I'm guessing we'll need some new hardware support, or have to wait for everyone to start re-implementing the rasterizer in compute shaders, before we see another giant leap in anti-aliasing, closer to something like what REYES does.
 
 

Any physics textbook will tell you that you can use Fresnel's law for this, and it won't include the viewer's location at all!

I guess that's the problem. The thing is that it should include the view direction as well, since Fresnel's law also applies when the light is scattering out of the surface towards the viewer, not just when light is scattering into the surface.
 
Just take a look at section 5.3 in http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf
Their diffuse model applies their modified Fresnel term twice: once for the view direction and once for the light direction.

Thanks, this was the key to my failure to understand reciprocity...
Once the light has refracted into the perfectly smooth Lambertian diffuser (according to NdotL), it's evenly spread over the hemisphere, but the amount of that re-emitted energy that actually escapes towards the viewer does depend on NdotV.
Adding both of these factors does allow you to apply the Fresnel term to a perfectly smooth Lambertian surface and maintain reciprocity...
For anyone interested, a perfectly flat Lambertian surface is quite a bit darker at glancing angles than just the normal NdotL falloff that we see.

//NdotL not included:
    float lambert = 1.0 / PI;
    float refractedIn  = 1.0 - pow(1.0 - dot(N, L), 5.0); // Schlick's approximation of Fresnel, entering the surface
    float refractedOut = 1.0 - pow(1.0 - dot(N, V), 5.0); // ...and again on the way out towards the viewer
    return vec3(lambert * refractedIn * refractedOut);

However, this model assumes that any of the about-to-be-re-emitted energy that reflects off the surface/air barrier is absorbed into the surface and lost, meaning the properties of the surface have changed from my original thought experiment. Though at this point I'm happy enough to imagine that there probably is a formula to calculate how much of this internally-reflected energy there is, and what its extra contribution to the BRDF should be, in such a way that reciprocity is maintained.


Edited by Hodgman, 01 March 2013 - 07:24 AM.


#50 InvalidPointer   Members   -  Reputation: 1422

Posted 01 March 2013 - 10:35 PM

I've been having an issue with different Cook-Torrance geometry terms though - most of these BRDFs are modulated by NdotV in some way, with the assumption that this can't be negative, else the fragment wouldn't be visible. However, in actual game scenes, this assumption doesn't hold! Simply picture a cube that's had normals generated without hard edges / with every face in the same smoothing group (or alternatively, picture a sphere that's been LOD'ed super-aggressively into a cube - same thing). In this case there are a huge number of fragments where NdotV will be negative, but simply cutting off the lighting for these fragments looks really unnatural.
To get around these unnatural cut-offs in my game scenes, I've simply scaled/biased NdotV (and NdotL to maintain reciprocity) into the 0-1 range right before doing the specular calculations, which produces a "wrapped geometry term", instead of one that becomes zero at the horizon...
Has anyone else dealt with this issue?

Kelemen Szirmay-Kalos! Kelemen Szirmay-Kalos! Kelemen Szirmay-Kalos!

 

Okay, okay, I'll try and add some useful content later on. But seriously, it's designed to be a Cook-Torrance geometry term that doesn't suck. It succeeds.


Edited by InvalidPointer, 01 March 2013 - 10:36 PM.

clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.

#51 Chris_F   Members   -  Reputation: 2237

Posted 02 March 2013 - 02:28 AM

I've been having an issue with different Cook-Torrance geometry terms though - most of these BRDFs are modulated by NdotV in some way, with the assumption that this can't be negative, else the fragment wouldn't be visible. However, in actual game scenes, this assumption doesn't hold! Simply picture a cube that's had normals generated without hard edges / with every face in the same smoothing group (or alternatively, picture a sphere that's been LOD'ed super-aggressively into a cube - same thing). In this case there are a huge number of fragments where NdotV will be negative, but simply cutting off the lighting for these fragments looks really unnatural.
To get around these unnatural cut-offs in my game scenes, I've simply scaled/biased NdotV (and NdotL to maintain reciprocity) into the 0-1 range right before doing the specular calculations, which produces a "wrapped geometry term", instead of one that becomes zero at the horizon...
Has anyone else dealt with this issue?

Kelemen Szirmay-Kalos! Kelemen Szirmay-Kalos! Kelemen Szirmay-Kalos!

 

Okay, okay, I'll try and add some useful content later on. But seriously, it's designed to be a Cook-Torrance geometry term that doesn't suck. It succeeds.

 

The Kelemen Szirmay-Kalos visibility approximation is indeed very handy. My minimalistic Cook-Torrance uses it because it's dirt cheap and gives good results, but it suffers from the same shortcoming as the original Cook-Torrance geometry term, namely that it does not take the roughness of the surface into account at all.
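
For comparison, here's a sketch of the two kinds of visibility term side by side; the roughness-to-k remapping in the second one is just one common choice picked for illustration, others exist:

    // Kelemen/Szirmay-Kalos visibility: dirt cheap, but roughness never appears.
    float visKSK(vec3 L, vec3 V)
    {
        vec3 h = L + V;                      // unnormalized half-vector
        return 1.0 / dot(h, h);              // stands in for G / (4 * NdotL * NdotV)
    }

    // A roughness-aware alternative in the same slot: Schlick-Smith style visibility.
    float visSchlickSmith(float NdotL, float NdotV, float alpha) // alpha = roughness squared
    {
        float k  = alpha * 0.5;              // one common remapping of roughness to k; others exist
        float gL = NdotL * (1.0 - k) + k;
        float gV = NdotV * (1.0 - k) + k;
        return 1.0 / (4.0 * gL * gV);
    }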

 

I'm still not sure about the way retroreflection is being handled. It seems to me that most natural materials display very little in the way of retroreflection, and mostly at grazing angles. This is captured by Oren-Nayar and GGX. Objects with very high levels of retroreflection are synthetic and consist of macroscopic corner reflectors, or similar. It would be nice if these could be modeled more accurately.

 

Path tracing of a flat surface made up of corner reflectors. L = V = ~45°

 

[image: untitled.png]

 

 


Edited by Chris_F, 02 March 2013 - 02:50 AM.


#52 Hodgman   Moderators   -  Reputation: 29497

Posted 02 March 2013 - 02:48 AM

Unless I've misread (edit: yes I have), Kelemen doesn't help with that issue, because the geometry factor still becomes zero when N.V becomes zero.

 

To illustrate, here's my test cube with smoothed normals, something pretty common in low-to-medium-poly game scenes:

[image: EdtyN6a.png]

 

And here's how it looks with the Cook-Torrance specular using the Smith or Kelemen geometry terms, with and without my dodgy scale/bias hack (the difference in saturation is due to the tone-mapper). This cube is surrounded by directional lights so that there's at least one pointing at each face, yet there are huge black areas, simply because they're apparently not visible (N.V is negative).

[image: 44OqYRf.png]

N.B. the contribution for each light is still multiplied by the real, unbiased NdotL, so that faces that aren't visible to each light aren't lit by them.

 

[edit] Ah yeah, the Kelemen visibility approximation (without my scale/bias hack) looks like:

[image: wcYJlTk.png]

But as Chris pointed out, it doesn't take roughness into account, which is a bit disappointing.


Edited by Hodgman, 02 March 2013 - 03:09 AM.




