Thanks to this thread, I've found something approaching my desired BRDF
The code is similar to the ones that Chris_F and I posted earlier in the thread:
http://pastebin.com/m7NLvtWk [edit: updated]
Following Disney's example, I've got a minimal number of parameters with sensible ranges:
Color -- Used as diffuse color for non-metals, or specular color (F0) for metals.
Specular [0-1] -- The F0 param. 0.0 to 0.5 covers non-metals, with 0.5 being around diamond-level shininess; 0.5 to 1.0 ranges from impure to pure metal.
Roughness X/Y [0-1] -- 0 is perfectly flat, 0.5 is a Lambertian level of roughness, and 0.5 to 1.0 gives super-rough surfaces that appear quite flat and pick up some diffuse rim highlights.
FacetAngle [0-2Pi] -- rotates the tangent space / roughness X/Y axis. Even though it's an angle parameter, this can be baked into two [-1 to 1] sin(a)/cos(a) parameters.
Retroreflective [0-1] -- bends the specular lobe back towards the light source.
Isotropic (bool) -- if true, sets roughness Y to equal roughness X as an optimization.
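As a sketch, that parameter set could be packed like this (all names here are hypothetical, not from my actual code; the sin/cos baking and the isotropic shortcut follow the notes above):

```python
import math
from dataclasses import dataclass

@dataclass
class BRDFParams:
    color: tuple            # diffuse color (non-metals) or F0 specular color (metals)
    specular: float         # [0,1]: 0.0-0.5 non-metal F0, 0.5-1.0 impure-to-pure metal
    roughness_x: float      # [0,1]
    roughness_y: float      # [0,1]
    facet_sin: float        # sin(facet_angle), baked so the shader avoids trig
    facet_cos: float        # cos(facet_angle)
    retroreflective: float  # [0,1]

def make_params(color, specular, rough_x, rough_y, facet_angle, retro, isotropic=False):
    if isotropic:
        rough_y = rough_x  # optimization: collapse to the isotropic code-path
    return BRDFParams(color, specular, rough_x, rough_y,
                      math.sin(facet_angle), math.cos(facet_angle), retro)
```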
These are all lit by a spotlight behind the camera:
The diffuse model is loosely based on a very cheap approximation of Oren-Nayar, mixed with the energy conservation for flat surfaces described below.
The Retroreflective param works with anisotropy,
but it did change very slightly with the isotropic bool on/off, and also changed very slightly under Helmholtz reciprocity... I'd obviously made some small error with it. [edit: fixed the bug; everything obeys reciprocity now, and the output matches the optimized isotropic code-path when both roughnesses are equal.]
It's pretty expensive though, so I might go with something simpler like what Chris posted earlier (lerping from one distribution to another), unless I find a really good use-case for anisotropic retroreflectors.
I've been having an issue with the various Cook-Torrance geometry terms though - most of these BRDFs are modulated by NdotV in some way, on the assumption that it can't be negative, else the fragment wouldn't be visible. However, in actual game scenes, this assumption doesn't hold! Simply picture a cube whose normals were generated without hard edges / with every face in the same smoothing group (or alternatively, picture a sphere that's been LOD'ed super-aggressively down to a cube - same thing). In this case there are a huge number of fragments where NdotV is negative, but simply cutting off the lighting for these fragments looks really unnatural.
To get around these unnatural cut-offs in my game scenes, I've simply scaled/biased NdotV (and NdotL to maintain reciprocity) into the 0-1 range right before doing the specular calculations, which produces a "wrapped geometry term", instead of one that becomes zero at the horizon...
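A minimal sketch of that wrapping (assuming the dot products arrive in [-1, 1] from smoothed normals; the function names are mine):

```python
def wrap01(x):
    # scale/bias a dot product from [-1, 1] into [0, 1], so the
    # geometry term fades smoothly instead of cutting off at the horizon
    return x * 0.5 + 0.5

def wrapped_geometry_inputs(n_dot_v, n_dot_l):
    # wrap NdotV and NdotL identically so Helmholtz reciprocity
    # (swapping L and V) still holds for the overall term
    return wrap01(n_dot_v), wrap01(n_dot_l)
```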
Has anyone else dealt with this issue?
If your triangles are covering less than 16 pixels, you aren't fully utilizing the rasterizer and you are overshading.
Yeah, the way current GPU's work, sub-pixel sized triangles really should be avoided. On my last game, implementing mesh LODs (which reduced the vertex/triangle count with distance) gave us a huge performance boost in the pixel shader, due to larger triangles being rasterized/shaded more efficiently.
This performance issue can be somewhat mitigated with deferred rendering, as most of your shading is done in screen-space, not directly after rasterization, but you've still got the shimmering quality issue anyway.
I'm guessing we'll need some new hardware support, or have to wait for everyone to start re-implementing the rasterizer in compute shaders, before we see another giant leap in anti-aliasing, closer to something like what REYES does.
I guess that's the problem. The thing is that it should include the view direction as well, since Fresnel's law also applies when light scatters out of the surface towards the viewer, not just when light scatters into the surface.
Any physics textbook will tell you that you can use Fresnel's law for this, and it won't include the viewer's location at all!
Just take a look at section 5.3 in http://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf
Their diffuse model applies their modified Fresnel term twice: once for the view direction and once for the light direction.
Thanks, this was the key to my failure to understand reciprocity...
Once the light has refracted into the perfectly smooth Lambertian diffuser (according to NdotL), it's evenly spread over the hemisphere, but the amount of that re-emitted energy that actually escapes towards the viewer does depend on NdotV.
Multiplying in both of these factors lets you apply the Fresnel term to a perfectly smooth Lambertian surface and maintain reciprocity...
For anyone interested, a perfectly flat Lambertian surface is quite a bit darker at glancing angles than the plain NdotL falloff would suggest.
//NdotL not included:
float lambert = 1.0/PI;
float refractedIn  = 1.0 - pow(1.0 - clamp(dot(N,L), 0.0, 1.0), 5.0);//Schlick's approximation of Fresnel, with F0 = 0
float refractedOut = 1.0 - pow(1.0 - clamp(dot(N,V), 0.0, 1.0), 5.0);
return vec3( lambert * refractedIn * refractedOut );
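Transcribing that into Python (a sketch assuming F0 = 0 in Schlick's approximation and clamped dot products) makes the two properties above easy to check: swapping the two cosines leaves the result unchanged, and multiplying by NdotL for the full lighting comes out darker than a plain NdotL/pi falloff:

```python
import math

def lambert_fresnel_diffuse(n_dot_l, n_dot_v):
    # 1/pi Lambert term, times the fraction of energy refracted into the
    # surface (at NdotL) and the fraction refracted back out (at NdotV);
    # Schlick's approximation with F0 = 0, cosines clamped to [0, 1]
    lam = 1.0 / math.pi
    refracted_in  = 1.0 - (1.0 - max(min(n_dot_l, 1.0), 0.0)) ** 5
    refracted_out = 1.0 - (1.0 - max(min(n_dot_v, 1.0), 0.0)) ** 5
    return lam * refracted_in * refracted_out
```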
However, this model assumes that any of the about-to-be-re-emitted energy that reflects off the surface/air boundary is absorbed into the surface and lost, meaning the properties of the surface have changed from my original thought experiment. At this point, though, I'm happy enough to imagine that there's probably a formula for how much of this internally-reflected energy there is, and what its extra contribution to the BRDF should be, such that reciprocity is maintained.
Edited by Hodgman, 01 March 2013 - 07:24 AM.