Having trouble with energy conservation with IBL

ZachBethel

Hey,

 

I'm playing around with the IBL technique described by Epic in their SIGGRAPH 2013 course notes: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf

 

I've seen quite a few topics on this paper around Gamedev, but none that address this particular issue.

 

The problem I'm having is that my roughness factor effectively blurs the highlights around but doesn't seem to diminish them. I would expect that integrating over the hemisphere with a wider lobe would reflect less energy toward the eye, but that doesn't appear to be the case, as you can see in the attached image.

 

The code is pretty much straight from the paper, although I pulled the GGX and G_Smith terms from elsewhere on these forums:

// Smith G1 term for GGX: 2*NoV / (NoV + sqrt(a^2 + (1 - a^2)*NoV^2))
float GGX(float nDotV, float a) {
    float aa = a * a;
    float oneMinusAa = 1 - aa;
    float nDotV2 = 2 * nDotV;
    float root = aa + oneMinusAa * nDotV * nDotV;
    return nDotV2 / (nDotV + sqrt(root));
}

// Smith geometry term: product of the shadowing and masking G1 factors
float G_Smith(float a, float nDotV, float nDotL) {
    return GGX(nDotL, a) * GGX(nDotV, a);
}
 

// Importance-sample the GGX distribution: maps Xi in [0,1)^2 to a half
// vector H around N, distributed proportionally to D(h) * (n.h)
vec3 ImportanceSampleGGX( vec2 Xi, float Roughness, vec3 N ) {
  float a = Roughness * Roughness;
  float Phi = 2 * PI * Xi.x;
  float CosTheta = sqrt( (1 - Xi.y) / ( 1 + (a*a - 1) * Xi.y ) );
  float SinTheta = sqrt( 1 - CosTheta * CosTheta );

  // Spherical to Cartesian, in tangent space
  vec3 H;
  H.x = SinTheta * cos( Phi );
  H.y = SinTheta * sin( Phi );
  H.z = CosTheta;

  // Build an orthonormal basis around N
  vec3 UpVector = abs(N.z) < 0.999 ? vec3(0,0,1) : vec3(1,0,0);
  vec3 TangentX = normalize( cross( UpVector, N ) );
  vec3 TangentY = cross( N, TangentX );

  // Tangent to world space
  return TangentX * H.x + TangentY * H.y + N * H.z;
}

vec3 SpecularIBL( vec3 SpecularColor, float Roughness, vec3 N, vec3 V ) {
  vec3 SpecularLighting = vec3(0);
  for( int i = 0; i < u_NumSamples; i++ ) {
    vec2 Xi = vec2(u_Rand[i*2], u_Rand[i*2+1]);
    vec3 H = ImportanceSampleGGX( Xi, Roughness, N );

    // Reflect V about H to get the light direction
    vec3 L = 2 * dot( V, H ) * H - V;
    float NoV = clamp( dot( N, V ), 0, 1 );
    float NoL = clamp( dot( N, L ), 0, 1 );
    float NoH = clamp( dot( N, H ), 0, 1 );
    float VoH = clamp( dot( V, H ), 0, 1 );

    if( NoL > 0 ) {
      vec3 SampleColor = pow(texture( u_skybox, L, 0 ).rgb, vec3(2.2));
      float G = G_Smith( Roughness, NoV, NoL );
      // Schlick Fresnel approximation
      float Fc = pow( 1 - VoH, 5 );
      vec3 F = (1 - Fc) * SpecularColor + Fc;
      // Incident light = SampleColor * NoL
      // Microfacet specular = D*G*F / (4*NoL*NoV)
      // pdf = D * NoH / (4 * VoH)
      SpecularLighting += SampleColor * F * G * VoH / (NoH * NoV);
    }
  }
  return SpecularLighting / float(u_NumSamples);
}

void main() {
  vec3 V = normalize(u_eyePos - v_worldPos);
  vec3 N = normalize(v_normal);
  FragColor = vec4( SpecularIBL(vec3(1.0, 0.6, 0.6), u_Roughness, N, V), 1.0);
}
 

Looking at Disney's BRDF explorer, I see that the GGX BRDF correctly scales back the lobe based on roughness via the D (distribution) term of the microfacet model. But within the comments, I see that the PDF used for the GGX importance sampling factors the D term in, which cancels it out of the BRDF.
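
Spelling out the estimator from the notes makes the cancellation explicit (same notation as the comments in the code):

$$\frac{f(l,v)\,(n \cdot l)}{\mathrm{pdf}(l)} = \frac{D\,G\,F}{4\,(n \cdot l)(n \cdot v)}\,(n \cdot l)\,\frac{4\,(v \cdot h)}{D\,(n \cdot h)} = \frac{G\,F\,(v \cdot h)}{(n \cdot h)(n \cdot v)}$$

which is exactly the SpecularLighting += line above, with D gone.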

 

As a test, I clamped the HDR probe to a max value of 1, which really shows how far the conservation is out of whack. I assume I'm missing something in my BRDF to account for this, but the interplay with the importance-sampling distribution is making it difficult for me to pick apart what's going on.

 

Based on my searches, there are several members here who have successfully implemented this approach. Is there some step I'm blatantly missing? Note that I'm just doing the full importance sampling approach, not the cube map approximation.

 

Thanks!

Zach.

ZachBethel

Hah,

 

I found the problem. Turns out I wasn't clamping the probe correctly, so there were pixels that had brightness values of 1000+. Oops!

 

I am having some trouble using these probes at their native brightness, though. Any time there is a bright spot 1000 times brighter than the rest of the scene, I end up getting a badly blown-out image when the roughness is high. The only way I can combat it is to crank the exposure way down, but this doesn't seem right to me.

 

I get better results by clamping the probe brightness to something like 11. That way I can view the object without cranking the exposure way down. Is that pretty normal for HDR rendering?

Digitalfragment

Isn't that fairly realistic to expect, though? A light 1000 times brighter in a single direction should have a pretty noticeable impact; 1000 times the impact, in fact. When processing the cubemaps, though, are you taking the solid angle of each pixel into consideration? A single pixel's light is scaled by the area on the sphere it is seen from (without this, doubling the resolution of the cube map could effectively make it 4 times brighter, etc.).
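
For reference, the usual way to compute a cube-map texel's solid angle is the AreaElement construction popularized by AMD's CubeMapGen; a minimal sketch (function and parameter names are mine):

float AreaElement(float x, float y) {
  // Integral of the solid angle over the face region [0,x] x [0,y]
  return atan(x * y, sqrt(x * x + y * y + 1.0));
}

// Solid angle subtended by the texel centered at (u, v), with u, v in [-1, 1]
// face coordinates; invRes = 1.0 / faceResolution (half a texel is 1/res,
// since the face spans 2 units)
float TexelSolidAngle(float u, float v, float invRes) {
  float x0 = u - invRes, x1 = u + invRes;
  float y0 = v - invRes, y1 = v + invRes;
  return AreaElement(x0, y0) - AreaElement(x0, y1)
       - AreaElement(x1, y0) + AreaElement(x1, y1);
}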

ZachBethel

Sure, I suppose it does. It just looks odd that my exposure level has to be 0.01 to map the irradiance of the sphere at full roughness back into the 0-1 range. The background gets super dark.

After playing around for a while, I don't *think* there's anything wrong with the calculations. I'm just not used to working in HDR lighting environments.

To be clear, I've been doing progressive rendering with importance sampling, not the cube mip map approach. I shouldn't have to scale by solid angle when sampling from a cube map, should I? It's only necessary when generating one cube map mip level from another.

ZachBethel

So the scene is tonemapped (I'm not sure which operator off the top of my head), but I've tried several with very similar results.

I definitely am dividing by the number of samples.

After fixing my clamping, maxing out at 1 seems to properly diffuse the light. When I let it stay at its native brightness (1800 at the bright spots, < 1 in the indirect areas), that's when I get extremes at high roughness.

I've got the specular albedo set to 1, so it shouldn't be losing or gaining energy.

EDIT: I see that you were asking what the current fragment value is. That's an excellent question! I've tried hooking up OpenGL shader profilers without success. :( That's a good debugging suggestion, thanks.

Hodgman

You can just use a linear tone-mapper, like result = saturate( input * exposure ), and then tweak the exposure value until the bright spots drop below saturated white.

e.g. if you have to reduce the exposure to 0.1 before they stop being over-white, then you know they're at roughly 10 intensity ;)
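
In GLSL that's just (uniform name illustrative):

// Linear tone map: scale by exposure, clamp to [0,1]
vec3 mapped = clamp(hdrColor * u_Exposure, 0.0, 1.0);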

 

Clamping your cube-map samples at a max of 1.0 doesn't seem to make any sense in physical terms... So I don't think that's a real PBR solution.

ZachBethel

> Clamping your cube-map samples at a max of 1.0 doesn't seem to make any sense in physical terms... So I don't think that's a real PBR solution.

Yes, you are very right, it's not a solution; it's just me playing around trying to figure things out. :)

> You can just use a linear tone-mapper, like result = saturate( input * exposure ), and then tweak the exposure value until they drop below being saturated white.

Ah, yes, that is a very easy solution. Forgive my slowness, I've been hanging out with family all day (it's Labor Day in the States).

 

I simplified things to use uniform sampling of the hemisphere with a Lambert BRDF (the equations are from Rory Driscoll's post below):

 

http://www.rorydriscoll.com/2009/01/07/better-sampling/

 
// (Function signature below is reconstructed; the snippet as posted began mid-function.)
vec3 DiffuseIBL( vec3 N ) {
  vec3 OutputColor = vec3(0);
  float OutputWeight = 0;
  for( int i = 0; i < u_NumSamples; i++ ) {
    vec2 Xi = vec2(u_Rand[i*2], u_Rand[i*2+1]);
    // Sample the light direction uniformly over the hemisphere around N
    vec3 L = UniformSample(Xi, N);
    float NoL = clamp( dot( N, L ), 0, 1 );

    if( NoL > 0 )
    {
      OutputColor += pow(texture( u_skybox, L, 0 ).rgb, vec3(2.2)) * NoL;
      // Lambert BRDF (1/PI) over the uniform pdf (1/(2*PI)) is a net weight
      // of 2 per sample, i.e. accumulate 0.5 and divide the sum by it
      OutputWeight += 0.5;
    }
  }
  return OutputColor / OutputWeight;
}
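
UniformSample isn't shown above; a minimal sketch, assuming it maps Xi to a uniformly distributed direction on the hemisphere around N (same tangent-basis trick as ImportanceSampleGGX):

vec3 UniformSample( vec2 Xi, vec3 N ) {
  // Uniform over the hemisphere: pdf = 1 / (2*PI), so cos(theta) is uniform in [0,1]
  float CosTheta = Xi.x;
  float SinTheta = sqrt( 1 - CosTheta * CosTheta );
  float Phi = 2 * PI * Xi.y;

  vec3 H;
  H.x = SinTheta * cos( Phi );
  H.y = SinTheta * sin( Phi );
  H.z = CosTheta;

  // Tangent to world space, as in ImportanceSampleGGX
  vec3 UpVector = abs(N.z) < 0.999 ? vec3(0,0,1) : vec3(1,0,0);
  vec3 TangentX = normalize( cross( UpVector, N ) );
  vec3 TangentY = cross( N, TangentX );
  return TangentX * H.x + TangentY * H.y + N * H.z;
}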

With this BRDF, and c (the albedo) at (1,1,1), I have to crank my exposure down to 0.001 before the highlights stop saturating. Now, given what I'm seeing on blogs like this: http://www.marmoset.co/toolbag/learn/pbr-practice , an albedo of all 1's is probably not super realistic (it implies zero energy loss), but it still feels extreme. See the attached image.

 

That basically means the surface is reflecting at ~1000 in all directions. I suppose this could make some sense given that there are effectively area lights in the probe that shine at 1800. When the exposure is scaled that low, those area lights are still at full saturation. If I think about them as analytical point light sources shining at ~1.8, I suppose I should expect my diffuse surface to reflect about 1.8 / PI back at the viewer. I'm seeing closer to 1, but it's still in the ballpark.
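
Writing that expectation out (treating the hot spot as a single directional source of post-exposure intensity 1.8, hitting the surface head-on, with albedo 1):

$$L_o = \frac{\rho}{\pi}\,L_i\,(n \cdot l) \approx \frac{1 \times 1.8 \times 1}{\pi} \approx 0.57$$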

ZachBethel

Interestingly, when I switch to the Uffizi probe (which has a max luminance of 13 units instead of 1800), I get much better results (see attached).

 

I suspect that the StPeters probe has such an incredibly high range of values that you basically have to tonemap it to be super dark for diffuse materials.

 

 

MJP

Yeah, it definitely looks like you've got a bug somewhere. For comparison, here are some images from my ground-truth renderer, showing roughness values starting at 0.01 and ending with 1.0:

 

[attachment=23446:StPeters_Small.png]

 

[attachment=23447:Uffizi_Small.png]

 

FYI these are taken with an exposure of -2.5, which is a linear exposure of 0.176. It also has filmic tone mapping applied after exposure, followed by gamma correction.
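
(The stops-to-linear conversion is just a power of two; variable names illustrative:)

// -2.5 stops -> 2^(-2.5) = ~0.176 linear scale
float linearExposure = exp2(exposureInStops);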

ZachBethel

I believe I found the problem. I was gamma-correcting the HDR input texture by performing a pow(value, 2.2). That definitely isn't going to do what I want for values > 1.
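
In other words, the sampling should look something like this (a sketch, assuming the probe already stores linear radiance):

// HDR probe texels are linear radiance; sample them directly
vec3 SampleColor = texture( u_skybox, L, 0 ).rgb;
// ... integrate and tone map in linear space ...
// Gamma-correct once at output (or let an sRGB framebuffer do it)
FragColor = vec4( pow( toneMapped, vec3(1.0 / 2.2) ), 1.0 );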

 

Question: are these HDR probes gamma-corrected already? When I leave off the pow(2.2) factor but use an sRGB default framebuffer, the colors of the probe get desaturated (see attached).

ZachBethel

Another problem I'm having is that the G term in the BRDF goes to infinity at the edges of my model (it goes away if I use an implicit G term).

 

[attachment=23458:Black2.png]

 

I suspect this might be because the normals are being smoothed incorrectly across grazing angles. I assume this is a common problem with the microfacet BRDF model. I could use clamping to fix the problem, as sketched below, but how is this typically addressed in production scenarios?
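
By clamping I mean something like this (the epsilon is arbitrary):

// Keep NoV away from zero so the G * VoH / (NoH * NoV) estimator can't blow up
float NoV = max( dot( N, V ), 1e-4 );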
