lipsryme

IBL Problem with consistency using GGX / Anisotropy


Recommended Posts

Hey guys, I'm currently in the process of building a new material/shading pipeline and have come across a specific problem.

I've switched from a Blinn-Phong to a GGX-based shading model that also supports anisotropy. Until now I've been using the modified AMD CubeMapGen from Sébastien Lagarde's blog (http://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/) to prefilter radiance environment maps, storing each filter size in the mipmap levels. The problem is that when switching to GGX, or even an anisotropic highlight, the specular just doesn't fit anymore (at all).

 

So the question is: how do you get the environment map to be consistent with your shading model?

@MJP I know you guys also use anisotropic and cloth shading models; how do you handle indirect reflections / environment maps?

Edited by lipsryme

Do you know of a book or paper that explains how this works?
I know about radiometry, but the only thing I know about this process of filtering an environment map is the general idea of applying a BRDF convolution to the image.

Also, what about more complex BRDFs for clothing (Ashikhmin) or anisotropic GGX?
Do you just use isotropic reflections as an approximation?


The best you can do with a pre-convolved cubemap is to integrate the environment with an isotropic distribution to get the reflection when V == N, which is the head-on angle. It will give incorrect results as you get to grazing angles, so you won't get that long, vertical "streaky" look that's characteristic of microfacet specular models. If you apply Fresnel to the cubemap result you can also get reflections with rather high intensity, so you have to use approximations like the ones proposed in those course notes to keep the Fresnel from blowing out. It's possible to approximate the streaky reflections with multiple samples from the cubemap if you're willing to take the hit, and you can also take multiple samples along the tangent direction to approximate anisotropic reflections.

 

For our cloth BRDF we have a duplicate set of cubemaps that are convolved with the inverted Gaussian distribution used in that BRDF. It's just like the GGX cubemaps: it gets you the correct result when V == N, but breaks down at grazing angles.


In layman's terms, you need to treat the cubemap as a lookup table and precompute some data into it from a source cubemap. Let's start with simple Lambert diffuse to see how this works.

 

For Lambert diffuse we want to precompute lighting per normal direction and store it in a cubemap. To do that we need to solve a simple integral: 

E(n) = ∫_Ω SourceCubemap(l) (n · l) dω   (integral over the hemisphere Ω around n)

Which in our case means: for every texel of the destination cubemap (each texel corresponds to a normal direction n), calculate a value E by iterating over all source cubemap texels and summing their contributions, where each contribution = SourceCubemap(l) * (n dot l) * SourceCubemapTexelSolidAngle. This operation is called the cosine filter in AMD CubeMapGen.
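To make that sum concrete, here is a minimal CPU-side sketch in Python (not shader code): it evaluates E(n) for n = +Z with a spherical-coordinate Riemann sum standing in for the per-texel solid angles, and `env` is a hypothetical radiance lookup.

```python
import math

def irradiance(env, n_steps=64):
    # E(n) = sum over the hemisphere of env(l) * (n dot l) * solid_angle,
    # here for n = +Z, using a midpoint rule in spherical coordinates.
    d_theta = (math.pi / 2) / n_steps
    d_phi = (2 * math.pi) / n_steps
    total = 0.0
    for i in range(n_steps):
        theta = (i + 0.5) * d_theta
        for j in range(n_steps):
            phi = (j + 0.5) * d_phi
            l = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            d_omega = math.sin(theta) * d_theta * d_phi  # solid angle of this cell
            total += env(l) * l[2] * d_omega             # n dot l = cos(theta)
    return total
```

Sanity check: a constant environment of radiance 1 integrates to pi, which is why a Lambert BRDF of albedo/pi returns the albedo under uniform lighting.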

 

For a simple Phong model (no Fresnel, etc.) we can precompute the reflection in a similar way and store it in a destination cubemap. Unfortunately, even after just adding Fresnel, or when using Blinn-Phong, there are too many input variables and the exact results don't fit into a single cubemap. At that point you have to start making approximations, use more storage, or take more than one sample.
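As a sketch of the Phong case (Python, not the actual CubeMapGen code): weight each source direction by pow(R · L, power) and normalize by the total weight, which yields a filtered value indexed by the reflection vector alone. `env` and `directions` are hypothetical stand-ins for the source cubemap and its texel directions.

```python
def prefilter_phong(env, r, power, directions):
    # env: radiance lookup; r: unit reflection vector; directions: unit sample
    # directions standing in for the source cubemap texels.
    num, den = 0.0, 0.0
    for l in directions:
        d = max(sum(a * b for a, b in zip(r, l)), 0.0)  # R dot L, clamped
        w = d ** power                                   # Phong lobe weight
        num += env(l) * w
        den += w
    return num / den if den > 0 else 0.0
```

Note the normalization by the summed weights: it keeps a constant environment constant regardless of the lobe width, which is the property a prefiltered mip chain needs.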

Edited by Krzysztof Narkowicz


Okay, I've been looking over the source code of CubeMapGen, and the only line that resembles the distribution function resides in a function called "ProcessFilterExtents" and looks like the following:

// Here we decide if we use a Phong/Blinn or a Phong/Blinn BRDF.
// The Phong/Blinn BRDF is just the Phong/Blinn model multiplied by the cosine
// from Lambert's law, so adding one to the specular power does the trick.
weight *= pow(tapDotProd, (a_SpecularPower + (float32)IsPhongBRDF));
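The "adding one" trick in that comment is just the identity pow(x, p) * x == pow(x, p + 1): the Lambert cosine gets folded into the exponent. A quick numeric check (Python, for illustration only):

```python
def phong_brdf_weight(cos_angle, specular_power):
    # Phong lobe multiplied by the Lambert cosine...
    return (cos_angle ** specular_power) * cos_angle

def phong_brdf_weight_folded(cos_angle, specular_power):
    # ...equals the same lobe with the exponent bumped by one.
    return cos_angle ** (specular_power + 1)
```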

So if I understand correctly, the loop that encloses this line "integrates" over the hemisphere. Is tapDotProd something like R · V? How do they handle the Blinn case with N · H? I'm not seeing it anywhere. tapDotProd seems to be calculated as the dot product between the current cubemap texel's direction and the center tap direction. Is the specular power the only thing defining the distribution function here? Can you explain this approximation of the direction vector to me?

 

edit: I think I get it. V is a known variable to us here, since it is just every direction we loop over in the cubemap, and R would be the lookup vector into this cubemap image, right?

edit2: After looking at Brian Karis's code samples, it occurs to me that he is doing this on the GPU in a pixel/compute shader, and that I might just do that too :)

To prefilter the envmap he's got the following code:

float3 PrefilterEnvMap(float Roughness, float3 R)
{
   float3 N = R;
   float3 V = R;

   float3 PrefilteredColor = 0;
   float TotalWeight = 0; // missing from the snippet as posted; declared here
   const uint NumSamples = 1024;
   for (uint i = 0; i < NumSamples; i++)
   {
      float2 Xi = Hammersley(i, NumSamples);
      float3 H = ImportanceSampleGGX(Xi, Roughness, N);
      float3 L = 2 * dot(V, H) * H - V;
      float NoL = saturate(dot(N, L));

      if (NoL > 0)
      {
         PrefilteredColor += EnvMap.SampleLevel(EnvMapSampler, L, 0).rgb * NoL;
         TotalWeight += NoL;
      }
   }

   return PrefilteredColor / TotalWeight;
}
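For reference, ImportanceSampleGGX maps a 2D sample point to a half-vector concentrated around the GGX lobe. A Python sketch of the standard mapping in tangent space (N = +Z), using Karis's a = Roughness² remapping; this is an assumption about the HLSL helper, since its body isn't shown in the thread:

```python
import math

def importance_sample_ggx(xi, roughness):
    # Map xi in [0,1)^2 to a half-vector distributed according to the GGX NDF,
    # in tangent space where the normal is +Z.
    a = roughness * roughness
    phi = 2.0 * math.pi * xi[0]
    cos_theta = math.sqrt((1.0 - xi[1]) / (1.0 + (a * a - 1.0) * xi[1]))
    sin_theta = math.sqrt(max(1.0 - cos_theta * cos_theta, 0.0))
    return (sin_theta * math.cos(phi),
            sin_theta * math.sin(phi),
            cos_theta)
```

With xi.y = 0 the sample lands on the normal itself, and larger roughness spreads samples further out in theta; the shader version then rotates this tangent-space vector into the frame around N.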


I've got a few questions about this...

 

1. What does the Hammersley function do?

2. He's sampling the environment map here... is that a TextureCube? Or is this function run for each cube face as a Texture2D?

3. The input to this is the reflection vector R. How would it be calculated in this context? I imagine similarly to the direction vector in the AMD CubeMapGen?
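Regarding question 1: Hammersley(i, N) is the i-th point of an N-point low-discrepancy set on the unit square. The first coordinate is simply i/N; the second is the Van der Corput radical inverse, i.e. the bits of i mirrored around the binary point. A Python sketch:

```python
def radical_inverse_vdc(i, bits=32):
    # Van der Corput sequence: reverse the bits of i as a binary fraction.
    result = 0
    for _ in range(bits):
        result = (result << 1) | (i & 1)
        i >>= 1
    return result / float(1 << bits)

def hammersley(i, n):
    # i-th point of an n-point low-discrepancy set in [0, 1)^2.
    return (i / n, radical_inverse_vdc(i))
```

For example, the four points for n = 4 are (0, 0), (0.25, 0.5), (0.5, 0.25), (0.75, 0.75): evenly stratified in both dimensions, which is why it converges faster than pseudo-random sampling for this kind of integral.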

Edited by lipsryme


First of all, that is one awesome code sample you've got there (the links are nice too!), I can't thank you enough! This clears up a lot of things for me :)

Still got some questions on that:

 

1. Why encode to LogLuv and not just write to an FP16 target?

2. I haven't used the geometry shader duplication method for rendering a cubemap yet. Could you explain the VS and GS a little? Or do you have a link to an article that explains it?

3. Could I also generate a diffuse irradiance environment map using the above sample by removing the importance sampling bit and just accumulating nDotL?

 

Update: I just finished implementing a test that applies it to a single mip level based on your example, but I'm a little confused about how the roughness is calculated.

For example: is your first LOD level 0 or 1? If you only had an LOD count of 1, wouldn't that give you a divide by zero? How many LODs do you use? I assume it ranges from 0 (perfect mirror) to 1 (highest roughness)? Should the first mip level be the original cubemap (i.e. no convolution = perfect mirror)?

Shouldn't this line be: float Roughness = (float)g_CubeLod / (float)(g_CubeLodCount); ?
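For what it's worth, one common convention (an assumption here, not necessarily what the sample uses) maps mip 0 to roughness 0 (unfiltered mirror) and the last mip to roughness 1, which means dividing by the LOD count minus one:

```python
def mip_to_roughness(mip, mip_count):
    # Assumed convention: mip 0 = unfiltered mirror, last mip = roughness 1.
    # Needs at least two mips, otherwise the mapping degenerates.
    assert mip_count >= 2
    return mip / (mip_count - 1)
```

Under that convention a 5-mip chain covers roughness 0, 0.25, 0.5, 0.75, 1, and a single-mip chain is indeed invalid, which matches the divide-by-zero concern above.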

Edited by lipsryme


*Bump*

 

I feel like I'm getting a little closer, but there's still something wrong, so I made a quick video showing what the filtered mip levels look like:

 

Update: Fixed the lookup vector. CubeSize has to be the size of the original cubemap (not each mip level's size, which is what I had before). I still have the problem with the randomly distributed dots all over it.
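For reference, here is one common way to turn a cubemap texel into a lookup direction; the half-texel offset samples texel centers, and the per-face axis table is an assumption, since face orientation conventions differ between APIs.

```python
import math

def texel_to_direction(face, x, y, size):
    # Map texel (x, y) on a cube face of the given size to a unit direction.
    # u, v lie in [-1, 1]; the +0.5 offset hits texel centers.
    u = 2.0 * (x + 0.5) / size - 1.0
    v = 2.0 * (y + 0.5) / size - 1.0
    faces = {
        0: ( 1.0,   -v,   -u),   # +X
        1: (-1.0,   -v,    u),   # -X
        2: (   u,  1.0,    v),   # +Y
        3: (   u, -1.0,   -v),   # -Y
        4: (   u,   -v,  1.0),   # +Z
        5: (  -u,   -v, -1.0),   # -Z
    }
    d = faces[face]
    length = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    return (d[0] / length, d[1] / length, d[2] / length)
```

The key point is that `size` must be the resolution of the mip level whose texels you are iterating, so the center of a face always maps to that face's axis.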

 

Does anyone have an idea?

Edited by lipsryme
