lipsryme

IBL Problem with consistency using GGX / Anisotropy

22 posts in this topic

Hey guys, I'm currently in the process of building a new material/shading pipeline and have come across a specific problem.

I've switched from Blinn-Phong to a GGX-based shading model that also supports anisotropy. Until now I've been using the modified AMD CubeMapGen from Sébastien Lagarde's blog (http://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/) to prefilter radiance environment maps and store each filter size in the mipmap levels. The problem is that when switching to GGX, or even an anisotropic highlight, the specular just doesn't fit anymore (at all).

 

So the question is: how do you get the environment map to be consistent with your shading model?

@MJP I know you guys also use anisotropic and cloth shading models. How do you handle indirect reflections / environment maps?

Edited by lipsryme
Do you know of a book or paper that teaches how this works?
I know about radiometry, but the only thing I know about this process of filtering an environment map is the general idea of applying a BRDF convolution to the image.

Also, what about more complex BRDFs for cloth (Ashikhmin) or anisotropic GGX?
Do you just use isotropic reflections as an approximation?

In layman's terms, you need to treat the cubemap as a lookup table and precompute some data into it from a source cubemap. Let's start with simple Lambert diffuse to see how it works.

 

For Lambert diffuse we want to precompute lighting per normal direction and store it in a cubemap. To do that we need to solve a simple integral:

E(n) = ∫_hemisphere L(l) (n · l) dl

Which in our case means: for every texel of the destination cubemap (direction n), calculate the irradiance E by iterating over all source cubemap texels and summing their contributions, where the contribution of one texel is SourceCubemap(l) * ( n dot l ) * SourceCubemapTexelSolidAngle. This operation is called the cosine filter in AMD CubeMapGen.
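To make that per-texel sum concrete, here is a minimal CPU-side sketch in Python (the function name and data layout are my own, not CubeMapGen's; directions are assumed prenormalized and solid angles precomputed):

```python
def irradiance(normal, texel_dirs, texel_radiance, texel_solid_angles):
    """Sum source-cubemap texels as directional lights weighted by (n . l).

    normal: destination texel direction n (unit vector)
    texel_dirs: unit direction l of each source texel
    texel_radiance: RGB radiance of each source texel
    texel_solid_angles: solid angle of each source texel
    """
    e = [0.0, 0.0, 0.0]
    for l, rad, dw in zip(texel_dirs, texel_radiance, texel_solid_angles):
        # Clamped cosine: texels behind the hemisphere contribute nothing
        n_dot_l = max(0.0, sum(a * b for a, b in zip(normal, l)))
        for c in range(3):
            e[c] += rad[c] * n_dot_l * dw
    return e
```

Running this once per destination texel gives the cosine-filtered cubemap described above.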

 

For the simple Phong model (no Fresnel etc.) we can precompute the reflection in a similar way and store it in a destination cubemap. Unfortunately, even after just adding Fresnel, or when using Blinn-Phong, there are too many input variables and the exact results don't fit into a single cubemap. It's time to make some approximations, use more storage, or take more than one sample.
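A hedged sketch of that Phong prefilter in the same style (my own helper, not CubeMapGen code; the destination direction is treated as the reflection vector, and the +1 on the exponent mirrors CubeMapGen's IsPhongBRDF trick of folding in the Lambert cosine):

```python
def prefilter_phong(r, texel_dirs, texel_radiance, texel_solid_angles,
                    spec_power, is_brdf=True):
    """Prefilter one destination texel for the simple Phong model.

    r: destination texel direction, treated as the reflection vector
    is_brdf: add 1 to the exponent to include the Lambert cosine
    """
    power = spec_power + (1 if is_brdf else 0)
    color = [0.0, 0.0, 0.0]
    total_weight = 0.0
    for l, rad, dw in zip(texel_dirs, texel_radiance, texel_solid_angles):
        # Phong lobe weight around r, scaled by the texel's solid angle
        w = max(0.0, r[0]*l[0] + r[1]*l[1] + r[2]*l[2]) ** power * dw
        if w > 0.0:
            for c in range(3):
                color[c] += rad[c] * w
            total_weight += w
    # Normalize so a constant environment stays constant
    return [c / total_weight for c in color] if total_weight > 0.0 else color
```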

Edited by Krzysztof Narkowicz

Okay, I've been looking over the source code of CubeMapGen, and the only line that resembles the distribution function resides in a function called "ProcessFilterExtents" and looks like the following:

// Here we decide if we use the Phong/Blinn model or the Phong/Blinn BRDF.
// The Phong/Blinn BRDF is just the Phong/Blinn model multiplied by the cosine
// from Lambert's law, so adding one to the specular power does the trick.
weight *= pow(tapDotProd, (a_SpecularPower + (float32)IsPhongBRDF));

So if I understand correctly, the loop that encloses this line "integrates" over the hemisphere. Is tapDotProd something like R · V? How do they handle the Blinn case with N · H? I'm not seeing it anywhere. tapDotProd seems to be calculated as the dot product between the current cubemap pixel position and its center tap position. Is the specular power the only thing defining the distribution function here? Can you explain this approximation of the direction vector to me?

 

edit: I think I get it. V is a known variable to us here, as it is just every direction we loop over in the cubemap, and R would be the lookup vector into this cubemap image, right?

edit2: After looking at Brian Karis's code samples, it occurs to me that he is doing this on the GPU in a pixel/compute shader, and that I might just do that too :)

To prefilter the envmap he's got the following code:

float3 PrefilterEnvMap(float Roughness, float3 R)
{
   float3 N = R;
   float3 V = R;

   float3 PrefilteredColor = 0;
   float TotalWeight = 0; // declaration missing in the original snippet
   const uint NumSamples = 1024;
   for (uint i = 0; i < NumSamples; i++)
   {
      float2 Xi = Hammersley(i, NumSamples);
      float3 H = ImportanceSampleGGX(Xi, Roughness, N);
      float3 L = 2 * dot(V, H) * H - V;
      float NoL = saturate(dot(N, L));

      if (NoL > 0)
      {
         PrefilteredColor += EnvMap.SampleLevel(EnvMapSampler, L, 0).rgb * NoL;
         TotalWeight += NoL;
      }
   }

   return PrefilteredColor / TotalWeight;
}


I've got a few questions about this...

 

1. What does the function Hammersley do?

2. He's sampling the environment map here... is that a TextureCube? Or is this function run for each cube face as a Texture2D?

3. The input to this is the reflection vector R. How would it be calculated in this context? I imagine similarly to the direction vector in the AMD CubeMapGen?

Edited by lipsryme

First of all, that is one awesome code sample you've got there (the links are nice too!), can't thank you enough! This clears up a lot of things for me :)

Still got some questions on that:

 

1. Why encode to LogLUV and not just write to an FP16 target?

2. I haven't used the geometry-shader duplication method for rendering a cubemap yet. Could you explain the VS and GS a little? Or do you have a link to an article that explains it?

3. Could I also generate a diffuse irradiance environment map using the above sample by removing the importance-sampling bit and just accumulating NdotL?

 

Update: I just finished implementing a test that applies it to a single mip level based on your example, but I'm a little confused about how the Roughness is calculated.

For example: is your first LOD level 0 or 1? If you only had an LOD count of 1, wouldn't that give you a divide by zero? How many LODs do you use? I assume it ranges from 0 (perfect mirror) to 1 (highest roughness)? Should the first mip level be the original cubemap (i.e. no convolution = perfect mirror)?

Shouldn't this line be: float Roughness = (float)g_CubeLod / (float)(g_CubeLodCount); ?

Edited by lipsryme

*Bump*

 

I feel like I'm getting a little closer, but there's still something wrong, so I made a quick video showing what the filtered mip levels look like:

https://www.youtube.com/watch?v=1og67CBpoU0&feature=youtu.be

 

Update: Fixed the lookup vector. CubeSize has to be the size of the original cubemap (not each mip level's size, which was what I had before). I still have the problem with the randomly distributed dots all over it.

https://www.youtube.com/watch?v=fgVdQj5VFOM&feature=youtu.be

 

Does anyone have an idea?

Edited by lipsryme

I just use a compute shader to do cubemap preconvolution. It's generally less of a hassle to set up compared to using a pixel shader, since you don't have to set up any rendering state.

 

You can certainly generate a diffuse irradiance map by directly convolving the cubemap, but it's a lot faster to project onto 3rd-order spherical harmonics. Projecting onto SH is essentially O(N), and you can then compute diffuse irradiance with an SH dot product. Cubemap convolution is essentially O(N^2). 
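For reference, a minimal sketch of that O(N) 3rd-order SH projection (Python; the per-channel sample layout is my own assumption, the constants are the standard real spherical-harmonic basis):

```python
def sh_basis(d):
    """Evaluate the 9 real spherical-harmonic basis functions at unit direction d."""
    x, y, z = d
    return [
        0.282095,                        # Y(0, 0)
        0.488603 * y,                    # Y(1,-1)
        0.488603 * z,                    # Y(1, 0)
        0.488603 * x,                    # Y(1, 1)
        1.092548 * x * y,                # Y(2,-2)
        1.092548 * y * z,                # Y(2,-1)
        0.315392 * (3.0 * z * z - 1.0),  # Y(2, 0)
        1.092548 * x * z,                # Y(2, 1)
        0.546274 * (x * x - y * y),      # Y(2, 2)
    ]

def project_radiance_sh(samples):
    """Project (direction, scalar radiance, solid angle) samples onto 9 SH
    coefficients. This is the O(N) pass: one loop over the source cubemap."""
    coeffs = [0.0] * 9
    for d, radiance, dw in samples:
        for i, b in enumerate(sh_basis(d)):
            coeffs[i] += radiance * b * dw
    return coeffs
```

At runtime, irradiance for a normal n is then a dot product between these coefficients and the (cosine-convolved) basis evaluated at n, instead of an O(N) cubemap lookup loop.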


Well, regarding the high-frequency dots, I've decided to just use a higher sample count, which works perfectly (at the cost of a few seconds' longer waiting time, of course :) ).

 

Diffuse Irradiance Map: I've never done this myself before. Maybe setting the SH technique aside for a minute: what does the convolution look like? What would I need to do differently from what I'm doing right now for the env map? I guess convolve with a "cosine kernel", but I don't seem to be good at figuring out the math behind that. For example, when computing NdotL (or cosTheta_i), where does the vector L come from? (I have N from the cubemap direction.)

 

EnvBRDF / AmbientBRDF: After having managed to filter the environment map for the first part of the equation, I'm having an issue computing the 2D texture that contains the ambient BRDF. The code for that can be seen here:

https://www.dropbox.com/s/cdl39ll6ioq2zm4/shot_140425_002637.png

My problem is that the resulting texture looks completely different/wrong compared to the one shown in his paper.

I think the problem has to do with the vector N as input to the ImportanceSampleGGX function, as I have no idea how this vector is computed during the 2D preprocess. Any ideas?

 

Inverse Gaussian for cloth BRDF: I'm having difficulty seeing what exactly the NDF for the importance sampling is and how it was derived.

Can anyone help me with that? E.g. this is the code for GGX (taken from Brian Karis's paper):

float3 ImportanceSampleGGX(float2 Xi, float Roughness, float3 N)
{
	float a = Roughness * Roughness;

	float Phi = 2 * PI * Xi.x;
	float CosTheta = sqrt((1 - Xi.y) / (1 + (a*a - 1) * Xi.y));
	float SinTheta = sqrt(1 - CosTheta * CosTheta);

	float3 H;
	H.x = SinTheta * cos(Phi);
	H.y = SinTheta * sin(Phi);
	H.z = CosTheta;

	float3 UpVector = abs(N.z) < 0.999 ? float3(0, 0, 1) : float3(1, 0, 0);
	float3 TangentX = normalize(cross(UpVector, N));
	float3 TangentY = cross(N, TangentX);

	// Tangent to world space
	return TangentX * H.x + TangentY * H.y + N * H.z;
}
Edited by lipsryme

 

Diffuse Irradiance Map: I've never done this myself before. Maybe setting the SH technique aside for a minute: what does the convolution look like? What would I need to do differently from what I'm doing right now for the env map? I guess convolve with a "cosine kernel", but I don't seem to be good at figuring out the math behind that. For example, when computing NdotL (or cosTheta_i), where does the vector L come from? (I have N from the cubemap direction.)

N and L are the opposing input/output directions: N is the output direction being resolved, and L is the input direction of the source light data.

 


 

 

Diffuse Irradiance Map: I've never done this myself before. Maybe setting the SH technique aside for a minute: what does the convolution look like? What would I need to do differently from what I'm doing right now for the env map? I guess convolve with a "cosine kernel", but I don't seem to be good at figuring out the math behind that. For example, when computing NdotL (or cosTheta_i), where does the vector L come from? (I have N from the cubemap direction.)

N and L are the opposing input/output directions: N is the output direction being resolved, and L is the input direction of the source light data.

I'm afraid I don't quite understand what you mean by that...

How do I compute the L vector?

From what I could decipher from this article: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter10.html

The equation for irradiance is this:

E(n) = Σ_j L_j · saturate(n · d_j)

 

Which I think translates into: Irradiance = diffColor * saturate(dot(N, L)) * EnvMap.rgb, where the diffuse color isn't important for baking the cubemap and will be applied when sampling from it in realtime using the normal as the lookup vector. j would be the index for each "directional light", with d being our light vector L.

The article also says:

 

In this context, an environment map with k texels can be thought of as a simple method of storing the intensities of k directional lights, with the light direction implied from the texel location.

 

But how is this 3-component light vector derived from a texel location?

 

This is how my shader would conceptually look right now:

float3 PrefilterIrradianceEnvMap(float3 N)
{
	float3 PrefilteredColor = 0;

	for (uint i = 0; i < InputCubeWidth*InputCubeHeight; i++)
	{
		float3 L = ?
		float3 I = EnvMap.SampleLevel(EnvMapSampler, L, 0).rgb;
		PrefilteredColor += saturate(dot(N, L)) * I;
	}

	return PrefilteredColor;
}

float3 PS_Irradiance(GSO input) : SV_TARGET0
{
        float3 N = fix_cube_lookup_for_lod(normalize(input.Normal), CubeSize, CubeLOD);
	return PrefilterIrradianceEnvMap(N);
}
Edited by lipsryme

The correct way would be to compute the integral mentioned in my previous post. You can also approximate it using a Monte Carlo method in order to develop intuition for how it works. Basically, for every destination cubemap texel (which corresponds to some direction n), take x random directions, sample the source env map, and treat each sample like a directional light. This is very similar to the GGX stuff you are already doing.

for every destination texel (direction n)
    sum = 0
    for x random directions l
        sum += saturate( dot( n, l ) ) * sampleEnvMap( l );
    sum = sum / ( x * PI )

For better quality you can use importance sampling by biasing the random distribution. Some directions are more "important": for example, you don't care about random light vectors located in a different hemisphere than the current normal vector.
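The pseudocode above, written out as a runnable sketch (Python; sample_env_map is a stand-in for the source cubemap lookup, and the final division keeps the same normalization as the pseudocode):

```python
import math
import random

def convolve_diffuse(n, sample_env_map, num_samples=1024, rng=random):
    """Monte Carlo estimate of the Lambert convolution for one output direction n.

    sample_env_map(l) -> scalar radiance along unit direction l (stand-in for
    the source cubemap). Directions outside the hemisphere contribute zero
    through the clamped cosine, matching the pseudocode above."""
    total = 0.0
    for _ in range(num_samples):
        # Uniform random direction on the sphere
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        l = (r * math.cos(phi), r * math.sin(phi), z)
        n_dot_l = max(0.0, n[0] * l[0] + n[1] * l[1] + n[2] * l[2])
        total += n_dot_l * sample_env_map(l)
    return total / (num_samples * math.pi)
```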

 


L is the direction of the pixel being sampled from the cubemap in that case. You sample every pixel of the cubemap (or a subset when using importance sampling), so the direction of the pixel is L.


The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated.

 

Does it even make sense to use importance sampling for an irradiance environment map, given that it is by nature very low frequency?

 

P.S. Did you implement the second part of the equation that models the environment BRDF (GGX)? This also doesn't seem to work out as it should (see my second-to-last post). The resulting texture looks like this: https://www.dropbox.com/s/waya105re6ls4vl/shot_140427_163858.png (as you can see, the green channel is just 0)

Edited by lipsryme

The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated ?


That depends on how you want to integrate:

1. If you want to calculate it exactly (loop over all source cubemap texels), then calculate the direction from the cubemap face ID and the cubemap texel coordinates. That direction is a vector from the cubemap origin to the current texel's origin. For example, for the +X face:
dir = normalize( float3( 1, ( ( texelX + 0.5 ) / cubeMapSize ) * 2 - 1, ( ( texelY + 0.5 ) / cubeMapSize ) * 2 - 1 ) )
2. If you want to approximate, then just generate some random vectors using a texture lookup or some rand shader function.

The first approach is a bit more complicated, because you additionally need to weight the samples by solid angle. Cubemap corners have a smaller solid angle and should influence the result less. The solid angle can be calculated by projecting the texel onto the unit sphere and then calculating its area on the sphere. BTW, this is something that AMD CubeMapGen does.
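A sketch of both pieces for the +X face (Python; function names are my own, and the solid-angle part uses the standard area-element difference, which is the approach CubeMapGen takes):

```python
import math

def texel_direction_pos_x(tx, ty, size):
    """Direction from the cubemap origin through texel (tx, ty) of the +X face."""
    u = ((tx + 0.5) / size) * 2.0 - 1.0
    v = ((ty + 0.5) / size) * 2.0 - 1.0
    d = (1.0, u, v)
    inv_len = 1.0 / math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    return tuple(c * inv_len for c in d)

def area_element(x, y):
    # Area on the unit sphere of the face rectangle [-1,x] x [-1,y] projected out
    return math.atan2(x * y, math.sqrt(x * x + y * y + 1.0))

def texel_solid_angle(tx, ty, size):
    """Solid angle subtended by one texel, via area-element differences."""
    inv = 1.0 / size
    x0 = tx * 2.0 * inv - 1.0
    y0 = ty * 2.0 * inv - 1.0
    x1 = x0 + 2.0 * inv
    y1 = y0 + 2.0 * inv
    return (area_element(x0, y0) - area_element(x0, y1)
            - area_element(x1, y0) + area_element(x1, y1))
```

A sanity check on the weights: the per-texel solid angles of one face must sum to one sixth of the full sphere, 4π/6.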

Does it even make sense to use importance sampling for an irradiance environment map, given that it is by nature very low frequency?


Frequency doesn't matter here. The idea of importance sampling is to bias the random distribution according to how much a given sample influences the result. For example, we can skip l directions that are in a different hemisphere than the normal ( dot( n, l ) <= 0 ) and bias towards directions with a higher weight ( larger dot( n, l ) ).

The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated.

That's the purpose of UVandIndexToBoxCoord in my shaders: it takes the UV coordinate and cubemap face index and returns the direction for it.

edit: missed this question:


P.S. Did you implement the second part of the equation that models the environment BRDF (GGX)? This also doesn't seem to work out as it should (see my second-to-last post). The resulting texture looks like this: https://www.dropbox.com/s/waya105re6ls4vl/shot_140427_163858.png (as you can see, the green channel is just 0)

I did have a similar issue when trying to replicate Epic's model. I can't remember exactly what it was, but here is my current sample for producing that texture:
 
struct vs_out
{
	float4 pos : SV_POSITION;
	float2 uv : TEXCOORD0;
};

void vs_main(out vs_out o, uint id : SV_VERTEXID)
{
	o.uv = float2((id << 1) & 2, id & 2);
	o.pos = float4(o.uv * float2(2, -2) + float2(-1, 1), 0, 1);
	//o.uv = (o.pos.xy * float2(0.5, -0.5) + 0.5) * 4;
	//o.uv.y = 1 - o.uv.y;
}

struct ps_in
{
	float4 pos : SV_POSITION;
	float2 uv : TEXCOORD0;
};

// http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
float radicalInverse_VdC(uint bits)
{
	bits = (bits << 16u) | (bits >> 16u);
	bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
	bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
	bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
	bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
	return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

// http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
float2 Hammersley(uint i, uint N)
{
	return float2(float(i) / float(N), radicalInverse_VdC(i));
}

static const float PI = 3.1415926535897932384626433832795;

// Image-Based Lighting
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float3 ImportanceSampleGGX( float2 Xi, float Roughness, float3 N )
{
	float a = Roughness * Roughness;
	float Phi = 2 * PI * Xi.x;
	float CosTheta = sqrt( (1 - Xi.y) / ( 1 + (a*a - 1) * Xi.y ) );
	float SinTheta = sqrt( 1 - CosTheta * CosTheta );

	float3 H;
	H.x = SinTheta * cos( Phi );
	H.y = SinTheta * sin( Phi );
	H.z = CosTheta;

	float3 UpVector = abs(N.z) < 0.999 ? float3(0, 0, 1) : float3(1, 0, 0);
	float3 TangentX = normalize( cross( UpVector, N ) );
	float3 TangentY = cross( N, TangentX );

	// Tangent to world space
	return TangentX * H.x + TangentY * H.y + N * H.z;
}

// http://graphicrants.blogspot.com.au/2013/08/specular-brdf-reference.html
float GGX(float nDotV, float a)
{
	float aa = a * a;
	float oneMinusAa = 1 - aa;
	float nDotV2 = 2 * nDotV;
	float root = aa + oneMinusAa * nDotV * nDotV;
	return nDotV2 / (nDotV + sqrt(root));
}

// http://graphicrants.blogspot.com.au/2013/08/specular-brdf-reference.html
float G_Smith(float a, float nDotV, float nDotL)
{
	return GGX(nDotL, a) * GGX(nDotV, a);
}

// Environment BRDF
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float2 IntegrateBRDF( float Roughness, float NoV )
{
	float3 V;
	V.x = sqrt( 1.0f - NoV * NoV ); // sin
	V.y = 0;
	V.z = NoV;                      // cos

	float A = 0;
	float B = 0;
	const uint NumSamples = 1024;
	[loop]
	for( uint i = 0; i < NumSamples; i++ )
	{
		float2 Xi = Hammersley( i, NumSamples );
		float3 H = ImportanceSampleGGX( Xi, Roughness, float3(0, 0, 1) );
		float3 L = 2 * dot( V, H ) * H - V;
		float NoL = saturate( L.z );
		float NoH = saturate( H.z );
		float VoH = saturate( dot( V, H ) );
		[branch]
		if( NoL > 0 )
		{
			float G = G_Smith( Roughness, NoV, NoL );
			float G_Vis = G * VoH / (NoH * NoV);
			float Fc = pow( 1 - VoH, 5 );
			A += (1 - Fc) * G_Vis;
			B += Fc * G_Vis;
		}
	}
	return float2( A, B ) / NumSamples;
}

// Environment BRDF
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float4 ps_main(in ps_in i) : SV_TARGET0
{
	float2 uv = i.uv;
	float nDotV = uv.x;
	float Roughness = uv.y;

	float2 integral = IntegrateBRDF(Roughness, nDotV);

	return float4(integral, 0, 1);
}
 
Edited by Digitalfragment

Ugh, sorry, it's so confusing.

So the light vector L is the direction vector that I get as input from the GS to the PS. I guess the naming scheme was messing with me there.

It makes sense, since when I output it I get the directions from the center towards the cube pixels.

What I'm trying to find then is the surface normal N. I guess it is also somehow calculated from the texture coordinates? Is it just the face direction?

Edited by lipsryme

Depends on which way around you're looking at it. In my case, I'm outputting a cubemap that's sampled by normal, so the direction to the pixel that I'm outputting is N (which is the "UVandIndexToBoxCoord" wrapped direction), and L is the direction of the pixel in the *source* cubemap being sampled. In my specular sample, L was calculated using importance sampling around N, but for a complete diffuse convolve you would need to sample the entire hemisphere around N.

For this reason, I'd still suggest using Hammersley to generate a set of points, but don't use the GGX distribution; instead use a hemispherical distribution (or just use GGX with a roughness of 1). Those are your L vectors that you sample the source with.
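A sketch of one such hemispherical distribution, mapping Hammersley points to cosine-weighted directions in tangent space (Python port of the radical-inverse trick used earlier in the thread; the tangent-to-world rotation is omitted, and the helper names are my own):

```python
import math

def radical_inverse_vdc(bits):
    """Van der Corput radical inverse in base 2 (32-bit)."""
    bits = ((bits << 16) | (bits >> 16)) & 0xFFFFFFFF
    bits = ((bits & 0x55555555) << 1) | ((bits & 0xAAAAAAAA) >> 1)
    bits = ((bits & 0x33333333) << 2) | ((bits & 0xCCCCCCCC) >> 2)
    bits = ((bits & 0x0F0F0F0F) << 4) | ((bits & 0xF0F0F0F0) >> 4)
    bits = ((bits & 0x00FF00FF) << 8) | ((bits & 0xFF00FF00) >> 8)
    return bits * 2.3283064365386963e-10  # / 2^32

def hammersley(i, n):
    return (i / n, radical_inverse_vdc(i))

def cosine_sample_hemisphere(xi):
    """Map a 2D point to a cosine-weighted direction (tangent space, +Z up)."""
    phi = 2.0 * math.pi * xi[0]
    cos_theta = math.sqrt(1.0 - xi[1])
    sin_theta = math.sqrt(xi[1])
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```

The cosine weighting bakes the NdotL factor into the sample distribution, so the samples cluster where the Lambert convolution contributes most.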


EnvBRDF: It works :) Can you tell me why the input vector to ImportanceSampleGGX is float3(0, 0, 1)? This is what I couldn't figure out before. Also, I changed the Smith term to the Schlick approximation that they're using:

// http://graphicrants.blogspot.com.au/2013/08/specular-brdf-reference.html
float GGX(float NdotV, float a)
{
	float k = (a * a) / 2;
	return NdotV / (NdotV * (1.0f - k) + k);
}

I still find the part about his G term confusing in the paper. He talks about remapping it to be (a+1)²/2. I tried that, but the texture didn't produce correct values, so I guess that's not what he's using. In another sentence he talks about using a/2, which is what I'm using here, and this gives me a texture pretty much exactly the same as the one seen in his paper (just with the Y axis flipped, but that doesn't matter).

 

IrradianceMap: I've compared using ImportanceSampleGGX with roughness = 1.0f to the output of AMD CubeMapGen, and there seem to be some minor differences. Is it "safe" to use this for the irradiance map, or do you recommend the "correct" way? Then again, I have no idea how to compute the hemispherical distribution.

 

Is there any book/paper/link that explains how the importance-sample distributions are calculated?

Edited by lipsryme

There isn't really a correct way; physically based rendering still isn't physically accurate rendering, no matter which way you go. I'm using the roughness-1 trick, and our artists are happy with that, but the "correct" way is going to yield better results. In the "correct" sense you shouldn't importance sample, and should instead use every pixel in the source cubemap, so that anything that shows up in the source is factored in.

 

 

As far as links etc. on importance sampling, I did Google searches for "hammersley distribution sampling" and found most of it that way.

