IBL Problem with consistency using GGX / Anisotropy


I just use a compute shader to do cubemap preconvolution. It's generally less of a hassle to set up compared to using a pixel shader, since you don't have to set up any rendering state.

You can certainly generate a diffuse irradiance map by directly convolving the cubemap, but it's a lot faster to project onto 3rd-order spherical harmonics. Projecting onto SH is essentially O(N) in the number of source texels, and you can then compute diffuse irradiance with an SH dot product. Brute-force cubemap convolution is essentially O(N^2), since every output texel has to integrate over every source texel.
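For reference, here is a minimal sketch (mine, not from any particular engine) of the evaluation side, assuming you have already projected the cubemap onto 9 RGB SH coefficients (one pass over the source texels, each weighted by its solid angle). The constants are the cosine-lobe convolution factors from Ramamoorthi & Hanrahan's irradiance environment map paper:

float3 EvalSHIrradiance(float3 n, float3 sh[9])
{
	// sh[0..8] = L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22
	// Cosine-lobe convolution constants folded into the basis evaluation
	const float c1 = 0.429043, c2 = 0.511664, c3 = 0.743125, c4 = 0.886227, c5 = 0.247708;

	return c4 * sh[0]
	     + 2.0 * c2 * (sh[3] * n.x + sh[1] * n.y + sh[2] * n.z)
	     + c1 * sh[8] * (n.x * n.x - n.y * n.y)
	     + c3 * sh[6] * n.z * n.z - c5 * sh[6]
	     + 2.0 * c1 * (sh[4] * n.x * n.y + sh[7] * n.x * n.z + sh[5] * n.y * n.z);
}

This returns irradiance; for a Lambertian surface you would multiply by albedo / PI at runtime.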


Well, regarding the high-frequency dots: I've decided to just use a higher sample count, which works perfectly (at the cost of a few seconds longer waiting time, of course).

Diffuse Irradiance Map: I've never done this myself before, so maybe setting the SH technique aside for a minute...what does the convolution look like? What would I need to do differently from what I'm doing right now for the env map? I guess convolve with a "cosine kernel", but I don't seem to be good at figuring out the math behind that. For example, when computing NdotL (or cosTheta_i), where does the vector L come from? (I have N from the cubemap direction.)

EnvBRDF / AmbientBRDF: After having managed to filter the environment map for the first part of the equation, I'm having an issue computing the 2D texture that contains the ambient BRDF. The code for that can be seen here:

https://www.dropbox.com/s/cdl39ll6ioq2zm4/shot_140425_002637.png

My problem is that the resulting texture looks completely different/wrong compared to the one shown in his paper.

I think the problem has to do with the vector N that is passed into the ImportanceSampleGGX function, as I have no idea how this vector is computed during the 2D preprocess. Any ideas?

Inverse Gaussian for cloth BRDF: I'm having difficulties seeing what exactly the NDF for the importance sampling is and how it was derived.

Can anyone help me with that? E.g. this is the code for GGX (taken from Brian Karis's paper):


float3 ImportanceSampleGGX(float2 Xi, float Roughness, float3 N)
{
	float a = Roughness * Roughness;

	// Map the uniform random pair Xi to a GGX-distributed half-vector
	// (spherical coordinates via the inverse CDF of the GGX NDF, with a = roughness^2)
	float Phi = 2 * PI * Xi.x;
	float CosTheta = sqrt((1 - Xi.y) / (1 + (a*a - 1) * Xi.y));
	float SinTheta = sqrt(1 - CosTheta * CosTheta);

	float3 H;
	H.x = SinTheta * cos(Phi);
	H.y = SinTheta * sin(Phi);
	H.z = CosTheta;

	float3 UpVector = abs(N.z) < 0.999 ? float3(0, 0, 1) : float3(1, 0, 0);
	float3 TangentX = normalize(cross(UpVector, N));
	float3 TangentY = cross(N, TangentX);

	// Tangent to world space
	return TangentX * H.x + TangentY * H.y + N * H.z;
}

Diffuse Irradiance Map: I've never done this myself before, so maybe setting the SH technique aside for a minute...what does the convolution look like? What would I need to do differently from what I'm doing right now for the env map? I guess convolve with a "cosine kernel", but I don't seem to be good at figuring out the math behind that. For example, when computing NdotL (or cosTheta_i), where does the vector L come from? (I have N from the cubemap direction.)

N & L are the opposing input/output directions, N being the direction of the output direction being resolved and L being the input direction of the source light data.

I'm afraid I don't quite understand what you mean by that...

How do I compute the L vector?

From what I could decipher from this article: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter10.html

The equation for irradiance is this:

E(n) = sum_j L_j * max(0, dot(n, d_j))

Which I think translates into: Irradiance = diffColor * saturate(dot(N, L)) * EnvMap.rgb, where the diffuse color isn't important for baking the cubemap and will be applied when sampling from it in realtime using the normal as the lookup vector; j would be the index for each "directional light", with d being our light vector L.

The article also says:

In this context, an environment map with k texels can be thought of as a simple method of storing the intensities of k directional lights, with the light direction implied from the texel location.

But how is this 3-component light vector derived from a texel location?

This is conceptually what my shader looks like right now:


float3 PrefilterIrradianceEnvMap(float3 N)
{
	float3 PrefilteredColor = 0;

	for (uint i = 0; i < InputCubeWidth*InputCubeHeight; i++)
	{
		float3 L = ?
		float3 I = EnvMap.SampleLevel(EnvMapSampler, L, 0).rgb;
		PrefilteredColor += saturate(dot(N, L)) * I;
	}

	return PrefilteredColor;
}

float3 PS_Irradiance(GSO input) : SV_TARGET0
{
        float3 N = fix_cube_lookup_for_lod(normalize(input.Normal), CubeSize, CubeLOD);
	return PrefilterIrradianceEnvMap(N);
}

The correct way would be to compute the integral mentioned in my previous post. You can also approximate it using the Monte Carlo method in order to develop intuition for how it works. Basically, for every destination cubemap texel (which corresponds to some direction n), take x random directions, sample the source env map, and treat them like directional lights. This is very similar to the GGX stuff you are already doing.


for every destination texel (direction n)
    sum = 0
    for x random directions l
        sum += saturate( dot( n, l ) ) * sampleEnvMap( l );
    sum = sum / ( x * PI )

For better quality you can use importance sampling by biasing the random distribution. Some directions are more "important": for example, you don't care about random light vectors that are located in a different hemisphere than the current normal vector.
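To make that pseudocode concrete, a rough HLSL sketch (mine, not from this thread) could look like the following. It reuses the EnvMap/EnvMapSampler resources from the shader above; RandomDirectionOnHemisphere is a hypothetical helper returning uniformly distributed directions in the hemisphere around n (e.g. from a texture lookup, as mentioned below):

float3 PrefilterIrradianceMC(float3 n, uint numSamples)
{
	float3 sum = 0;
	for (uint i = 0; i < numSamples; i++)
	{
		// Uniformly distributed direction in the hemisphere around n (pdf = 1 / (2 * PI))
		float3 l = RandomDirectionOnHemisphere(n, i);
		sum += saturate(dot(n, l)) * EnvMap.SampleLevel(EnvMapSampler, l, 0).rgb;
	}
	// Monte Carlo weight for a uniform hemisphere pdf is 2 * PI / numSamples;
	// folding in the Lambertian 1 / PI leaves a factor of 2 / numSamples.
	return sum * 2.0 / numSamples;
}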

L is the direction of the pixel being sampled from the cubemap in that case. You sample every pixel of the cubemap (or a subset when using importance sampling), so the direction of the pixel is L

The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated.

Does it even make sense to use importance sampling for an irradiance environment map, given that it is by nature very low frequency?

P.S. Did you implement the second part of the equation that models the environment BRDF (GGX)? That also doesn't seem to work out as it should (see my second-to-last post). The resulting texture looks like this: https://www.dropbox.com/s/waya105re6ls4vl/shot_140427_163858.png (as you can see, the green channel is just 0).

The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated.


That depends on how you want to integrate:

1. If you want to calculate it analytically (loop over all source cubemap texels), then calculate the direction from the cubemap face ID and the cubemap texel coordinates. That direction is a vector from the cubemap origin to the current texel's center. For example, for the +X face:
dir = normalize( float3( 1, ( ( texelX + 0.5 ) / cubeMapSize ) * 2 - 1, ( ( texelY + 0.5 ) / cubeMapSize ) * 2 - 1 ) )
2. If you want to approximate, then just generate some random vectors by using a texture lookup or some rand shader function.

The first approach is a bit more complicated because you additionally need to weight samples by solid angle. Cubemap corners cover a smaller solid angle and should influence the result less. The solid angle can be calculated by projecting the texel onto the unit sphere and then calculating its area on the sphere. BTW, this is something that AMD CubeMapGen does.
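As a sketch of that first approach (my own code, so double-check it against your cubemap layout), the per-texel direction and solid-angle weight for the +X face could look like this. The other five faces are sign/axis permutations of the same mapping, and the AreaElement formula is the one commonly attributed to AMD CubeMapGen:

float AreaElement(float x, float y)
{
	return atan2(x * y, sqrt(x * x + y * y + 1.0));
}

// u, v = texel center in [-1, 1] face space, invSize = 1.0 / cubeMapSize
float TexelSolidAngle(float u, float v, float invSize)
{
	float x0 = u - invSize, x1 = u + invSize;
	float y0 = v - invSize, y1 = v + invSize;
	return AreaElement(x0, y0) - AreaElement(x0, y1)
	     - AreaElement(x1, y0) + AreaElement(x1, y1);
}

// +X face, matching the dir formula quoted above
void PosXTexel(uint texelX, uint texelY, uint cubeMapSize, out float3 dir, out float weight)
{
	float u = ((texelX + 0.5) / cubeMapSize) * 2.0 - 1.0;
	float v = ((texelY + 0.5) / cubeMapSize) * 2.0 - 1.0;
	dir = normalize(float3(1.0, u, v));
	weight = TexelSolidAngle(u, v, 1.0 / cubeMapSize);
}

The convolution then accumulates saturate(dot(n, dir)) * sampleEnvMap(dir) * weight over all texels, and dividing by PI at the end gives the prefiltered Lambertian value.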

Does it even make sense to use importance sampling for an irradiance environment map, given that it is by nature very low frequency?


Frequency doesn't matter here. The idea of importance sampling is to bias the random distribution according to how much a given sample influences the result. For example, we can skip l directions that are in the other hemisphere ( dot( n, l ) <= 0 ), and we can bias towards directions with a higher weight ( larger dot( n, l ) ).
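As a hedged example of that biasing: cosine-weighted hemisphere sampling is the natural importance-sampling choice for irradiance, and it can reuse the same Hammersley/tangent-frame machinery as the GGX sampler above. With pdf = cos(theta) / PI, both the cos(theta) factor and the Lambertian 1 / PI cancel out of the estimator, so the prefiltered diffuse value is just the plain average of the sampled radiance:

float3 ImportanceSampleCosine(float2 Xi, float3 N)
{
	float Phi = 2 * PI * Xi.x;
	float CosTheta = sqrt(1 - Xi.y); // inverse CDF of the cosine lobe
	float SinTheta = sqrt(Xi.y);

	float3 H;
	H.x = SinTheta * cos(Phi);
	H.y = SinTheta * sin(Phi);
	H.z = CosTheta;

	float3 UpVector = abs(N.z) < 0.999 ? float3(0, 0, 1) : float3(1, 0, 0);
	float3 TangentX = normalize(cross(UpVector, N));
	float3 TangentY = cross(N, TangentX);
	return TangentX * H.x + TangentY * H.y + N * H.z;
}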

The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated.

That's what the purpose of UVandIndexToBoxCoord in my shaders was: it takes the uv coordinate and cubemap face index and returns the direction for it.

edit: missed this question:


P.S. Did you implement the second part of the equation that models the environment BRDF (GGX)? That also doesn't seem to work out as it should (see my second-to-last post). The resulting texture looks like this: https://www.dropbox.com/s/waya105re6ls4vl/shot_140427_163858.png (as you can see, the green channel is just 0).

I did have a similar issue when trying to replicate Epic's model. I can't remember exactly what it was, but here is my current sample for producing that texture:
 
struct vs_out
{
	float4 pos : SV_POSITION;
	float2 uv : TEXCOORD0;
};

void vs_main(out vs_out o, uint id : SV_VERTEXID)
{
	o.uv = float2((id << 1) & 2, id & 2);
	o.pos = float4(o.uv * float2(2,-2) + float2(-1,1), 0, 1);
	//o.uv = (o.pos.xy * float2(0.5,-0.5) + 0.5) * 4;
	//o.uv.y = 1 - o.uv.y;
}
 
struct ps_in
{
	float4 pos : SV_POSITION;
	float2 uv : TEXCOORD0;
};
 
// http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
float radicalInverse_VdC(uint bits) {
	bits = (bits << 16u) | (bits >> 16u);
	bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
	bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
	bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
	bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
	return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

// http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
float2 Hammersley(uint i, uint N) {
	return float2(float(i)/float(N), radicalInverse_VdC(i));
}

static const float PI = 3.1415926535897932384626433832795;
 
// Image-Based Lighting
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float3 ImportanceSampleGGX( float2 Xi, float Roughness, float3 N )
{
	float a = Roughness * Roughness;
	float Phi = 2 * PI * Xi.x;
	float CosTheta = sqrt( (1 - Xi.y) / ( 1 + (a*a - 1) * Xi.y ) );
	float SinTheta = sqrt( 1 - CosTheta * CosTheta );
	float3 H;
	H.x = SinTheta * cos( Phi );
	H.y = SinTheta * sin( Phi );
	H.z = CosTheta;
	float3 UpVector = abs(N.z) < 0.999 ? float3(0,0,1) : float3(1,0,0);
	float3 TangentX = normalize( cross( UpVector, N ) );
	float3 TangentY = cross( N, TangentX );
	// Tangent to world space
	return TangentX * H.x + TangentY * H.y + N * H.z;
}
 
// http://graphicrants.blogspot.com.au/2013/08/specular-brdf-reference.html
float GGX(float nDotV, float a)
{
	float aa = a*a;
	float oneMinusAa = 1 - aa;
	float nDotV2 = 2 * nDotV;
	float root = aa + oneMinusAa * nDotV * nDotV;
	return nDotV2 / (nDotV + sqrt(root));
}

// http://graphicrants.blogspot.com.au/2013/08/specular-brdf-reference.html
float G_Smith(float a, float nDotV, float nDotL)
{
	return GGX(nDotL,a) * GGX(nDotV,a);
}
 
// Environment BRDF
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float2 IntegrateBRDF( float Roughness, float NoV )
{
	float3 V;
	V.x = sqrt( 1.0f - NoV * NoV ); // sin
	V.y = 0;
	V.z = NoV; // cos

	float A = 0;
	float B = 0;
	const uint NumSamples = 1024;
	[loop]
	for( uint i = 0; i < NumSamples; i++ )
	{
		float2 Xi = Hammersley( i, NumSamples );
		float3 H = ImportanceSampleGGX( Xi, Roughness, float3(0,0,1) );
		float3 L = 2 * dot( V, H ) * H - V;
		float NoL = saturate( L.z );
		float NoH = saturate( H.z );
		float VoH = saturate( dot( V, H ) );
		[branch]
		if( NoL > 0 )
		{
			float G = G_Smith( Roughness, NoV, NoL );
			float G_Vis = G * VoH / (NoH * NoV);
			float Fc = pow( 1 - VoH, 5 );
			A += (1 - Fc) * G_Vis;
			B += Fc * G_Vis;
		}
	}
	return float2( A, B ) / NumSamples;
}
 
// Environment BRDF
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float4 ps_main(in ps_in i) : SV_TARGET0
{
	float2 uv = i.uv;
	float nDotV = uv.x;
	float Roughness = uv.y;

	float2 integral = IntegrateBRDF(Roughness, nDotV);

	return float4(integral, 0, 1);
}
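For reference, the way this 2D LUT gets applied at runtime in the split-sum approximation (per the course notes linked above) is roughly the following; the texture, sampler, and prefiltered-color names here are placeholders:

float2 envBRDF = IntegrateBRDFTexture.SampleLevel(LinearClampSampler, float2(NoV, Roughness), 0).xy;
float3 specular = PrefilteredEnvColor * (SpecularColor * envBRDF.x + envBRDF.y);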
 

Ugh, sorry, it's so confusing.

So the light vector L is the direction vector that I get as input from the GS to the PS. I guess the naming scheme was messing with me there.

It makes sense, since when I output it I get the directions from the center towards the cube pixels.

What I'm trying to find then is the surface normal N. I guess it is also somehow calculated from the texture coordinates? Is it just the face direction?

