
IBL Problem with consistency using GGX / Anisotropy



#1 lipsryme   Members   -  Reputation: 1002


Posted 11 April 2014 - 04:36 AM

Hey guys, I'm currently in the process of building a new material / shading pipeline and have come across this specific problem.

I've switched from a Blinn-Phong to a GGX-based shading model that also supports anisotropy. Until now I've been using the modified AMD CubeMapGen from Sébastien Lagarde's blog (http://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/) to prefilter radiance environment maps and store each filter size in the mipmap levels. The problem is that when switching to GGX, or even an anisotropic highlight, the specular just doesn't fit anymore (at all).

 

So the question is: how do you get the environment map to be consistent with your shading model?

@MJP I know you guys also use anisotropic and cloth shading models - how do you handle indirect reflections / environment maps?


Edited by lipsryme, 11 April 2014 - 04:39 AM.



#2 Krzysztof Narkowicz   Members   -  Reputation: 1103


Posted 11 April 2014 - 02:01 PM

The modified AMD CubeMapGen generates cubemaps using the Phong shading model. For more complicated models, check out the Black Ops and UE4 presentations from SIGGRAPH 2013:

http://blog.selfshadow.com/publications/s2013-shading-course/lazarov/s2013_pbs_black_ops_2_notes.pdf

http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf




#3 lipsryme   Members   -  Reputation: 1002


Posted 13 April 2014 - 08:29 AM

Do you know of a book or paper that teaches how this works?
I know about radiometry, but the only thing I know about this process of filtering an environment map is the general idea of applying a BRDF convolution to the image.

Also, what about more complex BRDFs for clothing (Ashikhmin) or anisotropic GGX?
Do you just use isotropic reflections as an approximation?

#4 MJP   Moderators   -  Reputation: 10547


Posted 13 April 2014 - 01:22 PM

The best you can do with a pre-convolved cubemap is integrate the environment with an isotropic distribution to get the reflection when V == N, which is the head-on angle. It will give incorrect results as you get to grazing angles, so you won't get that long, vertical "streaky" look that's characteristic of microfacet specular models. If you apply Fresnel to the cubemap result you can also get reflections with rather high intensity, so you have to pursue approximations like the ones proposed in those course notes in order to keep the Fresnel from blowing out. It's possible to approximate the streaky reflections with multiple samples from the cubemap if you're willing to take the hit, and you can also use multiple samples along the tangent direction in order to approximate anisotropic reflections.

 

For our cloth BRDF we have a duplicate set of our cubemaps that are convolved with the inverted Gaussian distribution used in that BRDF. It's just like the GGX cubemaps: it gets you the correct result when V == N, but not at grazing angles.
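For what it's worth, here is a purely illustrative sketch of that last idea - bending several prefiltered-cubemap lookups along the tangent to fake the stretched highlight. The sample count, the spread factor and the mip selection are assumptions made up for this example, not anything from a shipped implementation:

// Illustrative only: fake anisotropic IBL by spreading lookups along the tangent T.
// prefilteredEnv is assumed to be a GGX-prefiltered cubemap with mipCount mips,
// roughness in [0,1] mapped linearly to mip level, anisotropy in [0,1].
float3 ApproxAnisotropicIBL(TextureCube prefilteredEnv, SamplerState samp,
                            float3 R, float3 T, float roughness, float anisotropy,
                            float mipCount)
{
    const int kNumSamples = 5;                 // arbitrary; more samples = smoother streak
    float mip = roughness * (mipCount - 1);
    float3 sum = 0;
    for (int i = 0; i < kNumSamples; ++i)
    {
        float t = (i / (kNumSamples - 1.0)) * 2.0 - 1.0;   // -1 .. +1 along the tangent
        float3 dir = normalize(R + T * (t * anisotropy * roughness));
        sum += prefilteredEnv.SampleLevel(samp, dir, mip).rgb;
    }
    return sum / kNumSamples;
}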



#5 Krzysztof Narkowicz   Members   -  Reputation: 1103


Posted 13 April 2014 - 02:30 PM

In layman's terms, you need to treat the cubemap as a lookup table and precompute some data into it from a source cubemap. Let's start with simple Lambert diffuse to see how it works.

 

For Lambert diffuse we want to precompute lighting per normal direction and store it in a cubemap. To do that we need to solve a simple integral: 

E(n) = ∫Ω L(l) (n · l) dl   (integral over the hemisphere around n)

Which in our case means: for every texel of the destination cubemap (with normal direction n), calculate some value E. E is calculated by iterating over all source cubemap texels and summing their contributions, where each contribution = SourceCubemap(l) * ( n dot l ) * SourceCubemapTexelSolidAngle. This operation is called the cosine filter in AMD CubeMapGen.
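As a minimal sketch of that loop in HLSL-style pseudocode (TexelDirection and TexelSolidAngle are hypothetical helpers returning a source texel's direction and solid angle, SourceCubemap returns its radiance, and SourceSize is the source resolution):

// Cosine filter: the destination texel with direction n accumulates (n dot l)-weighted
// radiance from every source texel l, weighted by that texel's solid angle.
float3 CosineFilter(float3 n)
{
    float3 E = 0;
    for (uint face = 0; face < 6; ++face)
        for (uint y = 0; y < SourceSize; ++y)
            for (uint x = 0; x < SourceSize; ++x)
            {
                float3 l = TexelDirection(face, x, y, SourceSize);   // unit vector to the texel
                float  w = saturate(dot(n, l)) * TexelSolidAngle(face, x, y, SourceSize);
                E += SourceCubemap(face, x, y) * w;                  // radiance stored in the texel
            }
    return E;
}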

 

For the simple Phong model (no Fresnel etc.) we can precompute the reflection in a similar way and store it in a destination cubemap. Unfortunately, as soon as you add Fresnel or switch to Blinn-Phong, there are too many input variables and the exact result no longer fits into a single cubemap. At that point you have to approximate, use more storage, or take more than one sample.


Edited by Krzysztof Narkowicz, 13 April 2014 - 02:32 PM.



#6 lipsryme   Members   -  Reputation: 1002


Posted 15 April 2014 - 11:52 AM

Okay, I've been looking over the source code of CubeMapGen and the only line that resembles the distribution function resides in a function called "ProcessFilterExtents"; it looks like the following:

// Here we decide if we use the Phong/Blinn model or the Phong/Blinn BRDF.
// The Phong/Blinn BRDF is just the Phong/Blinn model multiplied by the cosine from Lambert's law,
// so just adding one to the specular power does the trick.
weight *= pow(tapDotProd, (a_SpecularPower + (float32)IsPhongBRDF));

So if I understand correctly, the loop that encloses this line "integrates" over the hemisphere. tapDotProd is something like R * V? How do they handle the Blinn case with N * H, because I'm not seeing it anywhere. tapDotProd seems to be calculated as the dot product between the current cubemap pixel position and its center tap position. Is the only thing defining the distribution function the specular power here? Can you explain this approximation of the direction vector to me?

 

edit: I think I get it. V is a known variable to us here, as it is just every direction we loop over in the cubemap, and R would be the lookup vector into this cubemap image, right?

edit2: After looking at Brian Karis's code samples it occurs to me that he is doing this on the GPU in a pixel/compute shader, and that I might just do that too :)

To prefilter the envmap he's got the following code:

float3 PrefilterEnvMap(float Roughness, float3 R)
{
   float3 N = R;
   float3 V = R;

   float3 PrefilteredColor = 0;
   float TotalWeight = 0;   // accumulates the NoL weights for the final normalization
   const uint NumSamples = 1024;
   for (uint i = 0; i < NumSamples; i++)
   {
      float2 Xi = Hammersley(i, NumSamples);
      float3 H = ImportanceSampleGGX(Xi, Roughness, N);
      float3 L = 2 * dot(V, H) * H - V;
      float NoL = saturate(dot(N, L));

      if (NoL > 0)
      {
         PrefilteredColor += EnvMap.SampleLevel(EnvMapSampler, L, 0).rgb * NoL;
         TotalWeight += NoL;
      }
   }

   return PrefilteredColor / TotalWeight;
}


I've got a few questions about this...

 

1. What does the function hammersley do ?

2. He's sampling the environment map here...is that a TextureCube ? Or is this function being run for each cube face as a Texture2D ?

3. The input to this is the reflection vector R. How would it be calculated in this context ? I imagine similar to the direction vector in the AMD cubemapgen ?


Edited by lipsryme, 15 April 2014 - 01:36 PM.


#7 Digitalfragment   Members   -  Reputation: 768


Posted 17 April 2014 - 12:56 AM

1. What does the function hammersley do ?
2. He's sampling the environment map here...is that a TextureCube ? Or is this function being run for each cube face as a Texture2D ?
3. The input to this is the reflection vector R. How would it be calculated in this context ? I imagine similar to the direction vector in the AMD cubemapgen ?

 

 

1. Hammersley generates pseudo-random, fairly well-spaced 2D coordinates, which the GGX importance-sampling function then gathers into the region that's going to contribute the most for the given roughness.
 

2. It's a cubemap; for a rough surface an entire hemisphere is required. The nice thing about using a cubemap as an input is that it's easy to render one in realtime.

3. The function is run for every pixel of a cubemap rendertarget; it's convolving the environment for all directions.

Here's my attempt at implementing this entire process as presented by Epic at SIGGRAPH last year (if anyone can point out errors, that would be awesome).

It uses SV_VERTEXID to generate a fullscreen triangle, and a GS with SV_RENDERTARGETARRAYINDEX to output to all 6 faces of a cubemap rendertarget simultaneously.
 

 
struct vs_out
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
};
 
void vs_main(out vs_out o, uint id : SV_VERTEXID)
{
o.uv = float2((id << 1) & 2, id & 2);
o.pos = float4(o.uv * float2(2,-2) + float2(-1,1), 0, 1);
//o.uv = (o.pos.xy * float2(0.5,-0.5) + 0.5) * 4;
//o.uv.y = 1 - o.uv.y;
}
 
struct ps_in
{
float4 pos : SV_POSITION;
float3 nrm : TEXCOORD0;
uint face : SV_RENDERTARGETARRAYINDEX;
};
 
float3 UvAndIndexToBoxCoord(float2 uv, uint face)
{
float3 n = float3(0,0,0);
float3 t = float3(0,0,0);
 
if (face == 0) // posx (red)
{
n = float3(1,0,0);
t = float3(0,1,0);
}
else if (face == 1) // negx (cyan)
{
n = float3(-1,0,0);
t = float3(0,1,0);
}
else if (face == 2) // posy (green)
{
n = float3(0,-1,0);
t = float3(0,0,-1);
}
else if (face == 3) // negy (magenta)
{
n = float3(0,1,0);
t = float3(0,0,1);
}
else if (face == 4) // posz (blue)
{
n = float3(0,0,-1);
t = float3(0,1,0);
}
else // if (i.face == 5) // negz (yellow)
{
n = float3(0,0,1);
t = float3(0,1,0);
}
 
float3 x = cross(n, t);
 
uv = uv * 2 - 1;
 
n = n + t*uv.y + x*uv.x;
n.y *= -1;
n.z *= -1;
return n;
}
 
[maxvertexcount(18)]
void gs_main(triangle vs_out input[3], inout TriangleStream<ps_in> output)
{
for( int f = 0; f < 6; ++f )
{
for( int v = 0; v < 3; ++v )
{
ps_in o;
o.pos = input[v].pos;
o.nrm = UvAndIndexToBoxCoord(input[v].uv, f);
o.face = f;
output.Append(o);
}
output.RestartStrip();
}
}
 
SamplerState g_samCube
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Clamp;
    AddressV = Clamp;
};
TextureCube g_txEnvMap : register(t0);
 
cbuffer mip : register(b0)
{
float g_CubeSize;
float g_CubeLod;
float g_CubeLodCount;
};
 
 
// http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
float radicalInverse_VdC(uint bits) {
     bits = (bits << 16u) | (bits >> 16u);
     bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
     bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
     bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
     bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
     return float(bits) * 2.3283064365386963e-10; // / 0x100000000
 }
 // http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
 float2 Hammersley(uint i, uint N) {
     return float2(float(i)/float(N), radicalInverse_VdC(i));
 }
 
 static const float PI = 3.1415926535897932384626433832795;
 
// Image-Based Lighting
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float3 ImportanceSampleGGX( float2 Xi, float Roughness, float3 N )
{
float a = Roughness * Roughness;
float Phi = 2 * PI * Xi.x;
float CosTheta = sqrt( (1 - Xi.y) / ( 1 + (a*a - 1) * Xi.y ) );
float SinTheta = sqrt( 1 - CosTheta * CosTheta );
float3 H;
H.x = SinTheta * cos( Phi );
H.y = SinTheta * sin( Phi );
H.z = CosTheta;
float3 UpVector = abs(N.z) < 0.999 ? float3(0,0,1) : float3(1,0,0);
float3 TangentX = normalize( cross( UpVector, N ) );
float3 TangentY = cross( N, TangentX );
// Tangent to world space
return TangentX * H.x + TangentY * H.y + N * H.z;
}
 
// M matrix, for encoding
const static float3x3 M = float3x3(
    0.2209, 0.3390, 0.4184,
    0.1138, 0.6780, 0.7319,
    0.0102, 0.1130, 0.2969);
 
// Inverse M matrix, for decoding
const static float3x3 InverseM = float3x3(
    6.0013,    -2.700,    -1.7995,
    -1.332,    3.1029,    -5.7720,
    .3007,    -1.088,    5.6268);    
 
float4 LogLuvEncode(in float3 vRGB)
{
    float4 vResult;
    float3 Xp_Y_XYZp = mul(vRGB, M);
    Xp_Y_XYZp = max(Xp_Y_XYZp, float3(1e-6, 1e-6, 1e-6));
    vResult.xy = Xp_Y_XYZp.xy / Xp_Y_XYZp.z;
    float Le = 2 * log2(Xp_Y_XYZp.y) + 127;
    vResult.w = frac(Le);
    vResult.z = (Le - (floor(vResult.w*255.0f))/255.0f)/255.0f;
    return vResult;
}
 
float3 LogLuvDecode(in float4 vLogLuv)
{
    float Le = vLogLuv.z * 255 + vLogLuv.w;
    float3 Xp_Y_XYZp;
    Xp_Y_XYZp.y = exp2((Le - 127) / 2);
    Xp_Y_XYZp.z = Xp_Y_XYZp.y / vLogLuv.y;
    Xp_Y_XYZp.x = vLogLuv.x * Xp_Y_XYZp.z;
    float3 vRGB = mul(Xp_Y_XYZp, InverseM);
    return max(vRGB, 0);
}
 
// Ignacio Castano via http://the-witness.net/news/2012/02/seamless-cube-map-filtering/
float3 fix_cube_lookup_for_lod(float3 v, float cube_size, float lod)
{
float M = max(max(abs(v.x), abs(v.y)), abs(v.z));
float scale = 1 - exp2(lod) / cube_size;
if (abs(v.x) != M) v.x *= scale;
if (abs(v.y) != M) v.y *= scale;
if (abs(v.z) != M) v.z *= scale;
return v;
}
 
// Pre-Filtered Environment Map
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float4 ps_main(in ps_in i) : SV_TARGET0
{
float3 N = fix_cube_lookup_for_lod(normalize(i.nrm), g_CubeSize, g_CubeLod);
float Roughness = (float)g_CubeLod / (float)(g_CubeLodCount-1);
 
float4 totalRadiance = float4(0,0,0,0);
 
const uint C = 1024;
[loop]
for (uint j = 0; j < C; ++j)
{
float2 Xi = Hammersley(j,C);
float3 H = ImportanceSampleGGX( Xi, Roughness, N );
float3 L = 2 * dot( N, H ) * H - N; 
float nDotL = saturate(dot(L, N));
[branch]
if (nDotL > 0)
{
float4 pointRadiance = (g_txEnvMap.SampleLevel( g_samCube, L, 0 ));
totalRadiance.rgb += pointRadiance.rgb * nDotL;
totalRadiance.w += nDotL;
}
}
 
return LogLuvEncode(totalRadiance.rgb / totalRadiance.w);
}
 

Edited by Digitalfragment, 17 April 2014 - 12:58 AM.


#8 lipsryme   Members   -  Reputation: 1002


Posted 17 April 2014 - 03:14 AM

First of all, that is one awesome code sample you've got there (the links are nice too!), can't thank you enough! This clears up lots of things for me :)

Still got some questions on that:

 

1. Why encode to logLUV and not just write to an FP16 target ?

2. Haven't used the geometry shader duplication method yet for rendering a cubemap. Could you explain the VS and GS a little ? Or do you have a link to an article that explains it ?

3. Could I also generate a diffuse irradiance environment map using the above sample but removing the importance sampling bit and just accumulate nDotL ?

 

Update: I just finished implementing a test that applies it to a single mip level based on your example, but I'm a little confused about how the Roughness is calculated.

For example: Is your first LOD level 0 or 1? If you only had a LOD count of 1, wouldn't that give you a divide by zero? How many LODs do you use? I assume it ranges from 0 (perfect mirror) to 1 (highest roughness)? Should the first mip level be the original cube map (aka no convolution = a perfect mirror)?

Shouldn't this line be: float Roughness = (float)g_CubeLod / (float)(g_CubeLodCount); ?


Edited by lipsryme, 19 April 2014 - 04:49 PM.


#9 lipsryme   Members   -  Reputation: 1002


Posted 19 April 2014 - 03:56 PM

*Bump*

 

I feel like I'm getting a little closer, but there's still something wrong, so I made a quick video showing what the filtered mip levels look like:

https://www.youtube.com/watch?v=1og67CBpoU0&feature=youtu.be

 

Update: Fixed the lookup vector. CubeSize has to be the size of the original cubemap (not each mip level's size, which was what I had before). Still got the problem with the randomly distributed dots all over it.

https://www.youtube.com/watch?v=fgVdQj5VFOM&feature=youtu.be

 

Does anyone have an idea ?


Edited by lipsryme, 20 April 2014 - 03:39 AM.


#10 Digitalfragment   Members   -  Reputation: 768


Posted 24 April 2014 - 05:24 PM

Thanks for the comment on the code sample, I'm glad it was helpful.
 
1. The logLUV encoding was due to Maya 2012 not wanting to load fp16 dds files, and it was faster to implement the encode/decode than to work out Maya. (If anyone out there knows how to get Maya to load floating point dds files correctly without destroying mips too, let me know!)
 
- The quality difference between the logLUV in an RGBA8 target and the unencoded float16 wasn't bad either, and the bandwidth improvement of using 32bpp vs 64bpp is quite dramatic when still needing to target the DX9 generation of consoles.
 
2. The VS/GS trick is fun. I'll break it down into small chunks:
 
In DirectX 11 you can render without supplying vertex data. Instead, SV_VERTEXID is automatically supplied and equal to the index of the vertex being generated. So in C++ I simply bind the shaders and call Draw(3,0) to draw a fullscreen triangle. See the D3D11 documentation on system-generated semantics for more info.
 
 
- The reason to use a triangle and not a quad is to avoid the redundant processing of the pixel quads that run along the edge shared by the 2 triangles in the quad.
 
The GS pass then takes this triangle and generates 6 triangles, assigning each one to an individual slice of the texture array using SV_RENDERTARGETARRAYINDEX. This allows the C++ code to create a single ID3D11RenderTargetView for the cubemap per mip level, instead of creating an RTV for each individual face of each mip level. The GS cubemap trick was in one of the DirectX SDK samples, I can't remember which one though.
The code in the GS to work out the corner direction of the cubemap box was completely trial and error :)
 
An extra note on the shader: the seamless-cube-filtering trick isn't necessary when the generated file is going to be used on DX11, only on DX9, since on the older generation cubemap texture filtering did not blend between cubemap faces (for example, when sampling the 1x1 mip you would only ever get a flat colour).
 
3. Yes definitely. I hadn't gotten around to doing a diffuse irradiance map and instead hacked it to use the last mip of the probe with the vertex normal - which for our case seems good enough for now and saves generating double the number of probes at runtime.
 
 
Regarding the LOD question: the first is 0, which yields a roughness of 0 (perfect mirror), and the (count-1) is so that the last generated mip map yields a roughness of 1 (Lambert diffuse). Yes, a LOD count of 1 would cause a divide by zero - but that's fine, because a single mipmap level can't be both a roughness of 0 and a roughness of 1! Also, don't generate mipmaps all the way down to 1x1, as you do need a little more precision at a roughness of 1.
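As a concrete example of that mapping (the mip count of 7 here is arbitrary, just for illustration):

// Mip -> roughness while prefiltering, and roughness -> mip at runtime.
// With g_CubeLodCount = 7: mip 0 -> 0.0 (mirror), mip 3 -> 0.5, mip 6 -> 1.0 (fully rough).
float RoughnessFromMip(float mip, float mipCount)
{
    return mip / (mipCount - 1);
}

float MipFromRoughness(float roughness, float mipCount)
{
    return roughness * (mipCount - 1);
}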
 
Regarding the high-frequency dots visible in your video, I can't say I ever got that problem - but I was using outdoor environment maps that tend to be a lot smoother and more consistent in lighting values compared to that Grace Cathedral map. What sampler state are you using to sample your source cubemap with?

Edited by Digitalfragment, 24 April 2014 - 05:27 PM.


#11 MJP   Moderators   -  Reputation: 10547


Posted 24 April 2014 - 06:46 PM

I just use a compute shader to do cubemap preconvolution. It's generally less of a hassle to set up compared to using a pixel shader, since you don't have to set up any rendering state.
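A minimal compute-shader skeleton of that idea might look like the following (my own sketch, not MJP's code; PrefilterDirection is a hypothetical helper wrapping the same GGX importance-sampling loop as the pixel shader above, UvAndIndexToBoxCoord is the face/uv-to-direction helper from Digitalfragment's sample, and register assignments and thread-group size are arbitrary):

TextureCube              g_SourceEnv  : register(t0);   // consumed inside PrefilterDirection
SamplerState             g_LinearSamp : register(s0);
RWTexture2DArray<float4> g_DestMip    : register(u0);   // 6 slices, one per cubemap face

cbuffer PrefilterParams : register(b0)
{
    float g_DestSize;    // resolution of the mip being written
    float g_Roughness;   // roughness mapped from the mip index
};

[numthreads(8, 8, 1)]
void cs_prefilter(uint3 id : SV_DispatchThreadID)
{
    if (id.x >= (uint)g_DestSize || id.y >= (uint)g_DestSize)
        return;

    float2 uv = (id.xy + 0.5) / g_DestSize;
    float3 N  = normalize(UvAndIndexToBoxCoord(uv, id.z));

    // Same GGX importance-sampling loop as the pixel-shader version above.
    float3 color = PrefilterDirection(N, g_Roughness);

    g_DestMip[id] = float4(color, 1);
}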

 

You can certainly generate a diffuse irradiance map by directly convolving the cubemap, but it's a lot faster to project onto 3rd-order spherical harmonics. Projecting onto SH is essentially O(N), and you can then compute diffuse irradiance with an SH dot product. Cubemap convolution is essentially O(N^2). 
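For anyone curious what the SH route looks like, here is a rough sketch of the two halves using the standard order-3 SH formulas (my own summary, not MJP's code):

static const float PI = 3.14159265359;

// Real SH basis for a unit direction d (order 3 = 9 coefficients).
void SHBasis(float3 d, out float sh[9])
{
    sh[0] = 0.282095;
    sh[1] = 0.488603 * d.y;
    sh[2] = 0.488603 * d.z;
    sh[3] = 0.488603 * d.x;
    sh[4] = 1.092548 * d.x * d.y;
    sh[5] = 1.092548 * d.y * d.z;
    sh[6] = 0.315392 * (3.0 * d.z * d.z - 1.0);
    sh[7] = 1.092548 * d.x * d.z;
    sh[8] = 0.546274 * (d.x * d.x - d.y * d.y);
}

// Projection (one pass over the source cubemap, hence O(N)):
//   for every source texel with direction l and solid angle dw:
//       coeffs[i] += radiance(l) * sh_i(l) * dw;

// Irradiance lookup: convolving with the clamped-cosine lobe just scales each SH band,
// so the per-pixel cost is a dot product against the coefficients.
float3 SHIrradiance(float3 n, float3 coeffs[9])
{
    float sh[9];
    SHBasis(n, sh);
    const float A0 = PI;             // band 0 scale
    const float A1 = 2.0 * PI / 3.0; // band 1 scale
    const float A2 = PI / 4.0;       // band 2 scale
    float3 e = coeffs[0] * sh[0] * A0;
    e += (coeffs[1] * sh[1] + coeffs[2] * sh[2] + coeffs[3] * sh[3]) * A1;
    e += (coeffs[4] * sh[4] + coeffs[5] * sh[5] + coeffs[6] * sh[6]
        + coeffs[7] * sh[7] + coeffs[8] * sh[8]) * A2;
    return e;
}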



#12 lipsryme   Members   -  Reputation: 1002


Posted 25 April 2014 - 02:51 AM

Well, regarding the high-frequency dots I've decided to just use a higher sample count, which works perfectly (at the cost of a few seconds longer waiting time, of course :) ).

 

Diffuse Irradiance Map: I've never done this myself before, so maybe setting the SH technique aside for a minute...what does the convolution look like? What would I need to do differently from what I'm doing right now for the env map? I guess convolve with a "cosine kernel", but I don't seem to be good at figuring out the math behind that :P For example, when computing NdotL (or cosTheta_i), where does the vector L come from? (I have N from the cubemap direction)

 

EnvBRDF / AmbientBRDF: After having managed to filter the environment map for the first part of the equation, I'm having an issue computing the 2D texture that contains the ambient BRDF. The code for that can be seen here:

https://www.dropbox.com/s/cdl39ll6ioq2zm4/shot_140425_002637.png

My problem is that the resulting texture looks completely different/wrong compared to the one shown in his paper.

I think the problem has to do with the vector N as input to the ImportanceSampleGGX function, as I have no idea how this vector is computed during the 2D preprocess. Any ideas?

 

Inverse Gaussian for cloth BRDF: I'm having difficulty seeing what exactly the NDF for the importance sampling is and how it was derived.

Can anyone help me with that ? e.g. this is the code for GGX (taken from Brian Karis's paper):

float3 ImportanceSampleGGX(float2 Xi, float Roughness, float3 N)
{
	float a = Roughness * Roughness;

	float Phi = 2 * PI * Xi.x;
	float CosTheta = sqrt((1 - Xi.y) / (1 + (a*a - 1) * Xi.y));
	float SinTheta = sqrt(1 - CosTheta * CosTheta);

	float3 H;
	H.x = SinTheta * cos(Phi);
	H.y = SinTheta * sin(Phi);
	H.z = CosTheta;

	float3 UpVector = abs(N.z) < 0.999 ? float3(0, 0, 1) : float3(1, 0, 0);
	float3 TangentX = normalize(cross(UpVector, N));
	float3 TangentY = cross(N, TangentX);

	// Tangent to world space
	return TangentX * H.x + TangentY * H.y + N * H.z;
}

Edited by lipsryme, 25 April 2014 - 03:07 AM.


#13 Digitalfragment   Members   -  Reputation: 768


Posted 26 April 2014 - 05:57 AM

 

Diffuse Irradiance Map: I've never done this myself before, so maybe setting the SH technique aside for a minute...what does the convolution look like? What would I need to do differently from what I'm doing right now for the env map? I guess convolve with a "cosine kernel", but I don't seem to be good at figuring out the math behind that :P For example, when computing NdotL (or cosTheta_i), where does the vector L come from? (I have N from the cubemap direction)

N and L are the opposing output/input directions: N is the output direction being resolved, and L is the input direction of the source light data.

 



#14 lipsryme   Members   -  Reputation: 1002


Posted 26 April 2014 - 10:59 AM

 

 

Diffuse Irradiance Map: I've never done this myself before, so maybe setting the SH technique aside for a minute...what does the convolution look like? What would I need to do differently from what I'm doing right now for the env map? I guess convolve with a "cosine kernel", but I don't seem to be good at figuring out the math behind that :P For example, when computing NdotL (or cosTheta_i), where does the vector L come from? (I have N from the cubemap direction)

N and L are the opposing output/input directions: N is the output direction being resolved, and L is the input direction of the source light data.

I'm afraid I don't quite understand what you mean by that...

How do I compute the L vector ?

From what I could decipher from this article http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter10.html

The equation for irradiance is this:

E(N) ≈ Σj Lj · max(0, dj · N)   (summing over the k "directional lights" j stored in the environment map, with intensities Lj and directions dj)

 

Which I think translates into: Irradiance = diffColor * saturate(dot(N, L)) * EnvMap.rgb, where the diffuse color isn't important for baking the cubemap and will be applied when sampling it in realtime using the normal as the lookup vector. j would be the index of each "directional light", with d being our light vector L.

The article also says:

 

In this context, an environment map with k texels can be thought of as a simple method of storing the intensities of k directional lights, with the light direction implied from the texel location.

 

But how is this 3-component light vector derived from a texel location?

 

This is how my shader conceptually would look like right now:

float3 PrefilterIrradianceEnvMap(float3 N)
{
	float3 PrefilteredColor = 0;

	for (uint i = 0; i < InputCubeWidth*InputCubeHeight; i++)
	{
		float3 L = ?
		float3 I = EnvMap.SampleLevel(EnvMapSampler, L, 0).rgb;
		PrefilteredColor += saturate(dot(N, L)) * I;
	}

	return PrefilteredColor;
}

float3 PS_Irradiance(GSO input) : SV_TARGET0
{
        float3 N = fix_cube_lookup_for_lod(normalize(input.Normal), CubeSize, CubeLOD);
	return PrefilterIrradianceEnvMap(N);
}

Edited by lipsryme, 26 April 2014 - 11:15 AM.


#15 Krzysztof Narkowicz   Members   -  Reputation: 1103


Posted 26 April 2014 - 03:23 PM

The correct way would be to compute the integral mentioned in my previous post. You can also approximate it using the Monte Carlo method in order to develop intuition for how it works. Basically, for every destination cubemap texel (which corresponds to some direction n), take x random directions, sample the source env map and treat each sample like a directional light. This is very similar to the GGX stuff you are already doing.

for every destination texel (direction n)
    sum = 0
    for x random directions l
        sum += saturate( dot( n, l ) ) * sampleEnvMap( l );
    sum = sum / ( x * PI )

For better quality you can use importance sampling by biasing the random distribution. Some directions are more "important" - for example, you don't care about random light vectors which are located in a different hemisphere than the current normal vector.

 




#16 Digitalfragment   Members   -  Reputation: 768


Posted 27 April 2014 - 08:06 AM

L is the direction of the pixel being sampled from the cubemap in that case. You sample every pixel of the cubemap (or a subset when using importance sampling), so the direction of the pixel is L.



#17 lipsryme   Members   -  Reputation: 1002


Posted 27 April 2014 - 08:31 AM

The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated ?

 

Does it even make sense to use importance sampling for an irradiance environment map because of the nature of it being very low frequency ?

 

P.S. Did you implement the second part of the equation that models the environment brdf (GGX)? Since this also doesn't seem to work out as it should (see my second last post). The resulting texture looks like this: https://www.dropbox.com/s/waya105re6ls4vl/shot_140427_163858.png (as you can see the green channel is just 0)


Edited by lipsryme, 27 April 2014 - 08:53 AM.


#18 Krzysztof Narkowicz   Members   -  Reputation: 1103


Posted 27 April 2014 - 02:43 PM

The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated ?


That depends on how you want to integrate:

1. If you want to calculate analytically (loop over all source cubemap texels), then calculate the direction from the cubemap face ID and the cubemap texel coordinates. That direction is a vector from the cubemap origin to the current texel's center. For example, for the +X face:
dir = normalize( float3( 1, ( ( texelX + 0.5 ) / cubeMapSize ) * 2 - 1, ( ( texelY + 0.5 ) / cubeMapSize ) * 2 - 1 ) )
2. If you want to approximate, then just generate some random vectors by using a texture lookup or some rand shader function.

The first approach is a bit more complicated because you additionally need to weight the samples by solid angle. Cubemap corners have a smaller solid angle and should influence the result less. The solid angle can be calculated by projecting the texel onto the unit sphere and then calculating its area on the sphere. BTW, this is something that AMD CubeMapGen does.
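A rough HLSL sketch of both pieces follows (the direction is shown for the +X face only, matching the formula above - other faces follow the same pattern with swizzled axes - and the solid angle uses the standard area-element formula):

// Direction from the cubemap origin to the center of texel (texelX, texelY) on the +X face.
float3 TexelDirectionPosX(float texelX, float texelY, float cubeMapSize)
{
    float u = ((texelX + 0.5) / cubeMapSize) * 2 - 1;
    float v = ((texelY + 0.5) / cubeMapSize) * 2 - 1;
    return normalize(float3(1, u, v));
}

// Solid angle of a cubemap texel: project its four corners onto the unit sphere and
// take the signed sum of the resulting area elements.
float AreaElement(float x, float y)
{
    return atan2(x * y, sqrt(x * x + y * y + 1));
}

float TexelSolidAngle(float texelX, float texelY, float cubeMapSize)
{
    float invSize = 1.0 / cubeMapSize;
    float u = (texelX + 0.5) * 2 * invSize - 1;
    float v = (texelY + 0.5) * 2 * invSize - 1;
    float x0 = u - invSize, x1 = u + invSize;
    float y0 = v - invSize, y1 = v + invSize;
    return AreaElement(x0, y0) - AreaElement(x0, y1)
         - AreaElement(x1, y0) + AreaElement(x1, y1);
}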

Does it even make sense to use importance sampling for an irradiance environment map because of the nature of it being very low frequency ?


Frequency doesn't matter here. The idea of importance sampling is to bias the random distribution according to how much a given sample influences the result. For example, we can skip l directions which are in a different hemisphere ( dot( n, l ) <= 0 ) and bias towards directions with higher weight ( larger dot( n, l ) ).
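As a concrete example of that biasing, a cosine-weighted hemisphere sample could look like this (my own sketch; it reuses the PI constant and Hammersley helper from the code earlier in the thread, and because the pdf is proportional to dot(n, l) the cosine weight drops out of the sum):

// Cosine-weighted hemisphere sampling: more samples where dot(N, l) is large, none below
// the horizon. pdf(l) = dot(N, l) / PI.
float3 ImportanceSampleCosine(float2 Xi, float3 N)
{
    float Phi = 2 * PI * Xi.x;
    float CosTheta = sqrt(1 - Xi.y);
    float SinTheta = sqrt(Xi.y);

    float3 H;
    H.x = SinTheta * cos(Phi);
    H.y = SinTheta * sin(Phi);
    H.z = CosTheta;

    float3 UpVector = abs(N.z) < 0.999 ? float3(0, 0, 1) : float3(1, 0, 0);
    float3 TangentX = normalize(cross(UpVector, N));
    float3 TangentY = cross(N, TangentX);
    return TangentX * H.x + TangentY * H.y + N * H.z;
}

// Irradiance estimate: the dot(N, l) weight cancels against the pdf, leaving only PI.
// float3 sum = 0;
// for (uint i = 0; i < NumSamples; ++i)
//     sum += EnvMap.SampleLevel(EnvMapSampler, ImportanceSampleCosine(Hammersley(i, NumSamples), N), 0).rgb;
// float3 irradiance = sum * (PI / NumSamples);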



#19 Digitalfragment   Members   -  Reputation: 768


Posted 27 April 2014 - 04:57 PM


The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated ?

 

That's what UvAndIndexToBoxCoord in my shaders was for: it takes the UV coordinate and cubemap face index and returns the direction for them.

edit: missed this question:


P.S. Did you implement the second part of the equation that models the environment brdf (GGX)? Since this also doesn't seem to work out as it should (see my second last post). The resulting texture looks like this: https://www.dropbox.com/s/waya105re6ls4vl/shot_140427_163858.png (as you can see the green channel is just 0)

I did have a similar issue when trying to replicate Epic's model. I can't remember exactly what it was, but here is my current sample for producing that texture:
 
struct vs_out
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
};
 
void vs_main(out vs_out o, uint id : SV_VERTEXID)
{
o.uv = float2((id << 1) & 2, id & 2);
o.pos = float4(o.uv * float2(2,-2) + float2(-1,1), 0, 1);
//o.uv = (o.pos.xy * float2(0.5,-0.5) + 0.5) * 4;
//o.uv.y = 1 - o.uv.y;
}
 
struct ps_in
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
};
 
// http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
float radicalInverse_VdC(uint bits) {
     bits = (bits << 16u) | (bits >> 16u);
     bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
     bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
     bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
     bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
     return float(bits) * 2.3283064365386963e-10; // / 0x100000000
 }
 // http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
 float2 Hammersley(uint i, uint N) {
     return float2(float(i)/float(N), radicalInverse_VdC(i));
 }
 
 static const float PI = 3.1415926535897932384626433832795;
 
// Image-Based Lighting
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float3 ImportanceSampleGGX( float2 Xi, float Roughness, float3 N )
{
float a = Roughness * Roughness;
float Phi = 2 * PI * Xi.x;
float CosTheta = sqrt( (1 - Xi.y) / ( 1 + (a*a - 1) * Xi.y ) );
float SinTheta = sqrt( 1 - CosTheta * CosTheta );
float3 H;
H.x = SinTheta * cos( Phi );
H.y = SinTheta * sin( Phi );
H.z = CosTheta;
float3 UpVector = abs(N.z) < 0.999 ? float3(0,0,1) : float3(1,0,0);
float3 TangentX = normalize( cross( UpVector, N ) );
float3 TangentY = cross( N, TangentX );
// Tangent to world space
return TangentX * H.x + TangentY * H.y + N * H.z;
}
 
// http://graphicrants.blogspot.com.au/2013/08/specular-brdf-reference.html
float GGX(float nDotV, float a)
{
float aa = a*a;
float oneMinusAa = 1 - aa;
float nDotV2 = 2 * nDotV;
float root = aa + oneMinusAa * nDotV * nDotV;
return nDotV2 / (nDotV + sqrt(root));
}
 
// http://graphicrants.blogspot.com.au/2013/08/specular-brdf-reference.html
float G_Smith(float a, float nDotV, float nDotL)
{
return GGX(nDotL,a) * GGX(nDotV,a);
}
 
// Environment BRDF
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float2 IntegrateBRDF( float Roughness, float NoV )
{
float3 V;
V.x = sqrt( 1.0f - NoV * NoV ); // sin
V.y = 0;
V.z = NoV; // cos
float A = 0;
float B = 0;
const uint NumSamples = 1024;
[loop]
for( uint i = 0; i < NumSamples; i++ )
{
float2 Xi = Hammersley( i, NumSamples );
float3 H = ImportanceSampleGGX( Xi, Roughness, float3(0,0,1) );
float3 L = 2 * dot( V, H ) * H - V;
float NoL = saturate( L.z );
float NoH = saturate( H.z );
float VoH = saturate( dot( V, H ) );
[branch]
if( NoL > 0 )
{
float G = G_Smith( Roughness, NoV, NoL );
float G_Vis = G * VoH / (NoH * NoV);
float Fc = pow( 1 - VoH, 5 );
A += (1 - Fc) * G_Vis;
B += Fc * G_Vis;
}
}
return float2( A, B ) / NumSamples;
}
 
// Environment BRDF
// http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf
float4 ps_main(in ps_in i) : SV_TARGET0
{
float2 uv = i.uv;
float nDotV = uv.x;
float Roughness = uv.y;
 
float2 integral = IntegrateBRDF(Roughness, nDotV);
 
return float4(integral, 0, 1);
}
 

Edited by Digitalfragment, 27 April 2014 - 05:04 PM.


#20 lipsryme   Members   -  Reputation: 1002


Posted 27 April 2014 - 05:05 PM

Ugh, sorry, it's so confusing :P

So the light vector L is the direction vector that I get as input from the GS to the PS. I guess the naming scheme was messing with me there.

It makes sense, since when I output it I get the directions from the center towards the cube pixels.

What I'm trying to find then is the surface normal N. I guess it is also somehow calculated from the texture coordinates ? Is it just the face direction ?


Edited by lipsryme, 27 April 2014 - 05:13 PM.




