

Krzysztof Narkowicz

Member Since 13 Aug 2010

#5232141 Automatic UV map generation: OpenNL LSCM doesnt work - why ?

Posted by Krzysztof Narkowicz on 01 June 2015 - 06:48 AM

When doing automatic UV unwrap, cutting the mesh into charts is actually a much harder problem than calculating UVs for those charts. I would recommend the UVAtlas library (https://uvatlas.codeplex.com/). It's based on a quite old algorithm (iso-charts), so you could do better nowadays. Still, it's production-proven (e.g. Halo 3 used it) and much better than LSCM with -/+ XYZ cutting. Its weakest part is UV chart packing. Tetris packing was a good idea 10 years ago, but now it's better to just brute force it: simply test every possible UV chart position in the final atlas and N rotations per chart.




#5231145 Spherical Harmonics - analytical solution

Posted by Krzysztof Narkowicz on 26 May 2015 - 04:13 PM

The transfer function cos(theta) depends only on the polar angle and has no azimuthal dependency, so all terms with m != 0 vanish after integration (their sin(m*phi) / cos(m*phi) factors integrate to zero over the full azimuth).




#5227846 Spherical Harmonics - analytical solution

Posted by Krzysztof Narkowicz on 07 May 2015 - 03:12 PM

Eq. 19 has a small error. Instead of n!/2 it should be (n/2)!. So:

For even n: An = 2*pi*((2n + 1)/(4*pi))^0.5*(((-1)^(n/2 - 1))/((n + 2)*(n - 1)))*(n!/(2^n*((n/2)!)^2))
LambdaL = ((4*pi)/(2*l + 1))^0.5

A0 * Lambda0 ~= 0.886227 * 3.54491 = 3.14159
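
For reference, a small sketch with the products AL * LambdaL for the first three bands, computed from the formulas above (the array name is illustrative):

// Convolved clamped-cosine coefficients AhatL = AL * LambdaL, assuming the
// formulas above; the closed-form values are pi, 2*pi/3 and pi/4.
static const float AhatL[3] =
{
    3.141593f, // l = 0: 0.886227 * 3.544908
    2.094395f, // l = 1: 1.023327 * 2.046653
    0.785398f  // l = 2: 0.495416 * 1.585331
};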




#5227840 Spherical Harmonics - analytical solution

Posted by Krzysztof Narkowicz on 07 May 2015 - 02:12 PM

You should additionally multiply An by a normalization term ((4*pi)/(2*l + 1))^0.5. See eq. 23-24.




#5227364 Spherical Harmonics - analytical solution

Posted by Krzysztof Narkowicz on 05 May 2015 - 01:35 PM

For a Lambertian BRDF the transfer function is cos(theta), and a directional light is projected by evaluating the SH basis functions in the light's direction and scaling the results by the light's color. The coefficients from the ShaderX3 article are derived from the spherical harmonic basis functions (Ylm).
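
For illustration, a minimal sketch of projecting a directional light into 3rd-order (9-coefficient) SH, assuming the standard real SH basis; the function name is illustrative and this is not the exact ShaderX3 code:

// dir: normalized direction toward the light, color: light color.
void ProjectDirectionalLightSH9( float3 dir, float3 color, out float3 sh[9] )
{
    float basis[9];
    basis[0] = 0.282095f;                                     // Y0,0
    basis[1] = 0.488603f * dir.y;                             // Y1,-1
    basis[2] = 0.488603f * dir.z;                             // Y1,0
    basis[3] = 0.488603f * dir.x;                             // Y1,1
    basis[4] = 1.092548f * dir.x * dir.y;                     // Y2,-2
    basis[5] = 1.092548f * dir.y * dir.z;                     // Y2,-1
    basis[6] = 0.315392f * ( 3.0f * dir.z * dir.z - 1.0f );   // Y2,0
    basis[7] = 1.092548f * dir.x * dir.z;                     // Y2,1
    basis[8] = 0.546274f * ( dir.x * dir.x - dir.y * dir.y ); // Y2,2

    for ( int i = 0; i < 9; ++i )
    {
        sh[i] = color * basis[i];
    }
}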




#5227251 Spherical Harmonics - analytical solution

Posted by Krzysztof Narkowicz on 05 May 2015 - 01:42 AM

Oliveira uses numerical integration because in his tutorial he projects a light probe and shadowed lights into the SH basis. Both usually don't have an analytical form, so he can't leverage the orthogonality of the SH basis and solve it analytically.
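
A sketch of what that numerical projection looks like, assuming hypothetical SampleEnvMap, UniformSampleSphere and Rand2 helpers, plus an EvalSH9 helper that fills the 9 basis values (like in the directional light sketch above):

void ProjectEnvMapSH9( uint numSamples, out float3 sh[9] )
{
    for ( int i = 0; i < 9; ++i )
        sh[i] = 0.0f;

    for ( uint s = 0; s < numSamples; ++s )
    {
        // Uniformly distributed direction on the sphere; pdf = 1 / (4*pi).
        float3 dir = UniformSampleSphere( Rand2( s ) );

        float basis[9];
        EvalSH9( dir, basis );

        float3 radiance = SampleEnvMap( dir );
        for ( int i = 0; i < 9; ++i )
            sh[i] += radiance * basis[i];
    }

    // Monte Carlo estimator: (1/N) * sum( f / pdf ) = (4*pi / N) * sum( f ).
    float weight = 4.0f * 3.14159265f / numSamples;
    for ( int i = 0; i < 9; ++i )
        sh[i] *= weight;
}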




#5199320 Transparent shadow casters with 'Translucency Map'

Posted by Krzysztof Narkowicz on 20 December 2014 - 03:41 PM

Crytek's approach has only one drawback - transparent surfaces can't cast shadows onto other transparent surfaces.

It works as follows:
1. Draw opaque surfaces into the shadow map.
2. Clear the translucency map to white.
3. Draw translucent surfaces into the translucency map, with the opaque shadow map bound as the depth buffer and depth writes disabled.
4. In order to apply shadows, sample both the shadow map and the translucency map: Shadow = Min( ShadowMapDepthTest, TranslucencyMap ).

So if a translucent shadow caster is behind the opaque one, it will be discarded by the depth test. Otherwise it will modify the translucency map, but for receivers already shadowed by the opaque caster this won't matter, as the depth test result will override this value during shadow evaluation.
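
A minimal HLSL sketch of step 4, assuming a comparison sampler for the opaque shadow map and already-projected shadow-space coordinates; resource names are illustrative:

Texture2D<float> ShadowMap;
Texture2D<float4> TranslucencyMap;
SamplerComparisonState ShadowCmpSampler;
SamplerState LinearSampler;

// shadowPos.xy = shadow map UV, shadowPos.z = receiver depth in light space.
float EvaluateShadow( float3 shadowPos )
{
    // 1 = lit, 0 = shadowed by an opaque caster.
    float opaqueShadow = ShadowMap.SampleCmpLevelZero( ShadowCmpSampler, shadowPos.xy, shadowPos.z );

    // Transmittance of translucent casters (cleared to white = fully lit).
    float translucency = TranslucencyMap.SampleLevel( LinearSampler, shadowPos.xy, 0.0f ).r;

    // Shadow = Min( ShadowMapDepthTest, TranslucencyMap ).
    return min( opaqueShadow, translucency );
}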


#5165357 OculusVR acquired and open sourced RakNet

Posted by Krzysztof Narkowicz on 07 July 2014 - 03:07 PM

OculusVR just open-sourced RakNet - a C++ multiplatform network library used, for example, by Unity and Gamebryo. It's completely free, even for commercial use (BSD license with all patents granted). It's also a great reference for writing your own multiplayer transport layer.

https://github.com/OculusVR/RakNet




#5165350 Grass Rendering Questions

Posted by Krzysztof Narkowicz on 07 July 2014 - 02:41 PM

You can use the pixel shader input VFACE / SV_IsFrontFace / gl_FrontFacing to check whether the current pixel belongs to a front-facing or a back-facing triangle. If it's back-facing, just flip the normal and shade as usual.
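
A minimal HLSL sketch using SV_IsFrontFace; the lighting at the end is just a placeholder:

float4 GrassPS( float3 normalWS : NORMAL, float3 albedo : COLOR0, bool isFrontFace : SV_IsFrontFace ) : SV_Target
{
    // Flip the normal for back-facing triangles so both sides of the blade shade correctly.
    float3 n = normalize( normalWS );
    n = isFrontFace ? n : -n;

    // Placeholder lighting: a single directional light (direction and color are illustrative).
    float3 lightDir = normalize( float3( 0.3f, 1.0f, 0.2f ) );
    float3 lighting = saturate( dot( n, lightDir ) ) * float3( 1.0f, 1.0f, 0.9f );
    return float4( albedo * lighting, 1.0f );
}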




#5163632 UE4 IBL / shading confusion

Posted by Krzysztof Narkowicz on 29 June 2014 - 09:11 AM

Metallic works like Tessellator wrote, but there is also an additional specular parameter (equal to 0.5 by default). This specular parameter is driven by a single-channel cavity map.

BaseColor = Cavity * OldBaseColor
Specular = Cavity * 0.5

DiffuseColor = lerp( BaseColor, 0, Metallic )
SpecularColor = lerp( 0.08 * Specular, BaseColor, Metallic )
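
Put together as a small HLSL helper (just a sketch of the parametrization above, not actual UE4 source code; 0.08 * Specular corresponds to a dielectric F0 of 0.04 at the default Specular = 0.5):

void ComputeDiffuseSpecularColor( float3 baseColor, float metallic, float specular,
                                  out float3 diffuseColor, out float3 specularColor )
{
    // Metals get no diffuse term; dielectrics get a small, cavity-scaled F0.
    diffuseColor  = lerp( baseColor, float3( 0.0f, 0.0f, 0.0f ), metallic );
    specularColor = lerp( 0.08f * specular, baseColor, metallic );
}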

https://forums.unrealengine.com/archive/index.php/t-3869.html

https://docs.unrealengine.com/latest/INT/Engine/Rendering/Materials/PhysicallyBased/index.html

 

As for the split sum approximation, try comparing it with a reference shader which takes hundreds of samples per pixel in real time.
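
For example, a sketch of such a reference, importance-sampling the GGX distribution and integrating the full microfacet BRDF against the environment map (EnvMap / EnvMapSampler are assumed resources; this is an illustrative implementation, not the exact UE4 reference shader):

#define PI 3.14159265f

TextureCube<float4> EnvMap;
SamplerState EnvMapSampler;

float2 Hammersley( uint i, uint numSamples )
{
    // Van der Corput radical inverse in the second channel.
    return float2( (float)i / numSamples, reversebits( i ) * 2.3283064365386963e-10f );
}

float3 ImportanceSampleGGX( float2 xi, float roughness, float3 n )
{
    float a = roughness * roughness;
    float phi = 2.0f * PI * xi.x;
    float cosTheta = sqrt( ( 1.0f - xi.y ) / ( 1.0f + ( a * a - 1.0f ) * xi.y ) );
    float sinTheta = sqrt( 1.0f - cosTheta * cosTheta );

    // Half vector in tangent space, rotated into the frame around n.
    float3 h = float3( sinTheta * cos( phi ), sinTheta * sin( phi ), cosTheta );
    float3 up = abs( n.z ) < 0.999f ? float3( 0, 0, 1 ) : float3( 1, 0, 0 );
    float3 tangentX = normalize( cross( up, n ) );
    float3 tangentY = cross( n, tangentX );
    return tangentX * h.x + tangentY * h.y + n * h.z;
}

float SmithGGX_G1( float nDotV, float roughness )
{
    float a2 = pow( roughness, 4.0f );
    return 2.0f * nDotV / ( nDotV + sqrt( a2 + ( 1.0f - a2 ) * nDotV * nDotV ) );
}

float3 ReferenceSpecularIBL( float3 specularColor, float roughness, float3 n, float3 v )
{
    const uint numSamples = 1024;
    float3 result = 0.0f;

    for ( uint i = 0; i < numSamples; ++i )
    {
        float3 h = ImportanceSampleGGX( Hammersley( i, numSamples ), roughness, n );
        float3 l = 2.0f * dot( v, h ) * h - v;

        float nDotV = saturate( dot( n, v ) );
        float nDotL = saturate( dot( n, l ) );
        float nDotH = saturate( dot( n, h ) );
        float vDotH = saturate( dot( v, h ) );

        if ( nDotL > 0.0f )
        {
            float3 radiance = EnvMap.SampleLevel( EnvMapSampler, l, 0.0f ).rgb;
            float g = SmithGGX_G1( nDotV, roughness ) * SmithGGX_G1( nDotL, roughness );
            float fc = pow( 1.0f - vDotH, 5.0f );
            float3 f = ( 1.0f - fc ) * specularColor + fc;

            // BRDF * nDotL / pdf; with pdf = D * nDotH / ( 4 * vDotH ) the D term cancels.
            result += radiance * f * g * vDotH / max( nDotH * nDotV, 1e-4f );
        }
    }

    return result / numSamples;
}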




#5156343 Doing local fog (again)

Posted by Krzysztof Narkowicz on 27 May 2014 - 02:47 PM

For an interesting general volumetric fog solution, check out the Assassin's Creed 4 presentations by Bart Wroński: http://bartwronski.com/publications/




#5149932 IBL Problem with consistency using GGX / Anisotropy

Posted by Krzysztof Narkowicz on 27 April 2014 - 02:43 PM

The thing that I can't seem to figure out is how this direction of the pixel (L) is calculated ?


That depends on how you want to integrate:

1. If you want to calculate it exactly (loop over all source cubemap texels), then calculate the direction from the cubemap face ID and the texel coordinates. That direction is a vector from the cubemap origin to the current texel's center. For example, for the +X face:
dir = normalize( float3( 1, ( ( texelX + 0.5 ) / cubeMapSize ) * 2 - 1, ( ( texelY + 0.5 ) / cubeMapSize ) * 2 - 1 ) )
2. If you want to approximate, then just generate some random vectors using a texture lookup or some rand shader function.

The first approach is a bit more complicated because you additionally need to weight samples by their solid angle. Texels near cubemap corners subtend a smaller solid angle and should influence the result less. The solid angle can be calculated by projecting the texel onto the unit sphere and then calculating its area on the sphere. BTW, this is something that AMD CubeMapGen does.
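
A sketch of that per-texel solid angle weight, with texel corners mapped to [-1, 1] face coordinates and projected onto the unit sphere (function names are illustrative):

float CubeMapAreaElement( float x, float y )
{
    return atan2( x * y, sqrt( x * x + y * y + 1.0f ) );
}

float CubeMapTexelSolidAngle( float texelX, float texelY, float cubeMapSize )
{
    // Texel corners in [-1, 1] face coordinates.
    float invSize = 1.0f / cubeMapSize;
    float x0 = texelX * 2.0f * invSize - 1.0f;
    float y0 = texelY * 2.0f * invSize - 1.0f;
    float x1 = ( texelX + 1.0f ) * 2.0f * invSize - 1.0f;
    float y1 = ( texelY + 1.0f ) * 2.0f * invSize - 1.0f;

    // Solid angle of the texel = signed sum of the projected corner areas.
    return CubeMapAreaElement( x0, y0 ) - CubeMapAreaElement( x0, y1 )
         - CubeMapAreaElement( x1, y0 ) + CubeMapAreaElement( x1, y1 );
}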

Does it even make sense to use importance sampling for an irradiance environment map because of the nature of it being very low frequency ?


Frequency doesn't matter here. The idea of importance sampling is to bias the random distribution according to how much a given sample influences the result. For example, we can skip l directions that lie in the opposite hemisphere ( dot( n, l ) <= 0 ) and bias towards directions with a higher weight ( larger dot( n, l ) ).
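
For example, a cosine-weighted direction generator that applies this bias (a sketch, assuming xi is a uniform random pair in [0, 1)):

// Returns a direction in the hemisphere around n with pdf = cos(theta) / pi.
float3 CosineSampleHemisphere( float2 xi, float3 n )
{
    float phi = 2.0f * 3.14159265f * xi.x;
    float cosTheta = sqrt( 1.0f - xi.y );
    float sinTheta = sqrt( xi.y );

    // Direction in tangent space, then rotate into the hemisphere around n.
    float3 dir = float3( sinTheta * cos( phi ), sinTheta * sin( phi ), cosTheta );
    float3 up = abs( n.z ) < 0.999f ? float3( 0, 0, 1 ) : float3( 1, 0, 0 );
    float3 tangentX = normalize( cross( up, n ) );
    float3 tangentY = cross( n, tangentX );
    return tangentX * dir.x + tangentY * dir.y + n * dir.z;
}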


#5149713 IBL Problem with consistency using GGX / Anisotropy

Posted by Krzysztof Narkowicz on 26 April 2014 - 03:23 PM

The correct way would be to compute the integral mentioned in my previous post. You can also approximate it using the Monte Carlo method in order to develop intuition for how it works. Basically, for every destination cubemap texel (which corresponds to some direction n), take x random directions, sample the source env map, and treat each sample like a directional light. This is very similar to the GGX stuff you are already doing.

// Average the contribution of x random directions, each treated as a
// directional light with a Lambert BRDF:
for every destination texel (direction n)
    sum = 0
    for x random directions l
        sum += saturate( dot( n, l ) ) * sampleEnvMap( l );
    sum = sum / ( x * PI )

For better quality you can use importance sampling by biasing the random distribution. Some directions are more "important" - for example, you don't care about random light vectors located in a different hemisphere than the current normal vector.

 




#5146978 where to start Physical based shading ?

Posted by Krzysztof Narkowicz on 14 April 2014 - 02:32 PM

It's very similar for Cook-Torrance. Check out the great sample by MJP, which implements various specular AA techniques with Cook-Torrance: https://mjp.codeplex.com/releases/view/109905




#5146764 IBL Problem with consistency using GGX / Anisotropy

Posted by Krzysztof Narkowicz on 13 April 2014 - 02:30 PM

In layman's terms, you need to treat the cubemap as a lookup table and precompute some data in it from a source cubemap. Let's start with simple Lambert diffuse to see how it works.

 

For Lambert diffuse we want to precompute lighting per normal direction and store it in a cubemap. To do that we need to solve a simple integral: 

E(n) = Integral over the hemisphere around n of SourceCubemap(l) * ( n dot l ) dl

Which in our case means: for every texel of the destination cubemap (corresponding to a normal direction n), calculate a value E(n) by iterating over all source cubemap texels and summing their contributions, where each contribution = SourceCubemap(l) * ( n dot l ) * SourceCubemapTexelSolidAngle. This operation is called the cosine filter in AMD CubeMapGen.
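
A sketch of that summation as a loop over all source cubemap texels; GetCubeMapTexelDirection and CubeMapTexelSolidAngle are assumed helpers like the ones discussed earlier, and SourceCubemap is stored as a Texture2DArray with one slice per face:

Texture2DArray<float4> SourceCubemap;

float3 ComputeIrradiance( float3 n, uint cubeMapSize )
{
    float3 e = 0.0f;
    for ( uint face = 0; face < 6; ++face )
    {
        for ( uint y = 0; y < cubeMapSize; ++y )
        {
            for ( uint x = 0; x < cubeMapSize; ++x )
            {
                float3 l = GetCubeMapTexelDirection( face, x, y, cubeMapSize ); // assumed helper
                float nDotL = saturate( dot( n, l ) );
                if ( nDotL > 0.0f )
                {
                    float solidAngle = CubeMapTexelSolidAngle( x, y, cubeMapSize ); // assumed helper
                    e += SourceCubemap.Load( int4( x, y, face, 0 ) ).rgb * nDotL * solidAngle;
                }
            }
        }
    }
    return e;
}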

 

For a simple Phong model (no Fresnel etc.) we can precompute reflections in a similar way and store them in a destination cubemap. Unfortunately, even after just adding Fresnel, or when using Blinn-Phong, there are too many input variables and the exact results don't fit into a single cubemap. At that point you have to approximate, use more storage, or take more than one sample at runtime.





