Percentage-closer Soft Shadows implementation

I am trying to implement Percentage-Closer Soft Shadows (PCSS) from NVIDIA in Unity, but I am facing some issues; I don't know where they come from, and therefore I don't know how to solve them...

Here is my current setup.

I am using an orthographic camera to render my shadow map. Here are the different steps with some pseudo-code.

// Set up the shadow camera
_shadowCamera.clearFlags = CameraClearFlags.Depth;
_shadowCamera.orthographic = true;

// Set up a render texture to receive the shadow map from the camera
RenderTexture _shadowTexture = new RenderTexture((int)_shadowMapSize, (int)_shadowMapSize, 16, RenderTextureFormat.Shadowmap, RenderTextureReadWrite.Linear);

// Render the scene using a replacement shader. This is only used to output depth.
_shadowCamera.SetReplacementShader(_shadowMapShader, "RenderType");
_shadowCamera.Render();

// Set the camera position and matrices
_radius = _bounds;
Vector3 targetPos = _target.transform.position;
Vector3 lightDir = _light.transform.forward;
Quaternion lightRot = _light.transform.rotation;

_shadowCamera.transform.position = targetPos - lightDir * _radius;
_shadowCamera.transform.rotation = lightRot;
_shadowCamera.orthographicSize = _radius;

_shadowCamera.farClipPlane = _radius * 2.0f;

Matrix4x4 shadowViewMatrix = _shadowCamera.worldToCameraMatrix;
Matrix4x4 shadowProjectionMatrix = GL.GetGPUProjectionMatrix(_shadowCamera.projectionMatrix, false);

Matrix4x4 shadowBiasMatrix = Matrix4x4.identity;
shadowBiasMatrix.SetRow(0, new Vector4(0.5f, 0.0f, 0.0f, 0.5f));
shadowBiasMatrix.SetRow(1, new Vector4(0.0f, 0.5f, 0.0f, 0.5f));
shadowBiasMatrix.SetRow(2, new Vector4(0.0f, 0.0f, 1.0f, 0.0f));
shadowBiasMatrix.SetRow(3, new Vector4(0.0f, 0.0f, 0.0f, 1.0f));

_shadowMatrix = shadowBiasMatrix * shadowProjectionMatrix * shadowViewMatrix;

// Transferring data to the shader
_material.SetMatrix("_ShadowMatrix", _shadowMatrix);
_material.SetTexture("_ShadowTexture", _shadowTexture);
_material.SetTexture("u_PointSampler", _pointSampler);
_material.SetFloat("u_NearPlane", _shadowCamera.nearClipPlane);
_material.SetFloat("u_LightWorldSize", _lightWorldSize);
_material.SetFloat("u_LightFrustrumWidth", _lightFrustrumWidth); 

In my shader I am only doing the blocker-search part so far; here is some pseudo-code. Nothing really different from the NVIDIA sample code.

#define BLOCKER_SEARCH_NUM_SAMPLES 16
#define NEAR_PLANE u_NearPlane
#define LIGHT_WORLD_SIZE u_LightWorldSize
#define LIGHT_FRUSTUM_WIDTH u_LightFrustrumWidth
#define LIGHT_SIZE_UV (LIGHT_WORLD_SIZE / LIGHT_FRUSTUM_WIDTH) 

uniform Texture2D               _ShadowTexture;
uniform SamplerComparisonState  sampler_ShadowTexture;
uniform Texture2D               u_PointSampler;
uniform SamplerState            sampleru_PointSampler;

half4 coords = mul(_ShadowMatrix, float4(worldPos.xyz, 1.f));
float2 uv = coords.xy;
float zReceiver = coords.z;
float searchWidth = LIGHT_SIZE_UV * (zReceiver - NEAR_PLANE) / zReceiver;
float blockerSum = u_PointSampler.Sample(sampleru_PointSampler, float2(0, 0)).a;
float numBlockers = 0;

for (int i = 0; i < BLOCKER_SEARCH_NUM_SAMPLES; ++i)
{
    float shadowMapDepth = _ShadowTexture.Sample(sampleru_PointSampler, uv.xy + poissonDisk[i] * searchWidth).r;
    if (shadowMapDepth < zReceiver)
    {
        blockerSum += shadowMapDepth;
        numBlockers++;
    }
}
// Guard against the case where no blockers were found,
// as the NVIDIA sample does, to avoid a divide by zero
if (numBlockers < 1)
    return -1.0; // sentinel: caller treats the point as fully lit
float avgBlockerDepth = blockerSum / numBlockers;
return avgBlockerDepth;

Here is an example of my issue. As you can see on the right, the shadowing seems correct, but if you move the cylinder (left), the penumbra is not computed correctly.

[image: uBosM.jpg]

As I said, I don't know what I am doing wrong. I suppose this comes from the matrix, or maybe the depth, but there might be other problems.

Any help is welcome. Thanks!

Edited by fire67


Percentage-closer soft shadows is a two-pass algorithm (you can calculate both passes in one shader).

The first pass, which is what you pasted here, is the blocker search: you determine how much blurring you need based on the blocker vs. receiver depth difference. The second pass is the actual blur and shadow-map comparison. Can you paste your whole shader here?


Thanks for the answer. I've rebuilt my system and I now understand the whole thing better; I also think I know where the issue comes from.

Let's focus on the near-plane value, the shadow map, and the zReceiver, which is the projected z coordinate.

The shadow map and the zReceiver are calculated within the light view space as you can see in the following screenshot.

 

[attachment=34199:depth.jpg]

 

Now let's see how the searchWidth is calculated, with some screenshots for different NEAR_PLANE values.

float searchWidth = LIGHT_SIZE_UV * (zReceiver - NEAR_PLANE) / zReceiver;

In these examples only the NEAR_PLANE value is changed; the camera's near plane remains at 0.

 

[attachment=34200:near.jpg]

 

As you can see, with a NEAR_PLANE value of zero the searchWidth doesn't vary, unlike with the other values. But I think the other values are wrong and don't behave correctly, since I am using a directional light. To try to fix this I used only the zReceiver, without taking the NEAR_PLANE value into account, and the results are not good.

float searchWidth = LIGHT_SIZE_UV * zReceiver;

As you can see in the red circles in the following screenshots, as you move the object farther away in the shadow map the width gets bigger, and it shouldn't.

 

[attachment=34201:receiver.jpg]

 

The correct behaviour is that the searchWidth shouldn't be affected by the object's position in the shadow map, but rather by its height above the receiver. I hope that makes sense.

I would like to know how I could calculate the searchWidth, as it seems my problem comes from its calculation. Maybe the issue comes from the shadow map or the zReceiver, but I have some doubts about that.


I don't think the search width computation is entirely correct (maybe they have it wrong in the paper); anyway, let me try to derive the math for it here.

 

So, we are investigating a point X in the scene, and we have a shadow map generated from a given direction (with dimensions N x M). We also have a light point L and a radius r (note that even for a directional light we will have a radius). Now we need to calculate how big an area we need to search for blockers inside the whole N x M shadow map.

 

This can actually be done in a simple way: by projecting X into the shadow-map viewport using different centers of projection. In case N = M we are fine with just 2 centers of projection, L and L + u * r, where u is any 3D vector perpendicular to the light direction; otherwise we need 2 vectors that are perpendicular to each other and to the light direction. I will continue with the simple case. The projections result in X_L and X_L'; subtracting these two and taking the magnitude of that vector gives us the search radius for the blocker search. (Note: you actually get the distance in texture space; you would need to multiply by the shadow-map dimensions to get the search radius in pixels, which is not necessary, as in shaders you operate in texture space when sampling a texture.) Now, this sounds like an awful lot of math, right? Well, due to the nature of projections, it isn't...

 

  • Perspective (which is what NVIDIA did in their paper and demo)

The real answer here is similarity of triangles. How big a search radius on the shadow map is necessary? You've got a right-angle triangle like this:

 

     c
  ______
 |a'   /b'
 |____/
 | d /
a|  /
 | / b
 |/

 

a  - distance(light, X)
b  - distance(light + radius, X)
c  - radius
d  - ?
a' - near plane distance

 

This is quite simple to solve by similarity of triangles: d / c = (a - a') / a, so d = c * (a - a') / a. With a' = near-plane distance, a = receiver depth, and c = light radius, that is exactly the searchWidth formula from the NVIDIA sample. The search goes from -d to +d (for any area light bounded by a sphere with the given radius).

  • Orthogonal (which is what you're doing)

Orthogonal is a bit tricky; sketching the image as before would look something like this:

    c
  _____
a'|     |
  |_____|
  |  d  |
a |     | b
  |     |
  |     |
  |     |

Edited by Vilem Otte


Thank you again for your answer, but I'm having some difficulty seeing how to implement it in the current code :(

For a directional light, the search width should be related to the distance between the occluder and the receiver, no?


Alright, here is the current result. I think the technique is 100% hacky/unoptimized/not physically based, but it almost works. Maybe somebody could help me improve it?

[attachment=34211:result.jpg]

 

Here is how I do it: I use the distance between the occluder and the receiver to modulate the size of the search width.

for (int i = 0; i < BLOCKER_SEARCH_NUM_SAMPLES; ++i)
{
	// First tap: estimate the occluder-receiver separation at this sample
	float shadowMapDepthInverted = _ShadowMap.SampleLevel(sampler_ShadowMapSampler, uv + poissonDisk[i] * searchWidth, 0);
	shadowMapDepthInverted += 1.0 - zReceiver;
	shadowMapDepthInverted = 1 - shadowMapDepthInverted;
	shadowMapDepthInverted = pow(shadowMapDepthInverted, A) * B; // A and B are hand-tuned constants
	shadowMapDepthInverted = min(1.0, shadowMapDepthInverted);

	// Second tap: sample again with the modulated search width
	float shadowMapDepth = _ShadowMap.SampleLevel(sampler_ShadowMapSampler, uv + poissonDisk[i] * searchWidth * shadowMapDepthInverted, 0);
	blockerSum += shadowMapDepth;
	numBlockers++;
}
float avgBlockerDepth = blockerSum / numBlockers;
return avgBlockerDepth;


Alright, my apologies for the short delay - I had to find some time to play with Unity (since you mentioned Unity, I didn't really want to post my HLSL or GLSL shaders here, which use uniform buffers, etc. and could be confusing). I'm not a Unity pro user (I've worked on just a few Ludum Dare games in it), so not everything is perfect; anyway, here is my result:

 

gallery_102163_892_12545.png

 

So, what did I do? I used a camera as a depth-map generator (orthogonal, as you did) and set up some parameters. I'm not sure I have them all correct, so I might come back and fix some things up. To the shader:

struct appdata
{
	float4 vertex : POSITION;
};

struct v2f
{
	float4 vertex : SV_POSITION;
	float4 projCoord : TEXCOORD0;
	float4 zCoord : TEXCOORD1;
};

sampler2D _ShadowTexture;
sampler2D _RandomTexture;
float4x4 _ShadowViewMatrix;
float4x4 _ShadowProjectionMatrix;
float4x4 _ShadowBiasMatrix;
float _LightSize;
float _Offset;
float _Bias;
float _NoiseScale;
float _ShadowSize;
int _FilterWidth;

v2f vert(appdata v)
{
	float4x4 projectionMatrix = mul(_ShadowBiasMatrix, mul(_ShadowProjectionMatrix, _ShadowViewMatrix));

	v2f o;
	o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
	o.projCoord = mul(projectionMatrix, mul(unity_ObjectToWorld, v.vertex));
	o.zCoord = mul(_ShadowViewMatrix, mul(unity_ObjectToWorld, v.vertex));
	return o;
}
			
float EstimatePenumbraSize(float receiver, float2 projCoord)
{
	float avgDepth = 0.0;
	float numBlockers = 0.0;
	for (int i = -_FilterWidth; i <= _FilterWidth; i++)
	{
		for (int j = -_FilterWidth; j <= _FilterWidth; j++)
		{
			float2 offset = float2(i, j) * _LightSize * _Offset;
			float depthSample = tex2D(_ShadowTexture, projCoord + offset).x;
			if (depthSample < receiver)
			{
				avgDepth += depthSample;
				numBlockers += 1.0;
			}
		}
	}

	avgDepth /= numBlockers;

	return max((receiver - avgDepth) * _LightSize / avgDepth, 0.0);
}
			
float Filter(float receiver, float2 projCoord, float penumbraSize, float3 randomizer)
{
	float shadow = 0.0;
	float shadowSamples = 0.0;

	for (int i = -_FilterWidth; i <= _FilterWidth; i++)
	{
		for (int j = -_FilterWidth; j <= _FilterWidth; j++)
		{
			float2 offset = float2(i, j) * penumbraSize * _Offset;
			float3 rand = (tex2D(_RandomTexture, randomizer.yz + float2(i, j) * randomizer.xy) * 2.0 - 1.0) * penumbraSize * _Offset * _NoiseScale;
			float depthSample = tex2D(_ShadowTexture, projCoord + offset + rand.xy).x;
			shadow += depthSample < receiver ? 0.0 : 1.0;
			shadowSamples += 1.0;
		}
	}

	return shadow / shadowSamples;
}
			
float4 frag(v2f i) : SV_Target
{
	float lightDepth = -i.zCoord.z;
	float2 projCoord = i.projCoord.xy / i.projCoord.w;

	float penumbra = EstimatePenumbraSize(lightDepth - _Bias, projCoord);
	float shadow = Filter(lightDepth - _Bias, projCoord, penumbra, i.vertex.xyz * 0.01);
	return float4(shadow, shadow, shadow, 1.0);
}

I'll also link you to my Unity project here, so:

 

http://www.otte.cz/PCSS.rar

 

EDIT: For simplicity I've used NxN kernels instead of proper Poisson sampling, but I hope it will be enough for you. Sadly, you can't really use linear filtering here; maybe it would be possible with VSMs, so I might as well try that.

Edited by Vilem Otte


Thank you so much for your answer!

Here is a small gif showing the result with some changes to the code (Poisson sampling, etc.). I'll post the code in the coming days.

 

[attachment=34264:unitypccs.gif]

Edited by fire67

