Shadow mapping is known for its compatibility with rendering hardware, low implementation complexity and ability to handle any kind of geometry. However, aliasing is a very common problem in shadow mapping.
Projection and perspective aliasing are the two main aliasing types which deteriorate projected shadow quality. Since the introduction of shadow mapping, many clever algorithms have been developed to reduce or even completely remove shadow map aliasing. Algorithms which aim to remove aliasing completely are unfortunately not compatible with current GPU architectures for real-time use and usually serve as hardware change proposals (LogPSM, the Irregular Z-Buffer technique). Among the algorithms which do run in real-time, some focus on optimal sample redistribution (PSM, TSM, LiSPSM, CSM) while others serve as filtering techniques (VSM, PCF, BFSM).
Shadow Map Silhouette Revectorization (SMSR) is a filtering technique which re-approximates the shadow silhouette based on the MLAA approach. SMSR consists of two main passes and a final merge pass (three passes in total). The first pass searches for discontinuity information. The second pass determines the discontinuity length and orientation, which are translated into normalized xy-space. The xy-space is used to perform a simple linear interpolation which eventually determines the new edge. The third and final pass merges the new edges on top of the regular shadow map, resulting in a smoother shadow.
Figure 1: From left to right, revectorization process.
Figure 2: Compressed silhouette discontinuity.
Inside the projected shadow map, we find shadow discontinuities by offsetting the projected coordinates by one shadow-map sample in all four directions (left, top, right, bottom). The discontinuity is compressed into a single value per axis (red channel for horizontal and green channel for vertical discontinuity), which is then used in the second pass:
0.0 = no discontinuity
0.5 = depending on the axis, discontinuity to the left or to the bottom
0.75 = discontinuity in both directions
1.0 = depending on the axis, discontinuity to the right or to the top
Fragment Shader - First pass:
struct FBO_FORWARD_IN
{
    float4 color0 : COLOR0;
};

float Visible(sampler2D inShadowMap, float inShadowMapXY, float4 inProjectedCoordinate, int2 inOffset)
{
    return tex2Dproj( inShadowMap, inProjectedCoordinate + float4(inOffset, 0, 0) * (1.0f / inShadowMapXY) ).r;
}

float2 Disc(sampler2D inShadowMap, float inShadowMapXY, float4 inProjectedCoordinate)
{
    float center = Visible(inShadowMap, inShadowMapXY, inProjectedCoordinate, int2( 0, 0));
    float right  = abs(Visible(inShadowMap, inShadowMapXY, inProjectedCoordinate, int2( 1, 0)) - center) * center;
    float left   = abs(Visible(inShadowMap, inShadowMapXY, inProjectedCoordinate, int2(-1, 0)) - center) * center;
    float top    = abs(Visible(inShadowMap, inShadowMapXY, inProjectedCoordinate, int2( 0,-1)) - center) * center;
    float bottom = abs(Visible(inShadowMap, inShadowMapXY, inProjectedCoordinate, int2( 0, 1)) - center) * center;
    float4 disc = float4(left, right, bottom, top);

    // 0.0f  = no discontinuity
    // 0.5f  = depending on the axis, discontinuity to the left or to the bottom
    // 0.75f = discontinuity in both directions
    // 1.0f  = depending on the axis, discontinuity to the right or to the top
    float2 dxdy = 0.75f + (-disc.xz + disc.yw) * 0.25f;

    // step filters out axes where no discontinuities are found
    return dxdy * step(1.0f, float2(dot(disc.xy, 1.0f), dot(disc.zw, 1.0f)));
}
FBO_FORWARD_IN main(float4 inPos : POSITION,
                    float2 inUV : TEXCOORD0,                            // Current fragment
                    uniform sampler2D inSampler1 : TEXUNIT1,            // Shadow map
                    uniform sampler2D inTexPosition : TEXUNIT2,         // Buffer containing world-space positions
                    uniform float4x4 inMatrixShadowLightProjectionBias, // Biased shadow-light view-projection matrix
                    uniform float inConst0,                             // Bias
                    uniform float inConst1)                             // Shadow-map width & height
{
float4 color = float4(0,0,0,0);
    float3 pos = tex2D(inTexPosition, inUV).xyz; // World position; can be reconstructed from depth and the inverse camera projection matrix
// Projected depth-map coordinates, between 0 and 1
float4 biasOffset = float4(0,0, inConst0, 0);
float4 projectedCoordinate = mul(inMatrixShadowLightProjectionBias, float4(pos, 1.0f)) + biasOffset;
// Clip everything outside shadow map rectangle
    // How does this perform? Can we optimize it?
if( projectedCoordinate.x >= 0.0f ||
projectedCoordinate.y >= 0.0f ||