Problem with variable depth bias for soft shadow mapping

2 comments, last by jcabeleira 14 years, 1 month ago
Hello, I've just implemented the Percentage-Closer Soft Shadows technique and, although it works great, I had to set a large depth bias to compensate for the wide softness kernels, which otherwise cause the usual shadow mapping artifact of surface acne. Now I'm trying to replace the huge constant depth bias with a variable one, using the technique presented here (page 40), which uses screen-space derivatives and a Jacobian matrix to compute a proper bias per pixel. Unfortunately, it seems I haven't done it right, since I'm still getting lots of artifacts. So, before posting any of my code here, I'd like to ask if someone has already done this in OpenGL/GLSL and could share some code. The references and code snippets I'm following are all Direct3D versions of the effect, so I suspect I'm not getting the technique right due to some Direct3D/OpenGL difference. Thanks in advance.
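For reference, my understanding of the math (in case I'm misreading the slides): dFdx and dFdy give the screen-space derivatives of the projected shadow coordinates (u, v, z), i.e. the two rows of their Jacobian, and inverting the 2x2 uv part of that Jacobian yields the depth gradient in shadow map space:

dz/du = (dv/dy * dz/dx - dv/dx * dz/dy) / det
dz/dv = (du/dx * dz/dy - du/dy * dz/dx) / det
det   = (du/dx * dv/dy) - (dv/dx * du/dy)

An offset (du, dv) inside the filter kernel then maps to an expected receiver depth change of dz = dz/du * du + dz/dv * dv, which is the bias applied to each sample.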
Since no one replied, I guess it's better if I just post my code.
For the sake of simplicity and readability, I'll post a simplified version of the shader that reproduces the problem but doesn't contain the Percentage-Closer Soft Shadows logic:

float sampleSoftShadowMap(sampler2D shadowMap, vec4 shadowTexCoord, float uvSpaceLightSize)
{
	vec2 texcoord = shadowTexCoord.xy / shadowTexCoord.w;
	float receiverDepth = shadowTexCoord.z / shadowTexCoord.w;

	// Bias: depth gradient in shadow map uv space, obtained by inverting the
	// 2x2 uv part of the Jacobian of the projected shadow coordinates.
	vec2 dz_duv = vec2(0.0); // was 'vec2 dz_duv = 0;', which doesn't compile in GLSL
	vec3 duvdist_dx = dFdx(shadowTexCoord.xyz / shadowTexCoord.w);
	vec3 duvdist_dy = dFdy(shadowTexCoord.xyz / shadowTexCoord.w);
	dz_duv.x = duvdist_dy.y * duvdist_dx.z - duvdist_dx.y * duvdist_dy.z;
	dz_duv.y = duvdist_dx.x * duvdist_dy.z - duvdist_dy.x * duvdist_dx.z;
	float det = (duvdist_dx.x * duvdist_dy.y) - (duvdist_dx.y * duvdist_dy.x);
	dz_duv /= det;

	float kernelRadius = 2.0 * uvSpaceLightSize;
	float result = 0.0;

	// numSamples and samplingVectors (the vec2 sampling offsets) are declared elsewhere.
	for (int i = 0; i < numSamples; ++i)
	{
		// Extrapolate the receiver plane to this sample's uv offset.
		float biasedZ = receiverDepth + dot(dz_duv, samplingVectors[i] * kernelRadius);
		float occluderDepth = texture2D(shadowMap, texcoord + samplingVectors[i] * kernelRadius).x * 0.5 + 0.5;

		if (biasedZ < occluderDepth)
			result += 1.0;
	}

	return result / numSamples;
}


Notice that I'm using linear-depth shadow maps instead of hardware shadow maps, so don't mind the gritty details of my shadow mapping implementation; that part works just fine. The problem should be located in the bias calculation with the screen-space derivatives, but I can't figure it out.
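To at least check whether the gradient itself is sane, the bias term can be written straight to the framebuffer (rough sketch, reusing the variable names from the function above; debugScale is just a made-up constant to bring the values into a visible range):

// Debug view: magnitude of the planar depth bias for a fixed kernel-sized offset.
// Should be near zero on surfaces facing the light and grow at grazing angles.
const float debugScale = 50.0; // hypothetical, tune until the gradient shows up
float biasMagnitude = abs(dot(dz_duv, vec2(kernelRadius, kernelRadius)));
gl_FragColor = vec4(vec3(biasMagnitude * debugScale), 1.0);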
Can you post screenshots? Do you have issues with very small light sizes (and thus small soft edges)?

While the receiver-plane bias is pretty much the best you can do with PCF, it starts breaking down with very large lights, since the receiver no longer approximates a plane over areas that large.
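One mitigation that sometimes helps with large kernels is clamping how far the receiver plane gets extrapolated, so that a near-degenerate plane (grazing angle, tiny determinant) can't produce an unbounded depth offset. A rough sketch, where maxSlopeBias is a tuning constant you'd have to pick for your scene:

// Inside the sampling loop: clamp the planar extrapolation before biasing.
vec2 uvOffset = samplingVectors[i] * kernelRadius;
float zOffset = clamp(dot(dz_duv, uvOffset), -maxSlopeBias, maxSlopeBias); // maxSlopeBias: hypothetical tuning constant
float biasedZ = receiverDepth + zOffset;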
I could post some screenshots, but they'd be useless: the only thing you'd see is the typical surface acne generated by shadow mapping without any bias applied. And that's my problem: besides being wrong, the results have no visible features that could help debug the code. It's just surface acne. :(

I was hoping that someone could spot the problem just by looking at the shader code, since the bias calculations are performed exactly as the document describes. So the intriguing part is that it probably doesn't work because of some difference between the Direct3D and OpenGL implementations (like the different screen coordinate systems), or because I'm using this trick with cascaded shadow maps.
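If anyone wants to test the coordinate-system theory, the quickest experiment is probably just negating the y derivatives to emulate Direct3D's downward-pointing screen y (sketch, same variable names as above):

// Hypothetical test: flip the vertical derivative, as if D3D's ddy() had been used.
vec3 duvdist_dy = -dFdy(shadowTexCoord.xyz / shadowTexCoord.w);

Though, thinking about it, both dz_duv numerators and the determinant flip sign together, so the division should cancel the flip out. The cascades seem the more likely suspect, since dFdx/dFdy become meaningless for pixels whose 2x2 quad straddles a cascade boundary.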

This topic is closed to new replies.
