Screen-Space Subsurface Scattering artifacts (help)

5 comments, last by lipsryme 9 years, 10 months ago

Update: I just had another look at his sample app, and he has the exact same artifacts when FOLLOW SURFACE is disabled, but when it's enabled they are completely gone, unlike mine... I have no idea why. I've looked over every line of his shaders and mine does the exact same thing. I've also compared texture/RT formats -> same.

And the issue is not just with the skybox... I tried it with a bright white plane in the background: same artifacts. MSAA is also not the culprit.

I've implemented screen-space SSS based on Jimenez's sample, but I'm having big issues with artifacts that seem to occur when there are bright pixels near or behind the pixels that the SSS is applied to (the skin of a character).

I've set the diffuse color to black so the artifacts can be seen clearly.

This is with "FOLLOW SURFACE off":

[image: eDhWnks.png]

This is with "FOLLOW SURFACE" on:

[image: WgfDH5V.png]

As you can see this makes it a little better, but the artifacts (rainbow-colored lines) are still there.

Any ideas? Is this depth related? The thing is, I'm not seeing these artifacts in his sample app, so this must be a problem in my renderer.

Since I'm using MSAA I can't use the stencil buffer to mask out the geometry, so instead I use the separate RT where I render the specular.

So the setup looks like the following:

Render mesh (non-skin shading):

RT0: Color [R16G16B16A16]

RT1: (Spec) Specular.rgb / (SSS mask) Specular.a [R16G16B16A16]

RT2: Linear depth (input.PositionCS.w) [R32_FLOAT]

Then a post process binds the specular target and rejects any pixel whose Specular.a is > 0.0f.
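To illustrate the layout above, here is a minimal sketch of what the mesh pass's pixel shader output could look like. The struct and the `ShadeNonSkin`/`ComputeSpecular` helpers are hypothetical names for illustration, not code from the sample:

```hlsl
// Hypothetical G-buffer output for the mesh pass described above.
// RT1 carries specular in .rgb and the SSS mask in .a: non-skin pixels
// write a mask > 0 so the SSS blur rejects them later.
struct PSOutput
{
    float4 Color    : SV_TARGET0; // shaded color [R16G16B16A16]
    float4 SpecMask : SV_TARGET1; // Specular.rgb / SSS mask in .a
    float  Depth    : SV_TARGET2; // linear depth (PositionCS.w) [R32_FLOAT]
};

PSOutput NonSkinPS(VSOutput input)
{
    PSOutput o;
    o.Color    = ShadeNonSkin(input);                    // assumed shading helper
    o.SpecMask = float4(ComputeSpecular(input), 1.0f);   // .a > 0 => not skin
    o.Depth    = input.PositionCS.w;
    return o;
}
```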

The error happens during the SSS blur passes.

Let me show you my code:


#define SSSS_N_SAMPLES 17
static const float4 kernel[SSSS_N_SAMPLES] =
{
        float4(0.546002, 0.63378, 0.748867, 0),
        float4(0.00310782, 0.000131535, 3.77269e-005, -2),
        float4(0.00982943, 0.00089237, 0.000275702, -1.53125),
        float4(0.0141597, 0.00309531, 0.00106399, -1.125),
        float4(0.0211795, 0.00775237, 0.00376991, -0.78125),
        float4(0.0340081, 0.01474, 0.00871983, -0.5),
        float4(0.0559159, 0.0280422, 0.0172844, -0.28125),
        float4(0.0570283, 0.0643862, 0.0411329, -0.125),
        float4(0.0317703, 0.0640701, 0.0532821, -0.03125),
        float4(0.0317703, 0.0640701, 0.0532821, 0.03125),
        float4(0.0570283, 0.0643862, 0.0411329, 0.125),
        float4(0.0559159, 0.0280422, 0.0172844, 0.28125),
        float4(0.0340081, 0.01474, 0.00871983, 0.5),
        float4(0.0211795, 0.00775237, 0.00376991, 0.78125),
        float4(0.0141597, 0.00309531, 0.00106399, 1.125),
        float4(0.00982943, 0.00089237, 0.000275702, 1.53125),
        float4(0.00310782, 0.000131535, 3.77269e-005, 2),
};

float4 SSS_Blur(in float2 TexCoord, in float2 dir)
{
	// Fetch color of current pixel:
	float4 colorM = InputTexture0.SampleLevel(PointSampler, TexCoord, 0);

	// Skip non-skin pixels, using the specular target's alpha as the SSS mask:
	if (InputTexture2.SampleLevel(PointSampler, TexCoord, 0).a > 0.0f)
		return colorM;
			
	// Fetch linear depth of current pixel:
	float depthM = InputTexture1.SampleLevel(PointSampler, TexCoord, 0).r;

	// Calculate the sssWidth scale (1.0 for a unit plane sitting on the
	// projection window):
	float distanceToProjectionWindow = 1.0f / tan(0.5f * SSS_FOV);
	float scale = distanceToProjectionWindow / depthM;

	// Calculate the final step to fetch the surrounding pixels:
	float2 finalStep = 0.012f * scale * dir;
	finalStep *= colorM.a; // Modulate it using the alpha channel.
	finalStep *= 1.0f / 3.0f; // Divide by 3 as the kernels range from -3 to 3.
		
	// Accumulate the center sample:
	float3 colorBlurred = colorM.rgb * kernel[0].rgb;
				
	
	// Accumulate the other samples:
	[unroll]
	for (int i = 1; i < SSSS_N_SAMPLES; i++)
	{
		// Fetch color and depth for current sample:
		float2 offset = TexCoord + kernel[i].a * finalStep;
		float3 color = InputTexture0.SampleLevel(LinearClampSampler, offset, 0).rgb;

		// --- FOLLOW SURFACE ------------------------------------------------------------------
		// If the difference in depth is huge, we lerp color back to "colorM":
		float depth = InputTexture1.SampleLevel(LinearClampSampler, offset, 0).r;
		float s = saturate(300.0f * distanceToProjectionWindow * 0.012f * abs(depthM - depth));
		color.rgb = lerp(color.rgb, colorM.rgb, s);
		//--------------------------------------------------------------------------------------

		// Accumulate:
		colorBlurred.rgb += kernel[i].rgb * color.rgb;
	}

	
	return float4(colorBlurred, colorM.a);
}


float4 SSS_Convolution_H(in VSOutput input) : SV_TARGET0
{
	return SSS_Blur(input.TexCoord, float2(1, 0));
}


float4 SSS_Convolution_V(in VSOutput input) : SV_TARGET0
{
	return SSS_Blur(input.TexCoord, float2(0, 1));
}


float3 SSS_AddSpecular(in VSOutput input) : SV_TARGET0
{
	return InputTexture0.Sample(LinearClampSampler, input.TexCoord).rgb +
	       InputTexture1.Sample(LinearClampSampler, input.TexCoord).rgb;
}

It's possible you're having depth precision issues. If you tweak your near and far planes, does the banding change size? If that's the problem, you could take a different approach and only change your blur width based on the first sample. I know the original Jimenez paper did a ddx and ddy calculation on the first depth sample to figure out the slope.
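A rough sketch of that slope-based idea, assuming the center depth has already been fetched into `depthM`. The constants and the helper name are illustrative guesses, not taken from the paper verbatim:

```hlsl
// Sketch: estimate the surface slope at the center tap with ddx/ddy and
// shrink the blur width on steep, grazing surfaces, instead of depth-testing
// every sample inside the loop. The 50.0f falloff constant is illustrative.
float2 ComputeSlopeAwareStep(float2 dir, float scale, float depthM)
{
    float2 slope    = float2(ddx(depthM), ddy(depthM)); // screen-space depth gradient
    float  flatness = saturate(1.0f - 50.0f * length(slope));
    return 0.012f * scale * dir * flatness;             // narrower step on steep slopes
}
```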

Edit:

Oh, also, just for testing, try sampling the alpha of your specular map on each blur sample and reject the color if the resulting sample is greater than 0. (I'm thinking my previous comment is wrong now, but I'll keep it just in case.)
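A sketch of what that per-sample test could look like inside the existing blur loop, reusing the thread's texture and sampler names; falling back to the center color is one possible rejection strategy:

```hlsl
// Inside the blur loop, after computing "offset" for sample i:
float  maskS = InputTexture2.SampleLevel(PointSampler, offset, 0).a;
float3 color = InputTexture0.SampleLevel(LinearClampSampler, offset, 0).rgb;

// Reject non-skin taps by falling back to the center color,
// so bright background pixels never enter the accumulation:
color = (maskS > 0.0f) ? colorM.rgb : color;
```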

Testing every blur sample inside the loop actually works! But it has gotten quite a bit slower (almost a millisecond on close-ups).

Even when leaving out the follow-surface part (which doesn't do anything now). But why does his method work for him and not for me? I don't think changing the near/far planes will do anything; I checked his projection and it's 0.1f / 100.0f, which is the same as I use for my viewport.

As far as I can tell he's also just using a regular ResolveSubresource on the R32_FLOAT linear depth RT. I did try it without MSAA, though, and it was the same.


Try doing a pre-pass where the resulting texture is pre-tested.

But it's inside the loop that uses the calculated offset texcoords; how would I do that in a pre-pass?

Premultiply masked areas to zero so bright stuff doesn't bleed.
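A minimal sketch of that pre-pass, reusing the thread's texture and sampler names: zero out the masked (non-skin) pixels once, up front, so the blur taps can sample the background freely without pulling in bright colors:

```hlsl
// Pre-pass: write a copy of the scene color where every pixel with a set
// SSS mask (Specular.a > 0) is forced to black, so the subsequent blur
// passes can sample it without any per-tap mask test.
float4 SSS_MaskPrepass(in VSOutput input) : SV_TARGET0
{
    float4 color = InputTexture0.SampleLevel(PointSampler, input.TexCoord, 0);
    float  mask  = InputTexture2.SampleLevel(PointSampler, input.TexCoord, 0).a;
    return (mask > 0.0f) ? float4(0, 0, 0, color.a) : color;
}
```

The trade-off is one extra full-screen pass, but it removes the mask fetch and branch from every tap of both blur directions.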

Thanks, that made it quite a bit faster... it still feels a little hacky all around, but whatever.

The overhead of that technique is still quite high, as you need to handle several special cases... Has anyone tried doing it at half resolution? I don't want to wash out skin details, though...

It's circa 4 ms on close-ups at 720p.

Update:

The blur passes themselves are just about 2 ms, but the MSAA resolves for the scene, linear depth, and specular targets add up to 1-2 ms (according to AMD PerfStudio), plus the additional mask pre-pass and add-specular passes...

