SSAO with Deferred Shading Issues

3 comments, last by WFP 11 years ago

Greetings,

In my deferred shading pipeline, I am trying to add screen-space ambient occlusion. I run the SSAO pass after the G-Buffer is created, so I have the normals in view space and can reconstruct the view-space position from the depth buffer. I originally implemented it in my forward renderer following the example from Frank Luna's Direct3D 11 book, and it worked well, but I am running into some issues adapting it to my deferred shading approach.

My G-Buffer normal render target uses the R32G32B32A32_FLOAT format and my depth buffer is D24S8, so precision shouldn't be the issue. As mentioned, I'm reconstructing the view-space position from depth as described here by MJP.

The results I'm getting are shown below. As you can see in the first image, some of the occlusion looks correct, particularly where the box sits over the ground, where the corner of the upper-middle box touches the lower-middle box, and in the occlusion on the sphere behind the left-most box. You'll notice in the first image that some occlusion is generated in the gap between the upper two boxes, but not as much as I would expect. Furthermore, when I move the camera to the right very slightly (second image), the occlusion between them basically disappears, which makes me think I'm doing something wrong somewhere in view space. Notice also that for those two boxes, the screen-facing faces have nothing in front of them, so I would expect them to receive no occlusion at all.

The third image is the same scene rendered from a different angle, this time showing that the sphere is somehow getting occlusion on faces that have no other geometry in front of them.

Also, the images are intentionally unblurred so we can see them for what they are and hopefully get a better idea of what is happening. I've tried it with bilateral blur passes enabled and the blurring works fine, but it's still just blurring the same incorrect data. Any ideas that might help in fixing this issue are very welcome and appreciated.
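
For reference, a single horizontal pass of the kind of depth-aware blur I mean looks roughly like the sketch below. This is a simplified illustration, not my exact pass, and the names (inputTexture, depthTexture, gTexelSize, gDepthThreshold) are placeholders:

Texture2D inputTexture : register(t0);    // raw SSAO map
Texture2D depthTexture : register(t1);
SamplerState samLinear : register(s0);

cbuffer cbBlur : register(cb0)
{
	float2 gTexelSize;      // 1.0 / render target dimensions
	float gDepthThreshold;  // max depth difference to treat samples as the same surface
	float _padding;
};

float4 main(float4 posH : SV_POSITION, float2 tex : TEXCOORD) : SV_TARGET
{
	float centerDepth = depthTexture.SampleLevel(samLinear, tex, 0.0f).r;

	float sum = 0.0f;
	float weightSum = 0.0f;

	[unroll]
	for(int i = -4; i <= 4; ++i)
	{
		float2 offsetTex = tex + float2(i, 0.0f) * gTexelSize;
		float sampleDepth = depthTexture.SampleLevel(samLinear, offsetTex, 0.0f).r;

		// only average samples that belong to (roughly) the same surface,
		// so occlusion doesn't bleed across depth discontinuities
		// (the center sample at i == 0 always passes this test)
		if(abs(sampleDepth - centerDepth) < gDepthThreshold)
		{
			sum += inputTexture.SampleLevel(samLinear, offsetTex, 0.0f).r;
			weightSum += 1.0f;
		}
	}

	return sum / weightSum;
}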

Here is the shader code I am using.

Vertex Shader:


struct VertexIn
{
	float3 posL : POSITION;
	float2 tex : TEXCOORD;
};

struct VertexOut
{
	float4 posH : SV_POSITION;
	float3 viewRay : VIEWRAY;
	float2 tex : TEXCOORD;
};

cbuffer cbPerFrame : register(cb0)
{
	float4x4 inverseProjectionMatrix;
};

VertexOut main(VertexIn vIn)
{
	VertexOut vOut;

	// already in NDC space
	vOut.posH = float4(vIn.posL, 1.0f);

	// unproject the full-screen quad corner; dividing xy by z cancels the
	// homogeneous w, leaving a view-space ray normalized to z = 1
	float3 positionV = mul(float4(vIn.posL, 1.0f), inverseProjectionMatrix).xyz;
	vOut.viewRay = float3(positionV.xy / positionV.z, 1.0f);

	// pass to pixel shader
	vOut.tex = vIn.tex;

	return vOut;
}

Pixel Shader:


struct VertexOut
{
	float4 posH : SV_POSITION;
	float3 viewRay : VIEWRAY;
	float2 tex : TEXCOORD;
};

cbuffer cbPerFrame : register(cb0)
{
	float4x4 gViewToTexSpace; // proj * texture
	float4 gOffsetVectors[14];

	float gOcclusionRadius;    //0.5f
	float gOcclusionFadeStart; // 0.2f
	float gOcclusionFadeEnd;   // 2.0f
	float gSurfaceEpsilon;     // 0.05f

	// for reconstructing position from depth
	float projectionA;
	float projectionB;

	float2 _padding;
};

Texture2D normalTexture : register(t0);
Texture2D depthStencilTexture : register(t1);
Texture2D randomVecMap : register(t2);

SamplerState samNormalDepth : register(s0);
SamplerState samRandomVec : register(s1);

// determines how much the sample point q occludes the point p as a function of distZ
float occlusionFunction(float distZ)
{
	float occlusion = 0.0f;
	if(distZ > gSurfaceEpsilon)
	{
		float fadeLength = gOcclusionFadeEnd - gOcclusionFadeStart;

		// linearly decrease occlusion from 1 to 0 as distZ goes from fade start to end
		occlusion = saturate((gOcclusionFadeEnd - distZ) / fadeLength);
	}
	return occlusion;
}

float4 main(VertexOut pIn) : SV_TARGET
{
	float3 normal = normalize(normalTexture.SampleLevel(samNormalDepth, pIn.tex, 0.0f).xyz);
	float depth = depthStencilTexture.SampleLevel(samNormalDepth, pIn.tex, 0.0f).r;
	// linearize the hardware depth, then scale the interpolated ray to recover the view-space position
	float linearDepth = projectionB / (depth - projectionA);
	float3 position = pIn.viewRay * linearDepth;

	// sample the random vector map and remap from [0, 1] to [-1, 1]
	float3 randVec = 2.0f * randomVecMap.SampleLevel(samRandomVec, 4.0f * pIn.tex, 0.0f).rgb - 1.0f;

	float occlusionSum = 0.0f;

	// sample neighboring points about position in the hemisphere oriented by normal
	[unroll]
	for(int i = 0; i < 14; ++i)
	{
		// offset vectors are fixed and uniformly distributed - reflecting them about a random vector gives a random, uniform distribution
		float3 offset = reflect(gOffsetVectors[i].xyz, randVec);

		// flip the offset vector if it is behind the plane defined by (position, normal)
		float flip = sign(dot(offset, normal));

		// sample a point near position within the occlusion radius
		float3 q = position + flip * gOcclusionRadius * offset;

		// project q and generate projective tex-coords
		float4 projQ = mul(float4(q, 1.0f), gViewToTexSpace);
		projQ.xy /= projQ.w;

		// find nearest depth value along ray from eye to q
		float rz = depthStencilTexture.SampleLevel(samNormalDepth, projQ.xy, 0.0f).r;

		// reconstruct full view space position r = (rx, ry, rz)
		linearDepth = projectionB / (rz - projectionA);
		float3 r = pIn.viewRay * linearDepth;

		// test whether r occludes position
		float distZ = position.z - r.z;
		float dp = max(dot(normal, normalize(r - position)), 0.0f);
		float occlusion = dp * occlusionFunction(distZ);

		occlusionSum += occlusion;
	}

	occlusionSum /= 14;

	float access = 1.0f - occlusionSum;

	// sharpen the contrast of the SSAO map to make the effect more dramatic
	return saturate(pow(access, 4.0f));
}

Here is the application code for setting projectionA and projectionB (from Matt's post).


float clipDiff = farClipDistance - nearClipDistance;
float projectionA = farClipDistance / clipDiff;
float projectionB = (-farClipDistance * nearClipDistance) / clipDiff;
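
For completeness, here is where those constants come from. A D3D-style projection writes zNDC = A + B / zView, with A = far / (far - near) and B = -far * near / (far - near), so inverting that gives the linearization used in the pixel shader. The helper below is purely illustrative:

// illustrative only - same math as "projectionB / (depth - projectionA)" above
float linearizeDepth(float zNDC, float projectionA, float projectionB)
{
	// zNDC = projectionA + projectionB / zView  =>  zView = projectionB / (zNDC - projectionA)
	return projectionB / (zNDC - projectionA);
}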

Thanks!


So you're doing world-space SSAO, but your normals are in view space?

Try setting dp here

    float dp = max(dot(normal, normalize(r - position)), 0.0f);

to 1.0f. Although that ignores the normal, the result should look reasonably stable. Then you know you have to either use world-space normals, or transform the world-space vector between the surface position and the 'hit' position of the SSAO sample into view space.
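
(For reference, moving a world-space direction into view space is just a 3x3 multiply with the view matrix. A minimal sketch, where gView is an assumed constant buffer binding and non-uniform scale is ignored:)

// assumed binding: float4x4 gView (world -> view)
float3 normalV = normalize(mul(normalW, (float3x3)gView)); // directions ignore translation
float3 toHitV = mul(toHitW, (float3x3)gView);              // surface-to-hit vector into view space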

Krypt0n: Just tried your suggestion, and it didn't make a difference besides making the effect darker on the screen (of course, since it was always 1.0f instead of between 0.0f and 1.0f). It still gave basically the exact same thing we see in the images above, though.

EDIT: Sorry, meant to answer your first question, too. I am trying to do everything in view space, not world space.

My thinking is that the issue is somewhere in these lines:


                // sample a point near position within the occlusion radius
		float3 q = position + flip * gOcclusionRadius * offset;

		// project q and generate projective tex-coords
		float4 projQ = mul(float4(q, 1.0f), gViewToTexSpace);
		projQ.xy /= projQ.w;

		// find nearest depth value along ray from eye to q
		float rz = depthStencilTexture.SampleLevel(samNormalDepth, projQ.xy, 0.0f).r;

because I am initially reconstructing the position from linear depth, but am not undoing that linear depth transformation before using q to find the new sampling coordinates. I have to run to work, but I can test this theory this evening, along with any other suggestions anyone may have.

Hm, I'm thinking that, in addition to or instead of the idea I had earlier, it may actually be that I need to recalculate the view ray for the new texture location:


		// project q and generate projective tex-coords
		float4 projQ = mul(float4(q, 1.0f), gViewToTexSpace);
		projQ.xy /= projQ.w;

		// find nearest depth value along ray from eye to q
		float rz = depthStencilTexture.SampleLevel(samNormalDepth, projQ.xy, 0.0f).r;

		// reconstruct full view space position r = (rx, ry, rz)
		linearDepth = projectionB / (rz - projectionA);
		// this viewRay still points to the originally sampled coordinate
		float3 r = pIn.viewRay * linearDepth;

I'll try to figure out a solution for this. If anyone knows it offhand, that'd be great ;).

Thanks!

Edit: Update to comment in code sample

My idea above seems to have worked for the most part! Here's the new code to recalculate the view ray for the random sample:


		// sample a point near position within the occlusion radius
		float3 q = position + flip * gOcclusionRadius * offset;
		// new viewRay calculation
		float3 viewRay = float3(q.xy / q.z, 1.0f);

		// project q and generate projective tex-coords
		float4 projQ = mul(float4(q, 1.0f), gViewToTexSpace);
		projQ.xy /= projQ.w;

		// find nearest depth value along ray from eye to q
		float rz = depthStencilTexture.SampleLevel(samNormalDepth, projQ.xy, 0.0f).r;

		// reconstruct full view space position r = (rx, ry, rz)
		linearDepth = projectionB / (rz - projectionA);
		// use newly calculated viewRay
		float3 r = viewRay * linearDepth;
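
One caveat with the new viewRay: it degenerates if q.z gets close to zero. An alternative would be to rebuild the ray from the projected texture coordinate, mirroring the vertex shader math. A sketch, assuming the vertex shader's inverse projection matrix were also bound to this pixel shader as gInverseProjection:

		// hypothetical binding: float4x4 gInverseProjection (same matrix the vertex shader uses)
		// undo the texture-space mapping to get back to NDC (D3D flips y)
		float2 ndc = float2(projQ.x * 2.0f - 1.0f, 1.0f - projQ.y * 2.0f);
		// unproject; dividing xy by z cancels the homogeneous w, exactly as in the vertex shader
		float4 unprojected = mul(float4(ndc, 1.0f, 1.0f), gInverseProjection);
		float3 sampleRay = float3(unprojected.xy / unprojected.z, 1.0f);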

I've added a result screenshot below (still pre-blurring). If you see any other glaring deficiencies with this result or the way I'm calculating it, please let me know! :)

Thanks!
