Wh0p

SSAO artefacts


Hi,

I followed this article about SSAO: http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-simple-and-practical-approach-to-ssao-r2753

 

But I encountered some difficulties, as you can see below:

 

1) Why do the edges of the triangles reappear in the occlusion buffer and therefore become visible in the final image? I thought they wouldn't, since I calculate the normalized normals in the pixel shader.

2) If I look down on a planar surface, it appears to be occluded (grows dark). Should it be that way?

3) There are artefacts like the black curvy lines and the rectangles, nicely visible in the third picture.

 

Here are the images:

ssao1.jpg

ssao2e.jpg

ssao3e.jpg

 

As far as I can tell the normals and positions seem correct, but anyway:

 

This is how I generate my normal and position buffers:

PS_INPUT VSMAIN (in VS_INPUT input)
{
	PS_INPUT Out;

	// viewspace position
	Out.viewpos = mul (float4 (input.position, 1.0f), WorldMatrix);
	Out.viewpos = mul (Out.viewpos, ViewMatrix);

	// projected position
	Out.position = mul (Out.viewpos, ProjMatrix);

	// viewspace normals
	Out.normal = mul (float4 (input.normal, 0.0f), WorldMatrix);
	Out.normal = mul (Out.normal, ViewMatrix);
	

	return Out;
}

struct PS_OUTPUT
{
	float4 normal : SV_Target0;
	float4 viewpos : SV_Target1;
};

PS_OUTPUT PSMAIN (in PS_INPUT In)
{
	PS_OUTPUT Out;
	
	Out.normal =  float4((normalize(In.normal)).xyz * 0.5f + 0.5f, 1.0f);
	Out.viewpos = In.viewpos;

	return Out;
}

This is the ssao algorithm (pretty much the one from the article):

float3 getPosition (in float2 uv)
{
	return PositionBuffer.Sample (LinearSampler, uv).xyz;
}

float3 getNormal (in float2 uv)
{
	return DepthNormalBuffer.Sample (LinearSampler, uv).xyz;
}

float2 getRandom (in float2 uv)
{
	return normalize (RandomTexture.Sample (LinearSampler, uv).xy * 0.5f + 0.5f);
}

float doAmbientOcclusion (in float2 tcoord, in float2 occluder, in float3 p, in float3 cnorm)
{
	// vector v from the occludee to the occluder
	float3 diff = getPosition (tcoord + occluder) - p;
	const float3 v = normalize (diff);

	// distance between occluder and occludee
	const float d = length (diff) * Scale;

	return max (0.0, dot (cnorm,v) - Bias) * (1.0 / (1.0 + d) * Intensity);
}


float PSMAIN (in PS_INPUT In) : SV_Target
{
	const float2 vec[4] = {
		float2 (1,0), float2 (-1,0),
		float2 (0,1), float2 (0,-1)
	};

	float3 p = getPosition (In.tex);
	float3 n = getNormal (In.tex);
	float2 rand = getRandom(In.tex);

	// ambient occlusion factor
	float ao = 0.0f;
	float rad = Radius / p.z;

	int iterations = 4;
	for (int j = 0; j < iterations; ++j)
	{
		float2 coord1 = reflect(vec[j], rand) * rad;
		// coord1 rotated by 45 degrees (0.707 ~ cos 45 = sin 45)
		float2 coord2 = float2 (coord1.x*0.707 - coord1.y*0.707,
		                        coord1.x*0.707 + coord1.y*0.707);

		ao += doAmbientOcclusion (In.tex, coord1*0.25, p, n);
		ao += doAmbientOcclusion (In.tex, coord2*0.5, p, n);
		ao += doAmbientOcclusion (In.tex, coord1*0.75, p, n);
		ao += doAmbientOcclusion (In.tex, coord2, p, n);
	}
	ao /= (float)iterations*4.0;

	return 1 - ao;
}

Thank you if you are still reading :)


So I got an update on this:

 

The issue with the rectangles was surprisingly easy to solve: I used a linear filter instead of a point filter for the random texture...

 

However, the other problems remain...

 

Applying a blur to the occlusion buffer did not hide those lines, and I have no clue how to get rid of the primitive edges.

 

Please help!



The issue with the rectangles was surprisingly easy to solve: I used a linear filter instead of a point filter for the random texture...

 

Now you haven't solved the problem but hidden it further, I think. It looks like you stretch your random texture across the full screen, judging from both your implementation and your screenshot. Compare your getRandom to the one in the article: the idea is to repeat it (wrap addressing mode), tiling the whole screen, so to speak. For this you need to feed in the right values for g_screen_size and random_size. Also, the sample denormalizes the texture value (* 2 - 1), while you do quite the opposite (one could use an SNorm texture, though).
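As a sketch of what that getRandom could look like, assuming a sampler state with wrap addressing (here called WrapSampler) and constant-buffer values g_screen_size and random_size, all of which you would have to provide yourself:

float2 getRandom (in float2 uv)
{
	// tile the small random texture across the whole screen
	// (requires the sampler's AddressU/AddressV set to WRAP)
	float2 r = RandomTexture.Sample (WrapSampler, uv * g_screen_size / random_size).xy;

	// denormalize from [0,1] back to [-1,1], as in the article
	return normalize (r * 2.0f - 1.0f);
}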

 

It helps to use "debug" output, e.g. outputting the random values only, to narrow problems down.


The idea is to repeat it (wrap addressing mode), tiling the whole screen, so to speak.

 

Apparently I did that too and gave the other modification the credit for the result.

But anyway, those lines didn't disappear and are only masked by the blur; if you watch really carefully you can still see them.

 

Is there really no one here who has an idea of how to get rid of the primitive edges?

 

If it helps, the final output is calculated this way:

Out = In.color * ao * Ia;

if (NdotL > 0.0f) 
{
	Out += diffuseIntensity;
	Out += specularIntensity;
}

return Out;

where Ia is the ambient intensity and ao the ambient occlusion factor from the buffer.


Good job on the article!

I did implement an upper bound for the depth, and that, along with tweaking the bias value for the dot product, mostly got rid of the occlusion on planar surfaces.
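For readers hitting the same issue: one way to sketch such an upper bound inside doAmbientOcclusion, assuming a tunable MaxDistance constant that is not part of the original code:

float doAmbientOcclusion (in float2 tcoord, in float2 occluder, in float3 p, in float3 cnorm)
{
	float3 diff = getPosition (tcoord + occluder) - p;
	const float3 v = normalize (diff);
	const float d = length (diff) * Scale;

	// reject occluders beyond the upper bound so distant geometry
	// (e.g. a floor plane seen from above) does not darken the pixel
	if (d > MaxDistance)
		return 0.0f;

	return max (0.0, dot (cnorm, v) - Bias) * (1.0 / (1.0 + d) * Intensity);
}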

 

However the primitive edges still give me a hard time... 


 

However the primitive edges still give me a hard time... 

 

 

Are you sure your normals are correct? Are you using smooth normals? Do you see the primitive edges if you just use standard lighting, no AO?


Yes, I do calculate smooth normals. The normal buffer as well as the position buffer look correct to me. Admittedly you can't see a lot in the position buffer, just the four colors, but if I choose a smaller-scaled model you can see that the positions are smoothly interpolated.

 

ssao1.jpg

 

ssao2.jpg

 

ssao3.jpg

 

If the images don't match up with your experiences, tell me!

 

The term in the pixel shader that I use to normalize the normals is:

Out.normal =  float4((normalize(In.normal)).xyz * 0.5f + 0.5f, 1.0f);

But I am wondering what the * 0.5f + 0.5f is about. I haven't found any reason for it in tutorials, nor could I come up with one myself.

 

 

Edit:

I just realized that the primitives become especially visible if I crank up the intensity value.

I'll just show you how it looks right now; if you tell me it's good, I'll consider it done.

However, if you find anything off, please report back :)

 

[url=http://postimg.org/image/vtxxdwvx1/]ssao2e.jpg[/url]

Edited by Wh0p


I'm using DXGI_FORMAT_R16G16B16A16_FLOAT. The thing with the positive range I knew; what I was wondering about is that the occlusion term should then be evaluated wrongly for points with negative components in their normals.
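One thing worth checking in that regard: if the geometry pass packs the normal with * 0.5f + 0.5f, the SSAO pass has to undo that packing, which the getNormal shown earlier does not do. A sketch of what the matching decode could look like, assuming the packing stays in place:

float3 getNormal (in float2 uv)
{
	// undo the * 0.5f + 0.5f packing from the geometry pass;
	// otherwise dot (cnorm, v) is computed with an always-positive
	// normal and the occlusion term is wrong wherever the true
	// normal has negative components
	return DepthNormalBuffer.Sample (LinearSampler, uv).xyz * 2.0f - 1.0f;
}

With a signed float format like DXGI_FORMAT_R16G16B16A16_FLOAT the packing is not needed at all, so the pack/unpack pair could also simply be dropped on both sides.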


OK, it's all settled now.

The main problem was that I had configured a value for the radius that was much too small, which caused the points to occlude themselves. It looks all right now.

I appreciate your help, thanks!
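In case someone else runs into the same self-occlusion problem: besides simply tuning Radius up, one could keep the projected radius from collapsing below roughly a texel, so the taps cannot all land on the occludee itself. A hedged sketch, assuming a ScreenSize constant holding the render-target width (not part of the original code):

// keep the projected sampling radius at least about one texel wide
float rad = max (Radius / p.z, 1.0f / ScreenSize.x);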

