
SSAO artefacts

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

11 replies to this topic

#1 Wh0p (Members)

Posted 22 August 2013 - 04:34 AM

Hi,

I have implemented SSAO, but I encountered some difficulties, as you can see below:

1) Why do the edges of the triangles reappear in the occlusion buffer and therefore become visible in the final image? (I thought that wouldn't happen if I calculate the normalized normals in the pixel shader.)

2) If I look down on a flat surface, that surface appears to be occluded (grows dark). Should it be that way?

3) Artefacts like the black curvy lines and the rectangles, nicely visible in the third picture.

Here are the images:

As far as I can tell, the normals and positions seem correct, but anyway:

This is how i generate my normal and position buffer:

PS_INPUT VSMAIN (in VS_INPUT input)
{
    PS_INPUT Out;

    // view-space position
    Out.viewpos = mul (float4 (input.position, 1.0f), WorldMatrix);
    Out.viewpos = mul (Out.viewpos, ViewMatrix);

    // projected position
    Out.position = mul (Out.viewpos, ProjMatrix);

    // view-space normals
    Out.normal = mul (float4 (input.normal, 0.0f), WorldMatrix);
    Out.normal = mul (Out.normal, ViewMatrix);

    return Out;
}

struct PS_OUTPUT
{
    float4 normal  : SV_Target0;
    float4 viewpos : SV_Target1;
};

PS_OUTPUT PSMAIN (in PS_INPUT In)
{
    PS_OUTPUT Out;

    Out.normal  = float4 (normalize (In.normal).xyz * 0.5f + 0.5f, 1.0f);
    Out.viewpos = In.viewpos;

    return Out;
}


This is the SSAO algorithm (pretty much the one from the article):

float3 getPosition (in float2 uv)
{
    return PositionBuffer.Sample (LinearSampler, uv).xyz;
}

float3 getNormal (in float2 uv)
{
    return DepthNormalBuffer.Sample (LinearSampler, uv).xyz;
}

float2 getRandom (in float2 uv)
{
    return normalize (RandomTexture.Sample (LinearSampler, uv).xy * 0.5f + 0.5f);
}

float doAmbientOcclusion (in float2 tcoord, in float2 occluder, in float3 p, in float3 cnorm)
{
    // vector v from the occludee to the occluder
    float3 diff = getPosition (tcoord + occluder) - p;
    const float3 v = normalize (diff);

    // distance between occluder and occludee
    const float d = length (diff) * Scale;

    return max (0.0, dot (cnorm, v) - Bias) * (1.0 / (1.0 + d) * Intensity);
}

float PSMAIN (in PS_INPUT In) : SV_Target
{
    const float2 vec[4] = {
        float2 ( 1,  0), float2 (-1,  0),
        float2 ( 0,  1), float2 ( 0, -1)
    };

    float3 p    = getPosition (In.tex);
    float3 n    = getNormal (In.tex);
    float2 rand = getRandom (In.tex);

    // ambient occlusion factor
    float ao = 0.0f;

    int iterations = 4;
    for (int j = 0; j < iterations; ++j)
    {
        float2 coord1 = reflect (vec[j], rand) * rad;
        float2 coord2 = float2 (coord1.x * 0.707 - coord1.y * 0.707,
                                coord1.x * 0.707 + coord1.y * 0.707);

        ao += doAmbientOcclusion (In.tex, coord1 * 0.25, p, n);
        ao += doAmbientOcclusion (In.tex, coord2 * 0.5,  p, n);
        ao += doAmbientOcclusion (In.tex, coord1 * 0.75, p, n);
        ao += doAmbientOcclusion (In.tex, coord2,        p, n);
    }
    ao /= (float)iterations * 4.0;

    return 1 - ao;
}
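For anyone who wants to sanity-check the per-sample occlusion term on the CPU, here is a minimal Python sketch of the same formula (the defaults for Scale, Bias and Intensity are made-up test values, not the ones from my shader):

```python
import math

def do_ambient_occlusion(sample_pos, p, n, scale=1.0, bias=0.05, intensity=1.0):
    # vector from the occludee p to the occluder sample_pos (diff / v in the HLSL)
    diff = [s - q for s, q in zip(sample_pos, p)]
    length = math.sqrt(sum(c * c for c in diff))
    if length == 0.0:
        return 0.0
    v = [c / length for c in diff]
    d = length * scale
    ndotv = sum(a * b for a, b in zip(n, v))
    # same shape as the shader: normal-alignment test times distance falloff
    return max(0.0, ndotv - bias) * (1.0 / (1.0 + d)) * intensity

# a sample straight above a +Z-facing surface occludes it
print(do_ambient_occlusion([0.0, 0.0, 0.5], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
# a sample lying in the surface plane contributes nothing (dot == 0)
print(do_ambient_occlusion([0.5, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
```

The second call is the reason a perfectly flat surface should ideally not occlude itself: every in-plane sample has dot(n, v) == 0.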


Thank you if you are still reading.

#2 Wh0p (Members)

Posted 23 August 2013 - 04:32 AM

So I got an update on this:

The issue with the rectangles was solved surprisingly easily by using a linear filter instead of a point filter for the random texture...

However, the other problems remain...

Applying a blur to the occlusion buffer did not hide those lines, and I have no clue how to get rid of the primitive edges.

#3 unbird (Members)

Posted 23 August 2013 - 11:27 AM

The issue with the rectangles was solved surprisingly easily by using a linear filter instead of a point filter for the random texture...

Now you haven't solved it but merely hidden the problem, I think. It looks like you stretch your random texture across the full screen, judging both from your implementation and your screenshot. Compare your getRandom to the one in the article: the idea is to repeat it (wrap addressing mode), tiling the whole screen, so to speak. For this you need to feed the right values for g_screen_size and random_size. Also, the sample denormalizes the texture sample (* 2 - 1), while you do quite the opposite (one could use an SNorm texture, though).

It helps to use "debugging" output, e.g. outputting only the random values, to narrow problems down.
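To spell out the tiling and denormalization: it is just a UV scale before the sample plus a * 2 - 1 afterwards. A Python sketch, with a 1280x720 screen and a 64x64 random texture as made-up example sizes:

```python
screen_size = (1280.0, 720.0)
random_size = (64.0, 64.0)

def random_uv(uv):
    # scale the screen-space UV so the random texture repeats across the
    # screen; the sampler must use wrap addressing for this to tile
    return (uv[0] * screen_size[0] / random_size[0],
            uv[1] * screen_size[1] / random_size[1])

def denormalize(sample_xy):
    # map a stored [0, 1] texel back to a [-1, 1] direction (* 2 - 1),
    # the opposite of the * 0.5 + 0.5 applied when the texture was written
    return (sample_xy[0] * 2.0 - 1.0, sample_xy[1] * 2.0 - 1.0)

print(random_uv((0.5, 0.5)))      # the texture repeats many times across the screen
print(denormalize((0.75, 0.25)))  # back in [-1, 1]
```

With these sizes, the screen center maps to UV (10.0, 5.625), i.e. the random texture has wrapped ten times horizontally by mid-screen.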

#4 Wh0p (Members)

Posted 23 August 2013 - 11:53 AM

The idea is to repeat it (wrap addressing mode), tiling the whole screen, so to speak.

Apparently I did that too and gave the other modification the credit for the result.

But anyway, those lines didn't disappear and are only masked by the blur; if you watch really carefully you can still see them.

Is there really no one here who has an idea how to get rid of the primitive edges?

If it helps, the final output is calculated this way:

Out = In.color * ao * Ia;

if (NdotL > 0.0f)
{
    Out += diffuseIntensity;
    Out += specularIntensity;
}

return Out;


where Ia is the ambient intensity, and ao the ambient occlusion factor from the buffer.

#5 phil_t (Members)

Posted 23 August 2013 - 12:17 PM

Maybe this blog post of mine will help:

http://mtnphil.wordpress.com/2013/06/26/know-your-ssao-artifacts/

#6 Wh0p (Members)

Posted 24 August 2013 - 03:59 AM

Good job on the article!

I did indeed implement an upper bound for the depth, and that, along with tweaking the bias value for the dot product, mostly stopped the flat surfaces from being occluded.

However the primitive edges still give me a hard time...

#7 DwarvesH (Members)

Posted 24 August 2013 - 11:32 AM

Really nice blog. I'm reading through it right now. The terrain and leaf generation parts are especially interesting.

#8 phil_t (Members)

Posted 24 August 2013 - 11:36 AM

However the primitive edges still give me a hard time...

Are you sure your normals are correct? Are you using smooth normals? Do you see the primitive edges if you just use standard lighting, no AO?

#9 Wh0p (Members)

Posted 24 August 2013 - 12:24 PM

Yeah, I do calculate smooth normals. The normal buffer as well as the position buffer look correct to me. Well, you can't see a lot in the position buffer, just the four colors, but if I choose a smaller-scaled model you can see that the positions are smoothly interpolated.

If the images don't match up with your experiences, tell me!

The term in the pixel shader I use to normalize the normals is:

Out.normal =  float4((normalize(In.normal)).xyz * 0.5f + 0.5f, 1.0f);


But I am wondering what the * 0.5f + 0.5f is about. I haven't found any reason for it in tutorials, nor could I come up with one myself.

Edit:

I just realized that the primitives become especially visible if I crank up the intensity value.

I'll just show you how it looks right now; if you tell me it's good, I'll consider it done.

However, if you find anything off, please report back.

Edited by Wh0p, 24 August 2013 - 12:55 PM.

#10 Styves (Members)

Posted 25 August 2013 - 05:59 AM

What formats are your g-buffer targets?

Also, the * 0.5 + 0.5 brings the normalized normals from range [-1, 1] to range [0, 1].
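In other words, the encode (* 0.5 + 0.5) and decode (* 2 - 1) form an exact round trip; a quick Python check (function names are mine, just for illustration):

```python
def encode(n):
    # pack a [-1, 1] normal component into [0, 1] for storage in a render target
    return n * 0.5 + 0.5

def decode(t):
    # unpack back to [-1, 1] before using the normal for lighting or SSAO
    return t * 2.0 - 1.0

for n in (-1.0, -0.25, 0.0, 1.0):
    assert decode(encode(n)) == n

print(encode(-1.0), encode(1.0))  # 0.0 1.0
```

This is what the SSAO shader has to undo when it reads the normal buffer; sampling the stored [0, 1] values without decoding makes every normal point into the positive octant.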

#11 Wh0p (Members)

Posted 25 August 2013 - 09:18 AM

I'm using DXGI_FORMAT_R16G16B16A16_FLOAT. The thing with the positive range I knew; what I was wondering about is that the occlusion term should then be evaluated wrongly for points with negative components in their normal.

#12 Wh0p (Members)

Posted 25 August 2013 - 02:17 PM

OK, it's all settled now.

The main problem was that I had configured a value for the radius that was much too small, which caused the points to occlude themselves. It looks all right now.

I appreciate your help. Thanks!
