Having problems debugging SSAO

4 comments, last by GreenGodDiary 6 years, 3 months ago

Please look at my new post in this thread where I supply new information!

I'm trying to implement SSAO in my 'engine' (based on this article), but I'm getting odd results. I know I'm doing something wrong, but I can't figure out what's causing the particular issue I'm having at the moment.

Here's a video of what it looks like. The rendered output is the SSAO map.

As you can see, the result is heavily altered depending on the camera (although it seems to be unaffected by camera translation). The fact that the occlusion itself isn't correct isn't much of a problem at this stage, since I've hardcoded a lot of stuff that shouldn't be. For example, I don't have a random-vector texture; all I do is use one of the sample vectors to construct the TBN matrix.
One issue at a time...

My shaders are as follows:


//SSAO VS

struct VS_IN
{
    float3 pos : POSITION;
    float3 ray : VIEWRAY;
};

struct VS_OUT
{
    float4 pos : SV_POSITION;
    float4 ray : VIEWRAY;
};

VS_OUT VS_main( VS_IN input )
{
    VS_OUT output;
    output.pos = float4(input.pos, 1.0f);  //already in NDC space, pass through
    output.ray = float4(input.ray, 0.0f); //interpolate view ray
    return output;
}

 


//SSAO PS

Texture2D depthTexture  : register(t0);
Texture2D normalTexture : register(t1);

struct VS_OUT
{
	float4 pos : SV_POSITION;
	float4 ray : VIEWRAY;
};

cbuffer	cbViewProj : register(b0)
{
	float4x4 view;
	float4x4 projection;
}

float4 PS_main(VS_OUT input) : SV_TARGET
{
	//Generate samples
	float3 kernel[8];

	kernel[0] = float3( 1.0f,  1.0f, 1.0f);
	kernel[1] = float3(-1.0f, -1.0f, 0.0f);
	kernel[2] = float3(-1.0f,  1.0f, 1.0f);
	kernel[3] = float3( 1.0f, -1.0f, 0.0f);
	kernel[4] = float3( 1.0f,  1.0f, 0.0f);
	kernel[5] = float3(-1.0f, -1.0f, 1.0f);
	kernel[6] = float3(-1.0f,  1.0f, 0.0f);
	kernel[7] = float3( 1.0f, -1.0f, 1.0f);

    //Get texcoord using SV_POSITION
	int3 texCoord = int3(input.pos.xy, 0);

	//Fragment viewspace position (non-linear depth)
	float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);

	//world space normal transformed to view space and normalized
	float3 normal = normalize(mul(view, float4(normalTexture.Load(texCoord).xyz, 0.0f)));

    //Grab arbitrary vector for construction of TBN matrix
	float3 rvec = kernel[3];
	float3 tangent = normalize(rvec - normal * dot(rvec, normal));
	float3 bitangent = cross(normal, tangent);
	float3x3 tbn = float3x3(tangent, bitangent, normal);


	
	float occlusion = 0.0;
	for (int i = 0; i < 8; ++i) {
		// get sample position:
		float3 samp = mul(tbn, kernel[i]);
		samp = samp * 1.0f + origin;

		// project sample position:
		float4 offset = float4(samp, 1.0);
		offset = mul(projection, offset);
		offset.xy /= offset.w;
		offset.xy = offset.xy * 0.5 + 0.5;

		// get sample depth. (again, non-linear depth)
		float sampleDepth = depthTexture.Load(int3(offset.xy, 0)).r;

		// range check & accumulate:
		occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
	}

    //Average occlusion
	occlusion /= 8.0;

	return min(occlusion, 1.0f);
}

 

I'm fairly sure my matrices (view and projection) are correct, and that the input rays are correct.
I don't think the non-linear depth is the problem here either, but what do I know :| I haven't switched to linear depth, mostly because I don't really understand how it's done...
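
For reference, linearizing a hardware depth value only takes the near and far clip distances. A minimal HLSL sketch, assuming the projection was built with the usual D3D convention (e.g. XMMatrixPerspectiveFovLH); nearZ and farZ are placeholder names, not values taken from the code above:

float LinearizeDepth(float nonLinearDepth, float nearZ, float farZ)
{
    //Inverts depth = farZ / (farZ - nearZ) - (nearZ * farZ) / ((farZ - nearZ) * viewZ)
    //to recover the view-space z for a standard LH perspective projection
    return (nearZ * farZ) / (farZ - nonLinearDepth * (farZ - nearZ));
}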


Any ideas are very appreciated!


Bumping with new information. I'm getting quite desperate; if someone could help me out I would be forever grateful <3

I have revamped the way I construct the view-space position. Instead of directly binding my DepthStencil as a shader resource (which, thinking back, made no sense to do), I now output 'positionVS.z / FarClipDistance' to a texture in the G-buffer pass and use that, and I rebuild my view rays in the following way (1000.0f is FarClipDistance):


		//create corner view rays
		float thfov = tan(fov / 2.0);
		float verts[24] =
		{
			-1.0f,  1.0f, 0.0f,                             //Pos TopLeft corner
			-1.0f * thfov * aspect,  1.0f * thfov, 1000.0f, //Ray

			 1.0f,  1.0f, 0.0f,                             //Pos TopRight corner
			 1.0f * thfov * aspect,  1.0f * thfov, 1000.0f, //Ray

			-1.0f, -1.0f, 0.0f,                             //Pos BottomLeft corner
			-1.0f * thfov * aspect, -1.0f * thfov, 1000.0f, //Ray

			 1.0f, -1.0f, 0.0f,                             //Pos BottomRight corner
			 1.0f * thfov * aspect, -1.0f * thfov, 1000.0f, //Ray
		};
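
The G-buffer side of this isn't shown here, so for completeness, a minimal sketch of what that output could look like (the struct, semantic and input names are assumptions, not the actual G-buffer shader):

struct GBUFFER_VS_OUT
{
	float4 pos        : SV_POSITION;
	float3 normalWS   : NORMAL;
	float3 positionVS : POSITION_VS;
};

struct GBUFFER_OUT
{
	float4 normalWS    : SV_Target0;  //world-space normal, as sampled in the SSAO pass
	float  linearDepth : SV_Target1;  //positionVS.z / FarClipDistance
};

GBUFFER_OUT PS_GBuffer(GBUFFER_VS_OUT input)
{
	GBUFFER_OUT output;
	output.normalWS    = float4(normalize(input.normalWS), 0.0f);
	output.linearDepth = input.positionVS.z / 1000.0f;  //1000.0f = FarClipDistance
	return output;
}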

 

In my SSAO PS, I reconstruct view-space position like this:


	float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);
	origin.x *= 1000;
	origin.y *= 1000;

Why do I multiply by 1000? Because it works. Why does it work? I don't know. But this gives me the same value that I had in the G-pass vertex shader. If someone knows why this works (or why it shouldn't), do tell me.
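
A sketch of the same reconstruction with the scaling written out explicitly, assuming the ray layout above (ray.z is 1000.0f and ray.xy were stored without the far-plane factor):

	float linearDepth = depthTexture.Load(texCoord).r;  //positionVS.z / 1000.0f from the G-pass
	float3 origin;
	origin.z  = input.ray.z  * linearDepth;             //1000.0f * (positionVS.z / 1000.0f) = positionVS.z
	origin.xy = input.ray.xy * linearDepth * 1000.0f;   //ray.xy are missing the 1000.0f factor, hence the multiply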


Anyway, next I get the world-space normal from the G-buffer and multiply by my view matrix to get view-space normal:


	float3 normal = normalTexture.Load(texCoord).xyz;
	normal = mul(view, normal);
	normal = normalize(normal);
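
A direction only needs the rotation part of the view matrix, so an equivalent way to write this (a sketch, assuming the view matrix contains no non-uniform scale) is to cast to float3x3 and skip the translation entirely:

	float3 normal = normalTexture.Load(texCoord).xyz;
	normal = normalize(mul((float3x3)view, normal));  //upper 3x3 only: rotates, never translates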

 

I now have a random-vector texture that I sample.
Next I construct the TBN matrix using this vector and the view-space normal:


	float3 rvec = randomTexture.Sample(randomSampler, input.pos.xy).xyz;
	rvec.z = 0.0;
	rvec = normalize(rvec);
	float3 tangent = normalize(rvec - normal * dot(rvec, normal));
	float3 bitangent = normalize(cross(normal, tangent));
	float3x3 tbn = float3x3(tangent, bitangent, normal);

This is where I'm not sure if I'm doing it right. I am doing it exactly like the article in the original post; however, since it uses OpenGL, maybe something is different here?
The reason this part looks suspicious to me is that when I later use it, I get values that to me don't make sense.


        float3 samp = mul(tbn, kernel[i]);
        samp = samp + origin;

samp here is what looks odd to me. If the values are indeed wrong, I must be constructing my TBN matrix wrong somehow.
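
One difference from the GLSL article worth noting: HLSL's float3x3(tangent, bitangent, normal) puts those vectors in the rows, while GLSL's mat3(tangent, bitangent, normal) puts them in the columns, so mul(tbn, kernel[i]) is not the same transform as the article's tbn * kernel[i]. A sketch of the column-style equivalent (an assumption about the intended tangent-to-view transform, not a confirmed fix):

	//Reproduce GLSL's mat3(t, b, n) * v in HLSL: with t/b/n stored as rows,
	//put the vector on the left so it is combined with the columns instead
	float3 samp = mul(kernel[i], tbn);  //== kernel[i].x * tangent + kernel[i].y * bitangent + kernel[i].z * normal
	samp = samp + origin;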

 

Next up, projecting samp in order to get the offset in NDC so that I can then sample the depth of samp:


		float4 offset = float4(samp, 1.0);
		offset = mul(offset, projection);
		offset.xy /= offset.w;
		offset.xy = offset.xy * 0.5 + 0.5;

		// get sample depth:
		float sampleDepth = depthTexture.Sample(defaultSampler, offset.xy).r;

		occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
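
One D3D-specific detail at this step: NDC y points up while texture v points down, so the NDC-to-UV mapping usually flips y. A sketch with the flip included (an assumption about where it might matter, not a confirmed fix):

		float4 offset = float4(samp, 1.0);
		offset = mul(offset, projection);
		offset.xy /= offset.w;
		//NDC [-1, 1] to texture UV [0, 1]; note the flipped y for D3D texture space
		offset.x =  offset.x * 0.5 + 0.5;
		offset.y = -offset.y * 0.5 + 0.5;

		float sampleDepth = depthTexture.Sample(defaultSampler, offset.xy).r;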


The result is still nowhere near what you'd expect. It looks slightly better than the video linked in the original post, but it's still the same story: huge, odd artifacts that change heavily based on the camera's orientation.

What am I doing wrong?


Help, I'm dying.

Bump. (sorry)

From just briefly reading the code (it's quite hard to say what's going on - your SSAO calculation doesn't look correct to me, though), here are a few notes which might lead you to where the issue is:

  • Make sure you know which space you are in - world space, view space, object space, etc. Getting this wrong is one of the common causes of view-dependent errors.
  • Do NOT multiply by random constants that make it "look good" - make sure each constant has a reason to be there, and put that reason in a comment.
  • Compare everything - you can write out 'view space normals', 'view space position', etc. while generating the G-Buffer (into another buffer) and compare them against your reconstruction - this way you can prove that your input data is correct.

Now, for the SSAO:

  • Make sure you're sampling in the hemisphere ABOVE the point, in the direction of the normal. From your specified vectors you will also attempt to sample in the opposite hemisphere (see the sketch after this list).
  • You will need some randomization (otherwise you will need a lot of samples to make the SSAO look like anything resembling SSAO).
  • I also recommend checking out other shaders doing SSAO - e.g. on ShaderToy - https://www.shadertoy.com/view/4ltSz2 - it might help you find what is wrong on your side (I'm intentionally adding it here so you can compare the actual SSAO calculation, as yours does seem incorrect to me).
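
For illustration, a minimal sketch of a kernel that stays in the hemisphere around +Z and varies sample length, in the same hardcoded style as your shader (the values are only an example - normally you would generate them randomly on the CPU and upload them in a constant buffer):

	//Example hemisphere kernel: x and y in [-1, 1], z in (0, 1], normalized and scaled
	//so the samples cluster closer to the shaded point (values purely illustrative)
	float3 kernel[8];
	kernel[0] = normalize(float3( 0.53f,  0.30f, 0.42f)) * 0.10f;
	kernel[1] = normalize(float3(-0.26f,  0.67f, 0.30f)) * 0.22f;
	kernel[2] = normalize(float3( 0.12f, -0.44f, 0.58f)) * 0.34f;
	kernel[3] = normalize(float3(-0.61f, -0.21f, 0.25f)) * 0.46f;
	kernel[4] = normalize(float3( 0.35f,  0.18f, 0.81f)) * 0.58f;
	kernel[5] = normalize(float3(-0.14f,  0.52f, 0.63f)) * 0.70f;
	kernel[6] = normalize(float3( 0.48f, -0.33f, 0.71f)) * 0.82f;
	kernel[7] = normalize(float3(-0.37f, -0.59f, 0.49f)) * 0.94f;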

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com


Thanks a lot for these pointers, I will definitely look into them further using your advice.
One question, though:

Quote

From your specified vectors you will also attempt to sample in the opposite hemisphere.

Are you sure this is the case? Since my kernel vectors are in the range ([-1, 1], [-1, 1], [0, 1]), won't they exclusively sample from the "upper" hemisphere? Or am I thinking about it wrong?

Thanks again

This topic is closed to new replies.
