SSAO issues [Fixed]


Hey folks,

I'm working through an excellent article on SSAO (http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-simple-and-practical-approach-to-ssao-r2753), but I'm having trouble generating sensible data. Although the scene looks reasonably well lit using the SSAO technique, the map isn't exactly what I'm expecting.

I have a deferred renderer, so I have to reconstruct both the normals and positions in view space, which I do like this:


// Given the supplied texture coordinates, this function will use the depth map and inverse
// projection matrix to construct a view-space position
float4 CalculateViewSpacePosition( float2 someTextureCoords )
{
 // Get the depth value
 float currentDepth = DepthMap.Sample( DepthSampler, someTextureCoords ).r;
	
 // Calculate the screen-space position
 float4 currentPosition;
 currentPosition.x = someTextureCoords.x * 2.0f - 1.0f;
 currentPosition.y = -(someTextureCoords.y * 2.0f - 1.0f);
 currentPosition.z = currentDepth;
 currentPosition.w = 1.0f;
	
 // Transform the NDC position into view space
 currentPosition = mul( currentPosition, ourInverseProjection );

 return currentPosition;
}

// Reads in normal data using the supplied texture coordinates and converts it into view space
float3 GetCurrentNormal( float2 someTextureCoords )
{
 // Get the normal data for the current point
 float3 normalData = NormalMap.Sample(NormalSampler, someTextureCoords).xyz;
	
 // Transform the normal into view space
 normalData = normalData * 2.0f - 1.0f;
 normalData = mul( normalData, ourInverseProjection );
	
 return normalize(normalData);
}

So the scene that I'm rendering (without any lights) looks like this. The individual cube faces seem to have different colour values, whereas I would expect the difference to be where the cubes share a common edge:

[screenshot: the unlit scene render]

But the SSAO buffer isn't exactly what I would expect (I would think there would be black lines running down the crevices):

[screenshot: the SSAO buffer]

My guess is that the way I'm calculating my view-space normals must be messed up, but any suggestions from the audience would be greatly appreciated.
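One thing I'm also not sure about is whether I need a divide by w at the end of the unprojection. A minimal sketch of the variant I'm going to try (same inputs as the function above):


// Sketch: the same reconstruction, finished with a perspective divide
float4 CalculateViewSpacePositionAlt( float2 someTextureCoords )
{
 // Get the depth value
 float currentDepth = DepthMap.Sample( DepthSampler, someTextureCoords ).r;

 // Build the NDC position
 float4 currentPosition = float4( someTextureCoords.x * 2.0f - 1.0f,
                                  -(someTextureCoords.y * 2.0f - 1.0f),
                                  currentDepth,
                                  1.0f );

 // Unproject, then divide by w to land in view space
 currentPosition = mul( currentPosition, ourInverseProjection );
 return currentPosition / currentPosition.w;
}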

This is what I'm outputting from the SSAO shader:


// ambient occlusion calculations
...

ambientFactor /= 16.0f; // 4 loop iterations * 4 samples each
return float4( 1.0f - ambientFactor, 1.0f - ambientFactor, 1.0f - ambientFactor, 1.0f );

Cheers!


How do you construct the depth map and the normal map textures?


How do you construct the depth map and the normal map textures?

The depth values are in screen space (z/w):


Vertex Shader:

// Calculate the clip-space position
output.myPosition = mul( anInput.myPosition, aWorldMatrix );
output.myPosition = mul( output.myPosition, aViewMatrix );
output.myPosition = mul( output.myPosition, aProjectionMatrix );

// Set the depth values for the pixel shader
output.myDepth.x = output.myPosition.z;
output.myDepth.y = output.myPosition.w;

// Normal
output.myNormal = mul( anInput.myNormal, aWorldMatrix );



Pixel Shader:

// Normal
output.myNormal.xyz = 0.5f * (normalize(anInput.myNormal) + 1.0f);

// Depth
output.myDepth = anInput.myDepth.x / anInput.myDepth.y;

So I guess my normal values are probably in world space when I reconstruct them in the SSAO pixel shader (after sampling from the buffer, before the matrix multiplication)?

I've tried multiplying the normal values by my camera's view matrix, but that produces very strange results: geometry that directly faces the camera is much brighter than other geometry, and the bright highlights follow the camera as I pan it around the scene:

[screenshot: bright highlights on camera-facing geometry]
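For completeness, this is the exact order I tried; a minimal sketch, assuming the G-buffer stores 0.5f * (n + 1.0f) and that ourView is a constant holding the camera's view matrix:


// Unpack from [0,1] back to [-1,1] *before* the matrix multiply,
// then rotate with only the 3x3 part of the view matrix
float3 packedNormal = NormalMap.Sample( NormalSampler, anInput.myTexCoords ).xyz;
float3 worldNormal  = packedNormal * 2.0f - 1.0f;
float3 viewNormal   = normalize( mul( worldNormal, (float3x3)ourView ) );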

This is how I calculate my depth:


// Apply world, view and projection to the position
output.position = mul( input.position, worldMatrix );
output.position = mul( output.position, viewMatrix );
output.position = mul( output.position, projectionMatrix );

output.Depth.x = output.position.z;
output.Depth.y = output.position.w;

So that seems fine, I guess.

But for the normals, I do this instead:


output.NormalW = mul(float4(normal.xyz,0), mul(worldMatrix, viewMatrix));

So the output normal has its w component set to 0, and is then multiplied by the world matrix combined with the view matrix.

And this is how I output them:


output.Normals = float4(normalize(input.NormalW), 1);

Now my way may just be different, or maybe there is something to it.

(I may be completely wrong here, because I have never touched this kind of conversion)
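Setting w to 0 just keeps the translation row of the matrix from affecting the direction, by the way. A minimal sketch of the equivalent float3x3 form, assuming NormalW is declared as a float3:


// Casting to float3x3 keeps the upper-left rotation part and drops the
// translation row, which is what the w = 0 trick achieves
float3x3 worldViewRotation = (float3x3)mul( worldMatrix, viewMatrix );
output.NormalW = mul( normal.xyz, worldViewRotation );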


I don't think that there's a problem with the data in my normal map; I can reconstruct the normals in world space when I perform lighting calculations, and everything there works without any trouble:


From my directional lighting shader:

// Get the normal data out of the normal map
float4 normalData = NormalMap.Sample( NormalSampler, anInput.myTexCoords );
	
// Transform the normal back into the [-1,1] range
float3 currentNormal = 2.0f * normalData.xyz - 1.0f;

// Calculate the diffuse light, given a light intensity
float3 currentLightVector 	= -normalize( myLightDirection );
float lightIntensity 		= max( 0, dot(currentNormal, currentLightVector) );

I'll give your method a go though, thanks!


I'll give your method a go though, thanks!

No luck, unfortunately.

How are you doing your SSAO calculations? Just the same way as in the tutorial?


Just to eliminate some potential problems, try writing to a position texture and using that instead of reconstructing the position, like this:


VS:
output.tPosition.xyz = mul(float4(position.xyz,1), mul(worldMatrix, viewMatrix)).xyz;

PS:
output.Position = float4(input.tPosition.xyz, 1.0f); // output.Position is a render target

And then when getting the position, simply write:


float3 getPosition(in float2 uv)
{
	return t_posmap.Sample(ss, uv).xyz;
}


How are you doing your SSAO calculations? Just the same way as in the tutorial?

Pretty much; the only difference is that I'm using some hard-coded values for the constants:


// Pixel shader that calculates ambient occlusion
float4 PS( PixelShaderInput anInput ) : SV_TARGET
{
 float3 currentPosition 	= CalculateViewSpacePosition( anInput.myTexCoords ).xyz;
 float3 currentNormal   	= GetCurrentNormal( anInput.myTexCoords );
 float2 randomNormal 	= GetRandomNormal( anInput.myTexCoords );

 float 	ambientFactor = 0.0f;
 float  radius 	      = 1.0f / currentPosition.z; // 1.0f is the sample radius value
 const  float2 vec[4] = { float2(1,0), float2(-1,0), float2(0,1), float2(0,-1) };
	
 for( int i = 0; i < 4; i++ )
 {
  float2 coord1 = reflect( vec[i], randomNormal ) * radius;
  float2 coord2 = float2( coord1.x * 0.707f - coord1.y * 0.707f, coord1.x * 0.707f + coord1.y * 0.707f );

  ambientFactor += CalculateAmbientOcclusion( anInput.myTexCoords, coord1 * 0.25f, currentPosition, currentNormal );
  ambientFactor += CalculateAmbientOcclusion( anInput.myTexCoords, coord2 * 0.5f, currentPosition, currentNormal );
  ambientFactor += CalculateAmbientOcclusion( anInput.myTexCoords, coord1 * 0.75f, currentPosition, currentNormal );
  ambientFactor += CalculateAmbientOcclusion( anInput.myTexCoords, coord2, currentPosition, currentNormal );
 }
	
 ambientFactor /= 16.0f; // 4 loop iterations * 4 samples each
	
 return float4( 1.0f - ambientFactor, 1.0f - ambientFactor, 1.0f - ambientFactor, 1.0f );
}

And CalculateAmbientOcclusion is almost identical:


// Calculates an ambient occlusion sample by looking at the depth of pixels around the current position
float CalculateAmbientOcclusion( float2 someTexCoords, float2 aSampleOffset, float3 aPosition, float3 aNormal )
{
 float3 difference  = CalculateViewSpacePosition( someTexCoords + aSampleOffset ).xyz - aPosition;

 float3 nDifference = normalize( difference );

 float  distance    = length(difference) * 1.0f; // 1.0f is the scale
	
 return max( 0.0f, dot(aNormal, nDifference) - 0.001f ) * (1.0f / (1.0f + distance)) * 1.0f; // 0.001f is the bias; the trailing * 1.0f is the intensity
}
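GetRandomNormal just tiles a small random-vector texture over the screen, as in the article. A minimal sketch, where RandomMap, RandomSampler, the 64x64 texture size and the 1280x720 screen size are all assumptions:


// Sample a tiled random-vector texture and unpack it to [-1,1]
float2 GetRandomNormal( float2 someTexCoords )
{
 float2 screenSize = float2( 1280.0f, 720.0f ); // assumed render target size
 float2 randomSize = float2( 64.0f, 64.0f );    // assumed random texture size

 float2 raw = RandomMap.Sample( RandomSampler, screenSize * someTexCoords / randomSize ).xy;
 return normalize( raw * 2.0f - 1.0f );
}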

Just to eliminate some potential problems, try writing to a position texture and using that instead of reconstructing the position

Not a bad idea. I do reconstruct the position values in my shadow mapping calculations (which work), but it's worth a shot.
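For reference, the shadow-mapping version goes back to world space with an inverse view-projection matrix and does the divide, roughly like this sketch (ourInverseViewProjection is an assumed constant name):


// World-space reconstruction, as used for shadow mapping (sketch)
float4 CalculateWorldSpacePosition( float2 someTextureCoords )
{
 float currentDepth = DepthMap.Sample( DepthSampler, someTextureCoords ).r;

 float4 position = float4( someTextureCoords.x * 2.0f - 1.0f,
                           -(someTextureCoords.y * 2.0f - 1.0f),
                           currentDepth,
                           1.0f );

 position = mul( position, ourInverseViewProjection );
 return position / position.w; // note the divide by w
}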

This topic is closed to new replies.
