SSAO adventures: camera-space reconstruction nightmares

3 comments, last by kalle_h 9 years, 6 months ago

So I've been trying to implement Scalable Ambient Obscurance for a few weeks now but I'm still horribly stuck at the 'getting the right coordinates/space' part. I can't believe how hard it is to figure out. My fault probably. But as an excuse, my setup is a bit unusual since I'm not working with some kind of ready-made game engine but with a dx9 injector that... injects SSAO into dx9 games. It's already working nicely except for all these advanced SSAO algorithms (like Scalable Ambient Obscurance) that specifically need camera-space/view-space positions.

Since I can't really retrieve the original projection matrix from the game I'm hooking, what I do for now is manually create an approximation of it on the CPU and then invert it:


	// Create an approximation of the game's projection matrix and invert it in place.
	// The FOV, near and far values are guesses, since the real ones can't be retrieved from the game.
	width = 1024; // viewport width
	height = 768; // viewport height
	D3DXMatrixPerspectiveFovLH(&matProjectionInv, (float)D3DXToRadian(90), (FLOAT)width / (FLOAT)height, 0.1f, 100.0f);
	D3DXMatrixInverse(&matProjectionInv, NULL, &matProjectionInv);

Then, once in my SSAO shader, I naturally try to reconstruct position from depth using the inverted projection matrix above (yes, this is a snippet from MJP's blog):


extern const float4x4 matProjectionInv;

struct VSOUT
{
	float4 vertPos : POSITION0;
	float2 UVCoord : TEXCOORD0;
};

struct VSIN
{
	float4 vertPos : POSITION0;
	float2 UVCoord : TEXCOORD0;
};


VSOUT FrameVS(VSIN IN) {
	VSOUT OUT = (VSOUT)0.0f;
 
	OUT.vertPos = IN.vertPos;
	OUT.UVCoord = IN.UVCoord;
 
	return OUT;
}

float3 VSPositionFromDepth(float2 vTexCoord)
{
    // Get the depth value for this pixel
    float z = tex2D(depthSampler, vTexCoord).r;

    // Get x/w and y/w from the viewport position
    float x = vTexCoord.x * 2 - 1;
    float y = (1 - vTexCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);

    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, matProjectionInv);

    // Divide by w to get the view-space position
    return vPositionVS.xyz / vPositionVS.w;  
}

float4 SSAOCalculate(VSOUT IN) : COLOR0
{
	float3 pos = VSPositionFromDepth(IN.UVCoord);
	//return float4(pos, 1.0);  // See Attached image 1

	float3 n_C = reconstructCSFaceNormal(pos); // normalize(cross(ddy(pos), ddx(pos)));
	return float4(n_C, 1.0);    // See Attached image 2

        // Truncated for readability reasons
}


The first attachment is the 'position map' (I guess that's the name?) and the second is the normal map. Obviously the normals are very wrong. I've been told they look as if they were world-space normals, while I'm desperately trying to get camera-space normals (they're required for SAO to work properly).

I should mention that I'm using a hardware depth buffer (an INTZ texture) which I'm reading from directly. According to MJP, "a hardware depth buffer will store the post-projection Z value divided by the post-projection W value" (I'm not sure I completely understand what that means, by the way).

So, is the above reconstruction incomplete? Is my 'fake' projection matrix sufficient/correct?
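For what it's worth, a trivial way to eyeball whether at least the depth part of the reconstruction is sane would be something like this (a quick sketch, not code from the project: DebugViewDepthPS is a made-up debug pass and 100.0 is just the guessed far plane from the matrix above):

float4 DebugViewDepthPS(VSOUT IN) : COLOR0
{
	// Visualize reconstructed view-space depth scaled to roughly [0, 1].
	// A correct reconstruction should give a smooth grayscale gradient that
	// matches a linearized version of the raw depth buffer.
	float3 pos = VSPositionFromDepth(IN.UVCoord);
	return float4(pos.zzz / 100.0, 1.0); // 100.0 = guessed far plane
}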

NB: the full project will be available on GitHub once it's working.

[attachment=23617:screenshot_2014-09-15_08-48-01_0.png][attachment=23618:screenshot_2014-09-15_08-48-16_1.png]


I'm not seeing anything immediately wrong with your code. Are you 100% sure that your inverse projection matrix is getting set correctly? More specifically, is it being sent to the shader as row-major when the shader expects column-major?

With regards to "post-projection z divided by post-projection w", it means that the depth buffer stores the equivalent of this code:

float4 projectedPosition = mul(float4(viewSpacePosition, 1.0f), ProjectionMatrix);
float zBufferDepth = projectedPosition.z / projectedPosition.w;

So basically you apply the projection matrix to your position, then divide the resulting z component by the resulting w component. The resulting value will be in the range [0, 1], where 0 == the near clip plane and 1 == the far clip plane. Does that make sense?
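Going the other way, you can recover the view-space z from the stored depth value, something like this (just a sketch, assuming a standard D3DXMatrixPerspectiveFovLH-style matrix where only _33 and _43 contribute to z and the post-projection w equals the view-space z):

float ViewZFromDepth(float zBufferDepth, float4x4 projectionMatrix)
{
    // zBufferDepth = _33 + _43 / viewZ, so solve for viewZ:
    return projectionMatrix._43 / (zBufferDepth - projectionMatrix._33);
}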

Quoting MJP: "I'm not seeing anything immediately wrong with your code. Are you 100% sure that your inverse projection matrix is getting set correctly? More specifically, is it being sent to the shader as row-major when the shader expects column-major?"

Good observation. Unfortunately I can't debug the shader: PIX won't work in combination with the injector I'm working with. (Well, I could still output colors according to values in the shader...) But I read that HLSL's default behaviour is to treat matrices as column-major unless specified otherwise by a dedicated pragma: http://fgiesen.wordpress.com/2012/02/12/row-major-vs-column-major-row-vectors-vs-column-vectors/#comment-4318
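For reference, my understanding is that the packing can also be made explicit on the HLSL side instead of relying on the default (only relevant if the raw D3DXMATRIX bytes are uploaded as-is, e.g. via SetPixelShaderConstantF; ID3DXEffect::SetMatrix is supposed to account for the packing already):

// Option 1: force row_major packing so the constant's memory layout matches
// a D3DXMATRIX uploaded without an extra transpose.
row_major float4x4 matProjectionInv;

// Option 2: keep the default column_major packing and swap the argument
// order in the shader instead, i.e. mul(matProjectionInv, vProjectedPos).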

This is really weird, because out of desperation I tried the following (see the relevant comment in the reconstructCSPosition function):


// ------------------- Original code comment about how to build the projection vector ("projInfo") ---------------------------

// vec4(-2.0f / (width*P[0][0]),
//      -2.0f / (height*P[1][1]),
//      (1.0f - P[0][2]) / P[0][0],
//      (1.0f + P[1][2]) / P[1][1])

// where P is the projection matrix that maps camera space points 
// to [-1, 1] x [-1, 1].  That is, GCamera::getProjectUnit().


// ------------------- My attempt at building the projection vector ("projInfo") described above on the CPU ------------------

// width = 1024; // viewport width
// height = 768; // viewport height
// D3DXMatrixPerspectiveFovLH(&matProjection, (float)D3DXToRadian(90), (FLOAT)width / (FLOAT)height, (FLOAT)1.0, (FLOAT)70.0);
// D3DXMatrixInverse(&matProjection, NULL, &matProjection);
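// (Note: at this point matProjection has been inverted in place by the
// D3DXMatrixInverse call above, whereas the quoted comment defines projInfo
// in terms of the forward projection matrix P, i.e. before inversion.)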

// D3DXVECTOR4 projInfo;
// projInfo.x = -2.0f / ((float)width*matProjection._11);
// projInfo.y = -2.0f / ((float)height*matProjection._22);
// projInfo.z = ((1.0f - matProjection._13) / matProjection._11) + projInfo.x * 0.5f;
// projInfo.w = ((1.0f + matProjection._23) / matProjection._22) + projInfo.y * 0.5f;

// HRESULT hr = effect->SetMatrix(matProjectionHandle, &matProjection);
// hr = effect->SetVector(projInfoHandle, &projInfo);


// ------------------- HLSL function from SAO shader to go from screenspace to camera-space -----------------------------------

float4 projInfo;

static const float nearZ = 1.0;   // Arbitrary values: I can't retrieve the exact values from the game
static const float farZ = 70.0;

float LinearizeDepth(float depth)
{
	return rcp(depth * ((farZ - nearZ) / (-farZ * nearZ)) + farZ / (farZ * nearZ));
}
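// (For reference: a standard LH projection stores
//     depth = farZ / (farZ - nearZ) - (farZ * nearZ) / ((farZ - nearZ) * viewZ),
// so solving for viewZ gives viewZ = farZ * nearZ / (farZ - depth * (farZ - nearZ)),
// which is what the expression above computes; note that farZ / (farZ * nearZ)
// is simply 1 / nearZ.)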

/** Reconstruct camera-space P.xyz from screen-space S = (x, y) in
    pixels and camera-space z < 0.  Assumes that the upper-left pixel center
    is at (0.5, 0.5) [but that need not be the location at which the sample tap 
    was placed!]

    Costs 3 MADD.  Error is on the order of 10^3 at the far plane, partly due to z precision.
  */
float3 reconstructCSPosition(float2 S, float z)
{
	//return float3((S.xy * projInfo.xy + projInfo.zw) * z, z); // Original code... doesn't work : AO is barely visible
	
	return float3(S, z);                                        // Desperate code... seems to work ? (see attached image)
}

/** Read the camera-space position of the point at screen-space pixel ssP */
float3 getPosition(float2 ssP)
{
    float3 P;

    P.z = LinearizeDepth(tex2D(depthSampler, ssP).r);   // fetch the post-projection depth from the INTZ texture and linearize it

    // Offset to pixel center
	P = reconstructCSPosition(float2(ssP) + float2(0.5, 0.5), P.z);
    return P;
}

And it seems to give decent results, minus the self-occlusion issues (see AO output and normals attached below)... a lot better than my previous efforts at least, although I've been told every edge should have at least some blue in it and there is none at all in my normals (most likely because there is no actual position reconstruction in my "desperate code").

My brain is melting :D

So... correct me if I'm wrong, but that would mean I'm already in camera space by the time I apply my SSAO, because I'm still running the same code:


float3 reconstructCSPosition(float2 S, float z)
{
	//return float3((S.xy * projInfo.xy + projInfo.zw) * z, z); // Original code... doesn't work : AO is barely visible
	
	return float3(S, z);                                        // Desperate code... seems to work ? (see attached image)
}

... That literally doesn't reconstruct anything at all, obviously (and, I should add, I still totally lack the blue component in my normals), but the result seems correct (see below).

Case solved?

I'm confused, because I really thought post-processing effects like SSAO necessarily originated in screen space, and the shader code explicitly wanted a conversion from screen space to camera space... But oh well... It seems to run well without any of that.
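For completeness, this is roughly the kind of reconstruction I thought the shader was supposed to end up with (an untested sketch on my side, assuming the same guessed 90° vertical FOV and 1024x768 viewport as earlier, a symmetric left-handed projection, and a linearized view-space depth as input):

float3 ViewPosFromLinearDepth(float2 uv, float viewZ)
{
	const float tanHalfFovY = tan(radians(90.0) * 0.5); // guessed vertical FOV
	const float aspect = 1024.0 / 768.0;                // guessed viewport aspect

	// Map texture coordinates to NDC (in D3D, v grows downward).
	float xNdc = uv.x * 2.0 - 1.0;
	float yNdc = 1.0 - uv.y * 2.0;

	// Undo the symmetric perspective projection.
	return float3(xNdc * viewZ * tanHalfFovY * aspect,
	              yNdc * viewZ * tanHalfFovY,
	              viewZ);
}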

I'm sorry if the thread is a bit of a mess... It's hard to explain everything properly... Especially when you're confused.

Thanks MJP

You want to calculate SSAO in camera space so you get a uniform falloff function that isn't biased by depth.
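In other words (a made-up minimal example, not the actual SAO falloff term): the occlusion falloff should be driven by camera-space distance, which behaves uniformly across the scene, rather than by raw depth-buffer values, which are distributed non-linearly:

// Hypothetical falloff between a pixel's camera-space position P and a
// sample's camera-space position Q: the same world-sized radius behaves
// the same whether the surface is near or far from the camera.
float Falloff(float3 P, float3 Q, float radius)
{
	float d2 = dot(Q - P, Q - P); // squared camera-space distance
	return saturate(1.0 - d2 / (radius * radius));
}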

