I've been trying to implement Scalable Ambient Obscurance (SAO) for a few weeks now, but I'm still horribly stuck at the 'getting the right coordinates/space' part. I can't believe how hard it is to figure out; my fault, probably. In my defense, though, my setup is a bit unusual: I'm not working with a ready-made game engine but with a DX9 injector that... injects SSAO into DX9 games. It already works nicely, except for the more advanced SSAO algorithms (like Scalable Ambient Obscurance) that specifically need camera-space/view-space positions.
Since I can't really retrieve the original projection matrix from the game I'm hooking, for now I create one manually on the CPU and then invert it:
// Create a projection matrix that hopefully matches the game's camera,
// then invert it in place. FOV, near and far planes are guesses, since
// I can't get the game's real values.
UINT width = 1024; // viewport width
UINT height = 768; // viewport height
D3DXMATRIX matProjectionInv;
D3DXMatrixPerspectiveFovLH(&matProjectionInv, D3DXToRadian(90.0f),
                           (FLOAT)width / (FLOAT)height, 0.1f, 100.0f);
D3DXMatrixInverse(&matProjectionInv, NULL, &matProjectionInv);
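For completeness, this is more or less how the matrix reaches the shader (a minimal sketch, assuming a D3DX effect; 'pEffect' is just a placeholder name). As far as I know, ID3DXEffect::SetMatrix takes care of the row-major (D3DX) vs column-major (HLSL default packing) mismatch, so no manual transpose should be needed here:

// Upload the inverted projection matrix to the effect.
// 'pEffect' (ID3DXEffect*) is a placeholder name; the parameter name
// matches the 'matProjectionInv' extern in the shader below.
pEffect->SetMatrix("matProjectionInv", &matProjectionInv);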
Then, in my SSAO shader, I try to reconstruct the view-space position from depth using the inverted projection matrix (yes, the reconstruction function is the snippet from MJP's blog):
extern const float4x4 matProjectionInv;

sampler depthSampler; // the depth texture (INTZ), bound by the injector

struct VSOUT
{
    float4 vertPos : POSITION0;
    float2 UVCoord : TEXCOORD0;
};

struct VSIN
{
    float4 vertPos : POSITION0;
    float2 UVCoord : TEXCOORD0;
};

VSOUT FrameVS(VSIN IN)
{
    VSOUT OUT = (VSOUT)0.0f;
    OUT.vertPos = IN.vertPos;
    OUT.UVCoord = IN.UVCoord;
    return OUT;
}
float3 VSPositionFromDepth(float2 vTexCoord)
{
    // Get the depth value for this pixel
    float z = tex2D(depthSampler, vTexCoord).r;
    // Get x/w and y/w from the viewport position
    float x = vTexCoord.x * 2 - 1;
    float y = (1 - vTexCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, matProjectionInv);
    // Divide by w to get the view-space position
    return vPositionVS.xyz / vPositionVS.w;
}
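As a sanity check, I also tried linearizing the hardware depth analytically instead of going through the full inverse projection (a minimal sketch, assuming the same guessed near/far planes as the CPU-side matrix, 0.1 and 100; 'LinearViewZ' is my own helper name). If the reconstruction above is correct, this should match the Z component returned by VSPositionFromDepth:

// Analytic linearization of the hardware depth value, assuming the
// D3DXMatrixPerspectiveFovLH convention:
//   zBuffer = zf/(zf-zn) - zn*zf/((zf-zn) * zView)
float LinearViewZ(float2 vTexCoord)
{
    const float zn = 0.1f;   // must match the near plane used on the CPU
    const float zf = 100.0f; // must match the far plane used on the CPU
    float z = tex2D(depthSampler, vTexCoord).r; // hardware depth, i.e. z/w
    return zn * zf / (zf - z * (zf - zn));      // view-space Z in [zn, zf]
}

If the two disagree, the problem would be in the inverse-projection path (or in how the matrix is uploaded) rather than in the depth read itself.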
float4 SSAOCalculate(VSOUT IN) : COLOR0
{
    float3 pos = VSPositionFromDepth(IN.UVCoord);
    //return float4(pos, 1.0); // See attached image 1
    float3 n_C = reconstructCSFaceNormal(pos); // normalize(cross(ddy(pos), ddx(pos)));
    return float4(n_C, 1.0); // See attached image 2
    // Truncated for readability
}
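A side note on reading the attachments: I'm writing signed values straight into the render target, so negative components get clamped to black. For debugging, remapping the normal from [-1,1] to [0,1] gives a more readable picture (standard normal-map-style visualization, not part of the algorithm itself):

// Debug only: remap signed normals to [0,1] so negative components
// show up as colors instead of being clamped to black.
return float4(n_C * 0.5f + 0.5f, 1.0f);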
The first attachment is the 'position map' (I guess that's the name?) and the second is the normal map. Obviously the normals are very wrong. I've been told they look as if they were world-space normals, while I'm desperately trying to get camera-space normals (they are required for SAO to work properly).
I should mention that I'm reading directly from a hardware depth buffer (an INTZ texture). According to MJP, "a hardware depth buffer will store the post-projection Z value divided by the post-projection W value" (I'm not sure I completely understand what that means, by the way).
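From what I've gathered, for the D3DXMatrixPerspectiveFovLH convention (near plane zn, far plane zf) the projection produces z_clip = zf/(zf-zn) * z_view - zn*zf/(zf-zn) and w_clip = z_view, so what actually lands in the depth buffer is

z_buffer = z_clip / w_clip = zf/(zf-zn) - zn*zf / ((zf-zn) * z_view)

which inverts to z_view = zn*zf / (zf - z_buffer*(zf-zn)), i.e. exactly what the LinearViewZ sanity check above computes. If that's right, feeding z_buffer directly as the Z of the projected position and dividing by w after the inverse projection should be valid.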
So, is the reconstruction above incomplete? Is my 'fake' projection matrix sufficient/correct?
NB: the full project will be available on GitHub once it's working.
[attachment=23617:screenshot_2014-09-15_08-48-01_0.png][attachment=23618:screenshot_2014-09-15_08-48-16_1.png]