Reconstructing view-space position from depth

[font="arial, verdana, tahoma, sans-serif"]Okay, last post and I promise (if nobody else replies) I'll stop spamming the forum.

Cleaning up the frustum corner calculations has eliminated the error in the reconstructed z position. I also find that if I replace the half-pixel offset in my vertex shader with a half-texel offset in my full-screen quad texture coordinates, the error between the sampled and reconstructed position disappears almost entirely (not visible to the naked eye, but if I multiply it by 100 I can see some noise). I can't really explain this result as I thought the vertex shader offset was supposed to be all that was required to map a texture to the screen correctly...
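
For reference, the two offsets I'm talking about look roughly like this (just a minimal sketch with made-up names, assuming a g_screen_size constant holding the render-target size in pixels):

// Option A: shift the clip-space position by half a pixel in the vertex shader.
// One pixel in clip space is 2/width wide and 2/height tall, so half a pixel
// is 1/width and 1/height (clip-space y points up, screen y points down).
float4 vs_offsetPosition(float4 pos : POSITION) : POSITION
{
    pos.xy += float2(-1.0f, 1.0f) / g_screen_size;
    return pos;
}

// Option B: leave the position alone and push the texture coordinates by half a texel.
float2 offsetTexcoord(float2 uv)
{
    return uv + 0.5f / g_screen_size;
}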

However, even with the positions now agreeing at each location, the AO result still looks just as bad. The ambient occlusion calculation samples a bunch of different positions around the current pixel, using fairly arbitrary offsets, so I thought I should look at the sum of the errors from every sample that's taken, rather than just at the pixel the occlusion is being calculated for. The resulting image is blown out 100% yellow... meaning there is no accumulated error in the z components but the x and y errors are still significant.

EDIT:

Actually I think this behavior (described in the second paragraph) does make sense. I have an interpolated frustum corner (position on the far clip plane) that's valid for the current pixel on the screen but is not valid for reconstructing position anywhere else (i.e. the neighbor sampling locations used for computing the occlusion). This explains why there's no more error in the z-values but still error in x and y.
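
In other words, the usual frustum-ray reconstruction looks something like this (a rough sketch with made-up names, assuming a linear depth texture storing z / farClip and a viewRay interpolated from the frustum corners), and the interpolated ray is simply the wrong one for any uv other than the current pixel's:

// viewRay is interpolated from the frustum corners in the vertex shader,
// so it only matches the pixel currently being shaded
float3 getPositionFromRay(float3 viewRay, float2 uv)
{
    float depth = tex2D(texDepth, uv).x; // assumed: linear depth, z / farClip
    return viewRay * depth;              // wrong if uv isn't this pixel's uv
}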

EDIT2:

@mind in a box I think your second method is the only valid option for sampling positions other than the current pixel. None of the methods that interpolate a value from the vertex shader will work, because they only give a value that's valid for the current pixel in the pixel shader. Unless somebody knows how to extrapolate the frustum ray to the sampled locations, I think your solution is the only one that works, because it's able to compute everything from the texture coordinates and the depth, which is really all the information you have for those other locations. Another advantage of that code is that it works if you have a hardware depth buffer instead of a hand-made depth texture.
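
For reference, reconstructing purely from the texture coordinate and depth goes roughly like this (just a sketch of the general idea with made-up names, not necessarily mind in a box's exact code; it assumes a post-projection depth value in [0, 1] and an inverse projection matrix):

float3 getPositionFromDepth(float2 uv)
{
    // hardware-style (post-projection) depth in [0, 1]
    float z = tex2D(texHardwareDepth, uv).x;

    // texcoord -> clip space (flip y because texture v increases downwards)
    float4 clipPos = float4(uv.x * 2.0f - 1.0f,
                            (1.0f - uv.y) * 2.0f - 1.0f,
                            z, 1.0f);

    // unproject and do the perspective divide
    float4 viewPos = mul(clipPos, gProjInverse);
    return viewPos.xyz / viewPos.w;
}
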
Unless somebody knows how to extrapolate the frustum ray to the sampled locations


Would it be possible to render the frustum rays to a very small render target and let the hardware interpolate them?
Just had to test this, and yes, it's very possible and gained me no less than 30 fps :)

Using a 16x12 16-bit view ray texture @ 154 fps:

[screenshot]

Computing the view ray per SSAO sample @ 124 fps:

[screenshot]

Visual differences are very minimal.
Hey, that's not bad! You're using a half-float render target for that? I'm using XNA, which unfortunately doesn't support filtering floating-point textures (due to Xbox 360 limitations), but that's a good idea overall. Matt had a similar idea to interpolate between the corners by hand in the shader.
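
If anyone wants to try the by-hand interpolation, the idea is just a bilinear lerp between the four far-plane corners based on the texture coordinate, something like this (only a sketch with made-up names and an assumed corner order, not necessarily Matt's exact code):

// gFrustumCorners: view-space far-plane corners, assumed order:
// 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right
float3 gFrustumCorners[4];

float3 getViewRay(float2 uv)
{
    float3 top    = lerp(gFrustumCorners[0], gFrustumCorners[1], uv.x);
    float3 bottom = lerp(gFrustumCorners[2], gFrustumCorners[3], uv.x);
    return lerp(top, bottom, uv.y); // uv.y = 0 at the top of the screen
}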

You're using a half-float render target for that?


Yes. Interestingly, there was almost no speedup compared to using 32-bit floats.

After taking a closer look (without blur) I noticed a sampling pattern going on, though.
It becomes invisible if you use a slightly larger render target, and the performance gain is still significant.

16x12 render target:

[screenshot]

64x48 render target:

[screenshot]
I have the same problem with reconstruction as you guys... I'm trying to use the depth and having no luck.
I use this reconstruction method (I'm using right-handed coordinates):

[color="#c0c0c0"]

[color="#c0c0c0"]float depth = tex2Dlod( g_depth, float4(uv, 0, 0) ).r * g_far_clip;

float4 positionCS = float4((uv.x-0.5)*2, (0.5-uv.y)*2, 1, 1);
float4 ray = mul( positionCS, gProjI );
ray.xyz /= ray.w;
position = ray.xyz * depth / ray.z
position.z *= -1; // This is for right-handed.


Daniel what method are you using?
Thanks!

---------------------------------------http://badfoolprototype.blogspot.com/
[color="#c0c0c0"]
Daniel what method are you using?
Thanks!


Hey, I'm using one of MJP's methods to reconstruct the position.
I'd better post the entire shader, because I modified the SSAO by José María Méndez.

G-Buffer Pass


VS_OutGBuffer vs_GBuffer(VS_input input)
{
    VS_OutGBuffer output;

    /* snip */

    output.Position = mul(float4(input.pos.xyz, 1.0f), MatWVP);
    output.normal   = normalize(mul(input.normal, (float3x3)MatWorld));
    output.depth    = mul(float4(input.pos.xyz, 1.0f), MatWorldView).xyz;

    return output;
}

PS_OutGBuffer ps_GBuffer(in VS_OutGBuffer input)
{
    PS_OutGBuffer output = (PS_OutGBuffer)0;

    /* snip */

    // store the view-space distance from the eye to the pixel (length, not just z)
    output.Depth  = length(input.depth);
    output.Normal = float4(normalize(normal), 1);

    return output;
}


View Ray Pass


Here I render a view ray to a small buffer (64 x 48 pixels) so I can read it in the SSAO shader using bilinear filtering.


VS_OutViewRay vs_viewRay(in VS_INPUT In)
{
    VS_OutViewRay Out;

    Out.Pos = float4(In.Pos.xy, 0.0f, 1.0f);
    Out.ray = mul(Out.Pos, MatViewProjInv).xyz - camPosWs;

    return Out;
}

PS_output ps_viewRay(in VS_OutViewRay In)
{
    PS_output Out;

    Out.Color = float4(In.ray, 1);

    return Out;
}


SSAO Pass



VS_OutSSAO vs_SSAO(in VS_INPUT In)
{
    VS_OutSSAO Out;

    Out.UV  = In.UV + 0.5f / g_screen_size; // align texels to pixels (dx9)
    Out.Pos = float4(In.Pos.xy, 0.0f, 1.0f);

    return Out;
}

float3 getNormal(in float2 uv)
{
    return tex2D(texNormals, uv).xyz;
}

float2 getRandom(in float2 uv)
{
    return normalize(tex2D(texRand, g_screen_size * uv / random_size).xy * 2.0f - 1.0f);
}

float3 getPosition(in float2 uv)
{
    const float depth = tex2D(texDepth, uv).x;

    // align texels to pixels, very crucial here (dx9 only(?))
    //
    // rayBufContraction contains:
    // rayBufContraction.xy = (rayBufferSize - 1) / rayBufferSize;
    // rayBufContraction.zw = 0.5f / rayBufferSize;
    uv *= rayBufContraction.xy;
    uv += rayBufContraction.zw;

    const float3 eyeToPixel = normalize(tex2D(texViewRay, uv).xyz);

    return eyeToPixel * depth;
}

float doAmbientOcclusion(in float2 tcoord, in float2 uv, in float3 p, in float3 cnorm)
{
    const float3 diff = getPosition(tcoord + uv) - p;
    const float3 v = normalize(diff);
    const float d = length(diff) * g_scale;

    return max(0.0, dot(cnorm, v) - g_bias) * (1.0f / (1.0f + d));
}

PS_output ps_SSAO(in VS_OutSSAO In)
{
    PS_output Out;

    Out.Color = 1.0f;

    const float2 vec[4] = { float2(1, 0), float2(-1, 0), float2(0, 1), float2(0, -1) };

    const float3 p = getPosition(In.UV);
    const float3 n = getNormal(In.UV);
    const float2 rand = getRandom(In.UV);

    const float invDepth = 1.0f - tex2D(texDepth, In.UV).w / FARCLIPDIST;
    const float rad = g_sample_rad * (invDepth * invDepth);

    float ao = 0.0f;

    const int iterations = 4;

    for (int j = 0; j < iterations; ++j)
    {
        float2 coord1 = reflect(vec[j], rand) * rad;
        float2 coord2 = float2(coord1.x * 0.707f - coord1.y * 0.707f,
                               coord1.x * 0.707f + coord1.y * 0.707f);

        ao += doAmbientOcclusion(In.UV, coord1 * 0.25f, p, n);
        ao += doAmbientOcclusion(In.UV, coord2 * 0.5f,  p, n);
        ao += doAmbientOcclusion(In.UV, coord1 * 0.75f, p, n);
        ao += doAmbientOcclusion(In.UV, coord2,         p, n);
    }

    ao /= iterations * 4.0f;

    Out.Color = saturate(ao * g_intensity);

    return Out;
}


My parameters:

g_sample_rad = 0.03f
g_intensity = 1.0f
g_scale = 0.5f
g_bias = 0.2f

The result (no blurring):

[screenshot]

I'm quite happy with it :)

If you have any questions, let me know ;)

edit: MatViewProj => MatViewProjInv

edit edit: to answer your actual question, the method I use is described here
Thanks Daniel, you are very kind to post your code!
Actually, last night I got Arkano's implementation working; I had to change the UV calculation a little... it seems that math only works with the view-space texture!

I'm using this method to reconstruct:

[color="#c0c0c0"]float depth = [color="#c0c0c0"]tex2D(gDepthTex, uv).r * gFar;

float4 pos = float4( (uv.x-0.5)*2, (0.5-uv.y)*2, 1, 1 );

float4 ray = mul(pos, gProjectionInverse);

return ray.xyz * depth;

[color="#000000"]And it is working quite well!
I will try your method too, the trade between the multiplication with a texture fetch is interesting :)

Thanks again!!!

---------------------------------------http://badfoolprototype.blogspot.com/
