Position from Depth


Hi guys, I know this has been discussed ad nauseam but I can't get it to work.

I'm using an SSAO shader which works fine using a position texture but not with position from depth reconstruction.

I'm following MJP's http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/ and tried both the spotlight and directional versions:

// vertex shader
output.vpos=mul(input.position,InverseProjection);
 
// pixel shader
float3 viewRay=input.vpos.xyz;
 
// get position function (in pixel shader)
float3 getPosition(in float2 uv,in float3 viewRay)
{
 // old version works perfectly
 // return tex2D(PositionTexture,uv).xyz;
 
 // reconstruction doesn't work
 float depth=tex2D(DepthTexture,uv).x;
 return viewRay*depth;
}

Please help.


Well, let's start with the obvious: either your view ray is wrong or your depth is wrong. Depth is the easier one to get and to test, so start with that. Then try testing the view ray at various points, e.g. the center of the viewport and the corners.
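As a rough sketch of that kind of test (assuming a full screen pass and the same DepthTexture sampler your SSAO shader uses; the names here are just placeholders), you can output the sampled depth directly as grayscale:

// debug pixel shader: visualize the linear depth as grayscale
float4 DebugDepthPS(float2 uv : TEXCOORD0) : COLOR0
{
 float depth=tex2D(DepthTexture,uv).x; // expected to be linear view space z / far plane, in [0, 1]
 return float4(depth,depth,depth,1.0);
}

Near geometry should come out dark and far geometry bright; anything else means the depth pass is the culprit.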


Hi, I am almost certain I can rule out the depth. The depth map looks good, and the depth values work perfectly in this, as well as another, shader. It's a simple linear depth shader.

The viewRay is simply input.position multiplied by the InverseProjection, which then gets multiplied by the depth (from the depth texture).

I think I followed his example correctly and it's not working out for me. Maybe it's because I'm using it for something other than lighting, I don't know.

Thanks.

What exactly (which values) is input.position? It should be (x, y, 1.0, 1.0) with x, y either -1.0 or 1.0.

Are you sure that the shader does:

input.vpos.xyz == output.vpos.xyz / output.vpos.w

I might be wrong but I think what happens is actually:

input.vpos.xyzw == output.vpos.xyzw

so you are probably missing the division by w.
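Just as a sketch, keeping your variable names, the divide in the pixel shader would look something like this:

// pixel shader: perspective divide before using the interpolated value as the view ray
float3 viewRay=input.vpos.xyz/input.vpos.w;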

Also, are you sure that the values in the DepthTexture are linear?

Thanks for trying to help.

input.position consists of the vertices that make up the full screen quad (-1.0 to 1.0).

The depth is linear, I'm doing it like this:

// depth vertex shader
output.vpos=mul(input.position,WorldView).xyz;
 
// depth pixel shader
output.depth=input.vpos.z/FarZ;

I tried the divide by w and it didn't work.

When I do the SSAO it looks like lots of tiny distorted mirrors (for want of a better description).

That's your problem. You shouldn't use your full screen quad coordinates; that won't work. You need the frustum corners, passed in as vertex data and output from your quad (so they get interpolated for the pixel shader).
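A rough sketch of that idea, assuming the vertical field of view, aspect ratio and far clip distance are available as shader constants (TanHalfFov, AspectRatio and FarClipDistance are placeholder names):

// vertex shader: build the far-plane frustum corner for this quad vertex
// input.position.xy is -1 or +1 per corner of the full screen quad
float farHalfHeight=TanHalfFov*FarClipDistance;
float farHalfWidth=farHalfHeight*AspectRatio;
output.viewRay=float3(input.position.x*farHalfWidth,input.position.y*farHalfHeight,FarClipDistance);
 
// pixel shader: the interpolated corner is the view ray, scaled by the normalized linear depth
float depth=tex2D(DepthTexture,uv).x; // view space z / FarClipDistance
float3 positionVS=input.viewRay*depth;

This is the "map the vertex directly to a frustum corner" variant that MJP's article mentions; it avoids the inverse projection multiply entirely.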

I tried what you guys suggested and it's not working at all. Here is the implementation I'm working from. What I don't understand is why there are two different calculations, one for point lights and spotlights and another for directional lights; it sounds silly to ask whether there is different math when using it for SSAO; isn't computing the position independent of what you need it for?


// G-Buffer vertex shader
// Calculate view space position of the vertex and pass it to the pixel shader
output.PositionVS = mul(input.PositionOS, WorldViewMatrix).xyz;

// G-Buffer pixel shader
// Divide view space Z by the far clip distance
output.Depth.x = input.PositionVS.z / FarClipDistance;

// Light vertex shader
#if PointLight || SpotLight
// Calculate the view space vertex position
output.PositionVS = mul(input.PositionOS, WorldViewMatrix);
#elif DirectionalLight
// Calculate the view space vertex position (you can also just directly map the vertex to a frustum corner to avoid the transform)
output.PositionVS = mul(input.PositionOS, InvProjMatrix);
#endif

// Light Pixel shader
#if PointLight || SpotLight
// Extrapolate the view space position to the far clip plane
float3 viewRay = float3(input.PositionVS.xy * (FarClipDistance / input.PositionVS.z), FarClipDistance);
#elif DirectionalLight
// For a directional light, the vertices were already on the far clip plane so we don't need to extrapolate
float3 viewRay = input.PositionVS.xyz;
#endif

// Sample the depth and scale the view ray to reconstruct view space position
float normalizedDepth = DepthTexture.Sample(PointSampler, texCoord).x;
float3 positionVS = viewRay * normalizedDepth;

I'm going to work on something else and come back to this in a day or two. Thanks again for trying to help.

What I don't understand is why there are two different calculations, one for point lights and spotlights and another for directional lights; it sounds silly to ask whether there is different math when using it for SSAO; isn't computing the position independent of what you need it for?


The primary difference is that point lights and spotlights are local effects, confined to a specific region which is usually defined by a mesh (a sphere or cone), and when you render that mesh you need to extract the view ray from its surface. Directional lights, however, affect the entire screen and thus work with a full screen quad. Hence the slightly different approach in the view ray computation.
In that sense, SSAO falls into the "directional light" category, because it affects the entire screen.

input.position consists of the vertices that make up the full screen quad (-1.0 to 1.0).

As I stated above, please post the exact values. As far as I understand your code, it should be 4 vectors with 4 components each:
(-1.0 -1.0 1.0 1.0)
( 1.0 -1.0 1.0 1.0)
( 1.0 1.0 1.0 1.0)
(-1.0 1.0 1.0 1.0)

Are you familiar with the difference between euclidean coordinates and projective coordinates?

For testing this stuff it is extremely helpful to disable the SSAO effect and output the reconstructed position as color values.
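For example, something along these lines (just a sketch; PositionScale is a placeholder constant you would tune, e.g. 1.0 / FarClipDistance):

// debug pixel shader: visualize the reconstructed view space position as color
float depth=tex2D(DepthTexture,uv).x;
float3 positionVS=viewRay*depth;
return float4(positionVS*PositionScale,1.0); // negative components simply clamp to black

Doing the same with your known-good position texture gives you a direct side-by-side comparison.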

Hello,

I was using z=0 for my input.position; I changed it to z=1 and now have the exact values listed in the post just above this one.

Things are shaping up: there is no more hall of mirrors. The result is still off though; everything is darker and the edges are brighter. I feel like it wants to work.

I would like to output the position map and the position reconstruction to color textures to compare them, but I'm having problems: the position map (which I know is correct) comes out as a big colorful blur. The positions are in view space and I'm not sure how to scale them; I tried lots of scales and I can barely make out the geometry.

Thanks for helping, I appreciate it.

Just scaling the eye space position by a constant factor should suffice. You should end up with an image like this:

[attachment=19492:ViewSpaceColors.jpg]

As you can see, everything close to the camera (the origin of the eye space coordinate system) is black, everything to the right is red, everything to the top is green and everything in the back is blue, assuming you are using the DirectX convention of a left-handed coordinate system with the camera looking down +z. If not, then you should get the same image without any blue.

If scaling by a factor doesn't work for your position map, then the coordinates in the position map are probably not eye space. They could be world space or eye space but without the camera position removed.

Maybe you can post your images for the position map and the position reconstruction so we can take a look at them.

