Reconstruct Linear Depth

I'm trying to reconstruct linear depth for use with point lights. I have it working for directional lights. I've been reading this page: http://mynameismjp.wordpress.com/2009/05/05/reconstructing-position-from-depth-continued/ I use the methods there, but the results I get are completely wrong. How do I set up the texture coordinates exactly? I tried using posVS.xy/posVS.w in the pixel shader, but that doesn't work.
So you're trying to get the view-space position for arbitrary geometry? I'm not entirely sure because you mention texture coordinates, and this doesn't involve texture coordinates. You're just trying to take a position in view space, and generate a ray that passes through it and ends at the far plane. That's just:

posVS.xyz * (farClip / posVS.z);  // Left-handed
posVS.xyz * (farClip / -posVS.z); // Right-handed


Now just scale that by your depth [0,1]. Is your problem that you don't know how to get the texture coordinates to sample the depth buffer at? If so, read this:

http://diaryofagraphicsprogrammer.blogspot.com/2008/09/calculating-screen-space-texture.html
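
Putting those two pieces together, a minimal sketch of the reconstruction looks like this (untested; FarClip, DepthSampler, and texCoord are assumptions, where texCoord is whatever screen-space coordinate you end up sampling the depth buffer with, and the depth buffer is assumed to store linear depth in [0,1]):

float3 frustumRay = posVS.xyz * (FarClip / -posVS.z); // right-handed; drop the minus sign for left-handed
float  depth      = tex2D(DepthSampler, texCoord).r;  // linear [0,1] depth
float3 positionVS = depth * frustumRay;               // reconstructed view-space position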
-- gekko
Yep, sorry, that's what I was referring to. I can't seem to get it to work though. This is the code I'm trying:

Vertex Shader
float4x4 wvp = mul(mWorld, mViewProj);
float4x4 wv = mul(mWorld, mView);
float4 p = mul(pos, wvp);
output.Position = p;
output.PositionVS = mul(pos, wv);

float4 TexCoords = pos;
float2 Screen = float2(1.0f/fViewportWidth, 1.0f/fViewportHeight);
float2 Target = float2(1.0f / 2*fViewportWidth, 1.0f / 2*fViewportHeight);
TexCoords.x = ((p.x + p.w) * Screen.x + p.w) * Target.x;
TexCoords.y = ((p.w - p.y) * Screen.y + p.w) * Target.y;

output.Tex0 = Tex0;
output.Tex1 = TexCoords;



Pixel Shader
float3 vFrustumRayVS = In.PositionVS.xyz * (FarClip / -In.PositionVS.z);
float3 Position = tex2Dproj(SamplerDepth, In.Tex1).r * vFrustumRayVS;
Position = mul(float4(Position, 1.0f), transpose(mView));
float4 Normal = tex2D(SamplerNormal, In.Tex0);
float3 N = normalize(2.0f * Normal.rgb - 1.0f);


See anything wrong there?

Here's a screenshot of the problem:
I believe I found your problem:

float4 TexCoords = pos;             // Old
float4 TexCoords = output.Position; // New


You're taking the position in model space, when you need the position after you projected it. Remember that after projection (and the w-divide), X and Y will be in the range [-1,1]. You simply want to re-map that to [0,1] and you're done (in theory).

The rest of the math involved is adjusting for half-texel offsets, flipping the Y, and leaving the w-divide for the call to tex2Dproj. I didn't check to see if all the other bits of your code were correct, so if you have more problems and can't find them, just ask again.
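
In other words, something like this in the vertex shader (a rough sketch, not tested; fViewportWidth/fViewportHeight are the same viewport variables used above):

float4 p = mul(pos, wvp);
output.Position = p;

// Remap clip-space XY from [-1,1] to [0,1] and flip Y, but leave the divide by w
// to tex2Dproj in the pixel shader, so everything here is scaled by p.w.
float4 texCoord;
texCoord.x = 0.5f * (p.x + p.w) + 0.5f * p.w / fViewportWidth;   // plus half-texel offset (D3D9)
texCoord.y = 0.5f * (p.w - p.y) + 0.5f * p.w / fViewportHeight;  // Y flipped, plus half-texel offset
texCoord.z = p.z;
texCoord.w = p.w;
output.Tex1 = texCoord;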
-- gekko
Finally got back to this after a long run of getting SSAO to work. I've managed to get the light volume for point lights working a little better, but it's still quite glitchy. Sometimes it looks OK, but when moving around it flickers and gets messed up a bit. E.g.:

1. Looks OK-ish, but has a weird artifact in the top right.

http://img256.imageshack.us/img256/9431/pointlight1.jpg


2. After panning the view to the right, the lighting gets messed up.

http://img532.imageshack.us/img532/7927/pointlight2.jpg

Vertex Shader
float4x4 wv = mul(mWorld, mView);
float4x4 wvp = mul(wv, mProj);

float4 p = mul(pos, wvp);
output.Position = p;

output.PositionVS = mul(float4(pos.xyz, 1.0f), wv);
output.LightPos = mul(float4(LightPosition, 1.0f), wv);
output.ViewPos = mul(float4(ViewPosition, 1.0f), wv);

p = float4(0.5 * (float2(p.x + p.w, p.w - p.y) + p.w * float2(1.0f/fViewportWidth, 1.0f/fViewportHeight)), p.zw);
output.Tex0 = p;


Pixel Shader
float3 VSPositionFromDepth(float4 vTexCoord, float3 vPositionVS)
{
    // Calculate the frustum ray using the view-space position.
    // FarClip is the distance to the camera's far clipping plane.
    // Negating the Z component is only necessary for right-handed coordinates.
    float3 vFrustumRayVS = vPositionVS.xyz * (FarClip / -vPositionVS.z);
    return tex2Dproj(DepthSampler, vTexCoord).r * vFrustumRayVS;
}

PS_LIGHT_OUT PS_PointLight( VS_POINT_OUT In )
{
    PS_LIGHT_OUT output = (PS_LIGHT_OUT)0;

    float3 Position = VSPositionFromDepth(In.Tex0, In.PositionVS);

    // Get normal and expand back into signed range [-1, 1]
    float4 Normal = tex2D(SamplerNormal, In.Tex0);
    float3 N = normalize(2.0f * Normal.rgb - 1.0f);

    // Light and view directions
    float3 LightDir = (In.LightPos - Position);
    float3 ViewDir = normalize(In.ViewPos - Position);

    // Attenuation = 1 - ((x/r)^2 + (y/r)^2 + (z/r)^2)
    float Att = saturate(1 - dot(LightDir/LightRadius, LightDir/LightRadius));
    Att = Att * Att * LightRange;

    LightDir = normalize(LightDir);

    // N.L
    float NL = dot(N, LightDir);

    // N.E
    float NE = dot(N, ViewDir);

    // R = 2 * (N.L) * N - L
    float3 Reflect = normalize(2 * NL * N - LightDir);
    float Specular = pow(saturate(dot(Reflect, ViewDir)), Normal.a); // (R.V)^n

    output.Light0 = float4(LightColor.r, LightColor.g, LightColor.b, Specular) * NL * Att;

    return output;
}



Maybe MJP has some insight as I'm using what is posted on his blog?
The only difference I can see right now is I do

float2 uv = IN.hPos.xy * float2(0.5f, -0.5f) / IN.hPos.w + 0.5f;

in the pixel shader, where hPos is the position of the sphere vertex after being transformed by the WVP matrix, and then just use tex2D to get the depth and normal.

Also, assuming ViewPosition is the position of the camera in world space, that calculation is unnecessary: once you transform it to view space, the camera position is always (0, 0, 0).
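
For illustration, the lookup then boils down to something like this in the pixel shader (a sketch only; hPos is assumed to be a copy of the clip-space position passed down as an extra interpolator, and the half-texel offset is the usual D3D9 adjustment):

float2 uv = In.hPos.xy * float2(0.5f, -0.5f) / In.hPos.w + 0.5f;
uv += 0.5f / float2(fViewportWidth, fViewportHeight);    // D3D9 half-texel offset

float  depth  = tex2D(DepthSampler, uv).r;
float3 normal = normalize(2.0f * tex2D(SamplerNormal, uv).rgb - 1.0f);

float3 frustumRay = In.PositionVS.xyz * (FarClip / -In.PositionVS.z);
float3 positionVS = depth * frustumRay;

// The camera sits at the origin in view space, so no ViewPos interpolator is needed:
float3 viewDir = normalize(-positionVS);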
Thanks man you saved the day again. Do you have an explanation of what that code does exactly? I don't recall seeing it on any of the pages I read when trying to solve this.
Quote: Original post by Shael
Thanks man you saved the day again. Do you have an explanation of what that code does exactly? I don't recall seeing it on any of the pages I read when trying to solve this.


Glad it helped.

I originally found it here:
http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series4/Perfect_mirror.php and http://habibs.wordpress.com/lake/.

Essentially, we are transforming clip space into texture coordinates (-1,1)->(0,1), and then using the w component to get where the pixel ended up based on the projection... or something to that effect.
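
Concretely, tex2Dproj just performs that w-divide at each pixel; this is roughly the equivalent written out with tex2D (sampler and interpolator names follow the earlier snippets):

// tex2Dproj(SamplerDepth, In.Tex1) is effectively:
float2 uv    = In.Tex1.xy / In.Tex1.w;   // per-pixel perspective divide
float  depth = tex2D(SamplerDepth, uv).r;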
Ah rightio thanks. That seems to be what gekko was talking about above.

While we're on the topic of point light volumes: how do you determine the correct radius for the sphere so you can choose which cull mode to use?

I'm doing this:

float dist = length(CamPos - LightPos);
if ( dist < (lightRadius + 0.0001f) )
    CULL_CW
else if ( dist > (lightRadius + 0.0001f) )
    CULL_CCW
else
    CULL_NONE


The light seems to "turn off" when I walk into the light volume a bit and then back on as I get closer to the center of the sphere. The only thing I can think of is that the lightRadius value doesn't match up to the actual mesh radius, but the sphere uses the same value, so I don't know.
Quote: Original post by Shael
While we're on the topic of point light volumes: how do you determine the correct radius for the sphere so you can choose which cull mode to use? The light seems to "turn off" when I walk into the light volume a bit and then back on as I get closer to the center of the sphere.


I don't actually handle lights that way. I turn off depth test and depth write (might just be for OpenGL) and render the spheres, but just render them as if I were inside them.
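
As a rough sketch, that state setup could look something like this in a D3D9-style effect pass (the additive blend states and the shader entry-point names are assumptions about the rest of the setup, and CullMode = CW assumes D3D's default clockwise front-face winding):

technique PointLight
{
    pass P0
    {
        ZEnable          = false;   // no depth test
        ZWriteEnable     = false;   // no depth write
        CullMode         = CW;      // cull front faces: only the inside of the sphere is drawn,
                                    // so it works whether or not the camera is inside the volume
        AlphaBlendEnable = true;    // accumulate lights additively into the light buffer
        SrcBlend         = One;
        DestBlend        = One;
        VertexShader     = compile vs_3_0 VS_PointLight();
        PixelShader      = compile ps_3_0 PS_PointLight();
    }
}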

