Paul C Skertich

Is this correct? Reconstructing Position from Depth Buffer


I've rendered diffuse color, normal color, and depth to the G-buffer. Inside the directional light shader's pixel shader I have:

float4 position;
position.x = 2.0f * input.TexCoord.x - 1.0f;
position.y = -2.0f * input.TexCoord.y + 1.0f; //-- CHANGED from: position.y = -input.TexCoord.y * 2.0f - 1.0f;
position.z = depthValue;
position.w = 1.0f;

float4 projectedPos = mul(position, invViewProj);
projectedPos.xyz /= projectedPos.w;

When I render the quad, I build invViewProj as:

XMMATRIX viewProj = XMMatrixMultiply(view, projection);
XMVECTOR det;
XMMATRIX invViewProj = XMMatrixInverse(&det, viewProj);

The invViewProj is bound to the screen quad's constant buffer.
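
In case it helps, here's a minimal sketch of how I understand the whole thing should fit together (not my exact shader; the texture, sampler, and cbuffer names are placeholders, it assumes the depth target stores z / w, and it assumes invViewProj is the inverse of the same camera view * projection used to fill the G-buffer, transposed before upload since DirectXMath is row-major while HLSL cbuffers default to column-major packing):

// Minimal sketch of a directional-light pixel shader doing the reconstruction
Texture2D    depthTexture : register(t2);
SamplerState pointSampler : register(s0);

cbuffer LightBuffer : register(b0)
{
    float4x4 invViewProj;     // inverse of the camera view * projection (assumed transposed on the CPU for mul(vector, matrix))
    float3   lightDirection;  // world space
    float    padding;
};

float4 PS(float4 svPos : SV_POSITION, float2 texCoord : TEXCOORD0) : SV_TARGET
{
    float depthValue = depthTexture.Sample(pointSampler, texCoord).r;

    // Texcoords (0..1, y down) -> clip space (-1..1, y up)
    float4 clipPos;
    clipPos.x = texCoord.x * 2.0f - 1.0f;
    clipPos.y = (1.0f - texCoord.y) * 2.0f - 1.0f;
    clipPos.z = depthValue;
    clipPos.w = 1.0f;

    // Back to world space, then the perspective divide
    float4 worldPos = mul(clipPos, invViewProj);
    worldPos.xyz /= worldPos.w;

    // Visualize the reconstructed world-space position for debugging
    return float4(worldPos.xyz, 1.0f);
}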

 

I have the option to toggle deferred shading on and off with the F1 key. I took an in-app screen capture.

 

This is what I get when returning CameraPosition - projectedPosition from the shader.

[attachment=28402:ScreenCaptured_nossao00000297.jpg]

 

However, when I move the camera around, everything shifts to a different color, as shown below:

[attachment=28404:ScreenCaptured_nossao00018723.jpg]

 

Finally, this is with deferred shading disabled.

[attachment=28403:ScreenCaptured_nossao00006590.jpg]

 

This is deferred shading enabled, without returning CameraPosition - projectedPos.

 

[attachment=28405:ScreenCaptured_nossao00000297.jpg]

 

Does anything look off to anyone? It could be just me, but forward rendering is putting out a better image than deferred shading. It could be my mistake; I'm still getting used to the deferred rendering part.

 


Forgot one little thing: I captured the projected space of the screen quad below (with the new code change):

position.y = -(input.texCoord.y * 2.0f - 1.0f);

I reverted back to the original position.y code and took a snapshot of the projected position of the screen quad.
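
Just to convince myself the flip itself isn't the problem, the two forms I've been switching between work out to the same thing:

// Both map v in [0,1] (y pointing down) to NDC y in [-1,1] (y pointing up):
//   -(v * 2.0f - 1.0f)        =  1 - 2v
//   (1.0f - v) * 2.0f - 1.0f  =  1 - 2v
// The old version from the first post's comment, -v * 2.0f - 1.0f,
// maps [0,1] to [-1,-3] instead, so that one really was wrong.
float ndcY_a = -(input.TexCoord.y * 2.0f - 1.0f);
float ndcY_b = (1.0f - input.TexCoord.y) * 2.0f - 1.0f;   // identical result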

 

[attachment=28406:ScreenCaptured_nossao00000297.jpg]

 

I searched Google Images for reconstructing position from depth buffer and saw one result showing that this was projected space.

 

As I said, I'm still getting used to deferred rendering. I started on it long ago but decided to revisit it because it's the concept I struggled with the most. Light pre-pass is throwing me for a loop as well because it's hard for me to understand. From what I've read, deferred shading is preferred by more games than deferred lighting; I could be completely wrong, I've only read a couple of articles saying deferred shading was generally favored.

 


Inside the rendering code I changed the inverse view-projection matrix to just the inverse projection matrix, and got this when I returned projectedPosition.

 

[attachment=28408:ScreenCaptured_nossao00000297.jpg]

 

I can't say 100% for sure that I'm on the right path. I also changed the texture coordinates in the code that renders the screen quad, thinking that was the issue.


Apparently I was inverting the perspective projection. The screen quad uses an orthographic matrix, so I inverted view and orthographic instead and got something similar to what Google Images showed.

 

[attachment=28411:ScreenCaptured_nossao00000297.jpg]

 

Now I can say I'm heading in the right direction.

 

I also changed the position-from-depth reconstruction in the screen quad's pixel shader:

float4 position;
position.x = input.TexCoord.x * 2.0f - 1.0f;
position.y = (1.0f - input.TexCoord.y) * 2.0f - 1.0f;
position.z = depthValue;
position.w = 1.0f;

float4 projectedPosition = mul(position, invViewProj);
projectedPosition.xyz = projectedPosition.xyz / projectedPosition.w;

Inside the vertex shader, the texcoords are passed through as:

output.TexCoord = input.TexCoord - halfPixel;

//-- Half Pixel is sent to the constant buffer as 
float2 halfPixel;
halfPixel.x = 0.5 / canvas->getWidth();
halfPixel.y = 0.5 / canvas->getHeight();
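
One thing I'm not sure I even need: from what I've read, the half-pixel offset only applies to Direct3D 9's texel alignment, and on D3D11 the quad's texcoords should already line up without it. A sketch of the fullscreen quad vertex shader without the offset (it assumes the quad vertices are stored directly in clip space, which also skips the orthographic matrix; VSInput/PSInput are placeholder names):

// Fullscreen quad vertex shader without the half-pixel correction.
// On D3D10/11 texel centers already line up with pixel centers.
struct VSInput
{
    float3 position : POSITION;   // quad corners already in clip space (-1..1)
    float2 texCoord : TEXCOORD0;
};

struct PSInput
{
    float4 position : SV_POSITION;
    float2 texCoord : TEXCOORD0;
};

PSInput VS(VSInput input)
{
    PSInput output;
    output.position = float4(input.position, 1.0f);
    output.texCoord = input.texCoord;   // no halfPixel subtraction
    return output;
}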

[attachment=28412:ScreenCaptured_nossao11082541.jpg]

 

Which makes me realize two things: 1) I'm getting close; 2) I'm still far from actual success.


More tinkering around, and I finally got this:

 

[attachment=28421:ScreenCaptured_nossao39719721.jpg]

 

Inside the lighting.hlsl file:

float4 position;
position.x = input.TexCoord.x * 2 - 1;
position.y = (1 - input.TexCoord.y) * 2 - 1;
position.z = depthValue;
position.w = 1.0f;

float4 projectedPosition = mul(position, invViewProj);
projectedPosition.xyz = projectedPosition.xyz / projectedPosition.w;

Inside the deferredShader.hlsl file:

float3 normalmap = textures[1].Sample(ss, input.tex).xyz - float3(0.5f, 0.5f, 0.5f);

input.normals = computeNormalMap(input.normals, input.bitangents, input.tangents, normalmap);
input.normals = normalize(input.normals);
float3 normal = 0.5f * normalize(input.normals) + 1.0f;

normalOut = float4(normal, 1.0f);
depthOut = 1 - input.position.z / input.position.w;

output.diffuse = diffuseOut;
output.normal = normalOut;
output.depth = float4(depthOut, 0, 0, 1.0f);

It looks like specular to me... sigh.
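
For comparison, this is the pack/unpack pair I see in most deferred shading samples for storing normals in an unsigned render target (just a sketch; normalTexture is a placeholder name for the G-buffer normal target in the light pass):

// G-buffer write: remap the normal from [-1,1] into [0,1] so it survives a unorm target
float3 packedNormal = normalize(input.normals) * 0.5f + 0.5f;
normalOut = float4(packedNormal, 1.0f);

// Light pass read: remap back from [0,1] to [-1,1] before lighting
float3 N = normalize(normalTexture.Sample(ss, input.TexCoord).xyz * 2.0f - 1.0f);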


So my problem is that the specular highlights move. I've watched deferred shading videos on YouTube and they don't look like mine. It's funny how you people know what I'm doing wrong but aren't telling me.

 

I'm working with directional lights. I stuffed a specular map into diffuse.a and the specular power into normals.a when preparing the G-buffer.
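
Roughly like this, if it helps (not my exact code, just a sketch with placeholder names; it assumes 8-bit unorm targets, so the specular power gets scaled into [0,1] by an arbitrary 255):

// G-buffer write
output.diffuse = float4(albedo.rgb, specularIntensity);          // spec map in diffuse.a
output.normal  = float4(packedNormal, specularPower / 255.0f);   // spec power in normals.a

// Light pass read
float4 diffuseSample = diffuseTexture.Sample(ss, texCoord);
float4 normalSample  = normalTexture.Sample(ss, texCoord);
float  specIntensity = diffuseSample.a;
float  specPower     = normalSample.a * 255.0f;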

 

The depth is written out as:
 

float depthOut = input.position.z / input.position.w;

And yeah, I've read this article a million times: https://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/

 

It still looks wrong. The returned projected positions look weird, just giving me projected coords, but nothing like what Google Images shows.

 

The screen quad is orthographic, with minDepth 1.0f and maxDepth 1000.0f.


If you store depth yourself and don't use the hardware depth buffer for reconstruction, store linear depth. Reconstruction is then a lot simpler (and faster).
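
Roughly like this (just a sketch with made-up names; frustumRay here is the view-space position of the pixel's ray at the far plane, interpolated from the fullscreen quad's corners, and farClip is the camera's far plane):

// G-buffer pass: store linear view-space depth, normalized by the far plane
float depthOut = viewSpacePosition.z / farClip;

// Light pass: scale the interpolated ray by the stored value to get the
// view-space position back; no matrix inverse and no divide by w needed
float  linearDepth  = depthTexture.Sample(pointSampler, texCoord).r;
float3 viewSpacePos = frustumRay * linearDepth;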

 

Also, I would start (and I still do this) by using world space for the light positions and the reconstructed position.

 

That also means light positions (or directions) need to be in the same space as the position you reconstructed.

To me it sounds like the light direction is in world space and the reconstructed position is in view space.

 

And lastly, I'm a bit confused about which matrix you invert for the reconstruction. Is it the same one you used to render into the G-buffer?
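
What I mean is roughly this (a sketch; invCameraViewProj must be the inverse of the exact view * projection you rendered the G-buffer with, not the orthographic matrix you draw the quad with, and the names are placeholders):

// Undo the same camera view * projection that produced the stored depth
float4 worldPos = mul(clipPos, invCameraViewProj);
worldPos.xyz   /= worldPos.w;

// Everything that goes into the lighting must then be in the same (world) space
float3 N = normalize(worldNormal);          // unpacked from the G-buffer
float3 L = normalize(-lightDirectionWS);    // directional light direction, world space
float  diffuse = saturate(dot(N, L));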
