
Retrieving World Position in Deferred Rendering



Hi,

 

I am playing around with deferred lighting, but I have a problem retrieving the correct per-pixel world position in the lighting shader. Without a correct world position, I can't render my point lights or spot lights correctly.

 

I have tried different techniques found on the web, but none of them gives me a coherent result; they all return strange positions.

 

Here are three code snippets I have tried:

 

 

Method 1:

float ConvertDepthToLinear(float depth)
{
    float linearDepth = PerspectiveValues.z / (depth + PerspectiveValues.w);
    return linearDepth;
}

float3 CalcWorldPos(float2 csPos, float depth)
{
    float4 position;

    position.xy = csPos.xy * PerspectiveValues.xy * depth;
    position.z = depth;
    position.w = 1.0;

    return mul(position, ViewInv).xyz;
}

// Inside the pixel shader
float depth = DepthTexture.Load(screenPos).x;
float linearDepth = ConvertDepthToLinear(depth);
float3 WPosition = CalcWorldPos(position, linearDepth);

Method 2:

float depth = DepthTexture.Load(screenPos).x;
float x = uv.x * 2 - 1;
float y = (1 - uv.y) * 2 - 1;
// Unproject -> transform by inverse projection
float4 posVS = mul(float4(x, y, depth, 1.0f), InverseProjection);
// Perspective divide to get the final view-space position
float3 WPosition = posVS.xyz / posVS.w;

Method 3:

float depth = DepthTexture.Load(screenPos).x;
float4 cPos = float4(position, depth, 1);
float4 wPos = mul(ViewProjectionInverse, cPos);
float3 WPosition  = wPos.xyz / wPos.w;

Can someone help me with that?

 

Thanks in advance.

Edited by karnaltaB


I'm amazed no one has replied to this yet, so I'll take a stab. Can you debug the shader? If so, try passing the world-space position from the vertex shader to the pixel shader in the VS output, and then compare the result of your depth reconstruction against the position you passed through. It should at least give you an idea of what is going wrong.

 

I've not worked with linear depth, so I can't comment on #1, but #2 should be using the InverseViewProjection. I can't see the shader, but I don't think that's what your variable describes, so it could be that you are getting a position in view space. Maybe post the full vertex and pixel shaders so we can see the whole routine?
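That debugging approach can be sketched like this. It's only a sketch: `World` and `ViewProjection` are assumed constant-buffer matrices, and `LinearDepthAt` is a hypothetical helper standing in for however your main pass reads depth.

```hlsl
// Sketch: pass the true world-space position through the vertex shader,
// then visualize the reconstruction error in the pixel shader.
struct DebugVSOut
{
    float4 Position : SV_POSITION;
    float3 WPos     : TEXCOORD0;    // ground-truth world position
};

DebugVSOut VS_Debug(float3 localPos : POSITION)
{
    DebugVSOut vOut;
    float4 worldPos = mul(float4(localPos, 1.0f), World);
    vOut.WPos     = worldPos.xyz;
    vOut.Position = mul(worldPos, ViewProjection);
    return vOut;
}

float4 PS_Debug(DebugVSOut pIn) : SV_Target
{
    // LinearDepthAt is hypothetical: plug in your own depth fetch + reconstruction.
    // Black output = perfect match; any color shows where reconstruction diverges.
    float3 reconstructed = CalcWorldPos(pIn.Position.xy, LinearDepthAt(pIn.Position.xy));
    return float4(abs(pIn.WPos - reconstructed) * 10.0f, 1.0f);
}
```

Rendering the scene with this pair makes a wrong matrix or a wrong depth remapping immediately visible as a colored pattern instead of a black screen.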


I think everybody is on vacation. I'll try to give you a hand, mate.

 

Here is a document from the great MJP; it helped me a lot once:

 

https://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/

 

It also explains a nice optimization based on the screen-aligned vertex positions. Very cool indeed.

 

Hope it helps you as it did me. :)


Are you sure you need the world position? If you store linear depth, then this is the view-space depth (the distance from the camera). When rendering your lights, if you render a quad (or an oversized triangle) and use its corners as view vectors, you can multiply the interpolated view vector per pixel by the depth to get the view-space position. With that, you can do all your lighting in view space: just transform your lights from world space to view space, and the lighting equation stays the same.
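For example, moving a point light into view space is essentially a one-liner. This is a hedged sketch, assuming `LightPositionWS`, `View`, and `LightRange` are constant-buffer values in your engine:

```hlsl
// Transform the light into view space once, so the rest of the lighting
// math is identical to the world-space version.
float3 lightPosVS = mul(float4(LightPositionWS, 1.0f), View).xyz;

// surfacePosition is the view-space position reconstructed from depth.
// Both points are now in the same space, so the usual equations apply:
float3 toLight     = lightPosVS - surfacePosition;
float  dist        = length(toLight);
float3 lightDirVS  = toLight / dist;
float  attenuation = saturate(1.0f - dist / LightRange);   // simple linear falloff
```

Doing this transform once per light on the CPU (and uploading the view-space position) avoids even this small per-pixel cost.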

 

Here's the code that calculates the linear depth in the pixel shader; this is the first pass, which stores all the info into my G-Buffers:

outputPixel.depthBuffer = (inputPixel.viewPosition.z / Camera::maximumDistance);

This just takes the depth of the view-space position and converts it to the [0,1] range.

 

Here's code to use that depth to get the view space position:

float3 getViewPosition(float2 texCoord, float depth)
{
    // Remap UV from [0,1] (top-left origin) to [-1,1] NDC
    float2 adjustedCoord = texCoord;
    adjustedCoord.y = (1.0 - adjustedCoord.y);
    adjustedCoord.xy = (adjustedCoord.xy * 2.0 - 1.0);
    // Scale by the field-of-view factors to form a view ray, then push it out
    // by the stored depth (which was normalized by maximumDistance on write)
    return (float3((adjustedCoord * Camera::fieldOfView), 1.0) * depth * Camera::maximumDistance);
}

float surfaceDepth = Resources::depthBuffer.Sample(Global::pointSampler, inputPixel.texCoord);
float3 surfacePosition = getViewPosition(inputPixel.texCoord, surfaceDepth);

This takes the texture coordinate, remaps it from [0,1] to [-1,1], then treats it as a vector away from the camera.

 

Just to finish it off, here's the code that renders a large triangle over the screen.  This is from Bill Bilodeau's vertex shader tricks presentation:

http://www.slideshare.net/DevCentralAMD/vertex-shader-tricks-bill-bilodeau

Pixel mainVertexProgram(in uint vertexID : SV_VertexID)
{
    Pixel pixel;
    pixel.texCoord = float2((vertexID << 1) & 2, vertexID & 2);
    pixel.position = float4(pixel.texCoord * float2(2.0f, -2.0f)
                                           + float2(-1.0f, 1.0f), 0.0f, 1.0f);
    return pixel;
} 

You can just draw a simple 3-vertex primitive with this shader; no vertex or index buffers are required, since it uses the vertex ID.

Edited by xycsoscyx


First, thank you for trying to help me out with my problem. I am a beginner in DirectX, so sorry if I don't get it easily ;)

 

Here is a bit more of my code:

 

The pixel shader that generates my G-Buffer textures (the depth texture is generated automatically by the DepthStencilView):

GBufferPSOUT PSFillGBuffer(GBufferPSIN pIn)
{
	// Texture map if any
	float4 diffuseColor = MaterialDiffuse;
	if (HasDiffuseMap)
		diffuseColor = TexDiffuse.Sample(DefaultSampler, pIn.TextureUV);

	// Specular map if any
	float specIntensity = (MaterialSpecular.x + MaterialSpecular.y + MaterialSpecular.z) / 3;
	if (HasSpecularMap)
	{
		float3 specColor = TexSpecular.Sample(DefaultSampler, pIn.TextureUV).rgb;
		specIntensity = (specColor.x + specColor.y + specColor.z) / 3;
	}

	// Normal map if any (note: write the result into 'normal', which is
	// what actually gets stored below, not into pIn.WNormal)
	float3 normal = normalize(pIn.WNormal);
	if (HasNormalMap)
	{
		float4 normalPixel = TexNormal.Sample(DefaultSampler, pIn.TextureUV);
		normal = normalize((normalPixel.x * pIn.WTangent) + (normalPixel.y * pIn.WBitangent) + (normalPixel.z * pIn.WNormal));
	}

	// GBuffer Output
	GBufferPSOUT result = (GBufferPSOUT)0;
	result.Target0.xyz = diffuseColor.rgb;
	result.Target0.w = specIntensity;
	result.Target1 = float4(normal.xyz * 0.5 + 0.5, 0.0);
	result.Target2.xyz = MaterialAmbient.rgb;
	result.Target2.w = SpecularPower / 100;

	// Return result
	return result;
}

Then my vertex shader to reconstruct and light the G-Buffer textures (I don't use SV_VertexID in debug mode because Visual Studio can't debug shaders that use it):

QuadPSIN VS_Main(uint vertexID : SV_VertexID)
{
	QuadPSIN result;

	result.UV = float2((vertexID << 1) & 2, vertexID & 2);
	result.Position = float4(result.UV * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);

	return result;
}

// When debugging, we can't use a SV_VertexID quad because Visual Studio Graphics Debugger can't trace pixel history from it.
QuadPSIN VS_Main_DEBUG(QuadVSIN vIn)
{
	QuadPSIN result;

	// The input quad is expected in device coordinates
	// (i.e. 0,0 is center of screen, -1,1 top left, 1,-1
	// bottom right). Therefore no transformation!
	result.Position = vIn.Position;
	result.Position.w = 1.0f;

	// The UV coordinates are top-left 0,0 bottom-right 1,1
	result.UV.x = result.Position.x * 0.5 + 0.5;
	result.UV.y = result.Position.y * -0.5 + 0.5;

	return result;
}

Here is the pixel shader:

float4 PS_Main(QuadPSIN pIn) : SV_Target
{
	GBufferAttributes attrs;
	bool processPixel = true;
	ExtractGBufferAttributes(pIn.Position.xy, pIn.UV, Texture0, Texture1, Texture2, TextureDepth, attrs, processPixel);

	if (!processPixel)
		discard;

	float3 WPos = attrs.Position;
	float3 Vn = normalize(CameraPosition - WPos);
	float3 Nn = attrs.Normal;

	float3 litColor = float3(0.0f, 0.0f, 0.0f);

#if HEMI
	// Calculating Ambient Light
	litColor += ComputeHemisphericAmbientLight(Nn, attrs.Diffuse, AmbientUpColor, AmbientDownColor);
#endif

#if DIR
	litColor += ComputeParallelLight(normalize(-DL_Direction), Nn, Vn, attrs.Ambient, attrs.Diffuse, float3(attrs.specIntensity, attrs.specIntensity, attrs.specIntensity), attrs.SpecularPower);
#endif

#if POINT
	// Calculating Point Light
	litColor += ComputePointLight(WPos, Nn, Vn, attrs.Ambient, attrs.Diffuse, float3(attrs.specIntensity, attrs.specIntensity, attrs.specIntensity), attrs.SpecularPower);
#endif

#if SPOT
	// Calculating Spot Light
	litColor += ComputeSpotLight(WPos, Nn, Vn, attrs.Ambient, attrs.Diffuse, float3(attrs.specIntensity, attrs.specIntensity, attrs.specIntensity), attrs.SpecularPower);
#endif

	// Return result
	return float4(litColor, 1);
}

And finally, my function that is supposed to unpack my G-Buffer maps into usable data:

void ExtractGBufferAttributes(float2 position, float2 uv, Texture2D<float4> t0, Texture2D<float4> t1, Texture2D<float4> t2, Texture2D<float> t3, out GBufferAttributes attrs, out bool processPixel)
{
	int3 screenPos = int3(position, 0);
	processPixel = true;

	attrs.Diffuse = t0.Load(screenPos).xyz;
	attrs.SpecularIntensity = t0.Load(screenPos).w;
	attrs.Normal = normalize(t1.Load(screenPos).xyz * 2.0 - 1.0);
	attrs.Ambient = t2.Load(screenPos).xyz;
	attrs.SpecularPower = t2.Load(screenPos).w * 100;

	float depth = t3.Load(screenPos).x;

	if (depth == 1.0f)
		processPixel = false;

	float4 cPos = float4(position, depth, 1);
	float4 wPos = mul(ViewProjectionInverse, cPos);
	attrs.Position = wPos.xyz / wPos.w;
}

I am actually looking at your code sample, but what I am not sure I get is whether you create your depth map manually inside the shader? You don't use the one generated by the depth-stencil?

Edited by karnaltaB


Correct. I probably could/should set it up to use the depth buffer itself, but so far I'm writing the depth to a separate render target. It started out because I was packing information into my G-Buffers, but at this point it is just a separate render target that I write the depth to.


Hi, 

 

It works now ;) I used method #2 from my first post, but as Burnt_Fyr pointed out, I had to use the InverseViewProjection matrix and not the InverseProjection.
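For future readers, a sketch of the working reconstruction (method #2 with the corrected matrix). `InverseViewProjection` is assumed to hold inverse(View * Projection), uploaded per frame, and `uv` is the full-screen quad's [0,1] texture coordinate:

```hlsl
// Reconstruct world position from the hardware depth buffer.
float3 ReconstructWorldPos(float2 uv, float depth)
{
    // [0,1] UV -> [-1,1] NDC (flip Y because the UV origin is top-left)
    float x = uv.x * 2.0f - 1.0f;
    float y = (1.0f - uv.y) * 2.0f - 1.0f;

    float4 clipPos  = float4(x, y, depth, 1.0f);
    float4 worldPos = mul(clipPos, InverseViewProjection);
    return worldPos.xyz / worldPos.w;   // perspective divide
}

// Usage in the lighting pixel shader:
// float  depth = DepthTexture.Load(int3(screenPos, 0)).x;
// float3 WPos  = ReconstructWorldPos(pIn.UV, depth);
```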

 

One last question, concerning the quality of the light calculation:

 

I am using the exact same function to calculate my light contribution, but with forward lighting the colors are all smooth on the back lighting, while when I turn on deferred lighting I get an ugly gradient. Is this normal behavior for deferred lighting? Maybe due to some loss of precision in the depth?

 

(Attached screenshot: deferred.jpg, showing the gradient artifact.)


It's a little hard to tell from the image, but it looks like you are using face normals for the lighting calculation. If it's a depth-precision issue, what are your near/far values for the projection matrix? Try scaling them to just fit your scene and see if that fixes the issue. Since depth is distributed between near and far, the tighter those values are, the more precision you'll have; with a very large view distance, you sacrifice precision for range. If it's a normals issue, check what normals you are writing to your deferred buffer, and make sure you're storing them per pixel, not per vertex/face.


Thanks, it looks like it was a normal-precision problem and not a depth problem.

 

I switched my normal buffer from R11G11B10_Float to R16G16B16A16_Float, removed my normal transformation (normal.xyz * 0.5 + 0.5), and now it renders all smooth.
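In other words, the change amounts to this (a sketch; it relies on the render target being created with DXGI_FORMAT_R16G16B16A16_FLOAT, which can store signed values, unlike R11G11B10_FLOAT, which has no sign bits):

```hlsl
// G-Buffer write: store the normal directly; a 16-bit float target
// can hold negative components, so no [0,1] encode is needed.
result.Target1 = float4(normal, 0.0f);

// G-Buffer read: no decode step, just renormalize.
attrs.Normal = normalize(t1.Load(screenPos).xyz);
```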

 

You pointed me in the right direction ;)
