World position reconstruction from Depth (almost done)

Hi all,

I am trying to build a world-position reconstruction algorithm and keep the concepts easy and clear. I had already implemented the whole algorithm, but I have one misunderstood concept that makes it fail in some cases. First of all, I want to post the algorithm; it is a world position reconstruction in a deferred shading pipeline.

1.- First of all, we need to store the depth of each pixel in one channel of our G-Buffer. My G-Buffer is composed of 4 render targets of 128 bits each. In my case, for depth storage purposes, I use a D3DFMT_G32R32F render target to store a 32-bit depth in world space.

Vertex Shader of G-Buffer:
-----------------------------------

VS_OUTPUT RenderSceneVS	(
				float4 inPos  : POSITION, 
				float4 inNormal : NORMAL,
				float2 inTex0 : TEXCOORD0,
				float2 inTex1 : TEXCOORD1,
				float3 inTangent : TEXCOORD2,
				float3 inBinormal : TEXCOORD3
			)
{
	VS_OUTPUT output;

	// Clip-space position for rasterization
	output.pos = mul(float4(inPos.xyz, 1.0f), world_view_proj);

	// World-space position (VertexTransform holds the object's world matrix here)
	output.pos_world.xyz = mul(float4(inPos.xyz, 1.0f), VertexTransform).xyz;

	// ...

	return output;
}
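For reference, VS_OUTPUT is not declared anywhere in the post; a minimal sketch that matches the two fields used above (the exact contents are an assumption) could look like this:

	// Hypothetical declaration -- only pos and pos_world appear in the snippet;
	// the real struct presumably carries the other interpolants as well
	struct VS_OUTPUT
	{
		float4 pos       : POSITION;   // clip-space position
		float4 pos_world : TEXCOORD3;  // world-space position
		// ... texture coordinates, tangent frame, etc.
	};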
Pixel Shader of G-Buffer:
--------------------------------

// Interpolated inputs from the G-Buffer vertex shader
// (struct form inferred from the original loose parameter list)
struct PS_INPUT
{
	float4 Tex0m10		: TEXCOORD0;
	float4 TexL1L2		: TEXCOORD1;
	float4 TexDistL1L2	: TEXCOORD2;
	float4 PositionPixel	: TEXCOORD3;	// world-space pixel position
	float3 Tangent		: TANGENT;
	float4 Norm		: NORMAL;
	float3 Binormal		: BINORMAL;
	float  fSign		: VFACE;
};

RETURN_MRT RenderScenePS(PS_INPUT i)
{
	RETURN_MRT Ret;

	Ret.OutColor0 = I_ComputeAlbedoRT(i);

	// Depth is stored as the world-space distance from the eye to the pixel
	Ret.OutColor1.x = length(i.PositionPixel.xyz - eye_pos);
	Ret.OutColor1.y = 0.0f;
	Ret.OutColor1.z = 0.0f;
	Ret.OutColor1.w = 0.0f;

	Ret.OutColor2 = I_ComputeNormalRT(i);
	Ret.OutColor3 = I_ComputeLightRT(i);

	return(Ret);
}
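For context, this is roughly how such a 2-channel float target gets created in D3D9 (a sketch only; pDevice, width and height are placeholder names, not code from the post):

	// Sketch: creating the G32R32F render target that receives the depth
	IDirect3DTexture9 *pDepthRT = NULL;
	HRESULT hr = pDevice->CreateTexture(width, height, 1,
	                                    D3DUSAGE_RENDERTARGET,
	                                    D3DFMT_G32R32F,
	                                    D3DPOOL_DEFAULT,
	                                    &pDepthRT, NULL);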
Just to clarify: I found that if you have a 2-channel render target, e.g. D3DFMT_G32R32F, you still need to assign all 4 channels in the pixel shader; if not, the shader will fail to compile. Another comment: as you can see, I am keeping the depth in world space, so I compute the length between the world position of the pixel and the world position of the eye (the camera's position).

When we are done with the G-Buffer, we move on to the next step, the light transport step (or another step for testing purposes). One thing is clear: in this step we need to reconstruct the world position of each pixel from the depth stored in the previous step.

To reconstruct the world position of every pixel in our frame buffer, we need to trace a vector that starts at eye_pos (the position of the camera's eye) and goes through the world position of the pixel on the camera's near plane. We know the eye_pos value, so we need to find the world position of the pixel lying on the camera's near plane. As far as I know, we have two ways here.

The first involves going from the pixel's near-plane texture coordinates to a world position: we need to know the FOV of the camera to compute the width and height of the near plane, then transform the coordinates of the pixel to view-cube coordinates, and then, with a few more steps, finally obtain the world position of the near-plane pixel (phew...).

The second is an easier way and, as far as I know, 100% effective. We can take the coordinates of the 4 corners of the camera's near plane in normalized device coordinates (correct me if I'm wrong) and multiply each of these positions by the inverse of the ViewProjection matrix built from the camera's information. Look at the code:

//---------------------------------------------------------------------------------------------------------------
/// \brief Updates the world-space positions of the full-screen quad
/// \param inNeedAdjust If true, a re-adjustment is applied to the positions to be interpolated
//---------------------------------------------------------------------------------------------------------------
void CCore::UpdateFullScreenQuad(bool inNeedAdjust)
{
	D3DXVECTOR3 vA,vB,vC,vD;
	D3DXVECTOR4 vTemp;
	D3DXMATRIX mMInverse;
	TEXTUREDVERTEX *pBuff;
	HRESULT hr;

    // account for DirectX's texel center standard:
    //float u_adjust = 0.5f / (float)inWidth;
    //float v_adjust = 0.5f / (float)inHeight;

	D3DXMatrixInverse(&mMInverse,NULL,&(m_pCameraActive->GetViewProjMatrix()));

	vA = D3DXVECTOR3(-1.0f,-1.0f, 0.0f);
	vB = D3DXVECTOR3(-1.0f, 1.0f, 0.0f);
	vC = D3DXVECTOR3( 1.0f, 1.0f, 0.0f);
	vD = D3DXVECTOR3( 1.0f,-1.0f, 0.0f);

	// D3DXVec3Transform treats the input as (x, y, z, 1) and returns a 4D
	// vector WITHOUT dividing by w; only xyz are copied out below
	D3DXVec3Transform(&vTemp,&vA,&mMInverse);
	vA.x = vTemp.x; 
	vA.y = vTemp.y;
	vA.z = vTemp.z;
	D3DXVec3Transform(&vTemp,&vB,&mMInverse);
	vB.x = vTemp.x; 
	vB.y = vTemp.y;
	vB.z = vTemp.z;
	D3DXVec3Transform(&vTemp,&vC,&mMInverse);
	vC.x = vTemp.x; 
	vC.y = vTemp.y;
	vC.z = vTemp.z;
	D3DXVec3Transform(&vTemp,&vD,&mMInverse);
	vD.x = vTemp.x; 
	vD.y = vTemp.y;
	vD.z = vTemp.z;


    if (m_pFullScreenQuad)
    {
        hr = m_pFullScreenQuad->Lock(0, 0,(void**)&pBuff, 0);

		pBuff[0].x = -1.0f; pBuff[0].y = -1.0f; pBuff[0].z = 0.0f;
		pBuff[1].x = -1.0f; pBuff[1].y =  1.0f; pBuff[1].z = 0.0f;
		pBuff[2].x =  1.0f; pBuff[2].y =  1.0f; pBuff[2].z = 0.0f;
		pBuff[3].x =  1.0f; pBuff[3].y = -1.0f; pBuff[3].z = 0.0f;

		pBuff[0].PlaneWorldPos.x = vA.x; pBuff[0].PlaneWorldPos.y = vA.y; pBuff[0].PlaneWorldPos.z = vA.z; 
		pBuff[1].PlaneWorldPos.x = vB.x; pBuff[1].PlaneWorldPos.y = vB.y; pBuff[1].PlaneWorldPos.z = vB.z; 
		pBuff[2].PlaneWorldPos.x = vC.x; pBuff[2].PlaneWorldPos.y = vC.y; pBuff[2].PlaneWorldPos.z = vC.z; 
		pBuff[3].PlaneWorldPos.x = vD.x; pBuff[3].PlaneWorldPos.y = vD.y; pBuff[3].PlaneWorldPos.z = vD.z; 

        m_pFullScreenQuad->Unlock();
    }
}
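The TEXTUREDVERTEX type is not shown in the post; the code above assumes a layout along these lines (hypothetical):

	// Hypothetical vertex layout matching the writes above; the real
	// declaration is not shown in the post
	struct TEXTUREDVERTEX
	{
		float x, y, z;              // clip-space corner position (POSITION)
		float u, v;                 // texture coordinates (TEXCOORD0)
		D3DXVECTOR3 PlaneWorldPos;  // near-plane corner in world space (TEXCOORD1)
	};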

As we can see in this code, we take the positions of the 4 corners and multiply each one by the inverse of the ViewProjection matrix, i.e. inverse(View_Matrix * Projection_Matrix). So now we have all the corners in world space. We only need to pass them to the shader of the 'next step' and let the hardware interpolate them across all the pixels of the frame buffer. We end up with a pixel shader that receives a float3 holding the world-space position of that pixel on the near plane, ready to work with.

Then we only need to build a vector from eye_pos (the camera's world position) to the frame-buffer pixel's near-plane position, normalize it, multiply it by the depth stored in the G-Buffer, and then add eye_pos to the result. We end up with the real world position of this pixel.

Vertex Shader of 'next step':
------------------------------------

VS_OUTPUT RenderTexturedVS(
				float4 inPos		: POSITION, 
				float2 inTex0		: TEXCOORD0,
				float3 inPlaneWPos	: TEXCOORD1
		           )
{
	VS_OUTPUT output;

	// The quad positions are already in clip space, so world_view_proj
	// is expected to be (close to) identity for this pass
	output.pos = mul( inPos , world_view_proj );
	output.tex0 = inTex0;
	output.planeWPos = inPlaneWPos;

	return output;
}
Pixel Shader of 'next step':
------------------------------------

float4 RenderTexturedPS	(
				float2 inTex0 : TEXCOORD0,
				float3 inPlaneWPos : TEXCOORD1
			) : COLOR
{
	float4 diffuseafter = tex2D(LdrAfterSampler, inTex0);
	float4 BuffDist = tex2D(BuffDistSampler, inTex0);
	float4 Normal = tex2D(NormalRTSampler, inTex0);
	float Depth = tex2D(DepthSampler, inTex0).x;
	
	// Reconstruct the world position: ray from the eye through the
	// interpolated near-plane point, scaled by the stored distance
	float3 vWorldPos = (normalize(inPlaneWPos - eye_pos) * Depth) + eye_pos;

	// vWorldPos is not used yet; this test shader just passes the scene through
	return (diffuseafter);
}
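One easy way to sanity-check the reconstruction (a suggestion, not part of the original code) is to return vWorldPos itself instead of the scene color:

	// Debug visualization (hypothetical): tile the world position into color.
	// A correct reconstruction stays glued to the geometry as the camera
	// moves; any sliding or bending points at a bad near-plane position
	return float4(frac(vWorldPos * 0.01f), 1.0f);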
But I don't know why, I have problems with this algorithm in the following cases:

- If I choose NearDist = 1.0f, FarDist = 1500.0f for the camera, everything works perfectly.
- If I choose NearDist = 0.1f, FarDist = 1500.0f, everything goes wrong: the world positions start to bend in a strange manner, as if the world position depended on the camera's position and orientation.
- If I choose NearDist = 2.0f, FarDist = 1500.0f, everything goes wrong as well, bending the world positions by a different factor, but the bending effect is there.

Note that the reconstruction formula itself, eye_pos + normalize(planeWPos - eye_pos) * Depth, never references the near distance, so the dependence on NearDist has to come from how the near-plane corners are computed. What is the problem with a near distance different from 1.0f when depth is stored as the world-space distance from the camera to the pixel's world position?

Thanks for your time. I hope this methodology helps someone having problems with depth reconstruction like I had a few days ago :) and of course, if we can fix the last problem, it would be perfect ;)

Thanks again

LLORENS
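A likely culprit (an educated guess, not something confirmed in the thread): D3DXVec3Transform does not perform the homogeneous divide, and UpdateFullScreenQuad above throws vTemp.w away. For a standard D3D perspective projection, un-projecting an NDC point with z = 0 through the inverse ViewProjection yields w = 1/NearDist, so skipping the divide is harmless exactly when NearDist = 1.0f and wrong for every other value, which matches the symptoms described. A minimal sketch of the fix, under that assumption:

	// Sketch of the suspected fix: keep the divide by w when un-projecting
	// the NDC corners. D3DXVec3TransformCoord performs the divide internally,
	// so it can replace the Transform/copy pairs above
	D3DXVec3TransformCoord(&vA, &vA, &mMInverse);
	D3DXVec3TransformCoord(&vB, &vB, &mMInverse);
	D3DXVec3TransformCoord(&vC, &vC, &mMInverse);
	D3DXVec3TransformCoord(&vD, &vD, &mMInverse);

	// Or, keeping D3DXVec3Transform, divide manually (shown for vA):
	// D3DXVec3Transform(&vTemp, &vA, &mMInverse);
	// vA.x = vTemp.x / vTemp.w;
	// vA.y = vTemp.y / vTemp.w;
	// vA.z = vTemp.z / vTemp.w;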
Bump!

If someone else can help me with the last question, I would much appreciate it :)

Thanks

LLORENS
