# Normal Mapping With Deferred Rendering


## Recommended Posts

I understand how normal mapping works with forward rendering, and I think I know how I'm supposed to do it in deferred rendering, but I can't find any decent articles. If you can, please check out the source code in the final post of this thread, as it is roughly what I followed. It appears wrong: it looks like he is transforming a tangent-space normal into tangent space with that TBN matrix. If I am correct, then the question becomes: how do I do this in deferred rendering, where the normal g-buffer normals must be in world space? I have been looking into computing the inverse transpose of the TBN matrix, but there is no inverse intrinsic to do so, so I am a bit stumped. My object pass vertex shader:
VS_OUTPUT vs_main( VS_INPUT In )
{
VS_OUTPUT Out;

Out.Pos						= mul(float4(In.Pos,1.0), matWorldViewProjection);
Out.PosInWorld				= mul(float4(In.Pos,1.0), matWorld);
Out.Normal					= mul(float4(In.Normal,1.0), matWorld);
Out.TangentWorldMatrix[0]	= mul(float4(In.Tangent,1.0f), matWorld).xyz;
Out.TangentWorldMatrix[1]	= mul(float4(In.Binormal,1.0f), matWorld).xyz;
Out.TangentWorldMatrix[2]	= mul(float4(In.Normal,1.0f), matWorld).xyz;
Out.Tex						= In.Tex;

return Out;
}
My object pass pixel shader:
PS_OUTPUT ps_main( VS_OUTPUT In )
{
PS_OUTPUT Out;

float4		SampledNormal = tex2D(NormalMapSampler, In.Tex);
SampledNormal = normalize(2.0f * (SampledNormal - 0.5f));

float3 WorldNormal = normalize(mul(SampledNormal, In.TangentWorldMatrix));

Out.PosInWorld		= float4(In.PosInWorld,1.0f);

if ( HasNormalMap == true )
{
Out.NormalInWorld	= float4(WorldNormal,1.0f);
}
else
{
Out.NormalInWorld	= float4(In.Normal,1.0f);
}

Out.Albedo			= tex2D(TextureMapSampler, In.Tex);

return Out;
}
Another thing I have stumbled across just now is the hint that I should be filling the normal g-buffer with view-space normals and doing the lighting in the shading pass in view space. Should this be the case? Any help appreciated, thanks. [Edited by - Dave on July 20, 2009 2:43:41 PM]

##### Share on other sites
Hi,

Your shader code looks correct, but I have three questions:
* Does it work for flat surfaces (i.e. when HasNormalMap is false)?
* Is your vertex declaration for bumped surfaces correct? Did you clone your mesh with TBN basis vertex declaration?
* Why don't you pack your normal data? Look at the snippet below:
PS_DEFERRED_OUT PS_Deferred (VS_OUTPUT vsIn)
{
    PS_DEFERRED_OUT psOut = (PS_DEFERRED_OUT)0;
    //...
    //Do your normal calculations (with TBN basis or not) and obtain psOut.NormalInWorld
    //...
    psOut.NormalInWorld = 0.5 * (1 + psOut.NormalInWorld);
    //...
    return psOut;

    //In the deferred pass, you can obtain the real world-space normal with this formula:
    // float3 n = 2 * SampledNormalFromGBuffer - 1.0f;
}

Quote:
 Another thing I have stumbled across just now is the hint that I should be filling the normal g-buffer with view-space normals and doing the lighting in the shading pass in view space. Should this be the case?

Filling the G-buffer with view-space normal vectors is meaningful if you want to reconstruct normal.z. I mean, if you need an extra field in your G-buffer, you can store only ViewSpaceNormal.xy and reconstruct ViewSpaceNormal.z in the deferred pass with the formula ViewSpaceNormal.z = sqrt(1 - dot(ViewSpaceNormal.xy, ViewSpaceNormal.xy));

Hope this helps.
Rohat.

##### Share on other sites
* Does it work for flat surfaces (i.e. when HasNormalMap is false)?

Everything looks fine with the normals straight from the models, applied to the g buffer in world space.

* Is your vertex declaration for bumped surfaces correct? Did you clone your mesh with TBN basis vertex declaration?

I run this across all models passed in. It appears to generate the information and doesn't return any errors, nor are any passed back from D3D.

void ModelManager::GenerateTangents( Model* pNewModel )
{
	D3DXMesh	pMesh( pNewModel->GetMesh() );
	D3DVERTEXELEMENT9	EndElement = {0xFF,0,D3DDECLTYPE_UNUSED,0,0,0};
	D3DVERTEXELEMENT9	CurrentDeclaration[MAX_FVF_DECL_SIZE];
	pMesh->GetDeclaration(&CurrentDeclaration[0]);

	//	Find the end of the declaration and insert space for tangents.
	UInt32 EndIndex( 0 );
	for ( UInt32 Index( 0 ); Index < MAX_FVF_DECL_SIZE; ++Index )
	{
		if ( CurrentDeclaration[ Index ].Type == D3DDECLTYPE_UNUSED )
		{
			EndIndex = Index;
			break;
		}
	}

	CurrentDeclaration[ EndIndex + 2 ] = EndElement;

	D3DVERTEXELEMENT9& rTangentElement( CurrentDeclaration[EndIndex] );
	D3DVERTEXELEMENT9& rBiNormalElement( CurrentDeclaration[EndIndex+1] );
	D3DVERTEXELEMENT9& rPreviousElement( CurrentDeclaration[EndIndex-1] );

	rTangentElement.Offset = rPreviousElement.Offset + 12;
	rTangentElement.Method = D3DDECLMETHOD_DEFAULT;
	rTangentElement.Stream = rPreviousElement.Stream;
	rTangentElement.Type = D3DDECLTYPE_FLOAT3;
	rTangentElement.Usage = D3DDECLUSAGE_TANGENT;
	rTangentElement.UsageIndex = 0;

	rBiNormalElement.Offset = rTangentElement.Offset + 12;
	rBiNormalElement.Method = D3DDECLMETHOD_DEFAULT;
	rBiNormalElement.Stream = rPreviousElement.Stream;
	rBiNormalElement.Type = D3DDECLTYPE_FLOAT3;
	rBiNormalElement.Usage = D3DDECLUSAGE_BINORMAL;
	rBiNormalElement.UsageIndex = 0;

	D3DXMesh NewMesh;
	HRESULT Result = pMesh->CloneMesh( D3DXMESH_VB_MANAGED | D3DXMESH_IB_MANAGED, &CurrentDeclaration[0], m_Device, &NewMesh );
	pNewModel->SetMesh( NewMesh );

	DWORD* pCharBuffer( new DWORD[3 * NewMesh->GetNumFaces()] );
	NewMesh->GenerateAdjacency( 0.001f, pCharBuffer );
	D3DXCleanMesh( D3DXCLEAN_BACKFACING, NewMesh, pCharBuffer, &NewMesh, pCharBuffer, NULL );

	static UInt32 TexStage(0);
	static UInt32 TangentStage(0);
	static UInt32 BinormalStage(0);
	Result = D3DXComputeTangent( pNewModel->GetMesh(), TexStage, TangentStage, BinormalStage, TRUE, pCharBuffer );

	if ( Result == D3DERR_INVALIDCALL )
		Result = Result;
	else if ( Result == D3DXERR_INVALIDDATA )
		Result = Result;
}

* Why don't you pack your normal data? Look at the snippet below:

I am not ready to optimise yet, but thanks.

##### Share on other sites
A TBN matrix actually transforms from tangent-space to object-space...not the other way around. Typically with forward rendering you take your TBN matrix and rotate it by your world matrix (so that your TBN matrix will now transform from tangent-space to world-space) and then transform your light direction and eye direction by the transpose of the TBN matrix. The shader code usually looks like this in samples:

tbnMatrix = float3x3(mul(In.Tangent, WorldMatrix),
                     mul(In.Binormal, WorldMatrix),
                     mul(In.Normal, WorldMatrix));
Out.EyeDirection = mul(tbnMatrix, eyeDirection);
Out.LightDirection = mul(tbnMatrix, lightDirection);

The part that most people miss (and most sample code never mentions) is that when calculating the eye/light directions, the order of the vector and the matrix is reversed from what you normally do (usually you do mul(vector, matrix)). This is ultimately the same as taking the transpose of the matrix and doing the transformation in the normal order, like this:

tangentToWorld = float3x3(mul(In.Tangent, WorldMatrix),
                          mul(In.Binormal, WorldMatrix),
                          mul(In.Normal, WorldMatrix));
worldToTangent = transpose(tangentToWorld);
Out.EyeDirection = mul(eyeDirection, worldToTangent);
Out.LightDirection = mul(lightDirection, worldToTangent);

The reason taking the transpose works is that for an orthogonal transformation matrix, the transpose is equal to its inverse. In most cases your tangent basis will be orthogonal...or at least close enough to it.

As far as using view space instead of world space: in most cases it's used because it can be easier or cheaper to reconstruct view-space position from depth, and it allows you to simplify some parts of the lighting calculations. It doesn't really matter too much, as long as you keep track of what coordinate space you're in and make sure that everything matches.

##### Share on other sites
Quote:
 Original post by programci_84:
 Filling G-Buffer with View-Space normal vectors is meaningful if you want to reconstruct normal.z. I mean, if you need an extra field in your GBuffer, you can store ViewSpaceNormal.xy and you can reconstruct ViewSpaceNormal.z by using ViewSpaceNormal.z = sqrt(1 - dot (ViewSpaceNormal.xy, ViewSpaceNormal.xy)); formula in deferred pass.

Be careful with that. You can't assume that the sign of z is always negative in view-space (or positive, if you're using right-handed coordinates) when you're using perspective projection or normal maps. Insomniac has some pictures showing the errors you can get here.

##### Share on other sites
OK, so here is the ultimate screenshot of doom:

As you can see, the normal mapping isn't quite right. It looks like the tangent-space normal isn't being converted to world space correctly. The demo shows it rotating, and it's clear that something is wrong because the normal map on each side of the cube looks different.

MJP, I'm sure the answer is in your post somewhere :D.

##### Share on other sites
Crap! I actually saw the problem before in your code, and completely forgot to mention it by the time I was done rambling. This part right here:

Out.Normal = mul(float4(In.Normal,1.0), matWorld);
Out.TangentWorldMatrix[0] = mul(float4(In.Tangent,1.0f), matWorld).xyz;
Out.TangentWorldMatrix[1] = mul(float4(In.Binormal,1.0f), matWorld).xyz;
Out.TangentWorldMatrix[2] = mul(float4(In.Normal,1.0f), matWorld).xyz;

You don't want your w component to be 1.0 when you're transforming your Normal, Binormal, or Tangent (or any normalized direction vector, for that matter). The reason is that if w is 1.0, transforming by a 4x4 matrix will translate the vector as well. However, you don't want to translate direction vectors; you only want to rotate them. Try this instead:

Out.Normal = mul(In.Normal, matWorld);
Out.TangentWorldMatrix[0] = mul(In.Tangent, matWorld).xyz;
Out.TangentWorldMatrix[1] = mul(In.Binormal, matWorld).xyz;
Out.TangentWorldMatrix[2] = Out.Normal;

##### Share on other sites
Hey,

I'm afraid that didn't appear to have any effect at all. Here is the updated shader:
VS_OUTPUT vs_main( VS_INPUT In )
{
	VS_OUTPUT Out;

	Out.Pos						= mul(float4(In.Pos,1.0), matWorldViewProjection);
	Out.PosInWorld				= mul(float4(In.Pos,1.0), matWorld);
	Out.Normal					= mul(In.Normal, matWorld);
	Out.TangentWorldMatrix[0]	= mul(In.Tangent, matWorld).xyz;
	Out.TangentWorldMatrix[1]	= mul(In.Binormal, matWorld).xyz;
	Out.TangentWorldMatrix[2]	= Out.Normal;
	Out.Tex						= In.Tex;

	return Out;
}

PS_OUTPUT ps_main( VS_OUTPUT In )
{
	PS_OUTPUT Out;

	float4	SampledNormal = tex2D(NormalMapSampler, In.Tex);
	SampledNormal = normalize(2.0f * (SampledNormal - 0.5f));

	float3 WorldNormal = normalize(mul(SampledNormal, In.TangentWorldMatrix));

	Out.PosInWorld		= float4(In.PosInWorld,1.0f);

	if ( HasNormalMap == true )
	{
		Out.NormalInWorld	= float4(WorldNormal,1.0f);
	}
	else
	{
		Out.NormalInWorld	= float4(In.Normal,1.0f);
	}

	Out.Albedo			= tex2D(TextureMapSampler, In.Tex);

	return Out;
}

Thanks for the help,

##### Share on other sites
Hmmm...the only other issue I see with your code is that you're not normalizing your Normal/Binormal/Tangent after interpolation before using it to transform the normal sampled from the normal map.

##### Share on other sites
Quote:
 Original post by MJP:
 Hmmm...the only other issue I see with your code is that you're not normalizing your Normal/Binormal/Tangent after interpolation before using it to transform the normal sampled from the normal map.

I could be wrong, but that wouldn't account for the wrong 'direction' look of the normal map, would it?
