# Where to calculate TBN matrix?


## Recommended Posts

Hi,

In my normal mapping implementation I currently calculate the TBN matrix in my vertex shader and pass it to the pixel shader.

There I take the normal from the normal map and transform it to world space with the TBN matrix, for the lighting calculations.

All works fine.

Now I was wondering, in some implementations I've seen, people send the normal/binormal/tangent from the VS to the PS and build the TBN matrix in the pixel shader. Or somewhere in between, do the world transformation of those 3 vectors in the VS, send to PS and normalize there + build TBN matrix.

I've tried out both approaches but with no visible difference.

Can someone tell me why it would be beneficial to create the TBN matrix in a pixel shader instead of the vertex shader?

(the final normal is still for the pixel, because I transform the normal from the normal map in the pixel shader)

Note: when I calculate the TBN matrix in the VS, I also normalize the final resulting normal: normalize(mul(normalMapNormal, TBN)).

##### Share on other sites

Now I was wondering, in some implementations I've seen, people send the normal/binormal/tangent from the VS to the PS and build the TBN matrix in the pixel shader.

What do you mean by "build the TBN matrix"? Like this?:

float3x3 tbn = float3x3(input.Tangent, input.Binormal, input.Normal);

That's a no-op; it doesn't make a difference where you do it. It doesn't change anything or add any shader instructions. It's just syntactic sugar: with the normal as a row vector, mul(normalMapNormal, tbn) is shorthand for normalMapNormal.x * input.Tangent + normalMapNormal.y * input.Binormal + normalMapNormal.z * input.Normal.
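If it helps to see the equivalence concretely, here is a small numeric sanity check (Python/NumPy standing in for the HLSL math; all vector values are made up for illustration):

```python
import numpy as np

# Made-up, non-orthogonal basis vectors and normal-map value.
T = np.array([0.9, 0.1, 0.0])
B = np.array([0.0, 0.8, 0.2])
N = np.array([0.1, 0.0, 1.0])
n = np.array([0.2, 0.3, 0.9])

tbn = np.array([T, B, N])  # float3x3(T, B, N): T, B, N become the rows

# HLSL's mul(n, tbn) treats n as a row vector: a weighted sum of the rows.
row_mul = n @ tbn
assert np.allclose(row_mul, n[0] * T + n[1] * B + n[2] * N)

# The dot-product form is the transposed product, i.e. mul(tbn, n).
col_mul = tbn @ n
assert np.allclose(col_mul, [np.dot(n, T), np.dot(n, B), np.dot(n, N)])
```

Either way, constructing the float3x3 itself costs nothing; the actual work is the same three multiply-adds regardless of where the matrix is assembled.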

##### Share on other sites
Also, "binormal" is the wrong term here; the correct term is "bitangent".

##### Share on other sites

Thanks.

That makes sense.

So basically it doesn't matter if I send 3 float3's from the VS to the PS or 1 float3x3.

And how about multiplying the VS input normal/tangent/bitangent with the world matrix?

I'd say I do that in the VS because then it doesn't need to be executed per pixel (and the result is the same I suppose?)

Same for normalizing the result of the normal/tangent/bitangent multiplied with the world matrix.

Would you do those 2 actions in the VS?

##### Share on other sites

So basically it doesn't matter if I send 3 float3's from the VS to the PS or 1 float3x3.

Correct.

And how about multiplying the VS input normal/tangent/bitangent with the world matrix?
I'd say I do that in the VS because then it doesn't need to be executed per pixel (and the result is the same I suppose?)

Yes, exactly. Do it in the VS for this reason.

Same for normalizing the result of the normal/tangent/bitangent multiplied with the world matrix.
Would you do those 2 actions in the VS?

Interpolated unit vectors don't retain their unit length (consider what would happen if they pointed 180 degrees from each other and were weighted 0.5 each). So you'll need to re-normalize in the pixel shader if you want correct results (you'll probably want to keep normalizing in the vertex shader too).
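The shrinking effect is easy to verify numerically; a small sketch in Python/NumPy (the vertex normals are made-up values):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Two unit normals at neighbouring vertices, 90 degrees apart.
n0 = np.array([1.0, 0.0, 0.0])
n1 = np.array([0.0, 1.0, 0.0])

# Halfway across the triangle the rasterizer outputs the linear blend:
mid = 0.5 * n0 + 0.5 * n1
print(np.linalg.norm(mid))   # ~0.707, no longer unit length

# Nearly opposite vectors collapse toward zero length:
n2 = normalize(np.array([-1.0, 0.01, 0.0]))
mid2 = 0.5 * n0 + 0.5 * n2
print(np.linalg.norm(mid2))  # ~0.005, almost degenerate
```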

Edited by phil_t

##### Share on other sites

Thanks, I've got it in place.

Here it is (just the relevant code snippets):

```hlsl
struct VS_OUTPUT
{
    float4 Pos           : POSITION0;
    float3 NormalWorld   : TEXCOORD1;
    float2 TexCoord      : TEXCOORD2;
    float3 wPos          : TEXCOORD3;
    float3 ViewDir       : TEXCOORD4;
    float3 BiNormalWorld : TEXCOORD5;
    float3 TangentWorld  : TEXCOORD6;
};

VS_OUTPUT VS_function(VS_INPUT input)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;

    float4 worldPosition = mul(input.Pos, World);
    Out.Pos = mul(worldPosition, ViewProj);

    Out.TexCoord = input.TexCoord;
    Out.wPos = worldPosition.xyz;
    Out.ViewDir = normalize(CameraPos - Out.wPos);

    // World-space tangent basis for normal mapping
    Out.TangentWorld  = normalize(mul(input.Tangent, (float3x3)World));
    Out.BiNormalWorld = normalize(mul(input.Binormal, (float3x3)World));
    Out.NormalWorld   = normalize(mul(input.Normal, (float3x3)World));

    return Out;
}
```

```hlsl
// part of the pixel shader
float4 PS_function(VS_OUTPUT input) : COLOR0
{
    float4 textureColor = tex2D(textureSampler, input.TexCoord);

    // Rows are the world-space basis vectors; used with mul(v, M) below,
    // this maps the tangent-space normal into world space.
    float3x3 tangentToWorld = float3x3(normalize(input.TangentWorld),
                                       normalize(input.BiNormalWorld),
                                       normalize(input.NormalWorld));

    // Decode the normal map from [0, 1] to [-1, 1], then bring it to world space
    float3 normalMap = normalize(2.0 * tex2D(normalMapSampler, input.TexCoord).xyz - 1.0);
    normalMap = normalize(mul(normalMap, tangentToWorld));

    // ... lighting calculations using the world-space normal follow
}
```

##### Share on other sites
8 normalize calls, and each one involves a square root. In Frank D. Luna's implementation (which I use), there is only 1 normalize call in the pixel shader.

##### Share on other sites

Although it's not directly related to the TBN matrix, I want to point out that you can't compute the view vector in the vertex shader and interpolate it like you're doing: normalized direction vectors don't interpolate linearly in world space. To get correct view vectors you unfortunately need to send the world-space position in an interpolator, and then compute the view vector in the pixel shader.
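A small numeric sketch of why this matters (Python/NumPy; the camera and vertex positions are made-up values). Interpolating per-vertex view directions gives a visibly different vector than computing the direction from the interpolated position, especially when the two vertices are at very different distances from the camera:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

cam = np.array([0.0, 0.0, 0.0])
p0 = np.array([-1.0, 0.0, 10.0])  # far vertex
p1 = np.array([ 1.0, 0.0,  1.0])  # near vertex

# VS approach: normalize per vertex, let the rasterizer blend, renormalize.
v_interp = normalize(0.5 * normalize(cam - p0) + 0.5 * normalize(cam - p1))

# Correct: interpolate the position, compute the view dir per pixel.
v_pixel = normalize(cam - (0.5 * p0 + 0.5 * p1))

print(v_interp)  # noticeably tilted toward the near vertex's direction
print(v_pixel)   # the true direction, [0, 0, -1]
```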

##### Share on other sites
Thanks both.
@Newtechnology: how would you do that with just one normalize in the PS, you skip normalizing the vectors when they come into the PS? (normal, bitangent and tangent)

@OsmanB: but do I need the view vector per pixel instead of per vertex?

##### Share on other sites

Thanks both.
@Newtechnology: how would you do that with just one normalize in the PS, you skip normalizing the vectors when they come into the PS? (normal, bitangent and tangent)

Yeah, you 'construct' the TBN matrix using non-normalized N, B and T variables. Then when you use this matrix to transform your normal-map value into a normal, it won't be normalized either -- so you normalize this final value only.
It's probably very slightly less accurate, but is much cheaper!
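A quick sketch of that trade-off in Python/NumPy (the basis lengths and the normal-map sample are made-up values): building the matrix from the raw interpolated vectors and normalizing only the final result stays close to the fully normalized version, while saving three normalizes per pixel:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Interpolated (hence non-unit) basis vectors arriving at the pixel shader.
T = np.array([0.9, 0.0, 0.0])
B = np.array([0.0, 0.8, 0.0])
N = np.array([0.0, 0.0, 0.85])
n_map = np.array([0.1, 0.2, 0.97])  # decoded normal-map sample

# Expensive version: normalize T, B, N first, then the result (4 sqrt calls).
tbn_exact = np.array([normalize(T), normalize(B), normalize(N)])
n_exact = normalize(n_map @ tbn_exact)

# Cheap version: build the matrix from the raw vectors, normalize once.
tbn_cheap = np.array([T, B, N])
n_cheap = normalize(n_map @ tbn_cheap)

print(n_exact, n_cheap)  # close, but not identical when the lengths differ
```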
