# 3D Result calculated against WorldInverseTranspose is not normalized


I have a very simple vertex/pixel shader for rendering a bunch of instances with a very simple lighting model.

When testing, I noticed that the instances became dimmer as the world transform's scale increased. I determined that this was because the value of `float3 normal = mul(input.Normal, WorldInverseTranspose);` was shrinking with the increased scaling of the world transform, even though its direction appeared to be correct. To address this, I had to add `normal = normalize(normal);`.

I do not, for the life of me, understand why. `WorldInverseTranspose` contains all of the components of the world transform (`SetValueTranspose(Matrix.Invert(world * modelTransforms[mesh.ParentBone.Index]))`), and the calculation appears to be correct as is.
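For reference, writing $W$ for the combined world transform above, those two calls bind

$$\texttt{WorldInverseTranspose} = \left(W^{-1}\right)^{\mathsf{T}},$$

i.e. the inverse is computed on the CPU and the transpose is applied as the value is uploaded.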

Why does the value require normalization? Here is the relevant shader code:

```hlsl
// The global parameter declarations (World, View, Projection, WorldInverseTranspose,
// DiffuseLightDirection, DiffuseColor, DiffuseIntensity, Texture, Sampler, the
// input/output structs, and the SpriteSheetBoundsToTextureCoordinate helper)
// were lost from the original post and are omitted here.

float4 CalculatePositionInWorldViewProjection(float4 position, matrix world, matrix view, matrix projection)
{
    float4 worldPosition = mul(position, world);
    float4 viewPosition = mul(worldPosition, view);
    return mul(viewPosition, projection);
}

// Vertex shader; the signature is reconstructed and the struct names are assumed.
VertexShaderOutput InstancingVertexShader(VertexShaderInput input)
{
    VertexShaderOutput output;

    // Per-instance world transform, combined with the shared World matrix.
    matrix instanceWorldTransform = mul(World, transpose(input.InstanceTransform));

    output.Position = CalculatePositionInWorldViewProjection(input.Position, instanceWorldTransform, View, Projection);

    float3 normal = mul(input.Normal, WorldInverseTranspose);
    normal = normalize(normal); // the fix: restore unit length after the scaled transform

    float lightIntensity = -dot(normal, DiffuseLightDirection);
    output.Color = float4(saturate(DiffuseColor * DiffuseIntensity).xyz * lightIntensity, 1.0f);

    output.TextureCoordinate = SpriteSheetBoundsToTextureCoordinate(input.TextureCoordinate, input.SpriteSheetBounds);

    return output;
}

// Pixel shader; the signature is reconstructed.
float4 InstancingPixelShader(VertexShaderOutput input) : SV_Target
{
    return Texture.Sample(Sampler, input.TextureCoordinate) * input.Color;
}
```


##### Reply from Hodgman

If your world matrix contains scaling, the normals will be scaled, which ends up scaling the lighting. Normalisation is required if you use scaling.

The inverse-transpose is required if you use non-uniform scaling, as it makes sure that normals are scaled in such a way that they become the normal of the newly scaled faces. It does not remove the need for normalisation.
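To make the shrinking concrete with a quick worked case: for a world matrix that is a rotation $R$ with uniform scale $s$, i.e. $W = sR$, the normal matrix is

$$\left(W^{-1}\right)^{\mathsf{T}} = \frac{1}{s}R,$$

so a unit input normal comes out with length $1/s$: the direction survives, but the diffuse term $-\,\mathbf{n}\cdot\mathbf{l}$ is scaled by $1/s$, which is exactly the dimming observed as the scale grows.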

Also, normalisation should always be performed in the pixel shader even without scaling, as the interpolation of three unit-length vertex normals is unlikely to still be unit-length.
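A minimal sketch of that per-pixel renormalisation, assuming the output struct is extended with a float3 Normal field and the lighting is moved into the pixel shader (the shader name is hypothetical; the globals are the ones from the original shader):

```hlsl
// Sketch only: assumes VertexShaderOutput now carries the raw world-space
// normal in a float3 Normal field.
float4 LitInstancingPixelShader(VertexShaderOutput input) : SV_Target
{
    // Interpolating three unit-length vertex normals across a triangle
    // generally yields a shorter-than-unit vector, so renormalise per pixel.
    float3 n = normalize(input.Normal);

    float lightIntensity = -dot(n, DiffuseLightDirection);
    float4 lighting = float4(saturate(DiffuseColor * DiffuseIntensity).xyz * lightIntensity, 1.0f);

    return Texture.Sample(Sampler, input.TextureCoordinate) * lighting;
}
```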

On 11/21/2017 at 2:07 AM, Hodgman said:

> If your world matrix contains scaling, the normals will be scaled, which ends up scaling the lighting. Normalisation is required if you use scaling.
>
> The inverse-transpose is required if you use non-uniform scaling, as it makes sure that normals are scaled in such a way that they become the normal of the newly scaled faces. It does not remove the need for normalisation.
>
> Also, normalisation should always be performed in the pixel shader even without scaling, as the interpolation of three unit-length vertex normals is unlikely to still be unit-length.

Thanks for clarifying. This was the gap in my understanding.

### Similar Content

By recp:

Hello,

How can I get the center of a scene, node, or model? What is the best way to do this? The scene structure is:

```
Scene
  |
  o - Node[s]
       |
       o - Model[s] // Mesh
            |
            o - Primitive[s] // Sub-Mesh
                 |
                 o local AABB and world AABB
```

I'm using each AABB's center as the center of its primitive, and I'm combining all of the AABBs to build an AABB for the whole scene. When I visualized the boxes, they seemed to work as expected.
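For what it's worth, a minimal sketch of that combine step, assuming a hypothetical min/max-corner AABB representation (written as HLSL-style code to match the thread):

```hlsl
// Hypothetical AABB as min/max corners; the union of two boxes is the
// component-wise min of the minima and max of the maxima.
struct AABB
{
    float3 minCorner;
    float3 maxCorner;
};

AABB MergeAABB(AABB a, AABB b)
{
    AABB merged;
    merged.minCorner = min(a.minCorner, b.minCorner);
    merged.maxCorner = max(a.maxCorner, b.maxCorner);
    return merged;
}
```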
But I need to get the center of the scene, node, or model to apply rotation around that center, because I'm using a trackball for rotating the attached node or model. Currently I'm using the scene AABB's center as the rotation point (pivot); for a single object it works. After a rotation completes, the center of a primitive remains the same, which is what it should do, I think. But if I load a scene that contains multiple models or primitives, the center of the scene's AABB (which I'm using as the center of the scene) moves after a rotation completes, because every time a rotation completes a new AABB is calculated for the scene by combining all of the sub-AABBs. I think this may be normal, because there is no balance between the AABBs while rotating. For instance, if I use two identical cubes without rotations, the center of the new scene AABB remains the same.
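For reference, rotating about a pivot $p$ is typically composed as

$$M = T(p)\,R\,T(-p),$$

so if $p$ is re-derived from the re-fitted scene AABB after every rotation, the pivot itself drifts along with the box, which matches the behaviour described above.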
My solution (it seems to work for now):
I created a new center member (vec3) in the scene struct and compute it like this:

```
scene->center    = vec3(0);
scene->primCount = 0;

for prim in primitivesInFrustum
    scene->center += prim->aabb->center;
    scene->primCount++;

scene->center = scene->center / scene->primCount
```

Now I'm using this center as the center of the scene instead of scene->aabb->center, and it seems to work.
My question is: what is the best way to get the center of a scene, node, or model? How do you get the center for rotation? Any suggestions? Am I doing it right?
Thanks