
# 3D Result calculated against WorldInverseTranspose is not normalized

## Recommended Posts

I have a very simple vertex/pixel shader for rendering a bunch of instances with a very simple lighting model.

When testing, I noticed that the instances were becoming dimmer as the world transform's scale increased. I determined that this was because the value of `float3 normal = mul(input.Normal, WorldInverseTranspose);` was shrinking as the scale grew, even though its direction appeared to be correct. To address this, I had to add `normal = normalize(normal);`

I do not, for the life of me, understand why. WorldInverseTranspose contains all of the components of the world transform (`SetValueTranspose(Matrix.Invert(world * modelTransforms[mesh.ParentBone.Index]))`) and the calculation appears to be correct as is.

Why does the value require normalization? Here is the shader (effect parameter and struct declarations omitted):

```hlsl
float4 CalculatePositionInWorldViewProjection(float4 position, matrix world, matrix view, matrix projection)
{
    float4 worldPosition = mul(position, world);
    float4 viewPosition = mul(worldPosition, view);
    return mul(viewPosition, projection);
}

VertexShaderOutput VS(VertexShaderInput input)
{
    VertexShaderOutput output;

    matrix instanceWorldTransform = mul(World, transpose(input.InstanceTransform));

    output.Position = CalculatePositionInWorldViewProjection(input.Position, instanceWorldTransform, View, Projection);

    float3 normal = mul(input.Normal, WorldInverseTranspose);
    normal = normalize(normal);

    float lightIntensity = -dot(normal, DiffuseLightDirection);
    output.Color = float4(saturate(DiffuseColor * DiffuseIntensity).xyz * lightIntensity, 1.0f);

    output.TextureCoordinate = SpriteSheetBoundsToTextureCoordinate(input.TextureCoordinate, input.SpriteSheetBounds);

    return output;
}

float4 PS(VertexShaderOutput input) : SV_Target
{
    return Texture.Sample(Sampler, input.TextureCoordinate) * input.Color;
}
```

Edited by OpaqueEncounter


If your world matrix contains scaling, the normals will be scaled, which ends up scaling the lighting. Normalisation is required if you use scaling.
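Concretely, assuming the world matrix $M$ is just a rotation $R$ times a uniform scale $s$ (and HLSL's row-vector `mul` convention), the normal matrix works out to

$$(M^{-1})^{\mathsf T} = \big((sR)^{-1}\big)^{\mathsf T} = \big(\tfrac{1}{s}\,R^{-1}\big)^{\mathsf T} = \tfrac{1}{s}\,R,$$

so a unit-length input normal leaves the multiply with length $1/s$: its direction is right, but `dot(normal, DiffuseLightDirection)`, and with it the lighting, shrinks as the scale grows. That is exactly the dimming described above.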

The inverse-transpose is required if you use non-uniform scaling, as it makes sure that normals are scaled in such a way that they become the normal of the newly scaled faces. It does not remove the need for normalisation.
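The perpendicularity argument fits in one line: take any tangent $t$ lying in a face, so $n \cdot t = 0$. Transforming the tangent by $M$ and the normal by $(M^{-1})^{\mathsf T}$ (row vectors again) gives

$$\big(n\,(M^{-1})^{\mathsf T}\big)\cdot\big(t\,M\big) = n\,(M^{-1})^{\mathsf T} M^{\mathsf T}\, t^{\mathsf T} = n\,(M M^{-1})^{\mathsf T}\, t^{\mathsf T} = n \cdot t = 0,$$

so the transformed vector stays perpendicular to the transformed face for any invertible $M$, uniform or not; nothing in this constrains its length, however, hence the normalize.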

Also, normalisation should always be performed in the pixel shader even without scaling, as the interpolation of three unit-length vertex normals is unlikely to still be unit-length.
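A minimal sketch of that last point, reusing the parameters and helpers from the shader above (the output struct and entry-point names here are illustrative, and the `(float3x3)` cast assumes `WorldInverseTranspose` is declared as a full 4x4 matrix):

```hlsl
struct VSOutputLit
{
    float4 Position          : SV_Position;
    float3 WorldNormal       : NORMAL;     // deliberately left unnormalized
    float2 TextureCoordinate : TEXCOORD0;
};

VSOutputLit VS_Lit(VertexShaderInput input)
{
    VSOutputLit output;
    matrix instanceWorldTransform = mul(World, transpose(input.InstanceTransform));
    output.Position = CalculatePositionInWorldViewProjection(
        input.Position, instanceWorldTransform, View, Projection);
    // Rotate/scale the normal into world space; no normalize here.
    output.WorldNormal = mul(input.Normal, (float3x3)WorldInverseTranspose);
    output.TextureCoordinate = SpriteSheetBoundsToTextureCoordinate(
        input.TextureCoordinate, input.SpriteSheetBounds);
    return output;
}

float4 PS_Lit(VSOutputLit input) : SV_Target
{
    // Interpolating three unit-length normals across a triangle generally
    // yields a shorter-than-unit vector, so re-normalize per pixel.
    float3 normal = normalize(input.WorldNormal);
    float lightIntensity = -dot(normal, DiffuseLightDirection);
    float4 lit = float4(saturate(DiffuseColor * DiffuseIntensity).xyz * lightIntensity, 1.0f);
    return Texture.Sample(Sampler, input.TextureCoordinate) * lit;
}
```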

On 11/21/2017 at 2:07 AM, Hodgman said:

> If your world matrix contains scaling, the normals will be scaled, which ends up scaling the lighting. Normalisation is required if you use scaling.
>
> The inverse-transpose is required if you use non-uniform scaling, as it makes sure that normals are scaled in such a way that they become the normal of the newly scaled faces. It does not remove the need for normalisation.
>
> Also, normalisation should always be performed in the pixel shader even without scaling, as the interpolation of three unit-length vertex normals is unlikely to still be unit-length.

Thanks for clarifying. This was the gap in my understanding.

### Similar Content

• Hi,
Can anyone point me in the right direction on how to solve this?
I have a flat mesh made of many quads (1x1 each), each split into two triangles (generated procedurally).
What I want to achieve is to "merge" small quads into bigger ones (shown in picture 01). English is not my mother tongue and my search got no results... maybe I'm just phrasing the question wrong.
I have an array[][] where I store "map" information; for now I look for blobs of the same value in it, then create one quad for each position, and at the end create a mesh from all of them.
Is there a good algorithm for creating a mesh between random points on the same plane? The fewer triangles the better. Or for "de-tessellating" this into bigger/fewer triangles/quads?
I would also like to find "edges" and create "faces" between edge points (picture 02 shows what I want to achieve).
No need for complete code; it would be nice if someone could just point me in the right direction.
Thanks

• Hi,

I am working on a project where I'm trying to use Forward Plus rendering for point lights. I have a simple reflective scene with many point lights moving around it. I am using an effects file (.fx) to keep my shaders in one place. I am having a problem with the compute shader code: I cannot get it to calculate the tiles and lighting properly.

Is there anyone willing to help me set up my compute shader?
Thank you in advance for any replies and interest!

• Hi
I have a procedurally generated tiled landscape, and want to apply 'regional' information to the tiles at runtime; so Forests, Roads - pretty much anything that could be defined as a 'region'. Up until now I've done this by creating a mesh defining the 'region' on the CPU and interrogating that mesh during landscape tile generation; I then add regional information to the landscape tile via a series of per-vertex boolean properties. For each landscape tile vertex I do a ray-mesh intersection against the 'region' mesh and get some value from that mesh.

For example my landscape vertex could be:

```csharp
struct Vtx { Vector3 Position; bool IsForest; bool IsRoad; bool IsRiver; }
```

I would then have a region mesh defining a forest, another defining rivers, etc. When generating my landscape vertices I do an intersect check against the various 'region' meshes to see what kind of landscape each vertex falls within.

My ray-mesh intersection code isn't particularly fast, and there may be many 'region' meshes to interrogate, so I want to see if I can move this work onto the GPU: when I create a set of tile vertices I would call a compute (or other) shader, pass the region mesh to it, and interrogate that mesh inside the shader. The output would be a buffer where all the landscape-vertex boolean values have been filled in.

The way I see this being done is to pass two RWStructuredBuffers to a compute shader, one containing the landscape vertices and the other containing some definition of the region mesh (possibly the region might consist of two buffers, one with positions and one with indices). The compute shader would do a ray-mesh intersection check for each landscape vertex and set the boolean flags on a corresponding output buffer; a sketch along these lines follows this post.

In theory this is a parallelisable operation (no landscape vertex relies on another for its values), but I've not seen any examples of a ray-mesh intersection being done in a compute shader, so I'm wondering if my approach is wrong and that is the reason I've not seen any examples. I'd appreciate comments on the following:

- Is this a really bad idea?
- If no one does it that way, does everyone use a texture to define this kind of 'region' information? If so, given that I've only got a small number of possible region types, what texture format would be appropriate? 32 bits seems really wasteful.
- Is there a common alternative approach to adding information to a basic height-mapped tile system that would perform well for runtime-generated tiles?

Thanks
Phillip
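For what it's worth, here is a minimal sketch of the kind of kernel described above, under several loudly-stated assumptions: all struct, buffer, and function names are hypothetical; the inputs are plain StructuredBuffers since they are only read; the intersection test is standard Möller-Trumbore; and a single upward "first hit" ray is a simplification (a closed region volume would want an even/odd crossing count instead).

```hlsl
// Hypothetical layout: one thread per landscape vertex, brute-force loop
// over the region triangles, writes 1 to Flags[i] if the up-ray hits.
struct LandscapeVertex { float3 Position; };
struct RegionTriangle  { float3 A; float3 B; float3 C; };

StructuredBuffer<LandscapeVertex> Vertices   : register(t0);
StructuredBuffer<RegionTriangle>  RegionTris : register(t1);
RWStructuredBuffer<uint>          Flags      : register(u0); // 1 = in region

bool RayHitsTriangle(float3 origin, float3 dir, RegionTriangle tri)
{
    // Möller-Trumbore ray/triangle intersection.
    float3 e1 = tri.B - tri.A;
    float3 e2 = tri.C - tri.A;
    float3 p  = cross(dir, e2);
    float  det = dot(e1, p);
    if (abs(det) < 1e-6f) return false;          // ray parallel to triangle
    float  invDet = 1.0f / det;
    float3 s = origin - tri.A;
    float  u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    float3 q = cross(s, e1);
    float  v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    return dot(e2, q) * invDet > 0.0f;           // hit in front of the origin
}

[numthreads(64, 1, 1)]
void MarkRegionCS(uint3 id : SV_DispatchThreadID)
{
    uint vertexCount, triCount, stride;
    Vertices.GetDimensions(vertexCount, stride);
    RegionTris.GetDimensions(triCount, stride);
    if (id.x >= vertexCount) return;

    float3 origin = Vertices[id.x].Position;
    float3 up     = float3(0.0f, 1.0f, 0.0f);    // cast up into the region mesh

    uint hit = 0;
    for (uint i = 0; i < triCount && hit == 0; ++i)
        hit = RayHitsTriangle(origin, up, RegionTris[i]) ? 1u : 0u;

    Flags[id.x] = hit;
}
```

A dispatch of ceil(vertexCount / 64) thread groups covers every vertex, once per region mesh (or per flag), and the Flags buffer can then be consumed by the tile-generation pass or read back to the CPU.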
• By GytisDev
Hello,
without going into any details, I am looking for articles, blogs, or general advice about city-building and RTS games. I tried to search for these on my own, but I would like to see your input as well. I want to make a very simple version of a game like Banished or Kingdoms and Castles, where I would be able to place two types of buildings, make farms, and cut trees for resources while controlling a single worker. I have some trouble understanding how these games work in the back-end: how the various data about the map and objects can be stored, how grids work, how to implement a work system (e.g. a little cube (human) walks to a tree and cuts it), and so on. I am otherwise pretty confident in my programming capabilities for such a game. Sorry if I make any mistakes; English is not my native language.
Thank you in advance.