# What's the point of multiplying something by InvView?


## Recommended Posts

Hi, here comes another question: there is a Metal shader example in RenderMonkey with this vertex shader in asm:
```asm
vs.1.1

dcl_position   v0
dcl_normal     v1
dcl_texcoord   v2
dcl_tangent    v3
dcl_binormal   v4

// output vertex position
m4x4 oPos, v0, c0

// output texture coordinates
mov oT0, v2

// output normal vector
mov oT2, v1

// find the light position in object space
mov  r0, c9
m4x4 r1, r0, c5

// find light vector
sub  r1, r1, v0

// normalize light vector
dp3  r2.x, r1, r1
rsq  r2.x, r2.x
mul  r1.xyz, r1.xyz, r2.xxx

// output light vector
mov oT1, r1

// find the eye position in object space
mov  r0, c4
m4x4 r2, r0, c5

// find eye vector
sub  r2, r2, v0

// normalize eye vector
dp3  r3.x, r2, r2
rsq  r3.x, r3.x
mul  r2.xyz, r2.xyz, r3.xxx

// output eye vector
mov oT4, r2

// find half angle vector
mul r3, r3, c10.yyy

// normalize half angle vector
dp3  r4.x, r3, r3
rsq  r4.x, r4.x
mul  r3.xyz, r3, r4.xxx

mov r3.xy, v1.xy

// normalize half angle vector again
dp3  r4.x, r3, r3
rsq  r4.x, r4.x
mul  r3.xyz, r3, r4.xxx

// output half angle vector
mov oT3, r3
```

I have translated it to HLSL, and I think it does the same thing (at least, it looks the same after translating the pixel shader too):
```hlsl
float4x4 view_proj_matrix;
float4   eye;
float4x4 inv_view_matrix;
float4   light;

struct VS_INPUT
{
    float4 Position : POSITION0;
    float3 Normal   : NORMAL;
    float2 Texcoord : TEXCOORD0;
    float3 Tangent  : TANGENT;
    float3 Binormal : BINORMAL;
};

struct VS_OUTPUT
{
    float4 Position : POSITION0;
    float2 Texcoord : TEXCOORD0;
    float3 LightVec : TEXCOORD1;
    float3 Normal   : TEXCOORD2;
    float3 HalfAngle: TEXCOORD3;
    float3 Eye      : TEXCOORD4;
};

VS_OUTPUT vs_main( VS_INPUT Input )
{
    VS_OUTPUT Output;

    // output vertex position
    Output.Position = mul(view_proj_matrix, Input.Position);
    Output.Normal   = normalize(Input.Normal);
    Output.Texcoord = Input.Texcoord;

    // light and eye vectors, computed like the asm version
    Output.LightVec = normalize(mul(inv_view_matrix, light) - Input.Position);
    Output.Eye      = normalize(mul(inv_view_matrix, eye) - Input.Position);

    // half angle vector, with its xy replaced by the normal's xy
    Output.HalfAngle    = normalize((Output.LightVec + Output.Eye) / 2);
    Output.HalfAngle.xy = Input.Normal.xy;
    Output.HalfAngle    = normalize(Output.HalfAngle);

    return Output;
}
```


Now, the thing I don't understand is this pair of lines:

```hlsl
Output.LightVec = normalize(mul(inv_view_matrix, light) - Input.Position);
Output.Eye      = normalize(mul(inv_view_matrix, eye) - Input.Position);
```

What does multiplying by InvView and then subtracting the position mean visually? I just can't picture what happens to my vectors after this.

Thanks for your answers,
kp

##### Share on other sites
1) Transforming by a matrix transforms FROM one space TO another; for example the "world" matrix usually transforms FROM object space TO world space. Transforming by the inverse of a matrix transforms the other way, for example FROM world space TO object space.

2) From your variable name, "inv_view_matrix" sounds like it's the inverse of the view matrix; however, the comments in the original assembly shader say "find the light position in object space".

I suspect (without working out the math in too much depth) that the matrix that the original shader is using is actually the inverse of the WORLD matrix and your name for it is slightly misleading.

The inverse of the WORLD matrix will transform from world space coordinates into object (aka model) space coordinates.

Object space is where your untransformed vertex positions and normals live, i.e. Input.Position (a point) and Input.Normal (a vector) are in object space.

3) The "eye" and "light" variables are points in world space; eye telling you the position of the camera/viewer, and light telling you the position of the light source.

4) The multiplications by the inverse matrices transform "eye" and "light" from world space into object space.

5) Subtracting one point from another gets you a vector, but only if the two points are in the same space - which is why they're transformed into object space so they're in the same space as the vertex position.

6) So finally, subtracting the vertex position (in object space) from the light position (in object space) gets you a vector which points from the vertex position towards the light. Likewise with the eye position.

7) The vertex normals are in object space, as are the basis vectors for your tangent space. The vectors computed above are also in object space; computations involving two vectors require that both vectors are in the same space; additionally, performing lighting in object space means you don't need to transform the normals.
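The steps above can be sketched numerically. This is a minimal illustration (in Python, with made-up matrix and position values, not taken from the shader): an inverse world matrix for an object translated by (5, 0, 0) maps a world-space light position back into object space; subtracting the vertex position and normalizing then gives the light vector, exactly as in steps 4–6.

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a column vector (like the asm m4x4)."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def normalize(v):
    """Normalize the xyz part of a vector (like dp3/rsq/mul in the asm)."""
    length = math.sqrt(sum(x * x for x in v[:3]))
    return [x / length for x in v[:3]]

# Inverse world matrix for an object translated by (5, 0, 0):
# it subtracts the translation, mapping world space back to object space.
inv_world = [
    [1, 0, 0, -5],
    [0, 1, 0,  0],
    [0, 0, 1,  0],
    [0, 0, 0,  1],
]

light_world = [7, 2, 0, 1]   # light position, world space
vertex_obj  = [1, 0, 0, 1]   # vertex position, object space

# Step 4: bring the light into object space.
light_obj = mat_vec(inv_world, light_world)            # [2, 2, 0, 1]

# Steps 5-6: point minus point gives a vector, then normalize it.
light_vec = normalize([l - p for l, p in zip(light_obj, vertex_obj)])

print(light_obj)
print(light_vec)
```

The same pattern applies to the eye position; only the input point changes.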

##### Share on other sites
Hi,

What you say is totally logical, but I've now checked it again, and the original shader really does use inv_view, which is a predefined variable that RenderMonkey sets. Here is the constant mapping:

```
c0..c3 : view_proj_matrix
c4     : eye
c5..c8 : inv_view_matrix
c9     : light
c10    : commonConst(0, 0.5, 1, 2)
```

Then I tried setting World or WorldView instead of the original View (and their inverses for inv_view), but no luck: the effect isn't even similar to what it should look like.

So I really don't know how it works.

kp
