Why transform Normal par WorldViewInvertTranspose in Nvidia Cg Shader


4 replies to this topic

#1 ClementLuminy   Members   -  Reputation: 121


Posted 29 December 2006 - 06:10 AM

Hello. I'm wondering why we have to multiply the normal of a vertex (expressed in object space) by the inverse transpose of the world-view matrix. Most of the samples that use NVIDIA's Cg language to do very simple lighting (per-vertex directional lighting) use the following pseudo code to transform the input data into the correct space:

===========
float4 WorldPos = mul( Input.Pos, WorldMatrix );
float3 WorldNormal = mul( Input.Normal, (float3x3) WorldViewIT );
===========

To me, with my very poor mathematics skills :), this pseudo code seems more logical:

===========
float4 WorldPos = mul( Input.Pos, WorldMatrix );
float3 WorldNormal = mul( Input.Normal, (float3x3) WorldMatrix );
===========

That is, you transform the normal the same way you transform the position (except that it's a vector, not a position). So, what's the mathematical trick behind this normal transformation? Why do I have to use WorldViewIT (and why the inverse transpose, of all things!) instead of the simple world matrix?

Thanks a lot for your help!
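To make the question concrete: here is a minimal pure-Python sketch (the helper names and the numbers are made up for illustration) showing that transforming a normal with the plain object-to-world matrix breaks perpendicularity as soon as the matrix contains a non-uniform scale, while the inverse transpose preserves it:

```python
def mat_vec(m, v):
    # Multiply a 3x3 matrix (list of rows) by a 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Non-uniform scale: x is stretched by 2.
world = [[2.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]

# A surface tangent and its normal in object space (perpendicular).
tangent = [1.0, 1.0, 0.0]
normal  = [-1.0, 1.0, 0.0]
assert dot(tangent, normal) == 0.0

# Naive approach: transform the normal like the position/tangent.
t_world = mat_vec(world, tangent)          # [2, 1, 0]
n_naive = mat_vec(world, normal)           # [-2, 1, 0]
print(dot(t_world, n_naive))               # -3.0 -- no longer perpendicular!

# Correct approach: the inverse transpose (the matrix here is diagonal,
# so its inverse transpose is just the reciprocal of each diagonal entry).
world_inv_t = [[0.5, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]]
n_correct = mat_vec(world_inv_t, normal)   # [-0.5, 1, 0]
print(dot(t_world, n_correct))             # 0.0 -- still perpendicular
```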


#2 _swx_   Members   -  Reputation: 989


Posted 29 December 2006 - 03:50 PM

http://www.cs.utah.edu/classes/cs6610/lectures/MathematicsOfPerPixelLighting.pdf has a good explanation

#3 wyrzy   Members   -  Reputation: 430


Posted 29 December 2006 - 05:31 PM

The above link gives a decent explanation, but an alternate (algebraic) way of thinking about the problem is this: if v . n == 0 in model space (where n is the normal and v is the vertex), then v' . n' should also equal 0 in view space.

Let V be the matrix that transforms vertices and
N be the matrix that transforms normals. Then:

v' . n' == 0
=> (v')^T * n' == 0             (where ^T denotes transpose and * is matrix multiplication)
=> (V * v)^T * (N * n) == 0     (substitution)
=> (v^T * V^T) * (N * n) == 0   (expand the transpose of a product)
=> v^T * (V^T * N) * n == 0     (matrix multiplication is associative)

Since v^T * n == 0, this holds if
V^T * N == I
=> (V^T)^-1 * V^T * N == (V^T)^-1
=> N == (V^T)^-1                (the inverse transpose of the model-view matrix)


Now, it should also be noted that if your model-view matrix contains only rigid-body transformations (rotations and translations), then (V^T)^-1 == V, because the transpose of a rotation matrix is its inverse.
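The derivation above can be checked numerically. Here is an illustrative pure-Python sketch (no shading library assumed): build a V containing a rotation plus a non-uniform scale, compute its inverse transpose via cofactors, and verify both that V^T * N == I and that perpendicularity is preserved:

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def inverse_transpose(m):
    # (M^-1)^T == cofactor(M) / det(M), so no final transpose is needed.
    c = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            r = [k for k in range(3) if k != i]   # rows of the minor
            s = [k for k in range(3) if k != j]   # columns of the minor
            minor = (m[r[0]][s[0]] * m[r[1]][s[1]]
                     - m[r[0]][s[1]] * m[r[1]][s[0]])
            c[i][j] = ((-1) ** (i + j)) * minor
    det = sum(m[0][j] * c[0][j] for j in range(3))  # expand along row 0
    return [[c[i][j] / det for j in range(3)] for i in range(3)]

# V: rotation about z by 30 degrees followed by a non-uniform scale.
a = math.radians(30)
rot = [[math.cos(a), -math.sin(a), 0.0],
       [math.sin(a),  math.cos(a), 0.0],
       [0.0,          0.0,         1.0]]
scale = [[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 1.0]]
V = mat_mul(scale, rot)

N = inverse_transpose(V)

# V^T * N should be the identity (up to floating-point error).
VtN = mat_mul(transpose(V), N)
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(VtN[i][j] - expected) < 1e-12

# Perpendicularity is preserved: v . n == 0  =>  (V v) . (N n) == 0.
v, n = [1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]
print(sum(x * y for x, y in zip(mat_vec(V, v), mat_vec(N, n))))  # ~0.0
```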

#4 grimreaper85   Members   -  Reputation: 156


Posted 29 December 2006 - 06:20 PM

So the inverse transpose is only needed if you have scaling in your WorldMatrix?

Otherwise you can just use (float3x3)WorldMatrix?
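For what it's worth, the rotation-only case is easy to check: a rotation matrix R satisfies R^-1 == R^T, so its inverse transpose is (R^T)^T == R, i.e. the world matrix itself. A quick illustrative pure-Python sketch:

```python
import math

a = math.radians(40)
# Rotation about the z axis -- an orthogonal matrix, so R^-1 == R^T.
R = [[math.cos(a), -math.sin(a), 0.0],
     [math.sin(a),  math.cos(a), 0.0],
     [0.0,          0.0,         1.0]]

# Inverse transpose of R: invert by transposing, then transpose back.
Rt = [[R[j][i] for j in range(3)] for i in range(3)]        # R^T == R^-1
R_inv_t = [[Rt[j][i] for j in range(3)] for i in range(3)]  # (R^-1)^T

# The inverse transpose is R itself, so (float3x3)WorldMatrix suffices here.
assert all(abs(R_inv_t[i][j] - R[i][j]) < 1e-12
           for i in range(3) for j in range(3))
print("inverse transpose of a rotation == the rotation itself")
```

A uniform scale is also fine with the plain world matrix: the inverse transpose then differs only by a scalar factor, which disappears when the normal is renormalized.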

#5 ClementLuminy   Members   -  Reputation: 121


Posted 30 December 2006 - 12:46 AM

Hello !

Thanks for your help.

The thing is that in the NVIDIA document linked above, every computation happens in eye space...

This seems pretty strange to me, because then we have to transform every other quantity expressed in world space into eye space (i.e. the light direction, ...).

So why do they do all their computations in eye space? Why don't they do them in world space? That solution seems much simpler to me, but I think I've misunderstood something...

Another thing about the mathematical explanation...
Could you explain it one more time, in a more detailed way?

Thanks again!

Clement



