Interpolation of a Matrix

Started by
2 comments, last by Dmytry 19 years, 3 months ago
Hey there, I've been reading a lot about quaternions... There's one thing I'm sure the writers are sure about: "it is not intuitive." So I was thinking that maybe we can overcome the problem by interpolating a matrix... I think by analogy, so here it comes.

If I want to interpolate from 2 to 2*3 over a time interval [0,1] exponentially, I would write:

x = 2 * 3 ^ t; // == 2 * Exp( Ln( 3 ) * t )

I could then apply the same idea to matrices, but some crucial problems arise:
- Can we find the Ln and Exp of a matrix (the antiderivative of the inverse of a matrix)?
- Is it computationally sane to do so?

Using the analogy again, I thought I could do exponentiation as follows:

Exp( M ) = Identity() + M + M*M/2! + M*M*M/3! + M*M*M*M/4! + ...

This can be optimized quite a bit. Also, check this matrix-calculus thing I'm not sure of. The derivative of, for example, M*M with respect to M is, I think:

D(M*M)/D(M) = lim ( h -> zero matrix ) of ( [(M+h)*(M+h) - M*M] * Inverse(h) )
            = lim(...) of ( 2*M*h*Inverse(h) )
            == 2*M

If what I'm doing is insane, please report me to "hôpital l'geek". SAVE ME!
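A minimal sketch of that series in code (my own illustration, not from the post; the 3x3 size, the C-style arrays, and the fixed count of 12 terms are all assumptions):

#include <cstdio>

typedef double Mat3[3][3];

// out = a * b
void mul(const Mat3 a, const Mat3 b, Mat3 out) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            out[i][j] = 0.0;
            for (int k = 0; k < 3; ++k)
                out[i][j] += a[i][k] * b[k][j];
        }
}

// Truncated series Exp(M) = I + M + M*M/2! + M*M*M/3! + ...
void matExp(const Mat3 m, Mat3 out, int terms) {
    Mat3 term;                               // current series term, starts at I
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            term[i][j] = (i == j) ? 1.0 : 0.0;
            out[i][j]  = term[i][j];         // out starts at I as well
        }
    for (int n = 1; n <= terms; ++n) {
        Mat3 next;
        mul(term, m, next);                  // next = term * M
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) {
                term[i][j] = next[i][j] / n; // dividing by n each step builds M^n / n!
                out[i][j] += term[i][j];
            }
    }
}

int main() {
    // Example input: a small skew-symmetric matrix (a rotation generator).
    Mat3 s = { { 0.0, -0.5, 0.0 },
               { 0.5,  0.0, 0.0 },
               { 0.0,  0.0, 0.0 } };
    Mat3 r;
    matExp(s, r, 12);
    for (int i = 0; i < 3; ++i)
        printf("%8.4f %8.4f %8.4f\n", r[i][0], r[i][1], r[i][2]);
    return 0;
}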
Quaternions are intuitive if you work with 'em enough. AFAIK, quaternions were "invented" before matrices and vector operations.

Quaternions work very much like matrices. Almost everything is similar (except, obviously, rotating a vector by a quaternion).
That is, rotation quaternions are combined using multiplication. Like matrices, (AB)^-1 = B^-1 A^-1. And so on.
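A minimal sketch of those two rules (my own illustration, not from the post; the specific quaternion values are arbitrary examples):

#include <cstdio>

struct Quat { double w, x, y, z; };

// Hamilton product: combines two rotations, q1 then q2 applied as q1*q2.
Quat mul(const Quat& a, const Quat& b) {
    Quat r;
    r.w = a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z;
    r.x = a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y;
    r.y = a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x;
    r.z = a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w;
    return r;
}

// For a unit quaternion the inverse is just the conjugate.
Quat conjugate(const Quat& q) {
    Quat r = { q.w, -q.x, -q.y, -q.z };
    return r;
}

int main() {
    Quat q1 = { 0.7071, 0.7071, 0.0, 0.0 };       // ~90 degrees about X
    Quat q2 = { 0.7071, 0.0, 0.7071, 0.0 };       // ~90 degrees about Y

    Quat lhs = conjugate(mul(q1, q2));            // (q1*q2)^-1
    Quat rhs = mul(conjugate(q2), conjugate(q1)); // q2^-1 * q1^-1, same result

    printf("lhs: %f %f %f %f\n", lhs.w, lhs.x, lhs.y, lhs.z);
    printf("rhs: %f %f %f %f\n", rhs.w, rhs.x, rhs.y, rhs.z);
    return 0;
}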
As for matrix exponentiation: clickster. Note equation (10).

AFAIK, the matrix logarithm is quite hard to compute.

As for the derivative of a matrix, I don't think you're doing it right (also, what about non-square matrices?). You probably need to learn some multivariable calculus...

Consider the 2x2 matrix function

[ F(M)_1,1; F(M)_1,2; F(M)_2,1; F(M)_2,2 ] = L * [ M_1,1; M_1,2; M_2,1; M_2,2 ]

where L is a 4x4 matrix, and [a; b; c; d] is a 4x1 matrix with a, b, c, d in a column.
That is, every component of the resulting matrix is a linear combination of all components of the input matrix.

To describe the derivative you need 16 numbers.

I guess it is possible to define something like "matrix nabla".
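A minimal sketch of what those 16 numbers look like for F(M) = M*M (my own illustration, not from the post; the finite-difference approximation, the sample matrix, and the epsilon are assumptions):

#include <cstdio>

typedef double Mat2[2][2];

// out = a * b
void mul(const Mat2 a, const Mat2 b, Mat2 out) {
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            out[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
}

int main() {
    Mat2 m = { { 1.0, 2.0 }, { 3.0, 4.0 } };
    const double eps = 1e-6;
    double L[4][4];   // L[i][j] = d( component i of M*M ) / d( component j of M )

    Mat2 f0;
    mul(m, m, f0);

    for (int j = 0; j < 4; ++j) {
        Mat2 mp;
        for (int r = 0; r < 2; ++r)
            for (int c = 0; c < 2; ++c)
                mp[r][c] = m[r][c];
        mp[j / 2][j % 2] += eps;         // perturb one input component

        Mat2 f1;
        mul(mp, mp, f1);

        for (int i = 0; i < 4; ++i)      // forward difference
            L[i][j] = (f1[i / 2][i % 2] - f0[i / 2][i % 2]) / eps;
    }

    for (int i = 0; i < 4; ++i)
        printf("%8.3f %8.3f %8.3f %8.3f\n", L[i][0], L[i][1], L[i][2], L[i][3]);
    return 0;
}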
Quote: Original post by BrainCodingArts
Using the analogy again, I thought I could do exponentiation as follows:
Exp( M ) = Identity() + M + M*M/2! + M*M*M/3! + M*M*M*M/4! + ...
This can be optimized quite a bit.


Yes. The above has the obvious properties of exponentiation, e.g. Exp(M1 + M2) = Exp(M1) Exp(M2), but it also has physical significance. E.g. if you define the natural log Ln as the inverse of Exp, then:

If R is a rotation matrix, Ln(R) is a skew-symmetric matrix S which directly encodes the angle and (in 3D) the axis of rotation. You then have:

R = Exp(S)

expressing the rotation matrix in terms of S. This calculation is far too expensive to do directly in practice, but it can be used to derive/justify more efficient formulae.
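A minimal sketch of one such more efficient formula (my own illustration, not from the post): for S = theta*K, with K the skew-symmetric cross-product matrix of a unit axis, the series Exp(S) collapses to Rodrigues' formula R = I + sin(theta)*K + (1 - cos(theta))*K*K. The axis and angle below are arbitrary example values:

#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    const double ax = 0.0, ay = 0.0, az = 1.0;  // unit rotation axis
    const double theta = kPi / 4.0;             // 45 degrees

    // K = skew-symmetric cross-product matrix of the axis
    const double K[3][3] = {
        { 0.0, -az,  ay },
        {  az, 0.0, -ax },
        { -ay,  ax, 0.0 }
    };

    // K2 = K * K
    double K2[3][3];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            K2[i][j] = 0.0;
            for (int k = 0; k < 3; ++k)
                K2[i][j] += K[i][k] * K[k][j];
        }

    // R = Exp(theta*K) = I + sin(theta)*K + (1 - cos(theta))*K*K
    double R[3][3];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            R[i][j] = (i == j ? 1.0 : 0.0)
                    + std::sin(theta) * K[i][j]
                    + (1.0 - std::cos(theta)) * K2[i][j];

    for (int i = 0; i < 3; ++i)
        printf("%8.4f %8.4f %8.4f\n", R[i][0], R[i][1], R[i][2]);
    return 0;
}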

One parallel is complex numbers: e^(it) = cos(t) + i sin(t) shows how exponentiation generates rotations in the complex plane. Matrix exponentiation in 2D can be shown to be equivalent, but matrices also generalise to higher dimensions.
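To spell the 2D case out (my own working, assuming the standard 2x2 generator J, which is not named in the post): take J = [0 -1; 1 0], the matrix analogue of i, since J*J = -I. Then

Exp( t*J ) = I + t*J - (t^2/2!)*I - (t^3/3!)*J + (t^4/4!)*I + ...
           = cos(t)*I + sin(t)*J
           = [ cos(t)  -sin(t) ]
             [ sin(t)   cos(t) ]

which is exactly rotation by the angle t, mirroring e^(it) = cos(t) + i sin(t).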
John Blackburne, Programmer, The Pitbull Syndicate
Quote:
Exp(M1 + M2) = Exp(M1) Exp(M2)

If Exp(M1) and Exp(M2) don't commute, it cannot be true.
Because A+B = B+A, but for non-commuting matrices, AB != BA.

Look at (10) and (11) on MathWorld. Analogies with real numbers don't necessarily work with matrices and other non-commutative things, because AB != BA in general.
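A concrete example of the failure (my own working, not from the thread): take A = [0 1; 0 0] and B = [0 0; 1 0]. Since A*A = 0 and B*B = 0, the series terminates, so Exp(A) = I + A and Exp(B) = I + B, and

Exp(A) * Exp(B) = [ 1 1 ] * [ 1 0 ] = [ 2 1 ]
                  [ 0 1 ]   [ 1 1 ]   [ 1 1 ]

But A + B = [0 1; 1 0] satisfies (A+B)*(A+B) = I, so Exp(A+B) = cosh(1)*I + sinh(1)*(A+B), roughly [ 1.543 1.175; 1.175 1.543 ], which is a different matrix.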

This topic is closed to new replies.
