Archived

This topic is now archived and is closed to further replies.

Shnoutz

Eigenvalues, eigenvectors

Recommended Posts

They can be used to find OBB orientation, for rigid body physics, and many more cool things, BUT: what the heck are they, anyway? I took a linear algebra course in college, but they weren't covered there... I read about them on the internet, but I don't understand how they work. How can we compute them from a 3x3, 4x4, NxN matrix? What exactly do they represent? Is there a fast way to compute them in C++? Thanks

Although it seems very likely that you could google this pretty easily, I'll say a few words about them.

Eigenvalues are the scalars L that satisfy the following equation

AX = LX

where A is an NxN matrix, X is a nonzero vector, and L is a (possibly complex) number. To solve for the eigenvalues, find the roots of the polynomial

p(L) = det(A-LI)

where I is an NxN identity matrix.

Basically it is saying that, for those particular vectors X, multiplying by the matrix A is the same thing as multiplying by a scalar. Eigenvectors are the vectors which span the eigenspace, that is, the solution space of this equation

(A - LI)X = 0.
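To make that concrete, here's a minimal C++ sketch for the 2x2 case (the matrix entries are just an example I picked): expand det(A - LI) into a quadratic in L and find its roots with the quadratic formula.

// Minimal sketch: eigenvalues of the 2x2 matrix A = [a b; c d].
// det(A - LI) = (a - L)(d - L) - bc = L^2 - (a + d)L + (ad - bc),
// so the eigenvalues are the roots of that quadratic.
#include <cmath>
#include <cstdio>

int main()
{
    double a = 2, b = 1, c = 1, d = 2;           // example matrix
    double trace = a + d;
    double det = a * d - b * c;
    double disc = trace * trace - 4 * det;       // discriminant
    if (disc < 0) {
        printf("eigenvalues are complex\n");     // can happen for real matrices
        return 0;
    }
    double L1 = (trace + std::sqrt(disc)) / 2;
    double L2 = (trace - std::sqrt(disc)) / 2;
    printf("eigenvalues: %g and %g\n", L1, L2);  // prints: 3 and 1
    return 0;
}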

I'm very surprised to hear you say that this wasn't introduced in a linear algebra course at all.

Please anyone correct me if I've made a mistake.

Elijah

[edited by - etaylor27 on May 6, 2003 1:00:30 PM]

I already googled that, but I can't figure out how to do anything useful with it. For example: what does "to solve for the eigenvalues, find the roots of the polynomial" mean?

I would like an example.

Not a single bit of eigen stuff was covered in my course; it wasn't even mentioned in my textbook...

quote:
Original post by etaylor27
I'm very surprised to hear you say that this wasn't introduced in a linear algebra course at all.

It's not supposed to be covered in the first course in Quebec.

Here's the full derivation:

MX = lX

X is an unknown column matrix and l is an unknown scalar. How do you solve this?

MX - lX = 0
MX - (lI)*X = 0 [try this identity yourself if you're not convinced]
(M - lI) * X = 0 [distributivity]

By "0", I mean a column-matrix, like X, whose elements are all 0. Let M' = M - lI. We have, then:

M' * X = 0

How can we solve this? This is like an equation system (in the case where X = [x y] transposed):

ax + by = 0
cx + dy = 0

This system always has a trivial solution of x = 0 and y = 0 (i.e.: X = 0). In "most" cases, this is the only solution. However, X = 0 is not the solution we're looking for; it's the trivial one. We're looking for the other solutions, those that are not null. If you try to solve for y, you'll get, at some point:

(ad - bc) * y = 0

This implies that either y = 0 or (ad - bc) = 0. y = 0 is the trivial solution we already know, so we consider (ad - bc) = 0. Surely, you've recognized the determinant of a 2x2 matrix here. It's not a coincidence; the equation system has a non-zero solution only if the determinant of M' is 0 (remember, we had M' * X = 0). In fact, if its determinant is 0, it has infinitely many solutions, since we can always multiply X by a constant and maintain the equality.

Now, we want det(M') = 0. Since M' = M - lI, we want det(M - lI) = 0. Finding the determinant of M' is pretty straightforward, although it gets tedious for large matrices. The determinant will be a polynomial in l. Finding its roots gives you the values of l that satisfy det(M') = 0. These are the eigenvalues. For example, if M is the 2x2 matrix with rows [2 1] and [1 2], then det(M - lI) = (2 - l)(2 - l) - 1 = l^2 - 4l + 3, whose roots are l = 3 and l = 1.

Once you have l (there may be multiple values), you can plug it back into (M - lI) * X = 0 and solve for X, like a regular equation system. There are as many eigenvector "families" as there are eigenvalues (I use the term family because, as mentioned above, if X is an eigenvector, then kX is also one).

That's it. If you don't understand everything, try to do it yourself. If you've had the first linear algebra course in a cegep, you should be able to do all of the intermediate steps on your own.
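If it helps to see that last step in code, here's a minimal C++ sketch using the same example matrix as above (rows [2 1] and [1 2]); the entries and the eigenvalue are just the example worked out earlier:

// Minimal sketch of the last step: plug an eigenvalue l back into
// (M - lI)X = 0 and read off an eigenvector. For M = [2 1; 1 2],
// det(M - lI) = l^2 - 4l + 3 = 0 gives l = 3 or l = 1.
#include <cstdio>

int main()
{
    double a = 2, b = 1, c = 1, d = 2;   // example matrix M
    double l = 3;                        // one of its eigenvalues
    // The two rows of (M - lI) are linearly dependent (that's the whole
    // point), so the first row (a - l)x + by = 0 is enough: x = b,
    // y = l - a solves it (assuming b != 0; otherwise use the other row).
    double x = b, y = l - a;             // here: X = (1, 1)
    printf("MX = (%g, %g), lX = (%g, %g)\n",
           a * x + b * y, c * x + d * y, // verify MX = lX
           l * x, l * y);
    return 0;
}

It prints MX = (3, 3) and lX = (3, 3), confirming X = (1, 1) is an eigenvector for l = 3; any nonzero multiple kX works just as well.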

Cédric

[edited by - cedricl on May 6, 2003 10:31:33 PM]

Guest Anonymous Poster
Geometrically, the eigenvalue represents the degree of inflation (scale) in a matrix. A matrix with eigenvalue 1 will transform any mesh into another mesh with the same volume; i.e. it'll only contain translation and rotation.

quote:
Original post by Anonymous Poster
Geometrically, the eigenvalue represents the degree of inflation (scale) in a matrix. A matrix with eigenvalue 1 will transform any mesh into another mesh with the same volume; i.e. it'll only contain translation and rotation.

What if there is more than one eigenvalue?

Cédric

quote:
Original post by etaylor27
where A is an NxN matrix, X is a nonzero vector, and L is a (possibly complex) number. To solve for the eigenvalues, find the roots of the polynomial

p(L) = det(A-LI)

where I is an NxN identity matrix.

[edited by - etaylor27 on May 6, 2003 1:00:30 PM]


You're probably saying the same thing, but I thought the characteristic poly is det(LI - A) = 0. The eigenvalues are the solutions L of this equation.

*scratches head*

Hmm, let's see here.

Ax = cA, ya, ya, that's right.
Matrix times vector = scalar(s)*matrix.

Solve the characteristic polynomial to get the characteristic values, a.k.a. eigenvalues, ya, ya.
Remember complex eigenvalues are possible here.

det(aI - A) = 0 <==> det(A - aI) = 0, FYI.

Okay, no discussion of eigenSPACE here, time to confuse the issue.
There's something called an eigenspace associated with each sq. matrix.

This is what you might think of as a 'scaling' space.

The number of eigenvalues is the dimension of this space.
The dimension of this space is directly related to the N in the description NxN matrix.

If you take eigenvalues e1, e2, ..., eM, they form the basis for the eigenspace.

Math-gurus, corrections?



~V'lion

Bugle4d

V'lion,

I believe what you mean is that the eigenVECTORS form the basis for the space, not the eigenvalues. And also, isn't there something to be said of the dimension of the eigenspace when there is an eigenvalue of multiplicity > 1?

Elijah

quote:
Original post by Vlion
Ax = cA, ya, ya, that's right.



No it's not. The eigenvalues, c, of the matrix A satisfy the equation Ax = cx, as etaylor27 pointed out correctly.

For each eigenvalue, ci, there exists a corresponding eigenvector. As has been pointed out, the eigenvectors of a symmetric matrix A form an orthogonal basis for the space A acts on; for a general matrix they need not be orthogonal, or even span the whole space.

How else are eigenvalues and eigenvectors useful?

In dynamical systems, the eigenvalues of the phase space matrix describe the divergence (+ve values) and convergence (-ve values) of neighbouring trajectories.

For any matrix A, the eigenvalues represent the amplitude of each mode of the matrix (as the AP mentioned)... and are thus related to such descriptions as Fourier series.

Eigenvalues and eigenvectors can be used to find solutions for differential equations.

They find usage in approximation and filtering techniques. By re-expressing a matrix using only a selected subset of its eigenvalues, we can filter out modes of our choice from the system.

There are many, many more uses for eigenvalues and eigenvectors. Too many for me to describe in one post (and certainly more than I actually know about)! Get a decent book on linear algebra and do a lot of reading in the library!
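And since the original question asked about C++: for a 2x2 or 3x3 you can attack the characteristic polynomial directly, but for a general NxN matrix the usual approach is iterative. Here's a minimal power-iteration sketch (the matrix is just a made-up example; the method finds only the eigenvalue of largest magnitude, and assumes that eigenvalue is real and unique):

// Minimal power-iteration sketch: repeatedly apply A and renormalize;
// x converges to the eigenvector of the dominant eigenvalue (assuming
// the starting vector isn't orthogonal to it).
#include <cmath>
#include <cstdio>

const int N = 3;

// y = A * x
void mul(const double A[N][N], const double x[N], double y[N])
{
    for (int i = 0; i < N; ++i) {
        y[i] = 0;
        for (int j = 0; j < N; ++j)
            y[i] += A[i][j] * x[j];
    }
}

int main()
{
    double A[N][N] = { { 4, 1, 0 },    // made-up example (symmetric,
                       { 1, 3, 1 },    // so its eigenvalues are real)
                       { 0, 1, 2 } };
    double x[N] = { 1, 1, 1 };         // arbitrary nonzero start vector
    double y[N];

    for (int iter = 0; iter < 100; ++iter) {
        mul(A, x, y);                  // apply the matrix...
        double norm = std::sqrt(y[0]*y[0] + y[1]*y[1] + y[2]*y[2]);
        for (int i = 0; i < N; ++i)
            x[i] = y[i] / norm;        // ...and renormalize
    }

    mul(A, x, y);                      // Rayleigh quotient: since |x| = 1,
    double lambda = x[0]*y[0] + x[1]*y[1] + x[2]*y[2]; // lambda ~ x.Ax
    printf("dominant eigenvalue ~ %g\n", lambda);
    printf("eigenvector ~ (%g, %g, %g)\n", x[0], x[1], x[2]);
    return 0;
}

To get the remaining eigenvalues you'd deflate the matrix and repeat, or just use a proper numerical library.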

Cheers,

Timkin
