Determinant and Matrix inversion

Hi there, I just implemented some basic math stuff and I am wondering how I could now make it more efficient. What are the fastest algorithms for calculating determinants and inverses of small square matrices like 2x2, 3x3 and 4x4, doing no SIMD or 3DNow! assembler programming?

And maybe another question. If you write a function like this:

Matrix3& Matrix3Transpose(const Matrix3& mtxIn, Matrix3& mtxOut)
{
    // Branch here to avoid overwrites
    if (&mtxIn != &mtxOut)
    {
        // No problem - no additional object needed
    }
    else
    {
        Matrix3 temp;
        // Write everything to temp, then copy it to mtxOut
        mtxOut = temp; // Can we get rid of this?
    }
    return mtxOut;
}

Do you branch here for the case that the address of mtxIn equals the address of mtxOut, or is there a way to get rid of the temporary object and the copy?

Best regards
-Dirk
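
PS: One idea I had to get rid of the temporary: when the two references alias, transpose in place by swapping the three off-diagonal pairs. A rough sketch, assuming a Matrix3 that stores its elements as a public float m[3][3]. Is that the usual way to do it?

#include <algorithm> // for std::swap

Matrix3& Matrix3Transpose(const Matrix3& mtxIn, Matrix3& mtxOut)
{
    if (&mtxIn == &mtxOut)
    {
        // Aliased: swapping the off-diagonal pairs needs no temporary.
        std::swap(mtxOut.m[0][1], mtxOut.m[1][0]);
        std::swap(mtxOut.m[0][2], mtxOut.m[2][0]);
        std::swap(mtxOut.m[1][2], mtxOut.m[2][1]);
    }
    else
    {
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                mtxOut.m[c][r] = mtxIn.m[r][c];
    }
    return mtxOut;
}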

The first thing you'll probably find is some technique in which you step through the matrix diagonally which I can never seem to remember.

There's also a simple recursive algorithm for computing determinants. I'll describe it in a second, but be warned that I'm pretty sure there's a faster algorithm (I'll look for it). Anyway, here goes:

For a square matrix...

For each element in the top row, there is an associated square "sub-matrix" (for lack of a better term) that consists of the original matrix with the row and column containing that element removed.

The determinant of the matrix is equal to the first element in the top row times the determinant of its corresponding "submatrix", minus the second element of the top row times the determinant of its corresponding "submatrix", plus the third element times its submatrix's determinant, minus the fourth... etc. The signs alternate: plus, minus, plus, minus, etc.

Now, if you started with a 6x6 matrix, then each of the "submatrices" you're dealing with will be 5x5. How do you compute their determinants? The same way: recursively! So then you deal with 4x4 and then 3x3 matrices. Finally, when you get down to 2x2 matrices, you know how to compute the determinant directly.
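
In code the recursion might look something like this (an untested sketch; I'm assuming a plain row-major array of doubles rather than any particular matrix class):

#include <vector>

// Determinant by cofactor expansion along the top row.
double Determinant(const std::vector<double>& a, int n)
{
    if (n == 1) return a[0];
    if (n == 2) return a[0]*a[3] - a[1]*a[2];

    double det = 0.0;
    double sign = 1.0; // +, -, +, -, ...
    for (int col = 0; col < n; ++col)
    {
        // Build the (n-1)x(n-1) submatrix with row 0 and column col removed.
        std::vector<double> sub;
        sub.reserve((n - 1) * (n - 1));
        for (int r = 1; r < n; ++r)
            for (int c = 0; c < n; ++c)
                if (c != col)
                    sub.push_back(a[r * n + c]);

        det += sign * a[col] * Determinant(sub, n - 1);
        sign = -sign;
    }
    return det;
}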

Now I'm going to go look for the algorithm I REALLY like...

[edited by - TerranFury on December 16, 2003 1:37:17 PM]

The algorithm you described is called cofactor expansion or Laplace expansion. This is what I implemented. For a 3x3 matrix you end up with 9 muls and 5 adds. For the inverse I tried the Gauss-Jordan algorithm, but using loops. I wonder if you can unroll the loops for small matrices and get a performance gain.
At the moment I use the determinant and the adjugate matrix to find the inverse. Premultiplication for 4x4 matrices seems to give a slight performance gain. Any better ideas?
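
For reference, my unrolled 3x3 version boils down to roughly this (a sketch, assuming row-major float m[3][3] and that out does not alias m; my real class differs in the details):

float Determinant3(const float m[3][3])
{
    // The three cofactors of the top row: 9 muls and 5 adds in total.
    float c0 = m[1][1]*m[2][2] - m[1][2]*m[2][1];
    float c1 = m[1][0]*m[2][2] - m[1][2]*m[2][0];
    float c2 = m[1][0]*m[2][1] - m[1][1]*m[2][0];
    return m[0][0]*c0 - m[0][1]*c1 + m[0][2]*c2;
}

bool Invert3(const float m[3][3], float out[3][3])
{
    float det = Determinant3(m);
    if (det == 0.0f) return false; // singular
    float inv = 1.0f / det;        // one reciprocal, reused nine times

    // inverse = adjugate / determinant (entry [i][j] is cofactor C_ji)
    out[0][0] =  (m[1][1]*m[2][2] - m[1][2]*m[2][1]) * inv;
    out[0][1] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]) * inv;
    out[0][2] =  (m[0][1]*m[1][2] - m[0][2]*m[1][1]) * inv;
    out[1][0] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]) * inv;
    out[1][1] =  (m[0][0]*m[2][2] - m[0][2]*m[2][0]) * inv;
    out[1][2] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]) * inv;
    out[2][0] =  (m[1][0]*m[2][1] - m[1][1]*m[2][0]) * inv;
    out[2][1] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]) * inv;
    out[2][2] =  (m[0][0]*m[1][1] - m[0][1]*m[1][0]) * inv;
    return true;
}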

-Dirk

Sounds to me like you've got a better handle on this stuff than I do. I now remember that the algorithm I said I preferred is actually for multiplication, not for finding determinants. I'm curious about your question myself though, so I'm doing a little googling, and if anything interesting turns up I'll let you know.

Not sure about inverses now, but there does seem to be a rather simple technique for quickly computing determinants (versus Laplace expansion, which is O(n!)).

I found a simple, rather informal description here:
http://acm.uva.es/p/v6/684.html
It brings back dusty memories of tenth-grade math.

Is this Gaussian elimination?

[edited by - TerranFury on December 16, 2003 2:58:39 PM]

I think that is cofactor expansion. When I calculated determinants for my math exam I did some easy row or column manipulations to get a lot of zero elements in one row or column and then expanded along that row/column. Thanks to the zeros you can drop a lot of the subdeterminants.

Another trick which seems promising is described here: http://mathworld.wolfram.com/LUDecomposition.html

Since each of the two triangular matrices has a row with only one nonzero element, expanding along that row reduces its determinant to that element times the determinant of the corresponding minor. By the same token the minor's determinant reduces to the determinant of ITS minor, and so on, so the determinant of a triangular matrix is just the product of its diagonal. A single LU decomposition should therefore give you the determinant as det(L)*det(U).

Gauss-Jordan and LU decomposition are more suited for big matrices. Search for BLAS and LAPACK if you're interested in that stuff. Blitz++ and MTL are also very interesting. From Blitz++ I got the idea that lets you do the following, which is quite cool for checking your routines:

Matrix mtx(3,2);

mtx = 1,2,
      3,4,
      5,5;

This is really handy.

-Dirk

DonDickieD,

If you want the easy way to calculate the inverse matrix, you need to make sure the matrix is orthogonal. If it is, then all you need to do is transpose it. Do a google search on orthogonal matrices and you'll find lots of hits, or find a linear algebra book; it's sure to have it in there.

-brad

You should know:

1.) The inverse of an orthonormal matrix is its transpose. This does not hold if the matrix is only orthogonal, which is what you wrote.

2.) You can't transform an arbitrary matrix into an orthonormal one, so this is only useful in some special cases. You can use an SVD (Singular Value Decomposition), which factors a matrix A such that A = U*W*V^t, with U and V being orthonormal matrices and W being a diagonal matrix. You can then find the inverse by taking the transposes of U and V and inverting the elements of W; multiplying the factors back together gives the inverse. This is cool for huge linear equation systems, since you can easily find out whether the system is singular by checking that the elements of W are nonzero. But it is not very useful for small matrices.
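
Written out, with W = diag(w1 ... wn):

A^-1 = V * diag(1/w1 ... 1/wn) * U^t

since U^-1 = U^t and V^-1 = V^t. For the pseudoinverse you set 1/wi to zero wherever wi is (numerically) zero.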

So I really do know how to find the inverse of a matrix. What I want to know is the fastest way for small matrices up to dimension 6. So any help is appreciated that is a little more constructive than the suggestion to buy a linear algebra book ;-)


-Dirk

@TerranFury:

This isn't useful for my question, but I found an at least numerically very stable way of finding an inverse using SVD. You can even find a so-called pseudoinverse for overdetermined linear equation systems. If you're interested, look here for a good introduction. Don't be afraid: I read it at least 3 or 4 times and did some cross-reading as well in Numerical Recipes and on the internet.

Here is the link:
http://www.uwlax.edu/faculty/will/svd/index.html

BTW: You can also find the eigenvalues of a square matrix using SVD. Mind-stretching, but worth the effort IMHO :-)

-Dirk

Well, it might be obvious to you and just lazy naming, but maybe you missed a crucial point:

quote:
Original post by Galapaegos
If you want the easy way to calculate the inverse matrix, you need to make sure the matrix is orthogonal. If it is, then all you need to do is transpose it.



That's not true. I think we agree that 2*I (I being the unit matrix) is orthogonal and that (2*I)^t (t for transpose) is certainly not the inverse of 2*I (since 2*I * 2*I = 4*I).
A^t = A^-1 is true for orthoNORMAL matrices (I even think it's true only for orthonormal matrices, but I couldn't prove that right now).

However, you can easily factorize an orthogonal matrix into a diagonal matrix and an orthonormal matrix.
Let
M = (v1 ... vn)
be an orthogonal matrix with column vectors vi. You can easily see (by writing it out) that
M = (n1 ... nn) * diag(|v1| ... |vn|) = N*D
with ni := vi / |vi| being the unit vector in the direction of vi and diag(...) being the diagonal matrix built from the n values |vi|. N and D stand for the normalized matrix and the diagonal matrix respectively.
Now you get
M^-1 = D^-1 * N^-1 = D^-1 * N^t.
The inverse of the diagonal matrix (D^-1) is just the diagonal matrix of the reciprocals of the corresponding entries of D.
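
In code that factorization gives a very cheap inverse. A sketch for the 3x3 case, assuming entries are stored as m[row][col] and no column is zero; note that row i of M^-1 works out to vi^t / |vi|^2, so you don't even need a square root:

bool InvertOrthogonalColumns3(const float m[3][3], float out[3][3])
{
    // Columns of m must be mutually orthogonal (not necessarily unit length).
    for (int i = 0; i < 3; ++i) // build row i of the inverse
    {
        // |vi|^2 = vi . vi for column i
        float len2 = m[0][i]*m[0][i] + m[1][i]*m[1][i] + m[2][i]*m[2][i];
        if (len2 == 0.0f) return false; // zero column -> singular

        // row i of M^-1 = (1/|vi|) * ni^t = vi^t / |vi|^2
        float s = 1.0f / len2;
        out[i][0] = m[0][i] * s;
        out[i][1] = m[1][i] * s;
        out[i][2] = m[2][i] * s;
    }
    return true;
}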


EDIT:
quote:

So I really do know how to find the inverse of a matrix. What I want to know is the fastest way for small matrices up to dimension 6. So any help is appreciated that is a little more constructive than the suggestion to buy a linear algebra book ;-)



I'd case-branch on the matrix size, write out all the equations by hand and then try to code them in efficiently (computing factors that appear often only once, and such).
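
For the 2x2 case the written-out version looks like this (a sketch; the reciprocal of the determinant is the factor you compute only once, and out must not alias m):

bool Invert2(const float m[2][2], float out[2][2])
{
    float det = m[0][0]*m[1][1] - m[0][1]*m[1][0];
    if (det == 0.0f) return false; // singular
    float inv = 1.0f / det;        // shared factor, computed once

    out[0][0] =  m[1][1] * inv;
    out[0][1] = -m[0][1] * inv;
    out[1][0] = -m[1][0] * inv;
    out[1][1] =  m[0][0] * inv;
    return true;
}
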
Imho a book on LA won't be much of a help anyway, since LA (the way I learned it) doesn't care about the efficiency of algorithms on a computer. A book/article/... on numerical maths might help more.

[edited by - Atheist on December 16, 2003 8:34:26 PM]

That's how I remembered it. I just looked into a quite famous book for engineers, "Advanced Engineering Mathematics", and it says that the matrices only have to be orthogonal. I have to look up how it is actually defined, but I just read in Numerical Recipes that a square matrix can be row- or column-orthogonal, so I will have a closer look.

-Dirk

@Atheist,

I just looked up the theorem in my book: Q^t*Q = I if Q is orthogonal. An orthogonal matrix is an nxn matrix whose column vectors form an orthonormal set in R^n. I proved all this crap for a class, don't tell me I'm wrong when I'm right.

@DonDickieD,

I mentioned the orthogonality of matrices because that is one of the simpler ways to calculate the inverse. It doesn't work for all of them of course, but for a wide majority of the matrices you will encounter you can do this. Also, you can orthogonalize matrices with some algorithms (which is what a lot of game developers do when they need an inverse matrix, well, if they did the math themselves).
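
For example, re-orthonormalizing a slightly drifted 3x3 rotation matrix with Gram-Schmidt might look roughly like this (a sketch, assuming the basis vectors sit in the rows of a right-handed float m[3][3]):

#include <cmath>

void Orthonormalize3(float m[3][3])
{
    // Normalize row 0.
    float l = std::sqrt(m[0][0]*m[0][0] + m[0][1]*m[0][1] + m[0][2]*m[0][2]);
    for (int c = 0; c < 3; ++c) m[0][c] /= l;

    // Make row 1 orthogonal to row 0, then normalize it.
    float d = m[1][0]*m[0][0] + m[1][1]*m[0][1] + m[1][2]*m[0][2];
    for (int c = 0; c < 3; ++c) m[1][c] -= d * m[0][c];
    l = std::sqrt(m[1][0]*m[1][0] + m[1][1]*m[1][1] + m[1][2]*m[1][2]);
    for (int c = 0; c < 3; ++c) m[1][c] /= l;

    // Row 2 = row 0 x row 1: already unit length and orthogonal to both.
    m[2][0] = m[0][1]*m[1][2] - m[0][2]*m[1][1];
    m[2][1] = m[0][2]*m[1][0] - m[0][0]*m[1][2];
    m[2][2] = m[0][0]*m[1][1] - m[0][1]*m[1][0];
}

After that the transpose is again a valid inverse.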

-brad

Just my little contribution:

For determinant calculation:
Cofactor expansion is exponential. A better way is to do a Gaussian elimination to triangular form; the determinant of a triangular matrix is the product of the elements on the diagonal. This is O(n^3) instead of exponential.
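
A sketch of that, with the partial pivoting mentioned in the notes below (assumes a row-major n x n array of doubles, which gets destroyed):

#include <algorithm>
#include <cmath>

double DeterminantGauss(double* a, int n)
{
    double det = 1.0;
    for (int k = 0; k < n; ++k)
    {
        // Partial pivoting: pick the largest |entry| in column k.
        int pivot = k;
        for (int r = k + 1; r < n; ++r)
            if (std::fabs(a[r*n + k]) > std::fabs(a[pivot*n + k]))
                pivot = r;
        if (a[pivot*n + k] == 0.0) return 0.0; // singular

        if (pivot != k)
        {
            for (int c = 0; c < n; ++c)
                std::swap(a[k*n + c], a[pivot*n + c]);
            det = -det; // each row swap flips the sign of the determinant
        }

        det *= a[k*n + k]; // accumulate the diagonal as we go
        for (int r = k + 1; r < n; ++r)
        {
            double f = a[r*n + k] / a[k*n + k];
            for (int c = k; c < n; ++c)
                a[r*n + c] -= f * a[k*n + c];
        }
    }
    return det;
}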

For calculating the inverse:
This equals solving the system Ax=b with several different b's. For this, LU decomposition works well: O(n^3) for getting L and U, and O(n^2) for each b.
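
A sketch of that idea, using Doolittle LU without pivoting to keep it short (as the notes below say, real code should pivot):

#include <vector>

// Invert A by LU-decomposing once, then solving A x = e_i for each
// column e_i of the identity.
bool InvertViaLU(const std::vector<double>& A, int n, std::vector<double>& inv)
{
    std::vector<double> LU(A); // L below the diagonal (unit diag), U on/above
    for (int k = 0; k < n; ++k)
    {
        if (LU[k*n + k] == 0.0) return false; // would need pivoting
        for (int r = k + 1; r < n; ++r)
        {
            LU[r*n + k] /= LU[k*n + k];
            for (int c = k + 1; c < n; ++c)
                LU[r*n + c] -= LU[r*n + k] * LU[k*n + c];
        }
    }

    inv.assign(n * n, 0.0);
    std::vector<double> x(n);
    for (int col = 0; col < n; ++col) // one O(n^2) solve per column
    {
        for (int r = 0; r < n; ++r)   // forward substitution: L y = e_col
        {
            x[r] = (r == col) ? 1.0 : 0.0;
            for (int c = 0; c < r; ++c)
                x[r] -= LU[r*n + c] * x[c];
        }
        for (int r = n - 1; r >= 0; --r) // back substitution: U x = y
        {
            for (int c = r + 1; c < n; ++c)
                x[r] -= LU[r*n + c] * x[c];
            x[r] /= LU[r*n + r];
        }
        for (int r = 0; r < n; ++r)
            inv[r*n + col] = x[r];
    }
    return true;
}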

Notes:
Gaussian elimination and LU decomposition need partial pivoting in order to avoid division-by-zero errors. Besides, they are numerically unstable: it's easy for an error to propagate and give a not-so-good answer. Householder/QR decomposition doesn't have this shortcoming but is little used because it is also O(n^3), with higher constants. A QR decomposition is a decomposition of A where Q is orthogonal and R is upper triangular.



quote:
Original post by Galapaegos
@Atheist,

I just looked up the theorem in my book: Q^t*Q = I if Q is orthogonal. An orthogonal matrix is an nxn matrix whose column vectors form an orthonormal set in R^n. I proved all this crap for a class, don't tell me I'm wrong when I'm right.

-brad


Well, now I looked it up too. You're right: a matrix is called orthoGONAL when its columns are orthoNORMAL.
This naming convention seems rather inconsistent to me, especially since you've got no sensible name left for what I called orthogonal in the first place, and since it seems I wasn't the only one confused by this. 2*I sounds very orthogonal to me and certainly is a special case as I described (well, in the 2*I case it's even diagonal, but I think you know what I mean).
But orthogonal matrix <=> orthonormal columns seems to be the "official" convention.

Hey, I found this link for all types of numerical algorithms... I'll be using these in my Matrix class files...

http://www.library.cornell.edu/nr/cbookcpdf.html

I'll be coming back here to ask lots of questions... new with C++!!! :D

||student + artist + photographer||

About the Orthogonal <=> Orthonormal confusion

Orthonormal implies Orthogonal
but
Orthogonal does not imply Orthonormal

By definition:

orthonormal matrices are orthogonal matrices whose columns also have unit length (which forces the determinant to be +1 or -1).

This distinction is very important in linear algebra and transform theory, because the normalization is a very important part of the properties of the basis functions.
