SteveDeFacto

Inverse of 4x4 matrix

coderx75    435
Here's a BASIC implementation used for inverting an ODE matrix for rendering in OpenGL.

SUB PhysicsInvertMatrix (Dest AS SINGLE PTR, Source AS SINGLE PTR)
    DIM X AS INTEGER
    DIM Y AS INTEGER
    DIM Index AS INTEGER
    DIM Minor(11) AS SINGLE
    DIM Adjoint(11) AS SINGLE
    DIM AS SINGLE Determinant = Source[0] * (Source[5] * Source[10] - Source[9] * Source[6]) - _
                                Source[4] * (Source[1] * Source[10] - Source[9] * Source[2]) + _
                                Source[8] * (Source[1] * Source[6] - Source[5] * Source[2])
    DIM AS SINGLE DetRec = 1.0 / Determinant

    Minor(0) = Source[5] * Source[10] - Source[9] * Source[6]
    Minor(1) = Source[4] * Source[10] - Source[8] * Source[6]
    Minor(2) = Source[4] * Source[9] - Source[8] * Source[5]
    Minor(4) = Source[1] * Source[10] - Source[9] * Source[2]
    Minor(5) = Source[0] * Source[10] - Source[8] * Source[2]
    Minor(6) = Source[0] * Source[9] - Source[8] * Source[1]
    Minor(8) = Source[1] * Source[6] - Source[5] * Source[2]
    Minor(9) = Source[0] * Source[6] - Source[4] * Source[2]
    Minor(10) = Source[0] * Source[5] - Source[4] * Source[1]

    'Shouldn't I be multiplying each of these by DetRec and...
    Adjoint(0) = Minor(0)
    Adjoint(1) = -Minor(4)
    Adjoint(2) = Minor(8)
    Adjoint(4) = -Minor(1)
    Adjoint(5) = Minor(5)
    Adjoint(6) = -Minor(9)
    Adjoint(8) = Minor(2)
    Adjoint(9) = -Minor(6)
    Adjoint(10) = Minor(10)

    '...getting rid of these loops?
    FOR Y = 0 TO 2
        FOR X = 0 TO 2
            Index = Y * 4 + X
            Dest[Index] = DetRec * Adjoint(Index)
        NEXT X
    NEXT Y

    Dest[3] = Source[3]
    Dest[7] = Source[7]
    Dest[11] = Source[11]
END SUB

SteveDeFacto    109
Quote:
Original post by coderx75
Here's a BASIC implementation used for inverting an ODE matrix for rendering in OpenGL.
*** Source Snippet Removed ***


I'm a little confused about what is going on in that function...

coderx75    435
Quote:
Original post by SteveDeFacto
Quote:
Original post by coderx75
Here's a BASIC implementation used for inverting an ODE matrix for rendering in OpenGL.
*** Source Snippet Removed ***


I'm a little confused about what is going on in that function...

Yeah, me too. It works though. It takes two pointers to a source and a destination matrix. Here's an example of its use:

DIM AS SINGLE PTR rotation = dBodyGetRotation (BodyID)
DIM AS SINGLE inverse(12)

PhysicsInvertMatrix (@inverse(0), rotation)

*I'm using SINGLE (float in C++) to match the function's parameters. This wouldn't be recommended when using ODE functions (use dReal instead), but that's a different topic.

I painstakingly pieced this function together from what I could glean online about inverse matrices. I've read the proof, but that was two years ago and I have absolutely no idea how this works anymore. There's plenty of info out there if you really want to get into the specifics. But, for all intents and purposes, this does the job.

Kambiz    758
For such problems there is a very nice book called Numerical Recipes. The older edition is available online.

Quote:
Original post by coderx75
Here's a BASIC implementation used for inverting an ODE matrix for rendering in OpenGL.
*** Source Snippet Removed ***


The indices go from 0 to 11 in that algorithm, so it can't compute the inverse of a general 4x4 matrix. It probably assumes that the last row has a known form, maybe 0, 0, 0, 1.

coderx75    435
Quote:
Original post by Kambiz
For such problems there is a very nice book called Numerical Recipes. The older version is available online.

Quote:
Original post by coderx75
Here's a BASIC implementation used for inverting an ODE matrix for rendering in OpenGL.
*** Source Snippet Removed ***


The indices go from 0 to 11 in that algorithm, so it can't compute the inverse of a general 4x4 matrix. It probably assumes that the last row has a known form, maybe 0, 0, 0, 1.

Yes, this is for use with ODE matrices, which are 4x3. For most purposes, a row of 0, 0, 0, 1 for the missing row works.

@kloffy: Excellent paper. I hadn't come across that one before but it explains it better than anything I had read.

bladerunner627    291
MathWorld has a really good article on this: http://mathworld.wolfram.com/MatrixInverse.html

This can be easily solved using Cramer's rule. Find the determinant of the matrix, take its reciprocal, then multiply that by the transposed matrix of cofactors (the adjugate).

Another relevant article: http://en.wikipedia.org/wiki/Cramer's_rule
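To make the recipe concrete at a smaller scale, here's a minimal 2x2 sketch of the same idea (the function name is mine; the 4x4 case is identical, just with 3x3 cofactors instead of single entries):

```cpp
#include <cassert>
#include <cmath>

// Invert a 2x2 matrix [a b; c d] via Cramer's rule:
// inverse = (1/det) * adjugate, where the adjugate is the
// transposed matrix of cofactors.
bool Invert2x2(const float m[4], float out[4])
{
    float det = m[0] * m[3] - m[1] * m[2]; // ad - bc
    if (det == 0.0f)
        return false; // singular: no inverse exists

    float invDet = 1.0f / det;
    out[0] =  m[3] * invDet; //  d / det
    out[1] = -m[1] * invDet; // -b / det
    out[2] = -m[2] * invDet; // -c / det
    out[3] =  m[0] * invDet; //  a / det
    return true;
}
```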

Hope that helps!

sooner123    269
Solving by using the cofactors is actually much harder and more laborious than solving it directly.

Inverting a 4x4 matrix is the same as solving four systems of four equations, one per column of the identity. This can be done easily by appending the identity matrix and converting the left half to reduced row echelon form.

Example:

Find inverse of:

[ a b c d ]
| e f g h |
| i j k l |
[ m n o p ]

1. So append the identity matrix:

[ a b c d | 1 0 0 0 ]
| e f g h | 0 1 0 0 |
| i j k l | 0 0 1 0 |
[ m n o p | 0 0 0 1 ]

2. Convert the left half to the identity matrix:

[ 1 0 0 0 | a' b' c' d' ]
| 0 1 0 0 | e' f' g' h' |
| 0 0 1 0 | i' j' k' l' |
[ 0 0 0 1 | m' n' o' p' ]

The right hand side is your matrix inverse:

[ a' b' c' d' ]
| e' f' g' h' |
| i' j' k' l' |
[ m' n' o' p' ]

Step 2 is the meat and potatoes of your problem, but is far less work than finding the cofactor matrix.
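A sketch of step 2 in code, in C++ to match the other snippets in the thread (the function name is mine, and I've added partial pivoting, which you'd want in practice to avoid dividing by a tiny pivot):

```cpp
#include <cmath>
#include <utility>

// Invert a 4x4 row-major matrix by Gauss-Jordan elimination on the
// augmented matrix [A | I]. Returns false if A is singular.
bool InvertGaussJordan(const float src[16], float dst[16])
{
    // Build the 4x8 augmented matrix [A | I].
    float aug[4][8] = {};
    for (int r = 0; r < 4; ++r) {
        for (int c = 0; c < 4; ++c)
            aug[r][c] = src[r * 4 + c];
        aug[r][4 + r] = 1.0f;
    }

    for (int col = 0; col < 4; ++col) {
        // Partial pivoting: bring up the row with the largest
        // entry in this column.
        int pivot = col;
        for (int r = col + 1; r < 4; ++r)
            if (std::fabs(aug[r][col]) > std::fabs(aug[pivot][col]))
                pivot = r;
        if (std::fabs(aug[pivot][col]) < 1e-12f)
            return false; // singular
        if (pivot != col)
            for (int c = 0; c < 8; ++c)
                std::swap(aug[col][c], aug[pivot][c]);

        // Scale the pivot row so the pivot becomes 1.
        float inv = 1.0f / aug[col][col];
        for (int c = 0; c < 8; ++c)
            aug[col][c] *= inv;

        // Eliminate this column from every other row.
        for (int r = 0; r < 4; ++r) {
            if (r == col) continue;
            float f = aug[r][col];
            for (int c = 0; c < 8; ++c)
                aug[r][c] -= f * aug[col][c];
        }
    }

    // The right half is now the inverse.
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            dst[r * 4 + c] = aug[r][4 + c];
    return true;
}
```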

bladerunner627    291
Quote:
Original post by sooner123
Solving by using the cofactors is actually much harder and more laborious than solving it directly.

Inverting a 4x4 matrix is the same as solving four systems of four equations, one per column of the identity. This can be done easily by appending the identity matrix and converting the left half to reduced row echelon form.


The OP didn't really specify whether this was for implementation or to be done by hand. If doing it by hand, the augmented matrix as you've shown is the easiest, but trying to do the same in code isn't very practical.

Since this is under General Programming I'd assume he wants a method ideal for implementation.

sooner123    269
Quote:
Original post by bladerunner627
The OP didn't really specify whether this was for implementation or to be done by hand. If doing it by hand, the augmented matrix as you've shown is the easiest, but trying to do the same in code isn't very practical.

Since this is under General Programming I'd assume he wants a method ideal for implementation.


Going to have to disagree with you there.

I don't see anything impractical about treating the matrix as a straightforward system of four equations in four unknowns.

It seems to me that it would be much easier to code than dealing with cofactors and transposes.

But in terms of execution time, there's no question that the way I suggested is better than cofactors.

I can't really see a reason for using the cofactor approach other than learning a bit more deeply about linear algebra. Which, don't get me wrong, is a wonderful thing. It's just that the final implementation would be quicker to write and quicker to execute the other way.

bladerunner627    291
I'll admit I've never implemented the routine you've mentioned, but for the very same reason you suggest it over cofactors, it seems more difficult to implement.

Here's my implementation. I've never thoroughly benchmarked it, but it's sufficient for my needs.

i, j, k, and l are the matrix row vectors.


Vec4<T> i, j, k, l;

T Determinant()
{
T x = -l.x*(i.y*(j.z*k.w-j.w*k.z) - i.z*(j.y*k.w-j.w*k.y) + i.w*(j.y*k.z-j.z*k.y));
T y = l.y*(i.x*(j.z*k.w-j.w*k.z) - i.z*(j.x*k.w-j.w*k.x) + i.w*(j.x*k.z-j.z*k.x));
T z = -l.z*(i.x*(j.y*k.w-j.w*k.y) - i.y*(j.x*k.w-j.w*k.x) + i.w*(j.x*k.y-j.y*k.x));
T w = l.w*(i.x*(j.y*k.z-j.z*k.y) - i.y*(j.x*k.z-j.z*k.x) + i.z*(j.x*k.y-j.y*k.x));
return (x + y + z + w);
}

Mat4x4<T> Adjoint()
{
Mat4x4<T> adj;

// Transposed matrix of cofactors
// First row
adj.i.x = (j.y*(k.z*l.w-k.w*l.z) - j.z*(k.y*l.w-k.w*l.y) + j.w*(k.y*l.z-k.z*l.y));
adj.i.y = -(i.y*(k.z*l.w-k.w*l.z) - i.z*(k.y*l.w-k.w*l.y) + i.w*(k.y*l.z-k.z*l.y));
adj.i.z = (i.y*(j.z*l.w-j.w*l.z) - i.z*(j.y*l.w-j.w*l.y) + i.w*(j.y*l.z-j.z*l.y));
adj.i.w = -(i.y*(j.z*k.w-j.w*k.z) - i.z*(j.y*k.w-j.w*k.y) + i.w*(j.y*k.z-j.z*k.y));

// Second row
adj.j.x = -(j.x*(k.z*l.w-k.w*l.z) - j.z*(k.x*l.w-k.w*l.x) + j.w*(k.x*l.z-k.z*l.x));
adj.j.y = (i.x*(k.z*l.w-k.w*l.z) - i.z*(k.x*l.w-k.w*l.x) + i.w*(k.x*l.z-k.z*l.x));
adj.j.z = -(i.x*(j.z*l.w-j.w*l.z) - i.z*(j.x*l.w-j.w*l.x) + i.w*(j.x*l.z-j.z*l.x));
adj.j.w = (i.x*(j.z*k.w-j.w*k.z) - i.z*(j.x*k.w-j.w*k.x) + i.w*(j.x*k.z-j.z*k.x));

// Third row
adj.k.x = (j.x*(k.y*l.w-k.w*l.y) - j.y*(k.x*l.w-k.w*l.x) + j.w*(k.x*l.y-k.y*l.x));
adj.k.y = -(i.x*(k.y*l.w-k.w*l.y) - i.y*(k.x*l.w-k.w*l.x) + i.w*(k.x*l.y-k.y*l.x));
adj.k.z = (i.x*(j.y*l.w-j.w*l.y) - i.y*(j.x*l.w-j.w*l.x) + i.w*(j.x*l.y-j.y*l.x));
adj.k.w = -(i.x*(j.y*k.w-j.w*k.y) - i.y*(j.x*k.w-j.w*k.x) + i.w*(j.x*k.y-j.y*k.x));

// Fourth Row
adj.l.x = -(j.x*(k.y*l.z-k.z*l.y) - j.y*(k.x*l.z-k.z*l.x) + j.z*(k.x*l.y-k.y*l.x));
adj.l.y = (i.x*(k.y*l.z-k.z*l.y) - i.y*(k.x*l.z-k.z*l.x) + i.z*(k.x*l.y-k.y*l.x));
adj.l.z = -(i.x*(j.y*l.z-j.z*l.y) - i.y*(j.x*l.z-j.z*l.x) + i.z*(j.x*l.y-j.y*l.x));
adj.l.w = (i.x*(j.y*k.z-j.z*k.y) - i.y*(j.x*k.z-j.z*k.x) + i.z*(j.x*k.y-j.y*k.x));

return adj;
}

// Inverse using Cramer's rule.
Mat4x4<T> Invert()
{
Mat4x4<T> adj = Adjoint();
T det = Determinant();
T invdet = 1;

// If the determinant is 0 there is no inverse.
// 1/0 == undefined.
if(det != 0)
    invdet = 1/det;

Mat4x4<T> inv;
inv.i = adj.i * invdet;
inv.j = adj.j * invdet;
inv.k = adj.k * invdet;
inv.l = adj.l * invdet;

return inv;
}




There's a lot of room for optimization.

If the OP is going for speed/efficiency I'd recommend looking into LU decomposition.
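For reference, a minimal sketch of the LU route (hypothetical function name; Doolittle factorization without pivoting, so it assumes the pivots stay nonzero, and production code should pivot): factor A = LU once, then solve LUx = e_i for each column of the inverse.

```cpp
#include <cassert>
#include <cmath>

// Invert a 4x4 matrix via Doolittle LU decomposition.
// A = L*U, then solve L*y = e_i and U*x = y for each column i.
bool InvertLU(const double a[4][4], double inv[4][4])
{
    double lu[4][4];
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            lu[i][j] = a[i][j];

    // In-place factorization: U on/above the diagonal,
    // L (unit diagonal) below it.
    for (int k = 0; k < 4; ++k) {
        if (lu[k][k] == 0.0) return false; // would need pivoting
        for (int i = k + 1; i < 4; ++i) {
            lu[i][k] /= lu[k][k];
            for (int j = k + 1; j < 4; ++j)
                lu[i][j] -= lu[i][k] * lu[k][j];
        }
    }

    // One forward + one back substitution per identity column.
    for (int col = 0; col < 4; ++col) {
        double y[4];
        for (int i = 0; i < 4; ++i) {       // L*y = e_col
            y[i] = (i == col) ? 1.0 : 0.0;
            for (int j = 0; j < i; ++j)
                y[i] -= lu[i][j] * y[j];
        }
        for (int i = 3; i >= 0; --i) {      // U*x = y
            double x = y[i];
            for (int j = i + 1; j < 4; ++j)
                x -= lu[i][j] * inv[j][col];
            inv[i][col] = x / lu[i][i];
        }
    }
    return true;
}
```

The payoff is that the O(n^3) factorization is done once and each column only costs two triangular solves, which is why LU tends to beat the cofactor expansion as matrices grow.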

arbitus    440
Quote:
Original post by bladerunner627
I'll admit I've never implemented the routine you've mentioned, but for the very same reason you suggest it over cofactors, it seems more difficult to implement.

Here's my implementation. I've never thoroughly benchmarked it, but it's sufficient for my needs.

i, j, k, and l are the matrix row vectors.

*** Source Snippet Removed ***
There's a lot of room for optimization.

If the OP is going for speed/efficiency I'd recommend looking into LU decomposition.


Take a look at Dave Eberly's approach posted earlier. It is the same as yours, but allows you to reduce the number of operations a bit. As far as I know, it is the method with the fewest mathematical operations while maintaining precision and operating on a generic 4x4 matrix.

Steve132    433
Quote:
Original post by SteveDeFacto
Can someone show me how to do this? I tried to google it but I could not understand anything I read...


More important is WHY you are doing this. If you are just frustrated because you read somewhere that you need to invert a matrix to do graphics, then stop worrying about how it works and use a library like Eigen. It is much less error-prone, and it will work, and work fast.

If you want to actually understand how matrix inversion is implemented so you can understand it better, go to your local library, get a textbook on linear algebra, read it and do the practice problems.
