Quak

matrix inverse


Hi, I tried to code a function that returns an adjoint matrix, in order to use the determinant method to compute the inverse of a 4x4 matrix. I planned everything on paper and then wrote the function, which looks like this:
matrix4 matrix4::Adjoint( void ) const
{
	float det01 = z.z * w.w - w.z * z.w;
	float det02 = z.y * w.w - w.y * z.w;
	float det03 = z.y * w.z - w.y * z.z;
	float det04 = z.x * w.w - w.x * z.w;
	float det05 = z.x * w.z - w.x * z.z;
	float det06 = z.x * w.y - w.x * z.y;
	float det07 = y.z * w.w - w.z * y.w;
	float det08 = y.y * w.w - w.y * y.w;
	float det09 = y.y * w.z - w.y * y.z;
	float det10 = y.x * w.w - w.x * y.w;
	float det11 = y.x * w.z - w.x * y.z;
	float det12 = y.x * w.y - w.x * y.y;
	float det13 = y.z * z.w - z.z * y.w;
	float det14 = y.y * z.w - z.y * y.w;
	float det15 = y.y * z.z - z.y * y.z;
	float det16 = y.x * z.w - z.x * y.w;
	float det17 = y.x * z.z - z.x * y.z;
	float det18 = y.x * z.y - z.x * y.y;

	return matrix4( (y.y * det01 - y.z * det02 + y.w * det03),
				   -(y.x * det01 - y.z * det04 + y.w * det05),
				    (y.x * det02 - y.y * det04 + y.w * det06),
				   -(y.x * det03 - y.y * det05 + y.z * det06),
				   -(x.y * det01 - x.z * det02 + x.w * det03),
				    (x.x * det01 - x.z * det04 + x.w * det05),
				   -(x.x * det02 - x.y * det04 + x.w * det06),
				    (x.x * det03 - x.y * det05 + x.z * det06),
				    (x.y * det07 - x.z * det08 + x.w * det09),
				   -(x.x * det07 - x.z * det10 + x.w * det11),
				    (x.x * det08 - x.y * det10 + x.w * det12),
				   -(x.x * det09 - x.y * det11 + x.z * det12),
				   -(x.y * det13 - x.z * det14 + x.w * det15),
				    (x.x * det13 - x.z * det16 + x.w * det17),
				   -(x.x * det14 - x.y * det16 + x.w * det18),
				    (x.x * det15 - x.y * det17 + x.z * det18) );
}

But it doesn't work: if I multiply the matrix with its inverse (computed using the preceding adjoint function) I don't get the identity matrix back. I double-checked everything but could not find my mistake. I think I may have done something wrong with the + and - signs. Thanks for your help, Quak

It would be difficult (for me at least) to pinpoint the error in your code without some examination. So I'll just ask a couple of questions:

1. Are you computing and dividing through by the determinant correctly in the inverse function itself?

2. Did you remember to transpose the cofactor matrix?

3. As you noted, are the signs correct? As you probably already know, the sign is positive when the sum of the row and column indices is even, and negative when it is odd.

4. Are you not recognizing the result as identity because you're getting very small numbers in scientific notation rather than zeros? (Don't laugh - I've made that mistake.)

Also, making the adjoint a separate function adds some unnecessary work. All the information needed to compute the determinant is computed in the process of finding the adjoint, so IMO it's better to do it all in one function.
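For what it's worth, here's a minimal sketch of that idea, using plain float arrays rather than the poster's matrix4 class, and shown for 2x2 to keep it short (the function name and signature are made up for illustration). The cofactors of one row serve double duty: dotted with that row they give the determinant, and they also form one row of the cofactor matrix, so the whole inverse falls out of a single pass.

```cpp
// Hedged sketch (illustrative, not the poster's matrix4 class): invert a
// 2x2 matrix in one pass, reusing the row-0 cofactors both for the
// determinant and for the adjoint. The same pattern scales to 4x4.
bool invert2x2(const float m[2][2], float out[2][2])
{
    // Cofactors of row 0 (sign is + when row+col is even, - when odd).
    float c00 =  m[1][1];
    float c01 = -m[1][0];

    // Expansion along row 0 gives the determinant for free.
    float det = m[0][0] * c00 + m[0][1] * c01;
    if (det == 0.0f)
        return false;  // singular matrix, no inverse
    float inv = 1.0f / det;

    // inverse = transpose of the cofactor matrix, divided by det.
    out[0][0] =  m[1][1] * inv;
    out[0][1] = -m[0][1] * inv;
    out[1][0] = -m[1][0] * inv;
    out[1][1] =  m[0][0] * inv;
    return true;
}
```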

Guest Anonymous Poster
It's usually much faster to set up an augmented 4x8 matrix, the first 4x4 being A and the second 4x4 being the identity matrix.

Then use Gauss-Jordan elimination; the first 4x4 becomes identity, the second A^-1.
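A sketch of the augmented-matrix approach described above, using plain float arrays and partial pivoting for stability (the array layout and function name are illustrative, not from the thread):

```cpp
#include <algorithm>  // std::swap
#include <cmath>      // std::fabs

// Hedged sketch of Gauss-Jordan inversion: reduce [A | I] so that A
// becomes the identity and the right half becomes A^-1. Note that the
// input matrix 'a' is destroyed in the process. Returns false when the
// matrix is (numerically) singular.
bool invertGaussJordan(float a[4][4], float inv[4][4])
{
    // Start the right half of the augmented matrix as the identity.
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            inv[i][j] = (i == j) ? 1.0f : 0.0f;

    for (int col = 0; col < 4; ++col) {
        // Partial pivoting: pick the row with the largest entry in this column.
        int pivot = col;
        for (int r = col + 1; r < 4; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[pivot][col]))
                pivot = r;
        if (std::fabs(a[pivot][col]) < 1e-12f)
            return false;  // singular matrix
        for (int j = 0; j < 4; ++j) {
            std::swap(a[col][j], a[pivot][j]);
            std::swap(inv[col][j], inv[pivot][j]);
        }

        // Scale the pivot row so the pivot element becomes 1.
        float s = 1.0f / a[col][col];
        for (int j = 0; j < 4; ++j) { a[col][j] *= s; inv[col][j] *= s; }

        // Eliminate this column from every other row.
        for (int r = 0; r < 4; ++r) {
            if (r == col) continue;
            float f = a[r][col];
            for (int j = 0; j < 4; ++j) {
                a[r][j]   -= f * a[col][j];
                inv[r][j] -= f * inv[col][j];
            }
        }
    }
    return true;
}
```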

Or just use Intel's code (there's code for Gaussian elimination and for Cramer's rule).

They say Cramer's rule is faster (for a 4x4 matrix), which is quite believable given that branching is slow. Cramer's rule does have bad performance for bigger matrices (it has a time complexity of O(n!)), but 4x4 is still quite small.

edit: texts like that are a great source of confusion.
The values in the table are computed by taking this O(n!) literally, for example
(1E-12) * 3! = 6E-12
(1E-12) * 4! = 2.4E-11
and so on. It's easy to see that this is just a misunderstanding of O notation, i.e. not considering the constants in the time-complexity expression.
edit: more on Gaussian elimination
(also, google it.)
edit: also, your inverse doesn't look like it'll give the inverse.

[Edited by - Dmytry on May 28, 2005 8:35:01 AM]

Sorry if I'm asking the obvious, but are you just multiplying what matrix4::Adjoint() returns with the original matrix as-is to obtain the inverse? Something like m_inv = m * m.Adjoint()? I was not very clear on that.

Or are you doing adj(m) / |m| -> m^-1? You have to remember to further divide the adjoint matrix of m by the determinant of m to get the inverse matrix of m. I just thought I'd point that out, even though it seems obvious.

Sorry for wasting your time if such was not the case.
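One way to see why that division matters: for any square matrix A, the identity A * adj(A) = det(A) * I holds, so multiplying by the raw adjoint gives the identity scaled by det(A) rather than the identity itself. That makes the two mistakes easy to tell apart. A tiny 2x2 sketch (plain arrays, hypothetical function name) demonstrates it:

```cpp
// Hedged sketch: compute p = a * adj(a) for a 2x2 matrix. The result
// should be det(a) times the identity, not the identity itself.
void timesAdjoint2x2(const float a[2][2], float p[2][2])
{
    // adj of a 2x2: swap the diagonal, negate the off-diagonal.
    float adj[2][2] = {{ a[1][1], -a[0][1]},
                       {-a[1][0],  a[0][0]}};

    // Plain 2x2 matrix multiplication: p = a * adj.
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j) {
            p[i][j] = 0.0f;
            for (int k = 0; k < 2; ++k)
                p[i][j] += a[i][k] * adj[k][j];
        }
}
```

For a = [[3, 1], [2, 5]] (det = 13), the product comes out as [[13, 0], [0, 13]], i.e. 13 * I.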

Hi,
thanks for your answers.

@jyk:
1. I think so
2. I did
3. I checked again and they seem to be correct
4. No, most of the numbers are > 1

Here is the whole inverse function:


//2x2 sub-determinants
float det01 = z.z * w.w - w.z * z.w;
float det02 = z.y * w.w - w.y * z.w;
float det03 = z.y * w.z - w.y * z.z;
float det04 = z.x * w.w - w.x * z.w;
float det05 = z.x * w.z - w.x * z.z;
float det06 = z.x * w.y - w.x * z.y;
float det07 = y.z * w.w - w.z * y.w;
float det08 = y.y * w.w - w.y * y.w;
float det09 = y.y * w.z - w.y * y.z;
float det10 = y.x * w.w - w.x * y.w;
float det11 = y.x * w.z - w.x * y.z;
float det12 = y.x * w.y - w.x * y.y;
float det13 = y.z * z.w - z.z * y.w;
float det14 = y.y * z.w - z.y * y.w;
float det15 = y.y * z.z - z.y * y.z;
float det16 = y.x * z.w - z.x * y.w;
float det17 = y.x * z.z - z.x * y.z;
float det18 = y.x * z.y - z.x * y.y;
//determinant
float det = x.x * ( y.y * det01 - y.z * det02 + y.w * det03 )
          - x.y * ( y.x * det01 - y.z * det04 + y.w * det05 )
          + x.z * ( y.x * det02 - y.y * det04 + y.w * det06 )
          - x.w * ( y.x * det03 - y.y * det05 + y.z * det06 );
//initialize new matrix in transposed order ( compare with the adjoint function above )
matrix4 tpadjoint(  (y.y * det01 - y.z * det02 + y.w * det03),
                   -(x.y * det01 - x.z * det02 + x.w * det03),
                    (x.y * det07 - x.z * det08 + x.w * det09),
                   -(x.y * det13 - x.z * det14 + x.w * det15),
                   -(y.x * det01 - y.z * det04 + y.w * det05),
                    (x.x * det01 - x.z * det04 + x.w * det05),
                   -(x.x * det07 - x.z * det10 + x.w * det11),
                    (x.x * det13 - x.z * det16 + x.w * det17),
                    (y.x * det02 - y.y * det04 + y.w * det06),
                   -(x.x * det02 - x.y * det04 + x.w * det06),
                    (x.x * det08 - x.y * det10 + x.w * det12),
                   -(x.x * det14 - x.y * det16 + x.w * det18),
                   -(y.x * det03 - y.y * det05 + y.z * det06),
                    (x.x * det03 - x.y * det05 + x.z * det06),
                   -(x.x * det09 - x.y * det11 + x.z * det12),
                    (x.x * det15 - x.y * det17 + x.z * det18) );
//divide the transposed adjoint matrix by the determinant
return tpadjoint / det;






[Edited by - Quak on May 28, 2005 11:56:30 AM]

Strange... if I invert the inverse of my matrix, I get my original matrix back, even though the matrix and its inverse multiplied together don't yield the identity matrix. Any ideas?

Unlikely I suppose, but is there a bug in your multiplication code?

One thing I've done when testing my math library is to find code online that I know works, and then run various matrices through it and through my code, and compare results. Something like that might help you track down the problem...
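Something like this epsilon-based check can serve as the comparison harness; it's a sketch with plain float arrays (not the poster's matrix4 class), and the tolerance is a made-up but typical value:

```cpp
#include <cmath>  // std::fabs

// Hedged sketch: after computing m * m_inverse, verify the product is the
// identity to within a small tolerance. Floating-point round-off means you
// should never test against exact zeros and ones.
bool isIdentity(const float m[4][4], float eps = 1e-4f)
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float expected = (i == j) ? 1.0f : 0.0f;
            if (std::fabs(m[i][j] - expected) > eps)
                return false;  // entry deviates too far from identity
        }
    return true;
}
```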

Here is my code for inverting a matrix using the adjoint matrix. You can compare it with yours and maybe find the error...

//This Invert implementation uses the cofactor algorithm to calculate the inverted matrix
void matrix::Invert()
{
	matrix t;
	t.mat[0][0] = + Minor(0, 0);
	t.mat[0][1] = - Minor(0, 1);
	t.mat[0][2] = + Minor(0, 2);
	t.mat[0][3] = - Minor(0, 3);
	t.mat[1][0] = - Minor(1, 0);
	t.mat[1][1] = + Minor(1, 1);
	t.mat[1][2] = - Minor(1, 2);
	t.mat[1][3] = + Minor(1, 3);
	t.mat[2][0] = + Minor(2, 0);
	t.mat[2][1] = - Minor(2, 1);
	t.mat[2][2] = + Minor(2, 2);
	t.mat[2][3] = - Minor(2, 3);
	t.mat[3][0] = - Minor(3, 0);
	t.mat[3][1] = + Minor(3, 1);
	t.mat[3][2] = - Minor(3, 2);
	t.mat[3][3] = + Minor(3, 3);

	//Calculate the determinant by expanding along the first row
	real d = mat[0][0] * t.mat[0][0] + mat[0][1] * t.mat[0][1]
	       + mat[0][2] * t.mat[0][2] + mat[0][3] * t.mat[0][3];

	t.Transpose();
	t = t / d;

	memcpy(this, &t, sizeof(*this));
}

//Returns the minor of the element M[row][col]
real matrix::Minor(uint row, uint col)
{
	int r = 0, c = 0;
	real m[3][3];

	for(uint i = 0; i < 4; i++)
	{
		if(i == row) continue;
		c = 0;
		for(uint j = 0; j < 4; j++)
		{
			if(j == col) continue;

			m[r][c] = mat[i][j];
			c++;
		}
		r++;
	}

	return m[0][0] * ( m[1][1]*m[2][2] - m[1][2]*m[2][1] ) -
	       m[0][1] * ( m[1][0]*m[2][2] - m[1][2]*m[2][0] ) +
	       m[0][2] * ( m[1][0]*m[2][1] - m[1][1]*m[2][0] );
}


