

Hi, I guess you know the algorithm to invert a matrix based on its adjugate (adjoint) matrix, i.e. the transposed cofactor matrix adj(A):

| det(0,0) -det(1,0)  det(2,0)|
|-det(0,1)  det(1,1) -det(2,1)|
| det(0,2) -det(1,2)  det(2,2)|

A^-1 = 1/det(A) * adj(A)

I have the following matrix

1 4 3 0
0 1 3 0
0 4 1 0
0 4 3 1

and as a result I get for A^-1

-11  8  9   0
  0  1 -3   0
  0 -4  1   0
  0  8  9 -11

A^-1 * A results in

-11   0   0   0
  0 -11   0   0
  0   0 -11   0
  0   0   0 -11

although it should be the identity matrix. Any idea what I am doing wrong? Here is the source code:
//returns the determinant of the whole matrix
inline float32 det() const
{
    int32 i = 0;
    int32 j = 0;
    float32 flResult = 0.0f;
    float32 flRight = 1.0f;
    float32 flLeft = 1.0f;

    for(i = 0; i < 4; i++)
    {
        for(j = 0; j < 4; j++)
        {
            flRight *= mat[j][(i+j)%4];
            flLeft  *= mat[3-j][(i+j)%4];
        }

        flResult += flRight;
        flResult -= flLeft;

        flRight = flLeft = 1.0f;
    }

    return flResult;
}
//returns the determinant of the 3x3 minor obtained by removing row k and column l
inline float32 det(uint32 k, uint32 l) const
{
    int32 i = 0;
    int32 j = 0;
    float32 flResult = 0.0f;
    float32 flRight = 1.0f;
    float32 flLeft = 1.0f;
    const static int32 indices[4][3] =
    {
        {1,2,3},
        {0,2,3},
        {0,1,3},
        {0,1,2}
    };

    for(i = 0; i < 3; i++)
    {
        for(j = 0; j < 3; j++)
        {
            flRight *= mat[indices[k][j  ]][indices[l][(i+j)%3]];
            flLeft  *= mat[indices[k][2-j]][indices[l][(i+j)%3]];
        }

        flResult += flRight;
        flResult -= flLeft;

        flRight = flLeft = 1.0f;
    }

    return flResult;
}
//invert a non-singular matrix with Cramer's rule: det^-1 * adj(M)
inline bool invert()
{
    uint32 i = 0;
    uint32 j = 0;
    float32 m[4][4];
    float32 flSign;

    float32 flInvDet = det();

    //the matrix is not invertible if its determinant is zero
    if( fabs(flInvDet) < EPSILON )
        return false;

    //we need the inverse determinant
    flInvDet = 1.0f / flInvDet;

    for(i = 0; i < 4; i++)
    {
        for(j = 0; j < 4; j++)
        {
            if((i+j)%2 == 0)
                flSign = 1.0f;
            else
                flSign = -1.0f;

            m[i][j] = flSign * flInvDet * det(j,i);
        }
    }

    std::copy(&m[0][0], &m[4][0], &mat[0][0]);
    return true;
}



Just for reference, the result I get for A^-1 is:
[ +1.000 +0.000 +0.000 +0.000 ]
[ -0.727 -0.091 +0.364 -0.727 ]
[ -0.818 +0.273 -0.091 -0.818 ]
[ +0.000 +0.000 +0.000 +1.000 ]
I'm using an inverse function that hasn't been tested that thoroughly, but the product of this matrix and A does appear to be I. I didn't look at the code, but this particular algorithm can be somewhat difficult to debug :-) The one thing I'd suggest is to get a hold of an implementation you can be confident works, so you have something to compare your results to.
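To have a "known-good" implementation to compare against, a minimal Gauss-Jordan inverse with partial pivoting can serve as the reference. This is a sketch, not the original poster's code; the names `Mat4` and `invertReference` are made up for illustration:

```cpp
#include <array>
#include <cmath>
#include <utility>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Gauss-Jordan elimination with partial pivoting; returns false if singular.
bool invertReference(const Mat4& in, Mat4& out)
{
    Mat4 a = in;
    // start with the identity; it is transformed into the inverse
    out = {};
    for (int i = 0; i < 4; ++i) out[i][i] = 1.0;

    for (int col = 0; col < 4; ++col)
    {
        // pick the row with the largest pivot for numerical stability
        int pivot = col;
        for (int r = col + 1; r < 4; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[pivot][col]))
                pivot = r;
        if (std::fabs(a[pivot][col]) < 1e-12)
            return false; // singular
        std::swap(a[col], a[pivot]);
        std::swap(out[col], out[pivot]);

        // scale the pivot row so the pivot becomes 1
        double inv = 1.0 / a[col][col];
        for (int c = 0; c < 4; ++c) { a[col][c] *= inv; out[col][c] *= inv; }

        // eliminate this column from every other row
        for (int r = 0; r < 4; ++r)
        {
            if (r == col) continue;
            double f = a[r][col];
            for (int c = 0; c < 4; ++c)
            {
                a[r][c] -= f * a[col][c];
                out[r][c] -= f * out[col][c];
            }
        }
    }
    return true;
}
```

Running this on the matrix from the opening post reproduces the inverse above divided by det(A) = -11, and multiplying the result back by A gives the identity.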

well, for the example you gave, det(A) is -11, so it seems likely you forgot to divide by det(A)...

edit: it seems you do multiply by "flInvDet"... strange.

PS. You may be interested to know that using the LU method for inversion is about 5 times faster (not sure if that holds only for big matrices)

You mean the Gaussian elimination algorithm where you transform the matrix to the identity and, in parallel, transform the identity to inv(A)?

You could also transform the matrix to an upper triangular matrix and get the determinant as the product of the diagonal elements; I'll probably implement this as well, with flInvDet = 1/det. I'll play around with it
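The "reduce to upper triangular, multiply the diagonal" idea can be sketched like this (negating the determinant once per row swap); `det4` is an illustrative name, not part of the class above:

```cpp
#include <cmath>
#include <utility>

// determinant via row reduction to upper triangular form:
// det(A) is the product of the diagonal, sign-flipped once per row swap
double det4(const double m[4][4])
{
    // work on a copy so the input stays untouched
    double a[4][4];
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            a[i][j] = m[i][j];

    double det = 1.0;
    for (int col = 0; col < 4; ++col)
    {
        // partial pivoting: largest entry in this column
        int pivot = col;
        for (int r = col + 1; r < 4; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[pivot][col]))
                pivot = r;
        if (std::fabs(a[pivot][col]) < 1e-12)
            return 0.0; // singular
        if (pivot != col)
        {
            for (int c = 0; c < 4; ++c)
                std::swap(a[col][c], a[pivot][c]);
            det = -det; // each row swap flips the sign
        }

        det *= a[col][col];
        // eliminate everything below the pivot
        for (int r = col + 1; r < 4; ++r)
        {
            double f = a[r][col] / a[col][col];
            for (int c = col; c < 4; ++c)
                a[r][c] -= f * a[col][c];
        }
    }
    return det;
}
```

For the matrix in the opening post this yields -11, matching the cofactor calculation.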

My determinant calculation was wrong; the way I implemented it works for 3x3 matrices only (you know, the rule where you add up the products of the diagonals from top left to bottom right and subtract the products of the diagonals from bottom left to top right)

Do you know if there's an algorithm that does not have to store the indices of the minor matrices during the recursive calculation, for an arbitrary-size square matrix?
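One way to avoid the static index tables is to build each minor on the fly while recursing, skipping the removed column with a simple comparison. A sketch for an arbitrary n x n matrix, using `std::vector` instead of the fixed-size `mat` (the name `detN` is made up):

```cpp
#include <cmath>
#include <vector>

// recursive Laplace (cofactor) expansion along the first row;
// the submatrix indices are computed on the fly, no lookup table needed
double detN(const std::vector<std::vector<double>>& m)
{
    const size_t n = m.size();
    if (n == 1)
        return m[0][0];

    double result = 0.0;
    double sign = 1.0;
    for (size_t col = 0; col < n; ++col)
    {
        // build the minor that drops row 0 and the current column
        std::vector<std::vector<double>> sub(n - 1, std::vector<double>(n - 1));
        for (size_t r = 1; r < n; ++r)
        {
            size_t sc = 0;
            for (size_t c = 0; c < n; ++c)
            {
                if (c == col) continue; // skip the removed column
                sub[r - 1][sc++] = m[r][c];
            }
        }
        result += sign * m[0][col] * detN(sub);
        sign = -sign; // alternating cofactor signs
    }
    return result;
}
```

This is O(n!) like any cofactor expansion, so it is only reasonable for small n, but it generalizes to any size without precomputed index arrays.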

note - the way you calculated the determinant (3 diagonals minus 3 diagonals) is simpler for a human but takes more calculations than necessary. There are a total of 18 multiplications and 6 additions in your method versus 9 multiplications and 6 additions in the "regular" method (recursively calculating the determinants of the minors) for a 3x3 matrix.

Even the "regular" method is too slow for large matrices - it is O(n!), which is horrible for big n. There is a faster algorithm using LU decomposition (O(n^3)), and maybe there are even faster methods I am not aware of. Perhaps for 3x3 or 4x4 matrices the regular method is faster.

The same goes for the algorithm you are using to invert - it is simple for humans, but takes more calculations than necessary and is less stable (the numerical errors are big).

If I am not mistaken, the "proper" way to invert a matrix of arbitrary size is by LU or LDU decomposition. These methods are faster (only about 20% of the operations of the "human" method, Gaussian elimination) and are more numerically stable (less dividing by small numbers). Again, perhaps for small matrices Gaussian elimination is faster, I am not sure, but remember that stability is sometimes more important than speed (what good is a fast calculation if you have big errors?).

PS. LU means writing the matrix A as L*U, where L is a lower triangular matrix and U is upper triangular; LDU is a lower unit triangular matrix times a diagonal matrix times an upper unit triangular matrix. The algorithm to do that is not hard, and once you have the decomposition, calculating the determinant or the inverse is simple.
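A minimal Doolittle-style LU decomposition looks like the sketch below. It has no pivoting, so it assumes nonzero pivots along the way (a real implementation would permute rows); `luDecompose` is an illustrative name. Since L has a unit diagonal, det(A) falls out as the product of U's diagonal:

```cpp
#include <cmath>

// Doolittle LU decomposition without pivoting: A = L*U,
// L lower triangular with unit diagonal, U upper triangular.
// Returns false if a zero pivot is hit (would need row pivoting).
bool luDecompose(const double a[4][4], double l[4][4], double u[4][4])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
        {
            l[i][j] = (i == j) ? 1.0 : 0.0;
            u[i][j] = 0.0;
        }

    for (int i = 0; i < 4; ++i)
    {
        // row i of U
        for (int j = i; j < 4; ++j)
        {
            double sum = 0.0;
            for (int k = 0; k < i; ++k) sum += l[i][k] * u[k][j];
            u[i][j] = a[i][j] - sum;
        }
        if (std::fabs(u[i][i]) < 1e-12)
            return false; // zero pivot, pivoting would be required

        // column i of L
        for (int j = i + 1; j < 4; ++j)
        {
            double sum = 0.0;
            for (int k = 0; k < i; ++k) sum += l[j][k] * u[k][i];
            l[j][i] = (a[j][i] - sum) / u[i][i];
        }
    }
    return true;
}
```

For the matrix in the opening post, U's diagonal comes out as (1, 1, -11, 1), so det(A) = -11 again; inverting then reduces to two triangular solves per column of the identity.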

These are methods you don't learn in Linear Algebra because they are weirder than the ordinary inversion algorithm and are harder to explain, but you learn them in numerical analysis or similar courses (where you learn to do efficient math on a computer).

sorry I can't help any more...
