Matrix inverse problem
Can anyone help me with why this doesn't work?... The transpose and determinant functions work fine, so it's something with the actual inverse function.
Matrix Matrix::Inverse()
{
    Matrix Cofactors, Adjoint;
    float subMatrix[4][4] = { 0, 0, 0, 0,
                              0, 0, 0, 0,
                              0, 0, 0, 0,
                              0, 0, 0, 0 };
    int row = 0, column = 0;
    for (int r = 0; r < 4; r++)
    {
        for (int c = 0; c < 4; c++)
        {
            for (int xr = 0; xr < 4; xr++)
            {
                if (xr == r)
                    continue;
                for (int xc = 0; xc < 4; xc++)
                {
                    if (xc == c)
                        continue;
                    subMatrix[row][column] = Entry[xr][xc];
                    column++;
                }
                row++;
                column = 0;
            }
            if ((r + c) % 2)
                Cofactors.Entry[r][c] -= Determinant(subMatrix, 3);
            else
                Cofactors.Entry[r][c] += Determinant(subMatrix, 3);
            row = 0;
            column = 0;
        }
    }
    Adjoint = Cofactors.Transpose();
    float Det = Determinant(Entry, 4);
    return Matrix(Adjoint.Entry[0][0]/Det, Adjoint.Entry[0][1]/Det, Adjoint.Entry[0][2]/Det, Adjoint.Entry[0][3]/Det,
                  Adjoint.Entry[1][0]/Det, Adjoint.Entry[1][1]/Det, Adjoint.Entry[1][2]/Det, Adjoint.Entry[1][3]/Det,
                  Adjoint.Entry[2][0]/Det, Adjoint.Entry[2][1]/Det, Adjoint.Entry[2][2]/Det, Adjoint.Entry[2][3]/Det,
                  Adjoint.Entry[3][0]/Det, Adjoint.Entry[3][1]/Det, Adjoint.Entry[3][2]/Det, Adjoint.Entry[3][3]/Det);
}
Matrix Matrix::Transpose()
{
return Matrix(Entry[0][0],Entry[1][0],Entry[2][0],Entry[3][0],
Entry[0][1],Entry[1][1],Entry[2][1],Entry[3][1],
Entry[0][2],Entry[1][2],Entry[2][2],Entry[3][2],
Entry[0][3],Entry[1][3],Entry[2][3],Entry[3][3]);
}
float Determinant(float Matrix[4][4], int Dimen)
{
    if (Dimen == 2)
        return (Matrix[0][0]*Matrix[1][1]) - (Matrix[1][0]*Matrix[0][1]);
    float det = 0;
    int row = 0, column = 0;
    float subMatrix[4][4] = { 0, 0, 0, 0,
                              0, 0, 0, 0,
                              0, 0, 0, 0,
                              0, 0, 0, 0 };
    for (int c = 0; c [the rest of the function was cut off in the original post]
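Since the rest of the posted `Determinant` was cut off, here is a minimal sketch of how a recursive cofactor (Laplace) expansion with that same signature might look. This is a guess at the intended structure for reference, not the author's actual code:

```cpp
#include <cassert>
#include <cmath>

// Sketch only: recursive cofactor expansion along the first row.
// Matches the signature used in the post; the 4x4 array holds a
// Dimen x Dimen matrix in its top-left corner.
float Determinant(float M[4][4], int Dimen)
{
    if (Dimen == 2)
        return M[0][0]*M[1][1] - M[1][0]*M[0][1];

    float det = 0;
    for (int c = 0; c < Dimen; c++)
    {
        // Build the (Dimen-1)x(Dimen-1) minor that excludes row 0 and column c.
        float sub[4][4] = {};
        for (int r = 1; r < Dimen; r++)
        {
            int col = 0;
            for (int k = 0; k < Dimen; k++)
            {
                if (k == c) continue;
                sub[r-1][col++] = M[r][k];
            }
        }
        float term = M[0][c] * Determinant(sub, Dimen - 1);
        det += (c % 2) ? -term : term;   // alternate signs along the row
    }
    return det;
}
```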
Use [ source ] [/ source ] tags (without the spaces) to format the code nicely.
In what way doesn't it work? Does it crash? Does it calculate the wrong answer? How do you know it's wrong, etc.?
Don't know why it doesn't work, but I'm fairly sure that using the adjoint to find the inverse will be very slow. Look into row reduction for finding the inverse.
Unless of course there was a specific reason you used the adjoint, in which case just ignore me
Joanus D'Mentia
---------------
Surely 100,000 lemmings couldn't possibly be wrong!!!
Speed doesn't matter that much since the inverse is only calculated once at the beginning of the program, but what is the row-reduction method, since this isn't working?
well, if you still want to know...
row-reduction (otherwise known as Gaussian elimination) is a method for solving a set of linear equations in the form M*A=B where
M = an RxC matrix of coefficients of the linear equations
A = a column vector of length C that contains the unknowns to solve
B = the RHS of the equations
so as an example, solving x1 and x2 in

A*x1 + B*x2 = E
C*x1 + D*x2 = F

would be the same as solving

| A B | * | x1 |   | E |
| C D |   | x2 | = | F |

which can be written as M*A=B. Solving this for A gives A = M^-1*B, so solving the system of linear equations is done by finding the inverse of M.
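As a concrete, made-up instance of A = M^-1*B for the 2x2 case (the numbers here are chosen purely for illustration), using the closed-form 2x2 inverse:

```cpp
#include <cassert>
#include <cmath>

// Illustrative helper (not from the thread): solves the 2x2 system M*A = B
// by applying the closed-form inverse M^-1 = (1/det) * |  D -B |
//                                                      | -C  A |
void Solve2x2(const float m[2][2], const float b[2], float x[2])
{
    float det = m[0][0]*m[1][1] - m[0][1]*m[1][0];
    x[0] = ( m[1][1]*b[0] - m[0][1]*b[1]) / det;   // x1
    x[1] = (-m[1][0]*b[0] + m[0][0]*b[1]) / det;   // x2
}
```

For example, M = |1 2; 3 4| with B = (5, 11) gives A = (1, 2), since 1*1 + 2*2 = 5 and 3*1 + 4*2 = 11.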
Now, row-reduction works like this. You start with an augmented matrix of M and the identity. (M really doesn't need to be square to do row-reduction, but if it isn't then you don't have N equations and N unknowns and therefore can't solve it.) The augmented matrix looks like this:

| M11 M12 M13 M14 | 1 0 0 0 |
| M21 M22 M23 M24 | 0 1 0 0 |
| M31 M32 M33 M34 | 0 0 1 0 |
| M41 M42 M43 M44 | 0 0 0 1 |
The process of solving this is through a series of row operations. The allowable operations are:
* Addition
* Subtraction
* Multiplication by a scalar
* Division by a scalar
* Swap 2 rows
These work on the entire row, i.e. row 1 would be
| M11 M12 M13 M14 | 1 0 0 0 |
For example, you could take row 1 and add it to row 2, or you could multiply row 1 by 5 and then subtract it from row 2. You keep doing this until you get the identity matrix on the LHS of the augmented matrix, and your inverse of M will be on the RHS. If you can't get the identity on the LHS then there is no inverse for M.
This method really lends itself to programming and is much faster than using the adjoint, by many orders of magnitude for large matrices (large meaning 4x4 and upwards). 3x3 matrices would probably be around the same speed for both methods, at a guess.
The fastest way of programming it is to work from left to right along the diagonal. For each diagonal element, multiply by a scalar to get that element to equal 1. Then subtract multiples of the row that diagonal is in from all of the other rows such that all the other elements in that column are zeroed. Then move on to the next diagonal element and keep doing this until you've done all of the diagonal elements. If you come across a zero on the diagonal then you can swap that row with any row below it (but not above, since the 1s have already been placed on those rows). If all of the rows below it contain zeros in that column as well, then you can't continue and there is no inverse. Another thing you can do is check whether there is already a 1 in one of the elements below the diagonal and just swap those 2 rows to save the multiplication.
Anyway, the worst case scenario for an N*N matrix would be 2*N^2 multiplies and 2*N^2*(N-1) subtractions (I think... I just worked that out then).
And that's row-reduction. Probably wasn't the best explanation around, so if that didn't help then google for "Gaussian elimination", or any book on linear algebra should have it.
Joanus D'Mentia
---------------
The three phases of me...
The twit, the tool and the lonely fool.
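The Gauss-Jordan procedure described in the post above might be sketched like this for a 4x4, as a raw-array version independent of the thread's Matrix class. The 1e-6f pivot threshold is an arbitrary illustrative choice, not something from the post:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Sketch of Gauss-Jordan inversion on a 4x8 augmented array
// (left half: M, right half: the identity). Returns false when no
// usable pivot exists, i.e. M is singular or near-singular.
bool InvertGaussJordan(const float M[4][4], float Inv[4][4])
{
    float aug[4][8] = {};
    for (int r = 0; r < 4; r++)
    {
        for (int c = 0; c < 4; c++) aug[r][c] = M[r][c];
        aug[r][4 + r] = 1.0f;            // identity on the right-hand side
    }

    for (int p = 0; p < 4; p++)          // p walks down the diagonal
    {
        // Find a row at or below p with a usable pivot and swap it in.
        int pivot = p;
        while (pivot < 4 && std::fabs(aug[pivot][p]) < 1e-6f) pivot++;
        if (pivot == 4) return false;    // no pivot: no inverse
        if (pivot != p)
            for (int c = 0; c < 8; c++) std::swap(aug[p][c], aug[pivot][c]);

        // Scale the pivot row so the diagonal element becomes 1.
        float s = 1.0f / aug[p][p];
        for (int c = 0; c < 8; c++) aug[p][c] *= s;

        // Zero out column p in every other row.
        for (int r = 0; r < 4; r++)
        {
            if (r == p) continue;        // don't subtract the row from itself
            float f = aug[r][p];
            for (int c = 0; c < 8; c++) aug[r][c] -= f * aug[p][c];
        }
    }

    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++) Inv[r][c] = aug[r][4 + c];
    return true;
}
```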
How's this?
Matrix Matrix::Inverse()
{
    float AugMatrix[4][8] = { Entry[0][0], Entry[0][1], Entry[0][2], Entry[0][3], 1, 0, 0, 0,
                              Entry[1][0], Entry[1][1], Entry[1][2], Entry[1][3], 0, 1, 0, 0,
                              Entry[2][0], Entry[2][1], Entry[2][2], Entry[2][3], 0, 0, 1, 0,
                              Entry[3][0], Entry[3][1], Entry[3][2], Entry[3][3], 0, 0, 0, 1 };
    float rowBuffer[8];
    for (int rc = 0; rc < 4; rc++)
    {
        if (AugMatrix[rc][rc] == 0)
        {
            for (int s = rc + 1; s < 4; s++)
            {
                if (AugMatrix[s][rc] != 0)
                {
                    memcpy(rowBuffer, &AugMatrix[rc][0], sizeof(float)*8);
                    memcpy(&AugMatrix[rc][0], &AugMatrix[s][0], sizeof(float)*8);
                    memcpy(&AugMatrix[s][0], rowBuffer, sizeof(float)*8);
                    break;
                }
                else if (s == 3)
                    return *this;
            }
        }
        else
        {
            float scalar = 1 / AugMatrix[rc][rc];
            for (int tC = 0; tC < 8; tC++)
                AugMatrix[rc][tC] *= scalar;
            for (int xr = 0; xr < 4; xr++)
            {
                float subscalar = AugMatrix[xr][rc] / AugMatrix[rc][rc];
                for (int xc = 0; xc < 8; xc++)
                    AugMatrix[xr][xc] -= AugMatrix[rc][xc] * subscalar;
            }
        }
    }
    return Matrix(AugMatrix[0][4], AugMatrix[0][5], AugMatrix[0][6], AugMatrix[0][7],
                  AugMatrix[1][4], AugMatrix[1][5], AugMatrix[1][6], AugMatrix[1][7],
                  AugMatrix[2][4], AugMatrix[2][5], AugMatrix[2][6], AugMatrix[2][7],
                  AugMatrix[3][4], AugMatrix[3][5], AugMatrix[3][6], AugMatrix[3][7]);
}
almost....
when xr==rc you'll subtract a multiple of the row from itself and screw things up, so instead use this as the xr for loop
for (int xr = 0; xr < 4; xr++)
{
    float subscalar = AugMatrix[xr][rc] / AugMatrix[rc][rc];
    for (int xc = 0; xc < 8; xc++)
        AugMatrix[xr][xc] -= AugMatrix[rc][xc] * subscalar;
}
for (int xr = 0; xr < 4; xr++)
    if (xr != rc)

apart from that it looks good!

Joanus D'Mentia
---------------
Serving Mars since 3069.
if(AugMatrix[rc][rc]==0)
&

if(AugMatrix[s][rc]!=0)

=> boom!

They'll almost never be exactly zero.
Very true, it's always a problem when the matrix is near-singular. If he's just working with a homogeneous 4x4 then I think it's unlikely that any of the elements would be small enough to cause the inverse to 'explode' to infinity... although you could be unlucky
evil_ash:
If you don't want to risk a floating-point overflow then that's something you'll have to take into consideration... how you deal with near-singular matrices. The easiest way would be to just compare against an epsilon (some small but not ridiculously small number) instead of zero. The problem is, if you are only using a homogeneous 4x4 (as opposed to a general 4x4) then in theory there will always be an inverse... if you can rotate and translate then you can always translate back and rotate back. So what do you do in these cases? Sure, you'll have prevented the overflow from occurring, but you still don't have the inverse that you needed to do that transform.
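A minimal sketch of that epsilon comparison (the threshold value here is illustrative, not something prescribed in the thread):

```cpp
#include <cmath>

// Illustrative pivot check: treat anything inside the epsilon band as zero,
// so a near-singular matrix is rejected instead of dividing by a number
// extremely close to zero. 1e-6f is an arbitrary choice; tune for your data.
bool IsUsablePivot(float pivot)
{
    const float EPSILON = 1e-6f;
    return std::fabs(pivot) >= EPSILON;
}
```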
Also, if you are using a homogeneous 4x4 and only do translates and rotates (i.e. no scales) then you can do it even faster by transposing the 3x3 rotation (this works since rotation matrices are orthogonal) and then multiplying the negative of the translation by this. That way you don't have to calculate the inverse (well you do, but it's just a transpose) and therefore won't have problems with the matrix being near-singular.
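That transpose-plus-negated-translation shortcut might be sketched like this, assuming a row-major matrix with the translation in the last column (the post doesn't specify a convention, so adjust for yours):

```cpp
#include <cassert>
#include <cmath>

// Sketch: inverse of a rigid transform (rotation + translation only).
// If M = [R | t; 0 0 0 1], then M^-1 = [R^T | -R^T*t; 0 0 0 1],
// because rotation matrices are orthogonal (R^-1 == R^T).
void InvertRigid(const float M[4][4], float Inv[4][4])
{
    // Transpose the upper-left 3x3 rotation block.
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            Inv[r][c] = M[c][r];

    // New translation: -R^T * t (t lives in the last column here).
    for (int r = 0; r < 3; r++)
        Inv[r][3] = -(Inv[r][0]*M[0][3] + Inv[r][1]*M[1][3] + Inv[r][2]*M[2][3]);

    // Bottom row stays (0 0 0 1).
    Inv[3][0] = Inv[3][1] = Inv[3][2] = 0.0f;
    Inv[3][3] = 1.0f;
}
```

Note this only holds when the 3x3 block is a pure rotation; any scale or shear breaks the orthogonality assumption.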
Joanus D'Mentia
---------------
Serving Mars since 3069.