#### Archived

This topic is now archived and is closed to further replies.

# Matrices Inverse problem


## Recommended Posts

##### Share on other sites
Use [ source ] [/ source ] tags (without the spaces) to format the code nicely.

In what way doesn't it work? Does it crash? Does it calculate the wrong answer? How do you know it's wrong, etc.?

##### Share on other sites
Don't know why it doesn't work, but I'm fairly sure that using the adjoint to find the inverse will be very slow. Look into row-reduction for finding the inverse.

Unless of course there was a specific reason you used the adjoint, in which case just ignore me

Joanus D'Mentia
---------------
Surely 100,000 lemmings couldn't possibly be wrong!!!

##### Share on other sites
Speed doesn't matter that much since the inverse is only calculated once at the beginning of the program, but what is the row-reduction method, since this isn't working?

##### Share on other sites
I just fixed it yaaayyyy!!!!

[edited by - evil_ash on January 19, 2003 3:31:47 AM]

##### Share on other sites
well, if you still want to know...

row-reduction (otherwise known as Gaussian elimination) is a method for solving a set of linear equations in the form M*A=B where

M = an RxC matrix of coefficients of the linear equations
A = a column vector of length C containing the unknowns to solve for
B = a column vector of length R holding the RHS of the equations

so as an example, solving x1 and x2 in
```
A*x1 + B*x2 = E
C*x1 + D*x2 = F
```

would be the same as solving
```
| A B | * | x1 | = | E |
| C D |   | x2 |   | F |
```

which can be written as M*A=B. Solving this for A gives A = M^-1*B, so solving the system of linear equations is done by finding the inverse of M.

Now, row-reduction works like this. You start with an augmented matrix of M and the identity. (M really doesn't need to be square to do row-reduction, but if it isn't then you don't have N equations and N unknowns and therefore can't solve it.) The augmented matrix looks like this:
```
| M11 M12 M13 M14 | 1 0 0 0 |
| M21 M22 M23 M24 | 0 1 0 0 |
| M31 M32 M33 M34 | 0 0 1 0 |
| M41 M42 M43 M44 | 0 0 0 1 |
```

The process of solving this is through a series of row operations. The allowable operations are:
* Add or subtract one row (or a multiple of it) to/from another
* Multiply a row by a non-zero scalar
* Divide a row by a non-zero scalar
* Swap two rows
These work on the entire row, i.e. row 1 would be
| M11 M12 M13 M14 | 1 0 0 0 |

For example, you could take row 1 and add it to row 2, or you could multiply row 1 by 5 and then subtract it from row 2. You keep doing this until you get the identity matrix on the LHS of the augmented matrix, and your inverse of M will be on the RHS. If you can't get the identity on the LHS then there is no inverse for M.

This method really lends itself to programming and is much faster than using the adjoint, by many orders of magnitude for large matrices (by large meaning 4x4 and upwards). 3x3 matrices would probably be around the same speed for both methods, at a guess.

The fastest way of programming it is to work from left to right along the diagonal. For each diagonal element, multiply its row by a scalar to get that element to equal 1. Then subtract multiples of the row that diagonal is in from all of the other rows such that all the other elements in that column are zeroed. Then move on to the next diagonal element and keep doing this until you've done all of the diagonal elements. If you come across a zero on the diagonal then you can swap its row with any row below it (but not above, since the 1's have already been found on those rows). If all of the rows below it contain zeros in that column as well, then you can't continue and there is no inverse. Another thing you can do is check to see if there is already a 1 in one of the elements below the diagonal and just swap those 2 rows to save the multiplication.

Anyway, the worst-case scenario for an N*N matrix would be 2*N^2 multiplies and 2*N^2*(N-1) subtractions (I think... I just worked that out then).

And that's row-reduction. Probably wasn't the best explanation around, so if that didn't help then Google for "Gaussian elimination", or any book on linear algebra should have it.

Joanus D'Mentia
---------------
The three phases of me...
The twit, the tool and the lonely fool.

##### Share on other sites
How's this?

```cpp
Matrix Matrix::Inverse()
{
    float AugMatrix[4][8]={Entry[0][0],Entry[0][1],Entry[0][2],Entry[0][3],1,0,0,0,
                           Entry[1][0],Entry[1][1],Entry[1][2],Entry[1][3],0,1,0,0,
                           Entry[2][0],Entry[2][1],Entry[2][2],Entry[2][3],0,0,1,0,
                           Entry[3][0],Entry[3][1],Entry[3][2],Entry[3][3],0,0,0,1};
    float rowBuffer[8];
    for(int rc=0;rc<4;rc++)
    {
        if(AugMatrix[rc][rc]==0)
        {
            for(int s=rc+1;s<4;s++)
            {
                if(AugMatrix[s][rc]!=0)
                {
                    memcpy(rowBuffer,&AugMatrix[rc][0],sizeof(float)*8);
                    memcpy(&AugMatrix[rc][0],&AugMatrix[s][0],sizeof(float)*8);
                    memcpy(&AugMatrix[s][0],rowBuffer,sizeof(float)*8);
                    break;
                }
                else if(s==3)
                    return *this;
            }
        }
        else
        {
            float scalar=1/AugMatrix[rc][rc];
            for(int tC=0;tC<8;tC++)
                AugMatrix[rc][tC]*=scalar;
            for(int xr=0;xr<4;xr++)
            {
                float subscalar=AugMatrix[xr][rc]/AugMatrix[rc][rc];
                for(int xc=0;xc<8;xc++)
                    AugMatrix[xr][xc]-=AugMatrix[rc][xc]*subscalar;
            }
        }
    }
    return Matrix(AugMatrix[0][4],AugMatrix[0][5],AugMatrix[0][6],AugMatrix[0][7],
                  AugMatrix[1][4],AugMatrix[1][5],AugMatrix[1][6],AugMatrix[1][7],
                  AugMatrix[2][4],AugMatrix[2][5],AugMatrix[2][6],AugMatrix[2][7],
                  AugMatrix[3][4],AugMatrix[3][5],AugMatrix[3][6],AugMatrix[3][7]);
}
```

[edited by - evil_ash on January 20, 2003 1:55:49 AM]

##### Share on other sites
almost....
```cpp
for(int xr=0;xr<4;xr++)
{
    float subscalar=AugMatrix[xr][rc]/AugMatrix[rc][rc];
    for(int xc=0;xc<8;xc++)
        AugMatrix[xr][xc]-=AugMatrix[rc][xc]*subscalar;
}
```

when xr==rc you'll subtract a multiple of the row from itself and screw things up, so instead use this as the xr for loop:

```cpp
for(int xr=0;xr<4;xr++) if(xr!=rc)
```

apart from that it looks good!

Joanus D'Mentia
---------------
Serving Mars since 3069.

[edited by - joanusdmentia on January 20, 2003 5:26:08 AM]

##### Share on other sites
`if(AugMatrix[rc][rc]==0)`
&
`if(AugMatrix[s][rc]!=0)`

=>

boom!

They'll almost never be exactly zero.

##### Share on other sites
Very true, it's always a problem when the matrix is near-singular. If he's just working with a homogeneous 4x4 then I think it's unlikely that any of the elements would be small enough to cause the inverse to 'explode' to infinity... although you could be unlucky.

evil_ash:
If you don't want to risk a floating-point overflow then that's something you'll have to take into consideration: how you deal with near-singular matrices. The easiest way would be to just compare against an epsilon (some small but not ridiculously small number) instead of zero. The problem is, if you are only using a homogeneous 4x4 (as opposed to a general 4x4), then in theory there will always be an inverse... if you can rotate and translate then you can always translate back and rotate back. So what do you do in these cases? Sure, you'll have prevented the overflow from occurring, but you still don't have the inverse that you needed to do that transform.
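The epsilon comparison could look like this (the value 1e-6f is only an illustrative choice, not a recommendation from this thread; tune it to your data):

```cpp
#include <cmath>

// Compare against a small tolerance instead of exact zero, since floats
// that "should" be zero rarely are after a few operations.
const float EPSILON = 1e-6f;   // illustrative value, not a universal constant

bool isNearZero(float x)
{
    return std::fabs(x) < EPSILON;
}
```

In the pivot search above, `if(AugMatrix[rc][rc]==0)` would become `if(isNearZero(AugMatrix[rc][rc]))`, and `if(AugMatrix[s][rc]!=0)` would become `if(!isNearZero(AugMatrix[s][rc]))`.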

Also, if you are using a homogeneous 4x4 and only do translates and rotates (i.e. no scales) then you can do it even faster by transposing the 3x3 rotation (this works since rotation matrices are orthogonal) and then multiplying the negative of the translation by this. That way you don't have to calculate the inverse (well you do, but it's just a transpose) and therefore won't have problems with the matrix being near-singular.

Joanus D'Mentia
---------------
Serving Mars since 3069.
