
Archived

This topic is now archived and is closed to further replies.

Succinct

What's your favorite Arbitrary Matrix Inversion routine?

5 posts in this topic

Here's one I came up with myself using my linear algebra books from college.
  //---------------------------------------------------------------------------

Matrix Matrix::Inverse( void ) const
{
    // INVERSE(M) = ADJUGATE(M)/det(M);


    // step 1 - determine the minor matrices

    float MinorMatrix[4][4][3][3];
    for( int RowMajor = 0; RowMajor < 4; RowMajor++ )
        for( int ColMajor = 0; ColMajor < 4; ColMajor++ )
        {
            int RowMinor = 0;
            for( int i = 0; i < 4; i++ )
                if( i != RowMajor )
                {
                    int ColMinor = 0;
                    for( int j = 0; j < 4; j++ )
                        if( j != ColMajor )
                        {
                            MinorMatrix[RowMajor][ColMajor][RowMinor][ColMinor]
                               = m[i][j];

                            ColMinor++;
                        }

                    RowMinor++;
                }
        }

    // step 2 - calc minors and cofactors

    float Cofactor[4][4];
    for( int i = 0; i < 4; i++ )
        for( int j = 0; j < 4; j++ )
        {
            // alias a pointer into the 3x3 minor matrix arrays

            float (*mm)[3] = MinorMatrix[i][j];

            float Minor =
                mm[0][0]*(mm[1][1]*mm[2][2] - mm[1][2]*mm[2][1])
              - mm[1][0]*(mm[0][1]*mm[2][2] - mm[0][2]*mm[2][1])
              + mm[2][0]*(mm[0][1]*mm[1][2] - mm[0][2]*mm[1][1]);

            Cofactor[i][j] = Minor*(1 - 2*( (i + j) & 1 ) );
        }

    // step 3 - calc determinant

    float Determinant =
        m[0][0]*Cofactor[0][0] + m[1][0]*Cofactor[1][0]
      + m[2][0]*Cofactor[2][0] + m[3][0]*Cofactor[3][0];

    // verify that matrix is non-singular

    if( Determinant == 0.0f )    // note: comparing floats to exactly zero is fragile; a small tolerance is safer
        return Zero;

    // step 4 - calc adjugate matrix

    float Adjugate[4][4];
    for( int i = 0; i < 4; i++ )
        for( int j = 0; j < 4; j++ )
            Adjugate[i][j] = Cofactor[j][i];

    // step 5 - return the inverse

    return Matrix( Adjugate )*(1.0f/Determinant);
}  
Now, I was reading through my book, and it seems there are other, faster ways of doing it, such as Gaussian elimination using pivotal reduction, but I've also read that plain old Gaussian elimination (without pivotal reduction) is even longer. What's your favorite Arbitrary Matrix Inversion routine?

Thank you for your bandwidth.
-- Succinct
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Edited by - Succinct on July 24, 2001 10:04:29 AM

I hate to rain on your parade, but...

It appears that you are using Cramer's rule to do your inverse, with the method of cofactors used to find the determinants. Unfortunately, you've stumbled onto the slowest method possible for finding the inverse of a general N x N matrix. It *is* damned elegant, but it ain't fast at all. I mean, it's not too horribly slow for the 4 x 4 matrix your routine deals with, but it is way slower than Gaussian Elimination and other methods. In general, you should *NEVER* *EVER* use this method! It's only really a decent choice if you're dealing with 3x3 or smaller matrices. Really.

Finding the inverse is similar to solving Ax = b, so let's look at Cramer's rule for solving Ax = b with x and b as vectors, just to see how bad it is.

This method requires that you ultimately evaluate (N+1) determinants of order N. Using the method of cofactors as you are doing, each determinant requires (N! - 1) additions plus the sum from k = 1 to N-1 of (N!/k!) multiplications. For your 4x4 system, the operation count is then:

  
operation count = (4+1)*doc,


where doc is the count of operations to determine one determinant, or:

  
doc = (4*3*2*1-1) + (4*3*2*1)/(1) + (4*3*2*1)/(2*1) + (4*3*2*1)/(3*2*1)

or,

doc = 63 operations *per* determinant


And the total operation count is 5*63 = 315 using your method for a 4 x 4 matrix.
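
If you want to double-check these counts for other sizes, here is a tiny throwaway helper (hypothetical code, not from the original post) that just evaluates the formula above:

#include <cstdio>

// Operations for one N x N determinant by cofactor expansion:
// (N! - 1) additions plus the sum from k = 1 to N-1 of N!/k! multiplications.
double CofactorDeterminantOps( int N )
{
    double NFactorial = 1.0;
    for( int i = 2; i <= N; i++ )
        NFactorial *= i;

    double multiplies = 0.0;
    double kFactorial = 1.0;
    for( int k = 1; k <= N - 1; k++ )
    {
        kFactorial *= k;
        multiplies += NFactorial/kFactorial;
    }

    return ( NFactorial - 1.0 ) + multiplies;
}

int main( void )
{
    // Cramer's rule needs N+1 determinants of order N.
    // For N = 4 this prints 315; for larger N it blows up factorially.
    printf( "4x4 Cramer operation count: %.0f\n", 5.0*CofactorDeterminantOps( 4 ) );
    return 0;
}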

Compare with simple Gaussian elimination, which requires pow(N,3)/3 + pow(N,2) - N/3 multiplies/divides and pow(N,3)/3 + pow(N,2)/2 - 5*N/6 adds/subtracts. The operation count to solve a 4x4 system using simple Gaussian Elimination is then:

  
4*4*4/3 + 4*4 - 4/3 + 4*4*4/3 + 4*4/2 - 5*4/6 = 62 operations.


So simple Gaussian elimination is 5 times faster than your code.

Let's look further at the method you use. If the matrix were a 20 x 20 matrix instead, you'd end up calculating 21 determinants of 20 x 20 matrices. The cost of *each* determinant is roughly 2.4E+18 operations. That would take 73 *YEARS* for *EACH* determinant on a computer that could do 1000 MFLOPS, for a total of 21 * 73 = 1533 years to solve the system once. You might note that a 1 GHz Pentium III can do on the order of 1000 MFLOPS, so your Pentium III would take more than one and a half millennia to solve the system using Cramer's rule. Gaussian elimination, on the other hand, could solve the same system in 5910 operations, or about six millionths of a second on the same machine.

So, Gaussian Elimination is quite significantly faster than your method, even for a 4x4 matrix. GE with pivoting or partial pivoting is more robust than standard GE, though standard GE is a bit faster.
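
To make the comparison concrete, here is a minimal sketch of elimination with partial pivoting, carried all the way to the inverse by reducing [A | I] to [I | inv(A)]. It uses plain arrays rather than Succinct's Matrix class, the function name is made up for this post, and it destroys its input copy of m:

#include <cmath>

bool InvertGaussJordan( float m[4][4], float inv[4][4] )
{
    // start the output as the identity matrix
    for( int i = 0; i < 4; i++ )
        for( int j = 0; j < 4; j++ )
            inv[i][j] = ( i == j ) ? 1.0f : 0.0f;

    for( int col = 0; col < 4; col++ )
    {
        // partial pivoting: use the row with the largest magnitude in this column
        int pivot = col;
        for( int row = col + 1; row < 4; row++ )
            if( fabsf( m[row][col] ) > fabsf( m[pivot][col] ) )
                pivot = row;

        if( fabsf( m[pivot][col] ) < 1e-6f )
            return false;                        // singular (or nearly so)

        if( pivot != col )
            for( int j = 0; j < 4; j++ )
            {
                float t;
                t = m[col][j];   m[col][j]   = m[pivot][j];   m[pivot][j]   = t;
                t = inv[col][j]; inv[col][j] = inv[pivot][j]; inv[pivot][j] = t;
            }

        // scale the pivot row so the pivot element becomes 1
        float scale = 1.0f/m[col][col];
        for( int j = 0; j < 4; j++ )
        {
            m[col][j]   *= scale;
            inv[col][j] *= scale;
        }

        // eliminate this column from every other row
        for( int row = 0; row < 4; row++ )
            if( row != col )
            {
                float factor = m[row][col];
                for( int j = 0; j < 4; j++ )
                {
                    m[row][j]   -= factor*m[col][j];
                    inv[row][j] -= factor*inv[col][j];
                }
            }
    }

    return true;
}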

There are other well-established approaches too, such as LU decomposition (essentially Gaussian Elimination reorganized so the factorization can be reused) and QR factorization (more work, but more numerically stable).

I hope you're not upset that I point this out to you. It's only in the interest of education.

Question. Why invert the matrix when you don't have to? And my answer is, *don't* invert if you don't have to.

If you're solving a system Ax = b, don't invert. You don't have to, plus it's slow and potentially unstable due to floating point roundoff. One faster way is to do an LU decomposition on A and apply it to b.
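
As a rough sketch of that idea (Doolittle-style LU with no pivoting to keep it short; real code should pivot, and the name SolveLU is made up for this post):

bool SolveLU( const float A[4][4], const float b[4], float x[4] )
{
    // copy A; in practice you would factor once and keep LU around
    // to solve for many different b vectors
    float LU[4][4];
    for( int i = 0; i < 4; i++ )
        for( int j = 0; j < 4; j++ )
            LU[i][j] = A[i][j];

    // factor: unit-lower L and upper U share the LU storage
    for( int k = 0; k < 4; k++ )
    {
        if( LU[k][k] == 0.0f )
            return false;                   // needs pivoting, or A is singular

        for( int i = k + 1; i < 4; i++ )
        {
            LU[i][k] /= LU[k][k];           // multiplier, stored in L's slot
            for( int j = k + 1; j < 4; j++ )
                LU[i][j] -= LU[i][k]*LU[k][j];
        }
    }

    // forward substitution: L*y = b (y overwrites x)
    for( int i = 0; i < 4; i++ )
    {
        x[i] = b[i];
        for( int j = 0; j < i; j++ )
            x[i] -= LU[i][j]*x[j];
    }

    // back substitution: U*x = y
    for( int i = 3; i >= 0; i-- )
    {
        for( int j = i + 1; j < 4; j++ )
            x[i] -= LU[i][j]*x[j];
        x[i] /= LU[i][i];
    }

    return true;
}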

If you are dealing with transformation matrices, the inverse can be found directly after you break the matrix into rotation, scale, and translate components. I haven't counted operations for this approach, but it's more intuitive than using a generic algorithm to find the inverse, and although possibly slower than GE, it should certainly still be way cheaper than Cramer's rule. The inverse of a pure rotation matrix (a 3x3 matrix), for example, is simply the transpose of the original 3x3 rotation matrix, so no computation at all is required. (More work is required for a full 4 x 4 matrix with rotation, scaling, and translation.)
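
For example, a minimal sketch of the rotation-plus-translation case (assuming column vectors, translation in the fourth column, and no scaling or shear; the function name is hypothetical):

void InvertRigidTransform( const float m[4][4], float inv[4][4] )
{
    // inverse of [ R  t ]  is  [ R^T  -R^T*t ]
    //            [ 0  1 ]      [ 0     1     ]

    // transpose the 3x3 rotation block
    for( int i = 0; i < 3; i++ )
        for( int j = 0; j < 3; j++ )
            inv[i][j] = m[j][i];

    // new translation = -R^T * old translation
    for( int i = 0; i < 3; i++ )
        inv[i][3] = -( inv[i][0]*m[0][3] + inv[i][1]*m[1][3] + inv[i][2]*m[2][3] );

    // bottom row stays (0, 0, 0, 1)
    inv[3][0] = inv[3][1] = inv[3][2] = 0.0f;
    inv[3][3] = 1.0f;
}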

Some references for you:

Gene Golub and Charles van Loan: "Matrix Computations"
Steven Leon: "Linear Algebra with Applications"
For transformation matrices specifically, the OpenGL red book is pretty good (see the appendices), and David Eberly's "3D Game Engine Design" has information on inverting transformation matrices, although the book is difficult to read.

The Steven Leon reference is more elementary, and has some straightforward discussion of operation counts and computational expense.


Graham Rhodes
Senior Scientist
Applied Research Associates, Inc.

I've always liked Gauss-Jordan elimination, since it can be used to solve a large variety of problems depending on the solution, or the reason for the lack thereof. You use GJE in LU decomposition if I remember correctly.

Magmai Kai Holmlor
- Not For Rent

In 3D you can just take the outer product of pairs of rows or columns and scale by the determinant (which is just the inner product of one of these results with the other row). Elegant and fast on a machine which accelerates common vector operations.

So in 4D I just used the 4D analog of vector calculus, in particular the outer and inner products of 4D Clifford/geometric algebra, in a similar way. As I left it, it was not especially fast, but it could easily be optimised to do the highly parallel products on any available vector unit.
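
A minimal sketch of the 3D version of that idea (the cross products of pairs of rows become the columns of the inverse, scaled by the triple-product determinant; plain arrays and hypothetical names, not johnb's actual code):

#include <cmath>

static void Cross3( const float a[3], const float b[3], float out[3] )
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

bool Invert3x3( const float m[3][3], float inv[3][3] )
{
    float c0[3], c1[3], c2[3];
    Cross3( m[1], m[2], c0 );   // row1 x row2
    Cross3( m[2], m[0], c1 );   // row2 x row0
    Cross3( m[0], m[1], c2 );   // row0 x row1

    // determinant is the triple product row0 . (row1 x row2)
    float det = m[0][0]*c0[0] + m[0][1]*c0[1] + m[0][2]*c0[2];
    if( fabsf( det ) < 1e-6f )
        return false;

    float s = 1.0f/det;
    for( int i = 0; i < 3; i++ )
    {
        inv[i][0] = c0[i]*s;    // c0, c1, c2 become the columns of the inverse
        inv[i][1] = c1[i]*s;
        inv[i][2] = c2[i]*s;
    }
    return true;
}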

You make a good point, johnb. Machines that vectorize certain operations can make normally slow methods appealing.

Graham Rhodes
Senior Scientist
Applied Research Associates, Inc.

I'm glad we have moderators like you guys around here, I'll tell ya! Senior Scientist? Now that's a title!

Well, I have to say, that was exactly the kind of reply I'd hoped for. From what I'd read, Gaussian elimination with (partial) pivotal reduction would be orders of magnitude faster. I read that after doing mine, though, so I just stuck it out.

I honestly just wanted to see if I could do it. I've tried it before, when I was doing completely unaccelerated, 100% software raytracers. I just didn't have enough math resources to figure it out though.

I'm not planning on using my matrix stuff for rendering. I'm using OpenGL for that. I'll need it for picking and culling, yes, but there's not much inversion going on there, so optimizing for rendering isn't going to be completely applicable. I was planning on using my matrix routines to -eventually- implement a physics engine. I've only got honors high school physics from a couple of years ago under my belt, so I'm a little rusty and a little under-educated, but that's exactly how I started out with programming and graphics a couple of years ago. I don't know how much use a 4D square matrix will be for physics, but I just wanted to see if I could do it.

I could. Not the best way, and I thank you guys for your analysis and recommendations for making it better, but, still, I figured it out with no schooling (I didn't take the Linear Algebra class the book came from; it's actually a friend's), so I'm pretty content.

Now I'm going to work on making it faster.

Vectorizing it was a neat concept when I first heard about it. Have you guys used the union trick yet? You can union the float[16] with four 4D vectors (Right, Up, In, Position) and access the matrix via named variables. It really gives meaning to Vector( m[0],m[4],m[8],m[12] )!
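
Something like the following sketch, assuming a simple Vector with x, y, z, w members and OpenGL-style column-major storage (the names here are illustrative, not Succinct's actual class, and anonymous structs inside a union are a common compiler extension rather than strict standard C++):

struct Vector { float x, y, z, w; };

struct MatrixLayout
{
    union
    {
        float m[16];          // raw OpenGL-style column-major storage
        struct
        {
            Vector Right;     // m[0]..m[3]
            Vector Up;        // m[4]..m[7]
            Vector In;        // m[8]..m[11]
            Vector Position;  // m[12]..m[15]
        };
    };
};

// usage: mat.Position.x refers to the same float as mat.m[12]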

Thank all of you for your bandwidth,
-- Succinct

Edited by - Succinct on July 25, 2001 2:34:20 PM