Quick Way To Invert Matrix

3 comments, last by xycsoscyx 7 years, 8 months ago

I had a look at XMMatrixInverse but don't understand how it works.

It uses a determinant. Who knows what that is? And who can tell me how to use it to compute the inverse matrix?


Hi,

Take a look here for the maths of how to invert a matrix.

However, I wouldn't dig too deep into the XMMatrix* functions. They should be treated as black boxes, and they can and do use intrinsics (SSE-class SIMD instructions) to speed things up. This means that if you do dig deep enough you won't see plain floating-point maths, just a bunch of intrinsic calls that compile down to vector instructions.

Also, XMMatrixInverse and friends work very well and are heavily optimised; don't consider rewriting them unless you have a really good reason!

Hope this helps! :)

It uses a determinant. Who knows what that is?


I do. Think of a 3x3 square matrix as a linear mapping from R^3 to R^3. A linear mapping scales volumes by a fixed factor, and the determinant is exactly that factor.
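To make that concrete, here is a minimal sketch of a 3x3 determinant by cofactor expansion along the first row. This is plain scalar C++, not the XMMatrix* API, and the name det3 is just made up for illustration:

#include <cstdio>

// Determinant of a 3x3 matrix by cofactor expansion along the first row.
// m[r][c] is row r, column c.
double det3(const double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

int main()
{
    // A matrix that scales x by 2 and y by 3: volumes grow by a factor of 6.
    const double m[3][3] = {
        { 2, 0, 0 },
        { 0, 3, 0 },
        { 0, 0, 1 },
    };
    std::printf("det = %f\n", det3(m)); // prints det = 6.000000
}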


And who can tell me how to use it to compute the inverse matrix?


The most obvious connection is Cramer's rule.
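To show what Cramer's rule (the adjugate method) looks like in code, here is a hedged sketch for the 3x3 case, again plain scalar C++ rather than XMMatrixInverse itself, with a made-up helper name invert3: compute the cofactors, transpose them into the adjugate, and divide by the determinant.

#include <cmath>

// Inverse of a 3x3 matrix via the adjugate: inv(A) = adj(A) / det(A),
// where adj(A) is the transposed cofactor matrix (Cramer's rule).
// Returns false if the matrix is (numerically) singular.
bool invert3(const double m[3][3], double out[3][3])
{
    // The first-row cofactors double as the terms of the determinant.
    const double c00 =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]);
    const double c01 = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]);
    const double c02 =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]);

    const double det = m[0][0] * c00 + m[0][1] * c01 + m[0][2] * c02;
    if (std::fabs(det) < 1e-12)
        return false;
    const double inv = 1.0 / det;

    // out[i][j] = cofactor[j][i] / det (note the transpose).
    out[0][0] = c00 * inv;
    out[1][0] = c01 * inv;
    out[2][0] = c02 * inv;
    out[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) * inv;
    out[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) * inv;
    out[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) * inv;
    out[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) * inv;
    out[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) * inv;
    out[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) * inv;
    return true;
}

The same pattern extends to 4x4 (the cofactors become 3x3 determinants), which is essentially what a textbook Cramer's rule inverse does; in practice you would still just call XMMatrixInverse.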

For advanced users: there are other ways in which the determinant is connected to the inverse. One I learned about recently is that the gradient of the determinant (as a function of its n^2 entries) is the determinant times the transpose of the inverse. It turns out you can use that fact together with automatic differentiation to compute the inverse in a really wacky way.
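For reference, the precise statement is Jacobi's formula; in LaTeX:

\frac{\partial \det(A)}{\partial A_{ij}} = \operatorname{adj}(A)_{ji} = \det(A)\,(A^{-1})_{ji},
\qquad \nabla_A \det(A) = \det(A)\,A^{-\mathsf{T}}

So differentiating det(A) (e.g. with automatic differentiation) hands you det(A) * A^(-T); transpose that and divide by det(A) to recover the inverse.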


If you transpose a rotation matrix you get the inverse rotation matrix. Of course, it cannot contain translation/scaling/etc.


If you transpose a rotation matrix you get the inverse rotation matrix. Of course, it cannot contain translation/scaling/etc.

Rather, the transpose of a matrix is equal to its inverse only if the matrix is orthogonal (which a pure rotation matrix is).
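To illustrate the orthogonality condition, here is a small sketch in plain C++ checking that R^T * R is the identity for a rotation matrix R, which is exactly why the transpose works as the inverse:

#include <cmath>
#include <cstdio>

int main()
{
    // Rotation about the z-axis by 30 degrees.
    const double a = 30.0 * 3.14159265358979323846 / 180.0;
    const double r[3][3] = {
        { std::cos(a), -std::sin(a), 0 },
        { std::sin(a),  std::cos(a), 0 },
        { 0,            0,           1 },
    };

    // p = transpose(r) * r; for an orthogonal matrix this is the identity.
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            double p = 0;
            for (int k = 0; k < 3; ++k)
                p += r[k][i] * r[k][j]; // (R^T)[i][k] = R[k][i]
            std::printf("%6.3f ", p);
        }
        std::printf("\n");
    }
    // Prints the 3x3 identity matrix, up to floating-point rounding.
}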

This topic is closed to new replies.
