Matrix Multiplication Order



To me, that looks like you need to change your multiplication nesting.

Matrix multiplication is transitive, so the nesting cannot be wrong.

After sleeping and resetting my brain, I don't really know how I came up with my answer.

That said, don't you mean matrix multiplication is associative? X(YZ) = (XY)Z



That said, don't you mean matrix multiplication is associative? X(YZ) = (XY)Z

Yes, of course. My mistake.

Regarding compiler optimizations, it's the same across the CPU and GPU -- where the GPU can pack one column per register to optimize multiplications, the CPU can also pack one column per SSE register.
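For example, here's a minimal sketch (my own, not taken from any particular library) of a 4x4-matrix-times-vector multiply where the matrix is stored column-major, so each column loads straight into one SSE register and the product becomes a sum of scaled columns:

```cpp
#include <xmmintrin.h>

// Hypothetical layout: m[0..3] is column 0, m[4..7] is column 1, and so on.
__m128 MulMat4Vec4_ColMajor(const float* m, __m128 v)
{
    __m128 c0 = _mm_loadu_ps(m + 0);   // column 0
    __m128 c1 = _mm_loadu_ps(m + 4);   // column 1
    __m128 c2 = _mm_loadu_ps(m + 8);   // column 2
    __m128 c3 = _mm_loadu_ps(m + 12);  // column 3

    // Broadcast each component of v, scale the matching column, then sum:
    // result = x*c0 + y*c1 + z*c2 + w*c3
    __m128 x = _mm_shuffle_ps(v, v, _MM_SHUFFLE(0, 0, 0, 0));
    __m128 y = _mm_shuffle_ps(v, v, _MM_SHUFFLE(1, 1, 1, 1));
    __m128 z = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 2, 2, 2));
    __m128 w = _mm_shuffle_ps(v, v, _MM_SHUFFLE(3, 3, 3, 3));

    __m128 r = _mm_mul_ps(c0, x);
    r = _mm_add_ps(r, _mm_mul_ps(c1, y));
    r = _mm_add_ps(r, _mm_mul_ps(c2, z));
    r = _mm_add_ps(r, _mm_mul_ps(c3, w));
    return r;
}
```

With that layout, each load pulls in a whole column, which is the packing described above.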

However, column-major isn't always the most efficient. When you have a vector (Vector4, float4, etc), you can either interpret it as a "column vector" (i.e. a Matrix4x1 - 4 rows and 1 column) or as a "row vector" (i.e. a Matrix1x4 - 1 row and 4 columns).

Depending on which interpretation you use, the correct way to set up your transformation matrices is different, e.g. whether you put the translation in the 4th row or 4th column of a transform matrix.
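Concretely (using my own symbols, not the thread's): with column vectors the translation (tx, ty, tz) sits in the 4th column, and with row vectors the same transform is the transpose, so the translation sits in the 4th row:

```latex
% Column-vector convention: p' = M p, translation in the 4th column
M_{\text{col}} =
\begin{pmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z \\
0      & 0      & 0      & 1
\end{pmatrix},
\qquad
% Row-vector convention: p' = p M, translation in the 4th row
M_{\text{row}} = M_{\text{col}}^{\mathsf{T}} =
\begin{pmatrix}
r_{11} & r_{21} & r_{31} & 0 \\
r_{12} & r_{22} & r_{32} & 0 \\
r_{13} & r_{23} & r_{33} & 0 \\
t_x    & t_y    & t_z    & 1
\end{pmatrix}
```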

If you're treating your vectors as column vectors and you store your matrix values in column-major storage order, the compiler will be able to perform the above-mentioned optimizations.

Likewise if you're treating your vectors as row vectors then the compiler can produce better code if your matrix values are stored in row-major storage order.
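To make the storage-order half of that concrete (a hypothetical flat-array layout, not any specific library's): the same logical element (row, col) lives at a different flat index in each scheme.

```cpp
// Hypothetical 4x4 layout helpers: same 16 floats, different indexing.
inline float ElemColMajor(const float m[16], int row, int col) { return m[col * 4 + row]; }
inline float ElemRowMajor(const float m[16], int row, int col) { return m[row * 4 + col]; }
```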

The choice is arbitrary, but it completely changes the correct order of multiplication when concatenating transform matrices, so it's best not to mix and match which interpretation you're using.
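In symbols (my notation: W = world, V = view, P = projection), the same pipeline is written in opposite orders under the two conventions:

```latex
% Column vectors: transforms apply right-to-left (world first, then view, then projection)
p' = P \, V \, W \, p
% Row vectors: the same pipeline reads left-to-right
p' = p \, W \, V \, P
```

Either line applies the world transform first; only the reading direction changes.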

When multiplying matrices, the number of columns in the left matrix has to match the number of rows in the right matrix. That means you can multiply a 2x3 with a 3x4, but it's not valid to multiply a 3x4 with a 2x3.

So if you've got a 1x3 vector, you can transform it with a 3x4 matrix (with the vector on the left and the matrix on the right only).

And if you've got a 3x1 vector, you can transform it with a 4x3 matrix (with the vector on the right and the matrix on the left only).
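Summarising the dimension rule (standard linear algebra, nothing API-specific):

```latex
(m \times n)(n \times p) \;\to\; m \times p
\qquad\text{e.g.}\qquad
(1 \times 3)(3 \times 4) \to 1 \times 4,
\qquad
(4 \times 3)(3 \times 1) \to 4 \times 1
```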

Typically, mathematicians are taught to use the column-vector convention, where you write your vectors like this:

v = | x |
    | y |
    | z |

...but for some reason, early computer graphics packages often used the row-vector convention, where you'd write v = | x y z |...

In the fixed-function days, GL chose the column-vector convention, whereas D3D chose the row-vector convention, hence D3DX's matrix classes still show this legacy.

These days though, you're free to choose either convention, and to choose either storage layout in GLSL/HLSL (you can even mix it up and use column-vectors and create matrices designed to transform column-vectors, but store them in row-major order).

It's confusing when some researchers use one convention and others use a different one, so these days I think most people are switching over to using column vectors.

Confusingly, you can mix both conventions in the one application, and many D3D applications do do this.

If you use D3DX to generate a matrix that's designed to transform a row vector (and is stored in row-major order), and you copy it over to HLSL (which by default uses column-major storage order), then that mix-up -- specifying the wrong storage convention -- is actually just the same as transposing the matrix.

If you've got a matrix that was designed to transform row-vectors, and you transpose it, you end up with a matrix that's now designed to transform column-vectors (now with the opposite multiplication order required).
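The identity behind that (standard linear algebra, not specific to any API) is:

```latex
(v \, M)^{\mathsf{T}} = M^{\mathsf{T}} \, v^{\mathsf{T}}
```

i.e. a row-vector product, transposed, is exactly the column-vector product against the transposed matrix, with the operands swapped.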

All this cancelling out means that you can be writing CPU-side code that's using one convention (and one correct order of multiplication), but when you send the variables over to the GPU, everything still works except that the correct order of multiplication has changed, because you've basically switched the interpretation of your vectors and silently transposed your matrices.

I wouldn't recommend this, as it's confusing, but many people end up doing this without realizing...

Thanks for the great replies guys!
