Matrix Multiplication Order


I have a question about matrix multiplication order in HLSL. For example, in C++ I do the following:


XMMATRIX matFinal = matWorld * matView * matProj;

This works correctly and I just upload the final matrix to the GPU and do


position = mul(matWorld, position);

But when I transfer each matrix individually and try to do the multiplication in the shader, it doesn't work... Example:


float4x4 matFinal = mul(mul(matWorld, matView), matProj);
position = mul(matFinal, position);

If anyone could explain or point me in the right direction, that would be great :)

Thanks.


To me, that looks like you need to change your multiplication nesting. That said, I could be horribly wrong, so if it doesn't work, part 1 of the solution is probably to ignore me =)

float4x4 matFinal = mul(matWorld, mul(matView, matProj));


To me, that looks like you need to change your multiplication nesting.

That didn't work :(

Try this:


float4x4 matFinal = mul(matProj, mul(matView, matWorld));

But it's probably better to transform the vertex thrice:


position = mul(matWorld, position);
position = mul(matView, position);
position = mul(matProj, position);
Your ordering looks wrong. By default HLSL uses column-major matrices, while in C they are row-major.

The code should be
mul(proj, mul(view, world)).


Ah, that worked, thanks! Is there any reason why the DirectX math library uses row-major and then switches to column-major in HLSL?


I don't know, probably legacy reasons.

You can put '#pragma pack_matrix(row_major)' at the start of your shader; then you can use the same multiplication order as in C, but you also need to swap the order of the position and the matrix in mul().

You can also change it with your shader flags before compiling the shader / effect file.

See http://msdn.microsoft.com/en-us/library/windows/desktop/gg615083(v=vs.85).aspx
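For example, with D3DCompileFromFile it would look something like this (an untested sketch; the file name, entry point and target profile are just placeholders):

#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Compile with row-major matrix packing, so the shader can keep the
// same multiplication order as the C++ code.
HRESULT CompileVS(ID3DBlob** vsBlob)
{
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompileFromFile(
        L"shader.hlsl",                    // placeholder file name
        nullptr, nullptr,
        "VSMain", "vs_4_0",                // placeholder entry point / target
        D3DCOMPILE_PACK_MATRIX_ROW_MAJOR,  // same effect as the #pragma
        0,                                 // effect flags (unused here)
        vsBlob, &errors);
    if (errors) errors->Release();         // holds error text on failure
    return hr;
}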


Ah, that worked, thanks! Is there any reason why the DirectX math library uses row-major and then switches to column-major in HLSL?

Because DX math is for C / C++, and these languages, like most others, use row-major as the "natural" layout: the elements of a row are consecutive in memory, assuming that the 1st index of a float[i][j] is the row, and the 2nd is the column.

But the HLSL compiler by default assumes that one register (4 floats) contains one column (column-major). Assuming row vectors, the order of multiplication is this:

v * M

Here, the compiler can create very efficient code, because the multiplication is just 4 dp4 (dot product) instructions. However, the C code packed the matrix "wrong", and setting the shader constants puts a row into one register, not a column, and so the calculation yields nonsense.
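To make that concrete, here's a small CPU-side sketch (plain C++, purely illustrative, not actual GPU code) of the two packings and why v * M collapses to four dot products:

// Row-major (C/C++): element (r, c) lives at m[r*4 + c], so a ROW
// is 4 consecutive floats.
// Column-major (HLSL default): element (r, c) lives at m[c*4 + r],
// so each register (4 consecutive floats) holds one COLUMN.
float dot4(const float a[4], const float b[4])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
}

// v * M with column-major packing: one dot product per register,
// which is why the compiler can emit just 4 dp4 instructions.
void mulRowVecColMajor(const float v[4], const float m[16], float out[4])
{
    for (int c = 0; c < 4; ++c)
        out[c] = dot4(v, &m[c * 4]);  // register c == column c of M
}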

You have 3 options:

1. Change the order of multiplication, as already mentioned. This makes the compiler assume a column vector:

M * v

So you actually "cheat" by making an implicit matrix transpose.
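The "cheat" works because of the transpose identity: uploading the row-major data into column-major registers hands the shader $M^\top$, and

$$M^\top v^\top = (v\,M)^\top,$$

so mul(matrix, position) computes $(v\,M)^\top$, whose components are exactly those of $v\,M$.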

2. Use the already mentioned compiler option so the compiler assumes that a register contains a row, not a column.

However, the problem is that the compiler must create less efficient code here (4 vector-times-scalar multiplies plus additions); this needs three more instructions.

3. Transpose the matrix before setting it as a shader constant. Now it is actually column-major, the multiplication order is the same as it is in your C++ code, and the GPU code is optimal.
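With DirectXMath that's a single extra call when filling the constant buffer; a minimal sketch (the struct and function names are just for illustration):

#include <DirectXMath.h>
using namespace DirectX;

struct PerObjectCB { XMFLOAT4X4 matFinal; };  // illustrative name

PerObjectCB BuildConstants(FXMMATRIX matWorld, CXMMATRIX matView, CXMMATRIX matProj)
{
    // Concatenate as usual for row vectors...
    XMMATRIX matFinal = matWorld * matView * matProj;

    // ...then transpose once so the data is laid out column-major,
    // which is what the HLSL compiler expects by default.
    PerObjectCB cb;
    XMStoreFloat4x4(&cb.matFinal, XMMatrixTranspose(matFinal));
    return cb;  // upload with UpdateSubresource / Map as usual
}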


To me, that looks like you need to change your multiplication nesting.

Matrix multiplication is associative, so the nesting cannot be wrong.
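In symbols: $(AB)C = A(BC)$, but in general $AB \ne BA$; so moving the parentheses around changes nothing, while swapping the operands does.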
