Mess With Matrix Multiplication

7 comments, last by Hodgman 7 years, 8 months ago

I am currently refactoring my code and I ran into my shader again. Something is definitely off. I have read the "mul" documentation and the material on row/column matrices and vectors, and in my opinion the math in my shader is wrong, yet it turns out that it renders properly...

So, inside my shader file I have


cbuffer cbPerFrameViewProjection : register(b1)
{
    column_major float4x4 PV;
};

cbuffer cbPerObjectWorldMatrix : register(b0)
{
    column_major float4x4 World;
};

Where "PV" is set once at the beginning of a frame, and world is per object.

The multiplication inside the vertex shader:


float4x4 wvp = mul(World, PV);
output.Pos = mul(inPos, wvp);

I set the constant buffers like this:

Camera:


&XMMatrixTranspose(cam.GetViewMatrix()*cam.GetProjectionMatrix())

And for every object:


&XMMatrixTranspose(world.GetTransform())

Now the maths: the C++ side (XM) stores matrices row-major; the HLSL side here is declared column_major.

Camera: ((view row * projection row) transposed) equals (((projection column * view column) transposed) transposed), which is (projection column * view column).

World row transposed equals world column.

So both constant buffers are fed column matrices.

Then, inside the shader, to compute WVP transposed (which in column order is PVW) I should do mul(PV, World) on the constant buffer data to get the right answer, NOT mul(World, PV) as I do now - and yet the second (wrong, in my opinion) version is the one that works. Why is that?
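(For reference, the standard transpose identities this reasoning leans on - plain linear algebra, nothing HLSL-specific:

\[
(VP)^\top = P^\top V^\top, \qquad (M^\top)^\top = M, \qquad vM = (M^\top v^\top)^\top
\]

The last identity is why switching between row- and column-vector conventions transposes every matrix and reverses every multiplication order at the same time.)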

And second problem.

Why is this correct


output.Pos = mul(inPos, wvp);

And this is wrong??


output.Pos = mul(wvp, inPos);

HLSL uses column matrices, and as far as I know from theory, to transform a vector you multiply matrix * vector.

Could someone explain what is going on here? What am I missing in the matrix mess?

Thanks in advance


There's a lot going on here that I am not qualified to comment on, but whether a matrix is 'row major' or 'column major' doesn't affect the maths - there are no row-major or column-major matrices mathematically. Those terms come purely from how the elements are arranged in memory. There are tricks you can play, such as multiplying in reverse order instead of transposing, which work because of how the data is laid out in memory, but it is far too confusing for me.

And second problem. Why is that correct: output.Pos = mul(inPos, wvp); and this is wrong: output.Pos = mul(wvp, inPos)? HLSL uses column matrices, and as far as I know from theory, to transform a vector you multiply matrix * vector.

The reason the order matters is that the two operands have to match up: the number of columns in the first must equal the number of rows in the second. I guess the vector inPos is being seen as a matrix with 1 row and 4 columns, so in mul(inPos, wvp) the 4 columns of the first operand match the 4 rows of the 4x4 matrix. In mul(wvp, inPos), the first operand still has 4 columns, but a 1x4 row in second position has only 1 row, so that product isn't defined; the vector would have to be read as a 4x1 column instead, which gives a different result. I don't know for certain that inPos is seen as a matrix with 1 row and 4 columns, but that's the only way it can sit in front of a 4x4 matrix. Row/column-major storage just confuses matters here; for a vector, what matters is whether it is treated as a single row or a single column, and that can be independent of how it is laid out in memory.

The matrix you use with a row vector and the one you use with a column vector are transposes of each other, if you want both to give the result you intended.
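A quick way to see that, as a plain C++ sketch with raw arrays (no graphics API involved): multiplying a row vector by M gives the same numbers as multiplying the transpose of M by the same data treated as a column vector.

#include <cstdio>

// Row vector (1x4) times matrix (4x4) -> row vector (1x4).
void mulRowVec(const float v[4], const float m[4][4], float out[4]) {
    for (int c = 0; c < 4; ++c) {
        out[c] = 0.0f;
        for (int k = 0; k < 4; ++k)
            out[c] += v[k] * m[k][c];
    }
}

// Matrix (4x4) times column vector (4x1) -> column vector (4x1).
void mulColVec(const float m[4][4], const float v[4], float out[4]) {
    for (int r = 0; r < 4; ++r) {
        out[r] = 0.0f;
        for (int k = 0; k < 4; ++k)
            out[r] += m[r][k] * v[k];
    }
}

int main() {
    const float m[4][4] = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
    float mt[4][4]; // transpose of m
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            mt[r][c] = m[c][r];

    const float v[4] = {1, 2, 3, 4};
    float a[4], b[4];
    mulRowVec(v, m, a);  // v * M   (row-vector convention)
    mulColVec(mt, v, b); // M^T * v (column-vector convention)
    for (int i = 0; i < 4; ++i)
        std::printf("%g %g\n", a[i], b[i]); // each pair matches
    return 0;
}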

Disclaimer: I don't use HLSL, I am just looking at this from a mathematical point of view.


Well, I know a fair amount of linear algebra - matrices, transposition, multiplication order, etc. I suspect HLSL does something implicitly that I don't know about...


I think it all comes down to how it sees vectors. I always see them as columns, but I guess (judging from the order you had to use when multiplying) that HLSL sees them as rows. Compare GLSL, where my vectors always go as the second operand when multiplying with a matrix, suggesting they are treated as columns there.


There's a lot going on here that I am not qualified to comment on, but whether a matrix is 'row major' or 'column major' doesn't affect the maths - there are no row-major or column-major matrices mathematically. Those terms come purely from how the elements are arranged in memory.

Yes, and no :)
The HLSL column_major/row_major keywords do just define how the data is arranged in memory, and the XMMatrix* functions do use row_major memory arrangement, but there is also a mathematical concept.
e.g. given two 2x2 matrices containing two basis vectors, A and B:
[Ax Ay]    [Ax Bx]
[Bx By] vs [Ay By]
The first one stores basis vectors in rows, so you could say it's using row-major math.
The second one stores basis vectors in columns, so you could say it's using column-major math.

When inspected as a 1D array in memory:
The first (row-major math) in row-major storage order is the same as the second (column-major math) in column-major storage order: Ax, Ay, Bx, By.
The first (row-major math) in column-major storage order is the same as the second (column-major math) in row-major storage order: Ax, Bx, Ay, By.

When writing out your matrix code, the storage order should mostly be invisible to you. The only time you should care about storage order is when getting two different matrix libraries (or languages -- XM<->HLSL in this case) to talk to each other and share data correctly.
However, the mathematical convention (whether the basis vectors are in the rows or columns) is a major concern to how you write out your code. Opposite conventions will have opposite multiplication orders.
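To make that concrete in C++ terms (a standalone sketch, not from any of the libraries discussed): C lays 2D arrays out row by row, so the same four floats in memory can be read either way.

#include <cstdio>

int main() {
    const float Ax = 1, Ay = 2, Bx = 3, By = 4;

    // Basis vectors in rows ("row-major math").
    const float M1[2][2] = { { Ax, Ay },
                             { Bx, By } };
    // Basis vectors in columns ("column-major math") - the transpose of M1.
    const float M2[2][2] = { { Ax, Bx },
                             { Ay, By } };

    // C stores arrays row by row, so reading M2 linearly gives M1's
    // column-major storage order, and vice versa.
    const float* rowMajorM1 = &M1[0][0]; // Ax, Ay, Bx, By
    const float* colMajorM1 = &M2[0][0]; // Ax, Bx, Ay, By

    std::printf("M1 row-major:    %g %g %g %g\n",
                rowMajorM1[0], rowMajorM1[1], rowMajorM1[2], rowMajorM1[3]);
    std::printf("M1 column-major: %g %g %g %g\n",
                colMajorM1[0], colMajorM1[1], colMajorM1[2], colMajorM1[3]);
    return 0;
}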

But yep, the OP is mixing up these two different concepts. Your XM and HLSL code may be using different storage order, but that is irrelevant to you. It's the mathematical conventions that dictate the correct multiplication order to use - and you have not switched mathematical conventions in HLSL.

For example, let's say you're using row-major mathematical conventions, and have this matrix:
[Ax Ay]
[Bx By]
XM uses row-major storage, so it expects this to be stored in memory as: Ax, Ay, Bx, By.
HLSL uses column-major storage, so it expects it to be stored in memory as: Ax, Bx, Ay, By, which is why you have to transpose your data before giving it to HLSL - purely because of the 1D storage order convention.
However, both XM and HLSL are both still working with the same mathematical matrix as drawn above in 2D. That mathematical matrix has not been transposed!
On the other hand, if you failed to transpose your data before handing it to HLSL, the mathematical matrix would become transposed, because HLSL would be interpreting the data incorrectly :lol:
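That "purely a storage fix-up" point is easy to verify in isolation (a standalone C++ check, not using XM itself): the row-major bytes of M transposed are exactly the column-major bytes of M.

#include <cassert>
#include <cstring>

// Transpose a 4x4 matrix into 'out'.
void transpose4(const float in[4][4], float out[4][4]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r][c] = in[c][r];
}

// Write 'in' out in column-major element order.
void toColumnMajor(const float in[4][4], float out[16]) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            out[c * 4 + r] = in[r][c];
}

int main() {
    float M[4][4];
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            M[r][c] = float(r * 4 + c); // 16 distinct values

    float Mt[4][4];
    transpose4(M, Mt);                  // what the transpose-before-upload step does

    float colMajorM[16];
    toColumnMajor(M, colMajorM);        // what a column_major declaration expects

    // Identical bytes: transposing before upload only changes storage order.
    assert(std::memcmp(Mt, colMajorM, sizeof colMajorM) == 0);
    return 0;
}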


Thanks for clarifying that. I did notice, on the little scrap of paper I checked with, that swapping between a row and a column vector required transposing the transform to get the correct answer.


And I checked on paper with proper transposing, and it still turns out that the "good" order is wrong according to the math. It turns out that mul(a,b) does b*a and not a*b...

I swear I wrote a GLSL shader last weekend and reversed the order of that multiplication from the GLSL example I was referencing, with no ill effect. I won't guarantee that, but I've always done it in the direction you call correct in HLSL, and the GLSL example did it backwards. Of course GLSL may compile slightly differently, but I did reverse it compared to the example.

As you know, matrix multiplication is not commutative. So, reverse order with caution.

In that particular case though, I'm not sure there's a reason you can't reverse the order. If you do and get a different result, you know it ain't right. ;-)

As far as row or column major goes: obviously you have to stick to one and not switch mid-process. Otherwise, it should be the same thing. Just don't mix and match. Pick one and use it exclusively.

It turns out that mul(a,b) does b*a and not a*b...

You mean the HLSL mul function? No, it does compute a*b.
If it really did compute b*a, then the "//ok" lines below would give errors, while the "//error" lines would give truncation warnings:
	float2x4 _2x4;
	float4x2 _4x2;
	float4x3 _4x3;
	float3x1 _3x1;
	float2x2 result1 = mul(_2x4,_4x2);//ok
	float4x4 result2 = mul(_4x2,_2x4);//ok
	float2x3 result3 = mul(_2x4,_4x3);//ok
	float4x4 result4 = mul(_4x3,_2x4);//error - 4x3 * 2x4 isn't defined
	float4   result5 = mul(_4x3,_3x1);//ok
	float3x3 result6 = mul(_3x1,_4x3);//error - 3x1 * 4x3 isn't defined

