Matrix multiplication


Ok, so I have my matrices set up like so:
class vec4
{
    float x, y, z, w;
};

class mat4
{
    vec4 x, y, z, w;
};



and within the matrix, x, y, z, and w each represent a basis vector. Now, this is how I've defined my matrix multiplication:
	const mat4 operator * (const mat4 & m) const
	{
		return transpose(
							mat4
							(
								m * x,
								m * y,
								m * z,
								m * w
							)
						);				
	}



and matrix * vector
	const vec4 operator * (const vec4 & v) const{return vec4(v * x, v * y, v * z, v * w);}

Now, it works in some cases. But... A * B does not equal transpose(B) * A. Shouldn't the two be equal? (Well, A is a translation matrix and B is a rotation matrix.) cheers -Dan

Quote:
Original post by Ademan555
Now, it works in some cases. But...

A * B does not equal transpose(B) * A

but shouldn't the two be equal? (well, A is a translation matrix, B is a rotation matrix)
In general, AB != (B^T)A.
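A quick numeric check makes the point concrete. The 2x2 helpers below (mul2, t2 are my own names, not classes from this thread) exhibit a pair of matrices for which A*B and transpose(B)*A disagree:

```cpp
#include <cassert>

// Minimal 2x2 matrix, purely illustrative.
struct Mat2 { float m[2][2]; };

// Ordinary row-by-column product.
Mat2 mul2(const Mat2& a, const Mat2& b)
{
    Mat2 r;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            r.m[i][j] = a.m[i][0] * b.m[0][j] + a.m[i][1] * b.m[1][j];
    return r;
}

// Transpose: swap rows and columns.
Mat2 t2(const Mat2& a)
{
    return Mat2{ { { a.m[0][0], a.m[1][0] }, { a.m[0][1], a.m[1][1] } } };
}
```

With A = {{1,2},{3,4}} and B = {{0,1},{1,0}} (a symmetric swap matrix, so transpose(B) == B), A*B swaps A's columns giving {{2,1},{4,3}}, while t2(B)*A swaps A's rows giving {{3,4},{1,2}}; the two products already differ in the top-left element.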

I guess I just want to somehow verify that my results are "correct", but my mind is swimming because of the whole column-major vs. row-major variability, and then the fact that OpenGL and D3D store their matrices differently in linear memory.

cheers
-Dan

Quote:
Original post by Ademan555
I guess I just want to somehow verify that my results are "correct", but my mind is swimming because of the whole column-major vs. row-major variability, and then the fact that OpenGL and D3D store their matrices differently in linear memory.

That is incorrect. OpenGL uses column vectors and column-major ordering, and D3D uses row vectors and row-major ordering. The two conventions (the respective vector kind and the ordering) cancel each other out, so that both OpenGL and D3D have the same matrix memory layout!

EDIT: Example w/ indices removed; see some posts below.

[Edited by - haegarr on April 20, 2006 5:01:08 AM]

Quote:
Original post by Ademan555
I guess I just want to somehow verify that my results are "correct", but my mind is swimming because of the whole column-major vs. row-major variability, and then the fact that OpenGL and D3D store their matrices differently in linear memory.

cheers
-Dan


[edit:
edited to change "column/row major" to "column/row vector", since it turned out they're not the same thing after all! Actually, considering the quoted post, I assume it's a detail that eludes the OP too.
]

Consider two matrices A and B that you want to multiply. A has dimension "m X n" ("m" rows & "n" columns) and B has "k X l".
This multiplication is allowed if and only if n == k.
Then you can proceed with it and the result will be a new matrix of dimension "m X l".
Here is where the difference between row-vector and column-vector format lies.
The former treats a 4D vector "v" as a 1X4 matrix, while the latter considers it to be a 4X1 matrix. So if you had a 4X4 matrix "A", in row-vector format only the product v*A would be defined, and it would yield a row vector, i.e. a 1X4 matrix (verify this). Similarly, with column-vector format, only A*v would be defined, and it would yield a column vector, i.e. a 4X1 matrix.

The rule to actually finding these products (in any format) is that:
in the product A*B of two matrices A,B, the element at the i-th row and j-th column must be the dot product of the i-th row of A with the j-th column of B.
Stick to this rule and you won't even have to consider the vector convention to get the right result.
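As a sketch of that rule with plain 2D arrays (my own names, not the OP's classes), a straightforward 4x4 multiply is just sixteen row-by-column dot products:

```cpp
#include <cassert>

// Illustrative 4x4 matrix as a plain 2D array.
struct Mat4 { float m[4][4]; };

// Element (i,j) of a*b is the dot product of the i-th row of a
// with the j-th column of b -- no vector convention involved.
Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 r;
    for (int i = 0; i < 4; ++i)          // i-th row of a
        for (int j = 0; j < 4; ++j) {    // j-th column of b
            float dot = 0.0f;
            for (int k = 0; k < 4; ++k)
                dot += a.m[i][k] * b.m[k][j];
            r.m[i][j] = dot;
        }
    return r;
}
```

Stated this way there is nothing convention-dependent in the product itself; the row-vector vs. column-vector choice only decides from which side a vector multiplies a matrix.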

As for the internal representation in memory, it's something you'll have to decide on your own; I don't think it is *directly* related to the vector format used. It depends on the actual implementation of your matrix multiplication routine (e.g. whether you'll be using SSE and the like).

[Edited by - someusername on April 20, 2006 5:13:33 AM]

The problem with the OP's code snippet is that it mixes up column and row vectors.

In the case of column vectors you have the matrix/vector product in the form

[ x y z w ] * v

and in the case of row vectors you have the matrix/vector product in the form

        [ x ]
    v * [ y ]
        [ z ]
        [ w ]


I assume the OP tries to "misuse" the matrix/vector product to compute the matrix/matrix product. So I expect something like

	const mat4 operator * (const mat4 & m) const
	{
		mat4 transposedM = transpose(m);
		return mat4
				(
					transposedM * x,
					transposedM * y,
					transposedM * z,
					transposedM * w
				);
	}


(without having proven it, but it should be correct) to be the thing Ademan555 is looking for. That has nothing special to do with a mathematical correspondence, but simply with code reuse.

[Edited by - haegarr on April 20, 2006 5:18:06 AM]

Quote:
Original post by someusername
Consider two matrices A and B that you want to multiply. A has dimension "m X n" ("m" rows & "n" columns) and B has "k X l".
This multiplication is allowed if and only if n == k.
Then you can proceed with it and the result will be a new matrix of dimension "m X l".
Here is where the difference with row-major and column-major format lies.
The former transforms a 4d vector "v" as a 1X4 matrix, while the latter considers it to be a 4X1 matrix. So if you had a 4X4 matrix "A", in row-major format only the product v*A would be acceptable, and would yield a row vector, thus a 1X4 matrix (verify this). Similarly, with column-major format, only A*v would be defined, and it would yield a column vector, or 4X1 matrix.
Just to clarify for the OP, row- vs. column-major and row vs. column vectors are two different issues. I think someusername may have misused the terms in his example above, but it's an easy mistake to make (I can say that from experience). Anyway, the issue he's talking about is row vs. column vectors; row- vs. column-major refers to how the matrix is laid out in memory.

Quote:
Original post by haegarr
Quote:
Original post by Ademan555
I guess I just want to somehow verify that my results are "correct", but my mind is swimming because of the whole column-major vs. row-major variability, and then the fact that OpenGL and D3D store their matrices differently in linear memory.

That is incorrect. OpenGL uses column vectors and column-major ordering, and D3D uses row vectors and row-major ordering. The two conventions (the respective vector kind and the ordering) cancel each other out, so that both OpenGL and D3D have the same matrix memory layout!

E.g. the x,y,z components of the translational part are always at indices 12, 13, 14.


I don't have any experience with OpenGL, but doesn't it represent matrices as one-dimensional arrays of floats, providing standard "array index" access to them, with the elements of a matrix M arranged as:

[ M[0]  M[4]  M[8]   M[12] ]
[ M[1]  M[5]  M[9]   M[13] ]
[ M[2]  M[6]  M[10]  M[14] ]
[ M[3]  M[7]  M[11]  M[15] ]

?

Doesn't this mean that if you set up a pointer to the first element M[0] (or M._11 in D3D), the following snippet would read its elements in columns?

float *pf = &M[0];
for( int i = 1; i <= 16; i++ )
{
    float f = *pf;
    pf++;
}


In D3D the very same snippet would read them in rows instead.
How can they be arranged the same way in memory then?

Quote:
Original post by jyk
Quote:
Original post by someusername
Consider two matrices A and B that you want to multiply. A has dimension "m X n" ("m" rows & "n" columns) and B has "k X l".
This multiplication is allowed if and only if n == k.
Then you can proceed with it and the result will be a new matrix of dimension "m X l".
Here is where the difference with row-major and column-major format lies.
The former transforms a 4d vector "v" as a 1X4 matrix, while the latter considers it to be a 4X1 matrix. So if you had a 4X4 matrix "A", in row-major format only the product v*A would be acceptable, and would yield a row vector, thus a 1X4 matrix (verify this). Similarly, with column-major format, only A*v would be defined, and it would yield a column vector, or 4X1 matrix.
Just to clarify for the OP, row- vs. column-major and row vs. column vectors are two different issues. I think someusername may have misused the terms in his example above, but it's an easy mistake to make (I can say that from experience). Anyway, the issue he's talking about is row vs. column vectors; row- vs. column-major refers to how the matrix is laid out in memory.

Oops! Sorry about that, I thought it was essentially the same thing. I'll edit the post.

Quote:
Original post by someusername
I don't have any experience with OpenGL, but doesn't it represent matrices as one-dimensional arrays of floats, providing standard "array index" access to them, with the elements of a matrix M arranged as:
...
In D3D the very same snippet would read them in rows instead.
How can they be arranged the same way in memory then?

They are the same because you have taken the "major" thing into account, but not the "vector" thing.

In OpenGL you use column vectors. That looks like this:

[ Xx Yx Zx Wx ]
[ Xy Yy Zy Wy ]
[ Xz Yz Zz Wz ]
[ Xw Yw Zw Ww ]

Furthermore it uses column-major order, i.e. going along the columns first. This yields the memory layout
{ Xx Xy Xz Xw Yx ... Ww }

Now, D3D uses row vectors. Hence a matrix looks like this

[ Xx Xy Xz Xw ]
[ Yx Yy Yz Yw ]
[ Zx Zy Zz Zw ]
[ Wx Wy Wz Ww ]

Furthermore it uses row-major indexing, i.e. going along the rows first. That yields the memory layout:
{ Xx Xy Xz Xw Yx ... Ww }

Both memory layouts are the same! That is a nice effect, since it allows you to define a matrix class that works well for both GL and D3D. E.g. the translational part is always stored at indices 12, 13, and 14.
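This can be sketched in a few lines (my own code, not from the thread): build the same translation under both conventions and compare the flat 16-float arrays. `gl` is a column-vector matrix stored column-major, `dx` a row-vector matrix stored row-major.

```cpp
#include <cassert>

// Fill gl[] (GL-style) and dx[] (D3D-style) with a translation by (tx,ty,tz).
void buildTranslations(float tx, float ty, float tz, float gl[16], float dx[16])
{
    // Identity first: diagonal sits at flat indices 0, 5, 10, 15 either way.
    for (int i = 0; i < 16; ++i)
        gl[i] = dx[i] = (i % 5 == 0) ? 1.0f : 0.0f;

    // GL: translation is the 4th column; column-major storage puts that
    // column at flat indices 12..14.
    gl[12] = tx; gl[13] = ty; gl[14] = tz;

    // D3D: translation is the 4th row; row-major storage puts that row
    // at flat indices 12..14 as well.
    dx[12] = tx; dx[13] = ty; dx[14] = tz;
}
```

The two arrays come out identical byte for byte, which is exactly the cancellation being described.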

I see what you mean. Row/column major and vector conventions cancel out each other.
When it comes to matrices representing geometric transformations this is convenient.
But what about an arbitrary matrix? What if you want to define (for own purposes) the matrix

[ 1 2 3 4 ]
[ 5 6 7 8 ]
[ 9 10 11 12]
[ 13 14 15 16]
?

Wouldn't OpenGL write this in memory as:
[ 1  5  9  13  2  6  10  14  3  7  11  15  4  8  12  16 ]


while DX would write it as
[ 1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16]
?

Both indexing schemes are the same when it comes to common translational/rotational/scaling matrices.

Quote:
Original post by someusername
I see what you mean. Row/column major and vector conventions cancel out each other.
When it comes to matrices representing geometric transformations this is convenient.
But what about an arbitrary matrix? What if you want to define (for own purposes) the matrix

[ 1 2 3 4 ]
[ 5 6 7 8 ]
[ 9 10 11 12]
[ 13 14 15 16]
?

You will fail if and only if you choose either row vectors with column-major order, or else column vectors with row-major order. But then you will fail in both GL and D3D. E.g. your scheme above shows row-major order. Let us assume you use it with row vectors; all is okay. If you use the same scheme with column vectors, you have to transpose the matrix before supplying it to GL or D3D. Please notice that it is "mathematically" totally okay to use the above scheme with column vectors; it is just not compatible with GL/D3D's conventions (and hence you would need the transpose op to make it compatible).

It plays no role whether the components of the matrix are interpreted w.r.t. geometry or something else. For each use of a matrix a component has a meaning w.r.t. its position inside the matrix.

Quote:
Original post by someusername
Wouldn't OpenGL write this in memory as:
[ 1  5  9  13  2  6  10  14  3  7  11  15  4  8  12  16 ]


while DX would write it as
[ 1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16]
?

Nope. The numbers in the scheme define an ordinal number and hence the sequential arrangement of the components. Hence the order is definitely 1 2 3 ... 16. Neither GL nor D3D does a re-arrangement. They use the sequence as is.


This is actually a nice mind bender, isn't it? ;)

Quote:
Original post by haegarr
This is actually a nice mind bender, isn't it? ;)


It sure is! I totally neglected the fact that you'd have to "pre-adjust" a matrix to take into account the way it will be transformed by the specific API. Without this step, they would be essentially different matrices, so there's no point in checking whether they have the same layout in memory or not.

Interesting, though, that OpenGL and D3D ultimately write matrices to memory the same way. Is there some practical reason for this? As far as I know, to make use of SIMD instructions in matrix-vector multiplication, it's usually the vectors that must be aligned in a specific way (SoA/AoS, I don't quite recall), not the matrices. (But then again, a matrix is four vectors...)
Does anyone know if there is a specific reason for this?

Quote:
Original post by haegarr
The problem with the OP's code snippet is that it mixes up column and row vectors.

In the case of column vectors you have the matrix/vector product in the form

[ x y z w ] * v

and in the case of row vectors you have the matrix/vector product in the form

[ x ]
[ ]
[ y ]
v * [ ]
[ z ]
[ ]
[ w ]


I assume the OP tries to "misuse" the matrix/vector product to compute the matrix/matrix product. So I expect something like
*** Source Snippet Removed ***
(w/o haven proven it, but should be correct) to be the thing Ademan555 is looking for. That has nothing special to do w/ a mathematical correspondence but simply with code reuse.


Interesting, I had come up with that same solution just "fooling around" with the equations. It did seem to work, but I can't really say that it "worked" in any more cases than without the transpose. I'm gonna test it against a few matrix multiplication properties.

thanks everyone
-Dan

HRM, it fails transpose(A * B) == transpose(B) * transpose(A)

straight from mathworld
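That identity is easy to verify against a known-good multiply; here's a 2x2 sketch (my own names, not Dan's classes) showing what a correct implementation must satisfy:

```cpp
#include <cassert>

// Illustrative 2x2 matrix for checking transpose(A*B) == transpose(B)*transpose(A).
struct M2 { float m[2][2]; };

// Ordinary row-by-column product.
M2 mul(const M2& a, const M2& b)
{
    M2 r;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            r.m[i][j] = a.m[i][0] * b.m[0][j] + a.m[i][1] * b.m[1][j];
    return r;
}

M2 transpose(const M2& a)
{
    return M2{ { { a.m[0][0], a.m[1][0] }, { a.m[0][1], a.m[1][1] } } };
}

// Componentwise equality (exact floats are fine for these small integers).
bool equal(const M2& a, const M2& b)
{
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            if (a.m[i][j] != b.m[i][j]) return false;
    return true;
}
```

If a mat4 product fails this check against a reference multiply, the product (or the transpose) is wrong, not the identity.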

[Edited by - Ademan555 on April 20, 2006 11:30:52 AM]

Quote:
Original post by Ademan555
HRM, it fails transpose(A * B) == transpose(B) * transpose(A)

So let's have a deeper look.

The matrix multiplication is defined so that the component at (n,m) of the result is the dot-product of the n-th row vector of the left matrix and the m-th column vector of the right matrix. As an example:

[ a b ]   [ e f ]   [ ae+bg  af+bh ]
[ c d ] * [ g h ] = [ ce+dg  cf+dh ]

Assuming you are using column vectors, then

[ ae+bg  af+bh ]
[ ce+dg  cf+dh ] =: [ x y ]

where

     [           [ e ] ]
     [ [ a b ] * [ g ] ]
x := [                 ]
     [           [ e ] ]
     [ [ c d ] * [ g ] ]

     [           [ f ] ]
     [ [ a b ] * [ h ] ]
y := [                 ]
     [           [ f ] ]
     [ [ c d ] * [ h ] ]

Since you are using column vectors, [ a b ] and [ c d ] are column vectors of the transpose of the left matrix!
The prescription for the 4D case seems to me to be:

mat4 temp = transpose( left );
mat4(
    vec4( temp.x * right.x , temp.y * right.x , temp.z * right.x , temp.w * right.x ),
    vec4( temp.x * right.y , temp.y * right.y , temp.z * right.y , temp.w * right.y ),
    vec4( temp.x * right.z , temp.y * right.z , temp.z * right.z , temp.w * right.z ),
    vec4( temp.x * right.w , temp.y * right.w , temp.z * right.w , temp.w * right.w )
);


On the other hand, assuming you are using row vectors, then

[ ae+bg  af+bh ]    [ x ]
[ ce+dg  cf+dh ] =: [ y ]

where

     [           [ e ]             [ f ] ]
x := [ [ a b ] * [ g ]   [ a b ] * [ h ] ]

     [           [ e ]             [ f ] ]
y := [ [ c d ] * [ g ]   [ c d ] * [ h ] ]

Since you are using row vectors, [ e g ] and [ f h ] are row vectors of the transpose of the right matrix!
The prescription for the 4D case seems to me to be:

mat4 temp = transpose( right );
mat4(
    vec4( left.x * temp.x , left.x * temp.y , left.x * temp.z , left.x * temp.w ),
    vec4( left.y * temp.x , left.y * temp.y , left.y * temp.z , left.y * temp.w ),
    vec4( left.z * temp.x , left.z * temp.y , left.z * temp.z , left.z * temp.w ),
    vec4( left.w * temp.x , left.w * temp.y , left.w * temp.z , left.w * temp.w )
);

Which, if I see it right, is exactly the transpose of the solution above.


In my post above I've suggested the first solution (transposing the left matrix). That seems correct for column vectors. For row vectors it seems necessary to transpose the right matrix.
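The row-vector case can be sketched like this (my own names, mirroring but not copying the thread's mat4/vec4): store the matrix as four row vectors, transpose the *right* operand, and form every result component as a dot product of stored vectors. The result should match an ordinary row-by-column multiply.

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };
struct Mat4R { Vec4 x, y, z, w; };  // members are the four ROWS

float dot(const Vec4& a, const Vec4& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

// Rows of the result are the columns of the input.
Mat4R transpose(const Mat4R& a)
{
    return Mat4R{
        { a.x.x, a.y.x, a.z.x, a.w.x },
        { a.x.y, a.y.y, a.z.y, a.w.y },
        { a.x.z, a.y.z, a.z.z, a.w.z },
        { a.x.w, a.y.w, a.z.w, a.w.w } };
}

// The row-vector prescription: transpose the right matrix, then every
// component is a dot product of two stored vectors.
Mat4R mulByPrescription(const Mat4R& left, const Mat4R& right)
{
    Mat4R t = transpose(right);  // columns of `right` become rows of `t`
    return Mat4R{
        { dot(left.x, t.x), dot(left.x, t.y), dot(left.x, t.z), dot(left.x, t.w) },
        { dot(left.y, t.x), dot(left.y, t.y), dot(left.y, t.z), dot(left.y, t.w) },
        { dot(left.z, t.x), dot(left.z, t.y), dot(left.z, t.z), dot(left.z, t.w) },
        { dot(left.w, t.x), dot(left.w, t.y), dot(left.w, t.z), dot(left.w, t.w) } };
}
```

The column-vector version is the mirror image: members are columns and the *left* operand gets transposed, exactly as in the first prescription above.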
