Ademan555

Matrix multiplication


OK, so I have my matrices set up like so:
struct vec4
{
    float x, y, z, w;
};

struct mat4
{
    vec4 x, y, z, w; // each member is a basis vector of the matrix
};



and within the matrix, x, y, z, and w all represent basis vectors. Now, this is how I've defined my matrix multiplication:
	const mat4 operator * (const mat4 & m) const
	{
		return transpose(
			mat4(
				m * x,
				m * y,
				m * z,
				m * w
			)
		);
	}



and matrix * vector:
	const vec4 operator * (const vec4 & v) const { return vec4(v * x, v * y, v * z, v * w); }

Now, it works in some cases. But... A * B does not equal transpose(B) * A. Shouldn't the two be equal? (Well, A is a translation matrix, B is a rotation matrix.) Cheers -Dan

Quote:
Original post by Ademan555
Now, it works in some cases. But...

A * B does not equal transpose(B) * A

but shouldn't the two be equal? (Well, A is a translation matrix, B is a rotation matrix.)
In general, A * B != B^T * A.

To complete jyk's post with the corresponding equality formula:
A * B = ((A * B)^T)^T = (B^T * A^T)^T

I guess I just want to somehow verify that my results are "correct", but my mind is swimming because of the whole variability of column-major vs row-major, and then the fact that OpenGL vs D3D store their matrices differently in linear memory.

cheers
-Dan

Quote:
Original post by Ademan555
I guess I just want to somehow verify that my results are "correct", but my mind is swimming because of the whole variability of column-major vs row-major, and then the fact that OpenGL vs D3D store their matrices differently in linear memory.

That is incorrect. OpenGL uses column vectors and column-major ordering, and D3D uses row vectors and row-major ordering. The two conventions cancel each other out, so both OpenGL and D3D end up with the same matrix memory layout!

EDIT: Example w/ indices removed; see some posts below.

[Edited by - haegarr on April 20, 2006 5:01:08 AM]

Quote:
Original post by Ademan555
I guess I just want to somehow verify that my results are "correct", but my mind is swimming because of the whole variability of column-major vs row-major, and then the fact that OpenGL vs D3D store their matrices differently in linear memory.

cheers
-Dan


[edit:
edited to change "column/row major" to "column/row vector", since it turned out they're not the same thing after all! Actually, considering the quoted post, I assume it's a detail that eludes the OP too.
]

Consider two matrices A and B that you want to multiply. A has dimension "m X n" ("m" rows & "n" columns) and B has "k X l".
This multiplication is allowed if and only if n == k.
Then you can proceed with it and the result will be a new matrix of dimension "m X l".
Here is where the difference between row-vector and column-vector format lies.
The former treats a 4d vector "v" as a 1X4 matrix, while the latter considers it to be a 4X1 matrix. So if you had a 4X4 matrix "A", in row-vector format only the product v*A would be acceptable, and would yield a row vector, thus a 1X4 matrix (verify this). Similarly, with column-vector format, only A*v would be defined, and it would yield a column vector, or 4X1 matrix.

The rule to actually finding these products (in any format) is that:
in the product A*B of two matrices A,B, the element at the i-th row and j-th column must be the dot product of the i-th row of A with the j-th column of B.
Stick to this rule and you won't even have to consider the vector convention to get the right result.

As for the internal representation in memory, it's something you'll have to decide on your own; I don't think it is *directly* related to the vector format used. It depends on the actual implementation of your matrix multiplication routine (e.g. if you'll be using SSE and such).

[Edited by - someusername on April 20, 2006 5:13:33 AM]

The problem with the OP's code snippet is that it mixes up column and row vectors.

In the case of column vectors you have the matrix/vector product in the form

[ x y z w ] * v

and in the case of row vectors you have the matrix/vector product in the form

    [ x ]
v * [ y ]
    [ z ]
    [ w ]


I assume the OP is trying to "misuse" the matrix/vector product to compute the matrix/matrix product. So I expect something like

const mat4 operator * (const mat4 & m) const
{
	mat4 transposedM = transpose(m);
	return mat4(
		transposedM * x,
		transposedM * y,
		transposedM * z,
		transposedM * w
	);
}


(without having proven it, but it should be correct) to be the thing Ademan555 is looking for. That has nothing special to do with a mathematical correspondence, but simply with code reuse.

[Edited by - haegarr on April 20, 2006 5:18:06 AM]

Quote:
Original post by someusername
Consider two matrices A and B that you want to multiply. A has dimension "m X n" ("m" rows & "n" columns) and B has "k X l".
This multiplication is allowed if and only if n == k.
Then you can proceed with it and the result will be a new matrix of dimension "m X l".
Here is where the difference with row-major and column-major format lies.
The former transforms a 4d vector "v" as a 1X4 matrix, while the latter considers it to be a 4X1 matrix. So if you had a 4X4 matrix "A", in row-major format only the product v*A would be acceptable, and would yield a row vector, thus a 1X4 matrix (verify this). Similarly, with column-major format, only A*v would be defined, and it would yield a column vector, or 4X1 matrix.
Just to clarify for the OP, row- vs. column-major and row vs. column vectors are two different issues. I think someusername may have misused the terms in his example above, but it's an easy mistake to make (I can say that from experience). Anyway, the issue he's talking about is row vs. column vectors; row- vs. column-major refers to how the matrix is laid out in memory.

Quote:
Original post by haegarr
Quote:
Original post by Ademan555
I guess I just want to somehow verify that my results are "correct", but my mind is swimming because of the whole variability of column-major vs row-major, and then the fact that OpenGL vs D3D store their matrices differently in linear memory.

That is incorrect. OpenGL uses column vectors and column-major ordering, and D3D uses row vectors and row-major ordering. The two conventions cancel each other out, so both OpenGL and D3D end up with the same matrix memory layout!

E.g. the x,y,z components of the translational part are always at indices 12, 13, 14.


I don't have any experience with OpenGL, but doesn't it represent matrices as one-dimensional arrays of floats, providing standard "array index" access to them, with the elements of a matrix M arranged as:

[ M[0]  M[4]  M[8]   M[12] ]
[ M[1]  M[5]  M[9]   M[13] ]
[ M[2]  M[6]  M[10]  M[14] ]
[ M[3]  M[7]  M[11]  M[15] ]

?

Doesn't this mean that if you set up a pointer to the first element M[0] (or M._11 in D3D), the following snippet would read its elements in columns?

float *pf = &M[0];
for (int i = 0; i < 16; ++i)
{
    float f = *pf; // with OpenGL's layout, this walks the matrix column by column
    ++pf;
}


In D3D the very same snippet would read them in rows instead.
How can they be arranged the same way in memory then?

Quote:
Original post by jyk
Quote:
Original post by someusername
Consider two matrices A and B that you want to multiply. A has dimension "m X n" ("m" rows & "n" columns) and B has "k X l".
This multiplication is allowed if and only if n == k.
Then you can proceed with it and the result will be a new matrix of dimension "m X l".
Here is where the difference with row-major and column-major format lies.
The former transforms a 4d vector "v" as a 1X4 matrix, while the latter considers it to be a 4X1 matrix. So if you had a 4X4 matrix "A", in row-major format only the product v*A would be acceptable, and would yield a row vector, thus a 1X4 matrix (verify this). Similarly, with column-major format, only A*v would be defined, and it would yield a column vector, or 4X1 matrix.
Just to clarify for the OP, row- vs. column-major and row vs. column vectors are two different issues. I think someusername may have misused the terms in his example above, but it's an easy mistake to make (I can say that from experience). Anyway, the issue he's talking about is row vs. column vectors; row- vs. column-major refers to how the matrix is laid out in memory.

Oops! Sorry about that, I thought it was essentially the same thing. I'll edit the post.
