Confused by row-major and column-major matrix multiplication in HLSL


I realize that in the D3DX math library and the XNAMath library, matrices are stored row-major in memory.

In the effect framework, when we set a matrix parameter through the effect interface's set-variable methods, it transposes the target matrix. If I am not using the effect framework, I must set shader parameters with SetVertexShaderConstantF...


But when I use the effect framework, I must transform vertices with this code:

OutPosition = mul( InPosition, mViewProj ); 

In plain shader code without the effect framework, I must use:

OutPosition = mul( mViewProj, InPosition );

If I swap the operand order in either case, the result is incorrect.


I found some articles saying that HLSL stores matrices column-major.

If that is true, I think I must pre-multiply by the matrix:

OutPosition = mul( mViewProj, InPosition );

But when I debug this in PIX or Nsight, I found that the matrix parameter is stored in the registers exactly as it is in memory. For example:

if mViewProj is:

    x0 y0 z0 w0
    x1 y1 z1 w1
    x2 y2 z2 w2
    x3 y3 z3 w3

then in the vertex shader registers it appears as:

    c0: x0 y0 z0 w0
    c1: x1 y1 z1 w1
    c2: x2 y2 z2 w2
    c3: x3 y3 z3 w3


It looks like it is stored row-major. Why?



Doesn't this conflict with the basic rules of matrix multiplication:

   column-major: matrix * vec

   row-major: vec * matrix
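For what it's worth, the two conventions produce the same numbers once the matrix is transposed. A minimal pure-Python sketch (plain nested lists, arbitrary made-up values) mirroring the two mul orderings above:

```python
# Check that v * M (row-vector convention) gives the same result as
# transpose(M) * v (column-vector convention). Values are arbitrary.

def vec_mat(v, M):
    # row vector times matrix: result[j] = sum_i v[i] * M[i][j]
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def mat_vec(M, v):
    # matrix times column vector: result[i] = sum_j M[i][j] * v[j]
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(col) for col in zip(*M)]

M = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
v = [1, 0, 2, 1]

print(vec_mat(v, M) == mat_vec(transpose(M), v))  # True
```

So mul(InPosition, mViewProj) with one layout and mul(mViewProj, InPosition) with the transposed layout are numerically the same operation, which is exactly the symptom described above.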







To reiterate: majorness (how multi-dimensional arrays are laid out in memory) is a completely separate concept from vector convention (the mathematical choice of row or column vectors, which dictates the form of the matrix product), and both are separate again from handedness (the choice of three mutually perpendicular vectors forming an orthonormal basis).
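To illustrate that separation concretely (a pure-Python sketch with made-up values): the same logical matrix can be flattened into memory row-major or column-major; only the index formula changes, the matrix itself does not.

```python
# Storage order is just an indexing convention over a flat buffer.
# The same logical 2x3 matrix, two flattenings:

M = [[1, 2, 3],
     [4, 5, 6]]
ROWS, COLS = 2, 3

row_major = [M[r][c] for r in range(ROWS) for c in range(COLS)]  # [1,2,3,4,5,6]
col_major = [M[r][c] for c in range(COLS) for r in range(ROWS)]  # [1,4,2,5,3,6]

def at_row_major(buf, r, c):
    # element (r, c) when rows are contiguous in memory
    return buf[r * COLS + c]

def at_col_major(buf, r, c):
    # element (r, c) when columns are contiguous in memory
    return buf[c * ROWS + r]

# Both reconstruct the identical logical matrix:
print(all(at_row_major(row_major, r, c) == at_col_major(col_major, r, c)
          for r in range(ROWS) for c in range(COLS)))  # True
```

Neither flattening changes what the matrix *is*; confusion only arises when code reading the buffer assumes the other convention.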


It is true that, from a logical point of view, row-major versus column-major storage order has nothing to do with the matrix product. But in practice things are messy...


An existing implementation relies on a well-defined storage order, and passing in matrices in the other order makes them behave as if they were transposed. That means the result will be

    A^T * B^T

where the desired result was the mathematically different A * B.
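This is easy to check numerically. A pure-Python sketch (arbitrary 2x2 values) showing that A^T * B^T is in general a different matrix from A * B; what it actually equals is (B * A)^T:

```python
def mat_mul(A, B):
    # conventional matrix product: C[i][j] = sum_k A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

print(mat_mul(transpose(A), transpose(B)) == mat_mul(A, B))             # False
print(mat_mul(transpose(A), transpose(B)) == transpose(mat_mul(B, A)))  # True
```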


In the OP, one of the operands is a vector and the other is a matrix. A vector and its transpose are stored identically regardless of row- or column-major layout. D3D traditionally uses row vectors, so the desired product looks like:

   v * M

Because the result is a vector, and hence independent of storage order, we can transpose it without penalty. If we do so, restructure, and ignore the storage order for the vector, we get (1):

    (v * M)^T = M^T * v^T --> M^T * v

(1) Notice that mathematically v and v^T are not identical. It works here because the GPU makes no distinction between row and column vectors.
The conclusion is that in such a situation, if we pass the matrix into the shader with the wrong storage order (with respect to what the shader expects), we need to reverse the order of the operands to get the correct result!

Notice that things get more problematic if the matrix isn't a square 4x4 one. A GPU manages a matrix as an array of vectors, each with 4 scalar components, which may require padding when the matrix isn't 4x4. This is also the reason the default storage layout changed (IIRC that came with D3D10) from row-major to column-major.
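That packing argument can be made concrete. Assuming constant registers are float4s and each row (or each column, depending on packing order) is padded out to a whole register, a hypothetical register-count calculation looks like this (the function names are mine, not any D3D API):

```python
import math

def registers_row_major(rows, cols):
    # one padded float4 register per row (rows of width > 4 would
    # span several registers, not the case for typical <=4x4 matrices)
    return rows * math.ceil(cols / 4)

def registers_col_major(rows, cols):
    # one padded float4 register per column
    return cols * math.ceil(rows / 4)

# A 3x4 matrix (3 rows of 4) packs tighter row-major:
print(registers_row_major(3, 4), registers_col_major(3, 4))  # 3 4

# A 4x3 matrix (4 rows of 3) packs tighter column-major:
print(registers_row_major(4, 3), registers_col_major(4, 3))  # 4 3
```

Under the row-vector convention an affine transform is a 4x3 matrix (v * M), so column-major packing fits it in 3 registers instead of 4, which is consistent with the remark above about the default changing.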
Edited by haegarr
