Hi,

A matrix can be stored in column-major or row-major order.

OpenGL uses column-major order and DirectX uses row-major order.

Which is best to use when writing a matrix class?

Thanks

**Edited by Alundra, 02 July 2013 - 02:39 PM.**

Started by Alundra, Jul 02 2013 02:38 PM

7 replies to this topic

Posted 02 July 2013 - 02:52 PM

I would just use whatever order your graphics library uses. There is really no significant difference. All it changes is the order in which you multiply matrices and vectors, and if you have to switch between the two to support different graphics platforms, then include a transpose method. I personally prefer the DirectX way of doing it, but either will be fine.

My current game project Platform RPG

Posted 02 July 2013 - 02:53 PM

Column or row major matrices is only half the question; column or row vectors is the second half. While the two APIs are written with different matrix storage modes, they are also written with different vector types in mind. The net result is that both APIs have exactly the same memory layout for their matrices. In other words, you can use the same matrices, as stored in memory, in both APIs.

So whether you go with column major matrices and column vector, or row major matrices and row vectors, does not change anything as far as OpenGL or Direct3D are concerned. The only difference is whether you multiply your vectors on the left or the right hand side of the matrix as determined by whether you use row or column vectors.

**Edited by Brother Bob, 02 July 2013 - 02:53 PM.**

Posted 02 July 2013 - 09:03 PM

I have posted this in the past. Neither OpenGL nor Direct3D cares whether you use column-major or row-major order, as long as all your matrix multiplications and transformations follow the same rule. The only thing OpenGL cares about is that elements 12, 13, and 14 of the matrix always represent the position. That means your x, y, z basis axes can be represented either as rows or as columns.

Posted 03 July 2013 - 08:08 AM

I store my matrix like this:

m16[ 0 ] = 1.0f; m16[ 4 ] = 0.0f; m16[ 8 ]  = 0.0f; m16[ 12 ] = x;
m16[ 1 ] = 0.0f; m16[ 5 ] = 1.0f; m16[ 9 ]  = 0.0f; m16[ 13 ] = y;
m16[ 2 ] = 0.0f; m16[ 6 ] = 0.0f; m16[ 10 ] = 1.0f; m16[ 14 ] = z;
m16[ 3 ] = 0.0f; m16[ 7 ] = 0.0f; m16[ 11 ] = 0.0f; m16[ 15 ] = 1.0f;

I have renderers for Direct3D11 and OpenGL, and I have to transpose the matrix to make it work on D3D11.

I don't understand what you said about one matrix that works for both; do you mean an #ifdef that changes the indices?

Posted 03 July 2013 - 08:39 AM

The OpenGL specification is written with column vectors in mind, and this is how a translation matrix for column vectors looks:

1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1

Now store this matrix in column major order, and you get the following memory layout:

1 0 0 0 0 1 0 0 0 0 1 0 x y z 1

Direct3D, on the other hand, uses a row vector notation, and this is how a translation matrix for row vectors looks:

1 0 0 0
0 1 0 0
0 0 1 0
x y z 1

Now store this matrix in row major order, and you get the following memory layout:

1 0 0 0 0 1 0 0 0 0 1 0 x y z 1

See how the effect of changing both matrix storage mode and vector mode negates each other and the final memory layout is exactly the same?

There are two questions that you seem to be mixing: column vs. row major storage, and column vs. row vectors. Column vs. row major storage dictates how a two-dimensional matrix is stored in one-dimensional memory, while column vs. row vectors dictates whether you multiply your vector on the left or right hand side of the matrix. The two are completely independent choices, but together they determine the physical layout in linear memory. Column major storage and column vectors have exactly the same physical storage as row major storage and row vectors. That is why you can use the same data for both APIs.

But as BornToCode said, it is not actually correct to say that OpenGL uses column vectors and column major storage. That is why I wrote that the *specification is written with that notation*: you can use any storage mode and vector mode as long as the final memory layout is consistent with what OpenGL assumes. The same applies to Direct3D.

**Edited by Brother Bob, 03 July 2013 - 08:42 AM.**

Posted 03 July 2013 - 01:33 PM

> I store my matrix like this:
>
> m16[ 0 ] = 1.0f; m16[ 4 ] = 0.0f; m16[ 8 ]  = 0.0f; m16[ 12 ] = x;
> m16[ 1 ] = 0.0f; m16[ 5 ] = 1.0f; m16[ 9 ]  = 0.0f; m16[ 13 ] = y;
> m16[ 2 ] = 0.0f; m16[ 6 ] = 0.0f; m16[ 10 ] = 1.0f; m16[ 14 ] = z;
> m16[ 3 ] = 0.0f; m16[ 7 ] = 0.0f; m16[ 11 ] = 0.0f; m16[ 15 ] = 1.0f;
>
> I have renderers for Direct3D11 and OpenGL, and I have to transpose the matrix to make it work on D3D11.
>
> I don't understand what you said about one matrix that works for both; do you mean an #ifdef that changes the indices?

The reason you need to transpose your matrix for D3D11 is that, by default, the shader-side matrix memory layout is the transpose of the layout you have stored on the CPU side. So instead of elements 12, 13, 14, the elements at 3, 7, 11 represent the position. In OpenGL's GLSL, the matrix layout is the same as the one you have on the CPU side, which is why there is no need to transpose for GLSL.

**Edited by BornToCode, 03 July 2013 - 01:42 PM.**

Posted 04 July 2013 - 10:34 AM

> The reason you need to transpose your matrix for D3D11 is that, by default, the shader-side matrix memory layout is the transpose of the layout you have stored on the CPU side. So instead of elements 12, 13, 14, the elements at 3, 7, 11 represent the position. In OpenGL's GLSL, the matrix layout is the same as the one you have on the CPU side, which is why there is no need to transpose for GLSL.

Is there no way to avoid this transpose?