FantasyVII

OpenGL Vector and matrix multiplication order in DirectX and OpenGL


This is driving me absolutely crazy. I have been researching this topic for days and the more I read about it the more I get confused.

 

Let's all agree that in math there are two ways to multiply a vector and a matrix.

 

You can do this 

P = Mv

 

where P is the resulting vector, M is a matrix, and v is a vector. This would mean that this is a column-major matrix, which means that the translation vector in your matrix would look like this

 

[Xx, Xy, Xz, Tx]

[Yx, Yy, Yz, Ty]

[Zx, Zy, Zz, Tz]

[0,  0,  0,  1]

 

where Tx, Ty, and Tz are your translation vector.

 

On the other hand you can also do this

P = vM

 

This would mean that this is a row-major matrix, which means that your translation vector in your matrix would look like this

 

[Xx, Xy, Xz, 0]

[Yx, Yy, Yz, 0]

[Zx, Zy, Zz, 0]

[Tx, Ty, Tz, 1]

 

OK, this makes total sense. Now let's implement that in HLSL and GLSL.

 

 

Now, this HLSL code makes sense. The position vector is on the left side of the multiplication and the model matrix is on the right side, which means this is a row-major matrix.

struct VOut
{
	float4 position : SV_POSITION;
	float4 color : COLOR;
};

VOut main(float4 position : POSITION, float4 color : COLOR)
{
	float4x4 buffer_modelMatrix =
	{
		{ 1,    0,    0,    0 },
		{ 0,    1,    0,    0 },
		{ 0,    0,    1,    0 },
		{ 0.5f, 0.1f, 0.4f, 1 },
	};

	VOut output;

	output.position = mul(position, buffer_modelMatrix);
	output.color = color;

	return output;
}

Now let's move the position vector to the right side of the multiplication and the model matrix to the left side. We will also move the translation values into the last column instead of the bottom row of the matrix. That is valid too, and now the matrix is a column-major matrix. Everything works as intended and we can still translate the triangle just fine.

struct VOut
{
	float4 position : SV_POSITION;
	float4 color : COLOR;
};

VOut main(float4 position : POSITION, float4 color : COLOR)
{
	float4x4 buffer_modelMatrix =
	{
		{ 1, 0, 0, 0.5f },
		{ 0, 1, 0, 0.1f },
		{ 0, 0, 1, 0.4f },
		{ 0, 0, 0, 1    },
	};

	VOut output;

	output.position = mul(buffer_modelMatrix, position);
	output.color = color;

	return output;
}
 
Now let's try to do the same with GLSL.
 
#version 450 core
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;

out vec3 fragmentColor;

mat4 buffer_modelMatrix =
{
	{ 1, 0, 0, 0.5f },
	{ 0, 1, 0, 0    },
	{ 0, 0, 1, 0    },
	{ 0, 0, 0, 1    },
};

void main()
{
	gl_Position = buffer_modelMatrix * vec4(inPosition.xyz, 1.0f);
	fragmentColor = inColor;
}

If you try to run this code, the triangle will not be translated correctly, even though it should be. However, if you move inPosition to the left and the model matrix to the right side of the multiplication, the translation will work correctly. But it shouldn't, since the translation is still in the last column, not the bottom row.

gl_Position = vec4(inPosition.xyz, 1.0f) * buffer_modelMatrix;

Now that does not make any sense, because as we agreed above, if the vector (in this case inPosition) is on the left side of the multiplication and the matrix is on the right side, then this is row-major order and the translation should be in the bottom row, not the last column.

 

I hope I'm making sense. Can someone please explain why GLSL is not following the above rules for vector and matrix multiplication?

Edited by FantasyVII


You are using incorrect syntax for the mat4 constructor, and there is no need to specify the f suffix on floats in GLSL. It should be

mat4 buffer_modelMatrix = mat4(1, 0, 0, 0.5,
                               0, 1, 0, 0,
                               0, 0, 1, 0,
                               0, 0, 0, 1);

The HLSL matrix constructor takes rows of values, but the GLSL matrix constructor takes columns of values.
So your HLSL code is putting a translation in the right column, but your GLSL code is putting a translation in the bottom row. Fun quirks...
Seeing as it's rare to construct a matrix in a shader, it's very easy to overlook this difference between the two languages :|


 

You are using incorrect syntax for the mat4 constructor, and there is no need to specify the f suffix on floats in GLSL. It should be

mat4 buffer_modelMatrix = mat4(1, 0, 0, 0.5,
                               0, 1, 0, 0,
                               0, 0, 1, 0,
                               0, 0, 0, 1);

Off the top of my head, that might not work either. I recall GLSL being really picky when you start mixing integers with decimals. You might have to put .0 on the end of all the numbers, not just the 0.5.

 

As for what is actually going on, I can't say for sure, but I don't think you can actually change whether a matrix is row- or column-major in either of those, since that is a matter of how it is laid out in memory, which is out of your hands at that point. What you can change is the matrix you build to account for that; you are transposing those matrices, which effectively accounts for it. As far as I can tell, when you swap the order of multiplication you are transposing the inPosition vector, as there is a mathematical limit on the shapes of matrices that can be multiplied together. When the vector is on the right it needs to be vertical, but when it is on the left it needs to be horizontal. I believe both OpenGL and DirectX handle that for you transparently.

 

According to this:

https://www.opengl.org/wiki/Data_Type_(GLSL)#Constructors

For multiple values, matrices are filled in in column-major order

The matrix that is built using Supremecy's code (and I assume what you intended) would be a matrix that is suitable for multiplication with a horizontal vector, meaning the vector will need to be on the left. If you want it on the right then you will need to transpose the vector.


The HLSL matrix constructor takes rows of values, but the GLSL matrix constructor takes columns of values.
So your HLSL code is putting a translation in the right column, but your GLSL code is putting a translation in the bottom row. Fun quirks...
Seeing as it's rare to construct a matrix in a shader, it's very easy to overlook this difference between the two languages :|

 

This makes so much sense now. Thank you. I was going insane.

The reason I'm defining the matrix inside HLSL and GLSL is to help me understand how both shading languages handle matrices. I didn't want to send the matrix from C++ to the shader and add more confusion to an already confusing topic.

 

Alright, to summarize:

 

Row-major order:

Vector is always on the left side of the multiplication with a matrix.

P = vM

Translation vector is always in elements 12, 13, and 14.

 

Column-major order:

Vector is always on the right side of the multiplication with a matrix.

P = Mv

Translation vector is always in elements 3, 7, and 11.

 

So the only difference between HLSL and GLSL is how they lay out this data in memory.

HLSL reads the matrix row by row. GLSL reads the matrix column by column.

 

So this is how HLSL lays out the data in memory

0  1  2  3 
4  5  6  7 
8  9  10 11
12 13 14 15

And this is how GLSL lays out the data in memory.

0 4 8  12 
1 5 9  13 
2 6 10 14 
3 7 11 15

Alright. This makes sense. Now I have to understand how I should lay out the matrix in C++ and send it to HLSL and GLSL correctly.

 

Man..... Why couldn't OpenGL just read things row by row...... WHY !! :P


Wow, this is how I imagine hell looks.

 

Alright, I think I understand what you are saying.

 

So let me try one more time.

 

Math row-major and column-major ≠ CS row-major and column-major.

Math talks about the multiplication order and CS talks about the indexing order, and the two are not the same thing.

 

Both HLSL and GLSL use this order

0 4 8  12 
1 5 9  13 
2 6 10 14 
3 7 11 15

But the HLSL matrix constructor decided to take the values this way

0  1  2  3 
4  5  6  7 
8  9  10 11
12 13 14 15

So, in C++ if you have a 1D array of 16 elements, the order in memory should be "column order indexing" even though in the shader you are doing "math row major" multiplication.

 

And if you are doing "math column major" multiplication, your C++ memory layout for the array should be "row major indexing".

 

Yeah, this is definitely hell.


C++ column order indexing               HLSL/GLSL
  [0 4 8  12]                  =
  [1 5 9  13]                  =         P = vM (row major multiplication "Math")
  [2 6 10 14]                  =
  [3 7 11 15]                  =

C++ array indexing         [0, 1, 2, 3   -  4, 5, 6, 7   -  8, 9, 10, 11  -  12, 13, 14, 15] 
C++ array memory layout    [0, 4, 8, 12  -  1, 5, 9, 13  -  2, 6, 10, 14  -  3,  7,  11, 15] 

Translation vector is at 
array[3]
array[7]
array[11]

------------------------------------------------------------------------------------------

C++ row order indexing                 HLSL/GLSL
  [0  1  2  3]                 =
  [4  5  6  7]                 =        P = Mv (column major multiplication "Math")
  [8  9  10 11]                =
  [12 13 14 15]                =

C++ array indexing         [0, 1, 2, 3   -  4, 5, 6, 7   -  8, 9, 10, 11  -  12, 13, 14, 15] 
C++ array memory layout    [0, 1, 2, 3   -  4, 5, 6, 7   -  8, 9, 10, 11  -  12, 13, 14, 15] 

Translation vector is at 
array[12]
array[13]
array[14]


So, in C++ if you have a 1D array of 16 elements, the order in memory should be "column order indexing" even though in the shader you are doing "math row major" multiplication.

Yes, whether you're doing "math row major" multiplication or not is irrelevant.
The only time you should use row-major ordering in C++ is if you've also used the row_major keyword to tell your shaders to interpret the memory using that convention.

And if you are doing "math column major" multiplication, your C++ memory layout for the array should be "row major indexing".

No. There's no connection between whether you should use a particular "comp sci majorness" and a "math majorness". Comp-sci-row-major and math-column-major will work together just fine.

You just need to make sure that:
* If you use comp-sci column-major memory layout in the C++ side, then your shaders should work out of the box (just avoid the row_major keyword!).
* If you use comp-sci row-major memory layout in the C++ side, then use the row_major keyword in your shaders so that they interpret your memory correctly.
And separately:
* That your math makes sense, from a purely mathematical perspective  :)
* i.e. The choice of row-vectors / column-vectors, basis vectors in rows / basis vectors in columns, pre-multiply / post-multiply all depend on which mathematical conventions you want to use. These are all well defined and work as long as you're consistent.
* The math conventions that you choose have no impact whatsoever on which comp-sci conventions you can use.

Edited by Hodgman


Yeah, this is definitely hell.
 

 

I gotta agree on that one. I find the best method of dealing with this is to start at the end and work back. I think of my vectors as a vertical column, so mathematically they have to be the second argument of a multiplication (with a 4x4 matrix) or else the multiplication isn't defined. Once that's set, I then know that I have to build my transforms a certain way (e.g. the translation needs to be in the last column).

 

Then you need to hide everything away behind functions and avoid creating a matrix directly from values. That way you never really care what the ordering is; it'll just work as you expect. The only confusion then comes when you send it to your shader, but you can document that step in your code quite thoroughly, so once you have it working it shouldn't be an issue. You might need to transpose before sending, but that's about it.

 

I believe the mathematical convention is actually to go down columns first and then across rows (which is the opposite of most other things in maths as you tend to go across first then up) so things being column major does make sense in that regard.

 

This has probably been the topic that causes me the most confusion too. Thinking backwards, trying to stick with mathematical convention and using functions/abstraction helps a lot.
