DirectX prefers row-major matrices, whereas OpenGL matrices are column-major.
Just to be pedantic -- the D3DX math library is row-major (in both array storage and mathematical conventions), but there's no reason to use it in the first place if you're already using GLM.
Both HLSL and GLSL (i.e. shaders in D3D/GL) use column-major array storage conventions by default, and neither has a default mathematical row/column-major convention (that part is down to how you write your math).
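To make that concrete with GLM on the CPU side (the same idea applies inside a shader): storage order is a property of the matrix type, while the math convention is just whichever way round you write the multiply. A minimal sketch, assuming GLM's default column-major storage:

```cpp
#include <glm/glm.hpp>

void StorageVsMathConvention()
{
    // GLM (like default-compiled GLSL/HLSL) stores matrices column-major:
    // columns are contiguous in memory, indexed as m[column][row].
    glm::mat4 M(1.0f);
    glm::vec4 v(1.0f, 2.0f, 3.0f, 1.0f);

    // The mathematical convention is separate -- it's simply how you multiply:
    glm::vec4 columnVectorMaths = M * v; // treat v as a column vector
    glm::vec4 rowVectorMaths    = v * M; // treat v as a row vector
}
```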
If you use D3DX matrices with (default compiled) HLSL, then you have to transpose your matrices into column-major storage order before passing them to the shader, which is a pain. Ironically, GLM could then be argued to be a better fit :lol:
Some people also do the mental gymnastics of using column-major mathematical conventions in their HLSL code, but row-major conventions with D3DX. This "just works" because the HLSL code interprets the row-major-storage matrices as column-major-storage, which is an implicit (free) transpose, which cancels out the switch in math conventions. That's mental though!
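For example, here's roughly what that transpose-before-upload step looks like if your CPU-side matrices are row-major (D3DX-style). The matrix struct and the upload helper here are just hypothetical stand-ins for whatever constant-buffer path you actually use:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <cstddef>

struct RowMajorMat4 { float m[4][4]; };                  // stand-in for a D3DX-style row-major matrix
void UploadConstant(const void* data, std::size_t size); // hypothetical: copies into a constant buffer

void UploadWorldViewProj(const RowMajorMat4& d3dxStyle, const glm::mat4& glmStyle)
{
    // Row-major storage has to be transposed into column-major storage
    // before a default-compiled HLSL shader will read it correctly.
    RowMajorMat4 transposed;
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            transposed.m[r][c] = d3dxStyle.m[c][r];
    UploadConstant(&transposed, sizeof(transposed));

    // GLM already stores column-major, so it can go across untouched.
    UploadConstant(glm::value_ptr(glmStyle), sizeof(glmStyle));
}
```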
You can use the exact same matrix math under both GL and D3D - e.g. using column-major everywhere.
IMHO, if you're switching your conventions when you switch APIs, you're in for a world of hurt. Pick one set of conventions and stick with them.
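As a sketch of what "one set of conventions everywhere" can look like (assuming column-major storage and column-vector maths throughout):

```cpp
#include <glm/glm.hpp>

// The same maths feeds both APIs: build the combined transform once,
// using column-vector conventions (rightmost matrix applies first).
glm::mat4 BuildWorldViewProj(const glm::mat4& proj,
                             const glm::mat4& view,
                             const glm::mat4& world)
{
    return proj * view * world;
}

// GL path:  glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(wvp));
//           (transpose = GL_FALSE, because the data is already column-major)
// D3D path: memcpy the same bytes into your constant buffer -- default-compiled
//           HLSL expects column-major storage too, so no transpose is needed.
```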
The only annoying difference between GL and D3D is that a GL projection matrix (stupidly) needs to scale Z values into a -1 to 1 range, while a D3D projection matrix needs to scale Z values into a 0 to 1 range.
You can 'fix' a GL-style projection matrix for use in D3D by concatenating it with a "scale z by 0.5" matrix and a "translate z by 0.5" matrix (or go the other way, adapting a D3D-style projection for GL, by concatenating with a "scale z by 2" matrix and a "translate z by -1" matrix).
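In code, the GL-to-D3D fix looks something like this (column-vector maths, so the fix-up goes on the left, i.e. it's applied after the projection, in clip space before the perspective divide):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Remap Z so NDC depth goes from GL's [-1, 1] to D3D's [0, 1]:  z' = z * 0.5 + 0.5
glm::mat4 GlProjectionToD3d(const glm::mat4& projGl)
{
    const glm::mat4 fix =
        glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.5f)) *
        glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, 0.5f));
    return fix * projGl;
}

// The reverse (D3D-style [0, 1] back to GL's [-1, 1]) is "scale z by 2, translate z by -1".
```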