glm::lookAt with DirectX


I'm creating a DirectX renderer for a project I previously made in OpenGL, and being able to reuse the glm library would be really useful.

Is it possible to convert the glm::lookAt matrix to work with DirectX? I have tried the following, but the matrix created by the DirectX math library is different:


	// D3DX: left-handed look-at
	D3DXMatrixLookAtLH( &matView,
		&D3DXVECTOR3( 0.0f, 3.0f, 5.0f ),   // eye
		&D3DXVECTOR3( 0.0f, 0.0f, 0.0f ),   // target
		&D3DXVECTOR3( 0.0f, 1.0f, 0.0f ) ); // up

	// GLM look-at with the same eye/target/up
	glm::mat4 gmatView = glm::lookAt( glm::vec3( 0.0f, 3.0f, 5.0f ),
									glm::vec3( 0.0f, 0.0f, 0.0f ),
									glm::vec3( 0.0f, 1.0f, 0.0f ) );

	// Reinterpret GLM's 16 floats as a D3DXMATRIX for comparison
	float* pt = glm::value_ptr( gmatView );
	D3DXMATRIX matGLConvertView = D3DXMATRIX( pt );

Thanks


Just a guess (I'm not a glm person): try comparing your glm lookAt matrix with the transpose of D3DXMatrixLookAtRH.
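
For example, a quick check along these lines (just a sketch, assuming D3DX9 math, GLM with gtc/type_ptr.hpp, <cassert> and <cmath> are all available in the project) would show the two line up element for element:

	D3DXMATRIX matViewRH, matViewRH_T;
	D3DXMatrixLookAtRH( &matViewRH,
		&D3DXVECTOR3( 0.0f, 3.0f, 5.0f ),
		&D3DXVECTOR3( 0.0f, 0.0f, 0.0f ),
		&D3DXVECTOR3( 0.0f, 1.0f, 0.0f ) );
	D3DXMatrixTranspose( &matViewRH_T, &matViewRH );

	glm::mat4 gmatView = glm::lookAt( glm::vec3( 0.0f, 3.0f, 5.0f ),
		glm::vec3( 0.0f, 0.0f, 0.0f ),
		glm::vec3( 0.0f, 1.0f, 0.0f ) );

	// Element-by-element check: D3DXMATRIX is indexed (row, col),
	// glm::mat4 is indexed [col][row].
	for( int col = 0; col < 4; ++col )
		for( int row = 0; row < 4; ++row )
			assert( fabsf( matViewRH_T( row, col ) - gmatView[col][row] ) < 1e-5f );

Incidentally, because D3DX stores matrices row-major and GLM stores them column-major, the raw 16-float arrays of the untransposed RH matrix and the GLM matrix come out identical, which is why simply reinterpreting GLM's floats as a D3DXMATRIX works once both sides use the right-handed convention.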

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.

DirectX prefers row-major matrices, whereas OpenGL matrices are column-major. You'll need to convert between the two, such as with glm::transpose().

You are also likely to find differences in how projection matrices are constructed between the two, which are somewhat harder to overcome.
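
For instance, a minimal sketch of that conversion (reusing gmatView from the original post, and assuming glm/gtc/type_ptr.hpp is included; the variable names are made up):

	// Transpose GLM's column-major storage into the row-major layout D3DX expects,
	// then hand the 16 floats to a D3DXMATRIX.
	glm::mat4 gmatRowMajor = glm::transpose( gmatView );
	D3DXMATRIX matFromGlm( glm::value_ptr( gmatRowMajor ) );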

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Buckeye, you utter legend. After you said that I looked into glm: its defaults are right-handed projection and look-at matrices. Once I used the RH variants (D3DXMatrixLookAtRH / D3DXMatrixPerspectiveFovRH) instead of the LH ones, it sprang to life :D.

I suggest you write your own versions of the lookAt, perspective, ortho, etc. matrices, and then send them to the GLM guy. You can find the layouts of those matrices on MSDN or in the DirectXMath header/inl files.

The code that I "stole". SELF_TYPE is the matrix type.



		SELF_TYPE result;

		// Build the camera basis D3D-style: R2 points from the target back toward
		// the eye, so this is a right-handed view matrix.
		vec3<DATA_TYPE> R2 = normalized(eye - at);

		vec3<DATA_TYPE> R0 = cross(up, R2);
		R0 = normalized(R0);

		vec3<DATA_TYPE> R1 = cross(R2, R0);

		// Translation part: the eye position projected onto each basis axis, negated.
		vec3<DATA_TYPE> NegEyePosition = -eye;

		DATA_TYPE D0 = dot(R0, NegEyePosition);
		DATA_TYPE D1 = dot(R1, NegEyePosition);
		DATA_TYPE D2 = dot(R2, NegEyePosition);

		result.data[0] = vec4<DATA_TYPE>(R0, D0);
		result.data[1] = vec4<DATA_TYPE>(R1, D1);
		result.data[2] = vec4<DATA_TYPE>(R2, D2);
		result.data[3] = vec4<DATA_TYPE>::GetAxis(3); // (0, 0, 0, 1)

		result = transposed(result); // For lazy ppl.

		return result;

DirectX prefers row-major matrices, whereas OpenGL matrices are column-major.

Just to be pedantic -- the D3DX math library is row-major (in both array storage and mathematical conventions), but there's no reason to use it in the first place if you're already using GLM.

Both HLSL and GLSL (i.e. shaders in D3D/GL) use column-major array storage conventions by default, and both have no default mathematical row/column-majorness convention (that part is down to how you write your math).

If you use D3DX matrices with (default compiled) HLSL, then you have to transpose your matrices into column-major storage order before passing them to the shader, which is a pain. Ironically, GLM could then be argued to be a better fit :lol:
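
A rough sketch of that transpose step (the constant-buffer struct and function name here are hypothetical, the actual buffer-upload plumbing is assumed to exist elsewhere, and <cstring> is needed for memcpy):

	// With default-compiled HLSL (column-major storage), a D3DX row-major matrix
	// has to be transposed before its 16 floats are copied into the cbuffer data.
	struct PerFrameConstants { float viewProj[4][4]; };

	void fillConstants( PerFrameConstants& out, const D3DXMATRIX& d3dxViewProj )
	{
		D3DXMATRIX t;
		D3DXMatrixTranspose( &t, &d3dxViewProj );
		memcpy( out.viewProj, t.m, sizeof( out.viewProj ) );
	}

	// A GLM matrix is already column-major in memory, so it could be copied as-is:
	//     memcpy( out.viewProj, glm::value_ptr( glmViewProj ), sizeof( out.viewProj ) );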

Some people also do the mental gymnastics of using column-major mathematical conventions in their HLSL code, but row-major conventions with D3DX. This "just works" because the HLSL code interprets the row-major-storage matrices as column-major storage, which is an implicit (free) transpose, which cancels out the switching of maths conventions. That's mental though!

You can use the exact same matrix math under both GL and D3D - e.g. using column-major everywhere.
IMHO, if you're switching your conventions when you switch APIs, you're in for a world of hurt. Pick one set of conventions and stick with them.

The only annoying difference between GL and D3D is that a GL projection matrix (stupidly) needs to scale Z values into a -1 to 1 range, while a D3D projection matrix needs to scale Z values into a 0 to 1 range.
You can 'fix' a GL style projection matrix for use in D3D by concatenating it with a "scale z by 0.5" matrix and a "translate z by 0.5" matrix (or, going the other way, fix a D3D style one for GL by concatenating with a "scale z by 2" matrix and a "translate z by -1" matrix).
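
In GLM terms that fixup could look something like this (a sketch; the function name is mine and GLM headers are assumed):

	// Remap GL-style clip z in [-1, 1] to D3D-style [0, 1]:  z' = 0.5 * z + 0.5 * w
	glm::mat4 glToD3dProjection( const glm::mat4& glProj )
	{
		glm::mat4 fix( 1.0f );
		fix[2][2] = 0.5f; // scale z by 0.5
		fix[3][2] = 0.5f; // translate z by 0.5
		return fix * glProj;
	}

	// Usage, e.g.: glm::mat4 proj = glToD3dProjection( glm::perspective( fovY, aspect, zNear, zFar ) );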


Or you can glClipControl(GL_UPPER_LEFT, GL_ZERO_TO_ONE) and pray that your players know how to update drivers.

Thanks for the answers, guys. My final solution was to use the glm right-handed view matrix, then implement a projection matrix myself that serves up different z ranges based on an OpenGL/DirectX flag.

Hodgman, I will give your scale-and-translate trick a go; it seems like a better solution than making two different matrices :)
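
Something along these lines, perhaps (a sketch of that flag-based approach; the enum and function name are hypothetical, and it assumes a GLM version whose glm::perspective takes the FOV in radians):

	enum class Api { OpenGL, DirectX };

	// Right-handed perspective that serves up either a -1..1 or a 0..1 clip z range.
	glm::mat4 makePerspective( float fovY, float aspect, float zNear, float zFar, Api api )
	{
		glm::mat4 proj = glm::perspective( fovY, aspect, zNear, zFar ); // RH, z in [-1, 1]
		if( api == Api::DirectX )
		{
			// Same remap as the scale/translate trick above: z' = 0.5 * z + 0.5 * w
			glm::mat4 fix( 1.0f );
			fix[2][2] = 0.5f;
			fix[3][2] = 0.5f;
			proj = fix * proj;
		}
		return proj;
	}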
