# Unifying OpenGL and DirectX coordinate systems

## Recommended Posts

I'm co-designing a graphics engine that is supposed to support both DirectX (>= 9) and OpenGL (>= 2), including the fixed-function pipeline.

DirectX and OpenGL traditionally use different coordinate systems (left-handed vs. right-handed), so care must be taken when dealing with matrices. Currently, I've got a function `set_transformation_matrix(matrix4x4)` which flips the z axis in the OpenGL backend and leaves it alone in the DirectX backend.
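To illustrate, such a z-axis flip might look like the following sketch. `matrix4x4` here is a stand-in type (column-major storage, column-vector convention), not the engine's actual class; `mul`, `identity`, and `transform` are hypothetical helpers:

```cpp
#include <array>
#include <cassert>

// Stand-in 4x4 matrix type: column-major storage, column-vector convention.
using matrix4x4 = std::array<std::array<float, 4>, 4>; // m[col][row]

matrix4x4 identity() {
    matrix4x4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

// out = a * b, column-major: out[c][r] = sum_k a[k][r] * b[c][k]
matrix4x4 mul(const matrix4x4& a, const matrix4x4& b) {
    matrix4x4 out{};
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            for (int k = 0; k < 4; ++k)
                out[c][r] += a[k][r] * b[c][k];
    return out;
}

// Apply m to a column vector v.
std::array<float, 4> transform(const matrix4x4& m,
                               const std::array<float, 4>& v) {
    std::array<float, 4> out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[c][r] * v[c];
    return out;
}

// The handedness conversion: pre-multiply by S = diag(1, 1, -1, 1),
// which negates the z axis of whatever 'input' produces.
matrix4x4 flip_z(const matrix4x4& input) {
    matrix4x4 s = identity();
    s[2][2] = -1.0f;
    return mul(s, input);
}
```

The OpenGL backend would apply `flip_z` before uploading the matrix, while the DirectX backend would upload `input` unchanged.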

However, I also want to support shaders. My shader class has a function `set_uniform_matrix(string name, matrix4x4)` which sets a matrix. This setter, however, should not flip the z axis unconditionally, because the matrix might be used for something other than projection.

So my question is: How do I treat the coordinate systems in my engine? Or, if I were to abstract over GLSL and HLSL, how would I treat projection matrices? "Tag" them with an `is_projection_matrix` flag?

##### Share on other sites
You pick either a left handed or right handed system and go with that. There is nothing more special about it than that.

##### Share on other sites
D3D (since at least version 8) has matrix functions for both LH and RH systems, so the differences are not really relevant any more. Just use the same system in both APIs and it will work just fine.

##### Share on other sites
You're both missing the point.

I cannot simply "pick" a coordinate system, since the user should be able to set shader matrix variables himself. And I cannot just transform those matrices to match OpenGL's or DirectX's coordinate system, since I don't know whether a given matrix is used for projection or something else. If it isn't, the transformation leads to unexpected behavior.

##### Share on other sites
If you can't pick the coordinate system yourself, then you must let the user pick it and call the API's LH or RH functions depending on that choice.

##### Share on other sites
> **Phillemann2 wrote:** I cannot simply "pick" a coordinate system, since the user should be able to set shader matrix variables himself. And I cannot just transform those matrices to match OpenGL's or DirectX's coordinate system since I don't know if the matrices are used for projection or something else. If they're not used for projection, the transformation leads to unexpected behavior.

Ok, that's new info.

Nonetheless, the core point remains: D3D *doesn't actually have a fixed coordinate system*. It supports both LH and RH, and you as the programmer can choose which to use. This is true even in the fixed pipeline. So unless there's more new info we don't yet have (like the user being able to choose the coordinate system when they supply coordinates), there's no reason why you can't just use RH globally and be done with it.

Similarly, OpenGL's gluPerspective and glFrustum calls generate an RH coordinate transformation, but there's absolutely nothing stopping you from composing your own LH matrices and loading them with glLoadMatrixf. That also works with the fixed pipeline.

So none of this is really a big deal, and it doesn't require the level of care you describe. Every D3D matrix function where handedness matters comes in both an LH and an RH version, and you can use either. I've ported programs from OpenGL to D3D and it's been utterly painless as far as matrices and coordinate systems are concerned.
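Composing a left-handed perspective matrix for OpenGL could look like the sketch below. It targets OpenGL's [-1, 1] clip volume, is laid out column-major as glLoadMatrixf expects, and the parameter names follow gluPerspective; the GL call itself is shown commented because it needs a live context:

```cpp
#include <array>
#include <cmath>

// Left-handed perspective matrix (camera looks down +z) for OpenGL's
// [-1, 1] clip volume, in column-major order (16 floats, columns
// consecutive) as glLoadMatrixf expects. A sketch, not production code.
std::array<float, 16> perspective_lh(float fovy_deg, float aspect,
                                     float zn, float zf) {
    const float f = 1.0f / std::tan(fovy_deg * 3.14159265f / 360.0f);
    std::array<float, 16> m{}; // zero-initialized
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zf + zn) / (zf - zn);       // +z forward instead of -z
    m[11] = 1.0f;                        // w_clip = +z_view (left-handed)
    m[14] = -2.0f * zf * zn / (zf - zn);
    return m;
}

// Usage with the fixed pipeline (requires a GL context):
//   glMatrixMode(GL_PROJECTION);
//   glLoadMatrixf(perspective_lh(60.0f, 4.0f / 3.0f, 0.1f, 100.0f).data());
```

Note the only differences from the usual right-handed matrix are the signs of `m[10]` and `m[11]`; depth still maps to [-1, 1] (near plane to -1, far plane to +1).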

##### Share on other sites
> **Phillemann2 wrote:** I cannot simply "pick" a coordinate system, since the user should be able to set shader matrix variables himself. And I cannot just transform those matrices to match OpenGL's or DirectX's coordinate system since I don't know if the matrices are used for projection or something else....
AFAIK, at some point it must be ensured that the involved matrices follow one convention before they are concatenated; there has to be a decision for one system or the other. If the user is responsible for setting any and all matrices, you can leave that decision to her anyway. Otherwise you need either a prescribed convention or a preferences setting.

> **Phillemann2 wrote:** ... If they're not used for projection, the transformation leads to unexpected behavior.
This is not exactly true. Because the mirroring happens inside a chain of transformations, you can fold it into either the left or the right matrix, since

(**P** * **M**) * **V** == **P** * (**M** * **V**)

in column-vector notation, where **P** is the projection, **M** the mirroring, and **V** the view matrix. However, you're right that you must not push the mirroring onto the model matrices as long as you don't want the user to be able to intermix handedness at will.
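That fold — baking the mirror into the projection so that **P'** = **P** * **M** and **P'** * **V** == **P** * (**M** * **V**) — can be checked numerically. A sketch assuming column-major matrices; `mat4`, `mul`, and `diag` are hypothetical helpers:

```cpp
#include <array>
#include <cmath>

// Column-major 4x4 (m[col][row]), column-vector convention.
using mat4 = std::array<std::array<float, 4>, 4>;

// out = a * b
mat4 mul(const mat4& a, const mat4& b) {
    mat4 out{};
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            for (int k = 0; k < 4; ++k)
                out[c][r] += a[k][r] * b[c][k];
    return out;
}

// Diagonal matrix diag(x, y, z, w).
mat4 diag(float x, float y, float z, float w) {
    mat4 m{};
    m[0][0] = x; m[1][1] = y; m[2][2] = z; m[3][3] = w;
    return m;
}

// Element-wise comparison with a small tolerance.
bool approx_equal(const mat4& a, const mat4& b) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            if (std::fabs(a[c][r] - b[c][r]) > 1e-5f) return false;
    return true;
}
```

With **M** = diag(1, 1, -1, 1), `approx_equal(mul(mul(P, M), V), mul(P, mul(M, V)))` holds for any **P** and **V**, which is exactly why the mirror can live inside the projection matrix without the user ever seeing it.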

##### Share on other sites
One possible solution: don't put any tagging or conversion code into the shader uniform system; instead, make your camera class behave differently based on the API in use and supply a projection matrix that is correct for that API. There are probably projection matrices elsewhere as well, such as for projective lights and shadow maps, and these will also need adjustment.

##### Share on other sites
Ok, first of all, thanks for all your input. We've looked at each post and decided on the following solution:

The convention we use in the renderer is a left-handed system. The renderer provides projection functions for this system only (copies of the DirectX left-handed projection functions). In `set_transformation_matrix(matrix4x4 input)`, the OpenGL backend transforms the input matrix by computing

`translation_matrix({0, 0, -1}) * scaling_matrix({1, 1, 2}) * input`

which accounts for the different canonical view volumes of OpenGL (z in [-1, 1]) and DirectX (z in [0, 1]). Thus OpenGL also uses a left-handed system, and the z axis is not flipped.

Additionally, the shaders no longer receive a plain `matrix4x4` in the `set_uniform_matrix` function, but a new class containing the `matrix4x4` and a flag indicating whether the supplied matrix is a projection matrix. If it is, the same transformation as above is applied.
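A sketch of that fix-up, assuming column-major storage and column vectors; `translation_matrix`, `scaling_matrix`, `mul`, and `transform` are hypothetical helpers named after the expressions above:

```cpp
#include <array>
#include <cmath>

// Hypothetical column-major matrix type (m[col][row]), column vectors.
using matrix4x4 = std::array<std::array<float, 4>, 4>;

matrix4x4 identity() {
    matrix4x4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

matrix4x4 translation_matrix(float x, float y, float z) {
    matrix4x4 m = identity();
    m[3][0] = x; m[3][1] = y; m[3][2] = z;
    return m;
}

matrix4x4 scaling_matrix(float x, float y, float z) {
    matrix4x4 m = identity();
    m[0][0] = x; m[1][1] = y; m[2][2] = z;
    return m;
}

// out = a * b
matrix4x4 mul(const matrix4x4& a, const matrix4x4& b) {
    matrix4x4 out{};
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            for (int k = 0; k < 4; ++k)
                out[c][r] += a[k][r] * b[c][k];
    return out;
}

std::array<float, 4> transform(const matrix4x4& m,
                               const std::array<float, 4>& v) {
    std::array<float, 4> out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[c][r] * v[c];
    return out;
}

// Remap a D3D-style projection (clip z in [0, w]) to OpenGL's [-w, w].
// In homogeneous coordinates this computes z' = 2z - w, which after the
// perspective divide is z_ndc' = 2 * z_ndc - 1.
matrix4x4 d3d_to_gl_projection(const matrix4x4& input) {
    return mul(mul(translation_matrix(0, 0, -1), scaling_matrix(1, 1, 2)),
               input);
}
```

Because the translation's z offset is multiplied by w, the remap works in clip space before the perspective divide, so it can be baked into any projection matrix once, up front.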
