Unifying OpenGL and DirectX coordinate systems


Phillemann2    100
I'm co-designing a graphics engine that is supposed to support both DirectX (>=9) and OpenGL (>=2), including the fixed-function pipeline.

DirectX and OpenGL use different coordinate systems (left-handed vs. right-handed), so care must be taken when dealing with matrices. Currently, I've got a function [font="Courier New"]set_transformation_matrix(matrix4x4)[/font] which flips the z axis in the OpenGL backend and leaves the matrix unchanged in the DirectX backend.
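For concreteness, a minimal sketch of that backend split (the matrix layout and function names here are illustrative stand-ins, not the engine's actual API):

```cpp
#include <array>

// Illustrative stand-in for the engine's matrix4x4, stored as m[row][col]
// with the column-vector convention (transform = matrix * vector).
using matrix4x4 = std::array<std::array<float, 4>, 4>;

matrix4x4 identity() {
    matrix4x4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

// Flipping the z axis is a pre-multiplication by scale(1, 1, -1),
// which negates the third row and converts LH <-> RH.
matrix4x4 flip_z(const matrix4x4& in) {
    matrix4x4 out = in;
    for (int col = 0; col < 4; ++col) out[2][col] = -out[2][col];
    return out;
}

// The OpenGL backend flips; the DirectX backend passes through.
matrix4x4 set_transformation_matrix_gl(const matrix4x4& m)  { return flip_z(m); }
matrix4x4 set_transformation_matrix_d3d(const matrix4x4& m) { return m; }
```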

However, I also want to support shaders. My shader class has a function [font="Courier New"]set_uniform_matrix(string name,matrix4x4)[/font] which sets a matrix. This setter function, however, should not flip the z axis per se, because we might want to use the matrix for something other than projection.

So my question is: How do I treat the coordinate systems in my engine? Or, if I were to abstract GLSL and HLSL, how would I treat projection matrices? "Tag" them with "is_projection_matrix"?

mhagain    13430
D3D (since at least version 8) has matrix functions for both LH and RH systems, so the differences are not really relevant any more. Just use the same system in both APIs and it will work just fine.

Phillemann2    100
You're both missing the point.

I cannot simply "pick" a coordinate system, since the user should be able to set shader matrix variables himself. And I cannot just transform those matrices to match OpenGL's or DirectX's coordinate system since I don't know if the matrices are used for projection or something else. If they're not used for projection, the transformation leads to unexpected behavior.

gsamour    140
If you can't pick the coordinate system, then you must let the user pick it and then call the API's LH or RH functions depending on the user's choice.

mhagain    13430
[quote name='Phillemann2' timestamp='1311005369' post='4836858']I cannot simply "pick" a coordinate system, since the user should be able to set shader matrix variables himself. And I cannot just transform those matrices to match OpenGL's or DirectX's coordinate system since I don't know if the matrices are used for projection or something else. If they're not used for projection, the transformation leads to unexpected behavior.
[/quote]

Ok, that's new info.

Nonetheless, the core point remains: D3D [i]doesn't actually have a set coordinate system[/i]. It supports both LH and RH, and you as the programmer can choose which to use. This is true even in the fixed pipeline. So unless there's more new info we don't yet have (like the user being able to choose the coordinate system when they supply coordinates), there's no reason why you can't just use RH globally and be done with it.

Similarly, OpenGL's gluPerspective and glFrustum calls generate an RH coordinate transformation, but there's absolutely nothing to stop you from composing your own LH matrices and using glLoadMatrixf. That will also work with the fixed pipeline.
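As a sketch of that last point, here is one way to build a left-handed perspective matrix by hand, with OpenGL's [-1,1] clip-space depth, laid out column-major the way glLoadMatrixf expects (the function name and parameters here are my own, not a GL call):

```cpp
#include <array>
#include <cmath>

// A left-handed perspective matrix with OpenGL's [-1,1] clip-space depth,
// in the column-major float[16] layout glLoadMatrixf expects: m[col*4+row].
// Like what gluPerspective builds, except +z points into the screen.
std::array<float, 16> perspective_lh(float fovy_rad, float aspect,
                                     float zn, float zf) {
    const float f = 1.0f / std::tan(fovy_rad * 0.5f);
    std::array<float, 16> m{};
    m[0]  = f / aspect;                  // x scale
    m[5]  = f;                           // y scale
    m[10] = (zf + zn) / (zf - zn);       // maps view z in [zn,zf] to [-1,1]
    m[14] = -2.0f * zn * zf / (zf - zn);
    m[11] = 1.0f;                        // clip w = +view z (left-handed)
    return m;
}

// NDC depth produced for a view-space z, to check the mapping.
float ndc_z(const std::array<float, 16>& m, float z) {
    const float clip_z = m[10] * z + m[14];
    const float clip_w = m[11] * z;
    return clip_z / clip_w;
}
```

Loading the result with glMatrixMode(GL_PROJECTION) and glLoadMatrixf(m.data()) gives the fixed pipeline a left-handed projection, which is the point above: the API itself doesn't force a handedness on you.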

So none of this is really any kind of big deal, and it doesn't involve the level of care you describe. Every D3D matrix function where handedness matters has both an LH and an RH version, and you can use either. I've ported programs from OpenGL to D3D and it's been utterly painless as far as matrices and coordinate systems are concerned.

haegarr    7372
[quote name='Phillemann2' timestamp='1311005369' post='4836858']
I cannot simply "pick" a coordinate system, since the user should be able to set shader matrix variables himself. And I cannot just transform those matrices to match OpenGL's or DirectX's coordinate system since I don't know if the matrices are used for projection or something else....
[/quote]
AFAIK, at some point it must be ensured that the involved matrices match a convention before they are concatenated. There must be a decision to use one system or the other. If the user is responsible for setting any and all matrices, then you can leave this decision to her anyway. Otherwise you need a prescription or a preferences setting.

[quote name='Phillemann2' timestamp='1311005369' post='4836858']
... If they're not used for projection, the transformation leads to unexpected behavior.
[/quote]
This is not exactly true. Because the mirroring happens inside a chain of transformations, you can fold it into either the left or the right matrix, because of
( [b]P[/b] * [b]M[/b] ) * [b]V[/b] == [b]P[/b] * ( [b]M[/b] * [b]V[/b] )
when using column vector notation, where [b]P[/b] is the projection, [b]M[/b] the mirroring and [b]V[/b] the view matrix. However, you're right that you must not shift the mirroring onto the side of the model matrices, as long as you don't want the user to be able to intermix handedness at will.
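A quick numeric check of that associativity, with diagonal matrices standing in for P and the mirroring, and a translation standing in for V (all names illustrative):

```cpp
#include <array>

using mat4 = std::array<std::array<float, 4>, 4>;

// Plain 4x4 multiply, m[row][col], column-vector convention.
mat4 mul(const mat4& a, const mat4& b) {
    mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

mat4 scale(float x, float y, float z) {
    mat4 m{};
    m[0][0] = x; m[1][1] = y; m[2][2] = z; m[3][3] = 1.0f;
    return m;
}

mat4 translate(float x, float y, float z) {
    mat4 m = scale(1.0f, 1.0f, 1.0f);
    m[0][3] = x; m[1][3] = y; m[2][3] = z;
    return m;
}
```

With P = scale(2,3,4) standing in for a projection, M = scale(1,1,-1) as the mirroring, and V = translate(5,6,7) as the view matrix, mul(mul(P, M), V) and mul(P, mul(M, V)) produce the same matrix, so the flip can be folded into either neighbor.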

AgentC    2352
One possible solution: don't put any tagging or conversion code into the shader uniform system, but rather make your camera class behave differently based on the API being used and supply a projection matrix that's correct for that API. There are probably projection matrices elsewhere as well, such as for projective lights and shadow maps, and these will also need adjustment.

Phillemann2    100
Ok, first of all, thanks for all your input. We've looked at each post and then decided on a solution, which is as follows:

The convention we use in the renderer is a left-handed system. The renderer provides projection functions for this system only (copies of the DirectX LH projection functions). In [font="Courier New"]set_transformation_matrix(matrix4x4 input)[/font], the OpenGL backend transforms the input matrix by doing

translation_matrix({0,0,-1}) * scaling_matrix({1,1,2}) * input

which remaps z' = 2z - 1 and accounts for the different canonical view volumes in DirectX (z in [0,1]) and OpenGL (z in [-1,1]). Thus the OpenGL backend also uses a left-handed system; the z axis is not flipped.

Additionally, the shaders no longer receive a plain [font="Courier New"]matrix4x4[/font] in the [font="Courier New"]set_uniform_matrix[/font] function, but a new class containing the [font="Courier New"]matrix4x4[/font] and a flag indicating whether the supplied matrix is a projection matrix. If it is, the same transformation as above is applied.
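A sketch of how that tagged setter could look (class and function names are illustrative, not the actual engine code):

```cpp
#include <array>

using matrix4x4 = std::array<std::array<float, 4>, 4>;   // m[row][col]

matrix4x4 identity() {
    matrix4x4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

matrix4x4 mul(const matrix4x4& a, const matrix4x4& b) {
    matrix4x4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

std::array<float, 4> apply(const matrix4x4& m, const std::array<float, 4>& v) {
    std::array<float, 4> r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// translation({0,0,-1}) * scaling({1,1,2}) collapsed into one matrix:
// z' = 2z - w, which remaps DirectX's clip-space z in [0,w] to
// OpenGL's [-w,w], without flipping the axis.
matrix4x4 gl_depth_adjust() {
    matrix4x4 m = identity();
    m[2][2] = 2.0f;
    m[2][3] = -1.0f;
    return m;
}

// Matrix plus the "is this a projection?" tag described above.
struct shader_matrix {
    matrix4x4 value;
    bool is_projection = false;
};

// What the OpenGL backend's set_uniform_matrix would upload; the
// DirectX backend would upload m.value unchanged in both cases.
matrix4x4 resolve_for_gl(const shader_matrix& m) {
    return m.is_projection ? mul(gl_depth_adjust(), m.value) : m.value;
}
```

Because the adjustment's translation term multiplies w rather than assuming w == 1, it composes correctly on the left of a projection matrix in homogeneous coordinates.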

