Archived

This topic is now archived and is closed to further replies.

onnel

OpenGL Converting opengl to d3d coords

Recommended Posts

onnel    124
I've written a D3D conversion of an OpenGL MD3 parser. The problem is, as I understand it, all OpenGL (and MD3) coords are left-handed and D3D's are right-handed (or is it vice versa?). What do I need to do to convert an OpenGL vertex into a D3D one? I know this has been discussed before, but the search engine was of no use. Sorry. Should be an easy one!

Thanks for the help,
Onnel

Prosper/LOADED    100
I'm quite sure OpenGL can use left-handed AND right-handed coords. Just choose the one you want (I don't remember how to do this, check your doc).

I suppose D3D8 can do this too.

Dactylos    122
Simply negate the Z coordinate to transform between a left-handed and a right-handed coordinate system.
In a left handed coordinate system the positive Z axis points 'into' the screen, while in a right-handed coordinate system the positive Z axis 'comes out of' the screen.

The positive X axis always points to the right and the positive Y axis always points upwards.

Hold you right hand in front of your face, with your thumb (X axis) pointing to the right and your index finger (Y axis) pointing upwards, then point your middle finger (Z axis) towards your face. This is where the name 'right handed coordinate system' comes from.

Now hold your left hand in front of your face, similarly with your index finger (Y axis) pointing upwards and your thumb (X axis) pointing to the right, now your middle finger (Z axis) points away from your face. Hence the name 'left handed coordinate system'.

So, you see, there isn't really much difference between the two. IIRC OpenGL by default uses an RHS (Right handed system), while Direct3D uses an LHS. To make either one use the other system, simply scale your world matrix with (1, 1, -1) before applying your transformations and rotations etc...
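To make that concrete, here is a minimal sketch of doing the same thing per vertex while loading the MD3 data; the Vec3 type and helper name are hypothetical, not part of the original parser:

    // Hypothetical vertex type; a real MD3 loader will have its own.
    struct Vec3 { float x, y, z; };

    // Convert a vertex between right-handed and left-handed conventions by
    // negating the Z component (the per-vertex equivalent of a (1, 1, -1) scale).
    inline Vec3 FlipHandedness(const Vec3& v)
    {
        Vec3 out = v;
        out.z = -out.z;   // mirror in the XY plane
        return out;
    }

Keep in mind that the mirror also flips triangle winding, so the cull mode (or index order) usually has to change as well, as discussed further down the thread.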

[EDIT: I realised that I had given an incorrect scale vector]

Edited by - Dactylos on July 31, 2001 6:33:29 AM

onnel    124
DAC,

Thanks for the good explanation! Does this truly apply to the way coordinates are stored, or does it only apply to transformations done on coords? If it doesn't affect the way coords are stored (and from how I understand what you wrote, it doesn't), then there is no problem loading the exact same vertex data into OpenGL or D3D.

Is this correct?

Are there any other issues that would cause problems loading index and vertex data originally stored for use with OpenGL into D3D? Winding issues (ordering of vertices, CW vs. CCW), or anything else?

Thanks!
Onnel

Dactylos    122
If you load a model created for an LHS into an application that uses an RHS, the model will appear mirrored in the XY plane (the Z coordinate is, in effect, implicitly negated, so to speak).

You should decide whether to use an RHS or an LHS in your engine and then load the vertices etc in the exact same way, but each frame you would automatically mirror the world matrix in the XY plane (in other words, scale by (1, 1, -1) as I mentioned above) if you are using the rendering API for which your chosen coordinate system isn't the default.

ex.

Suppose you choose to always work with a right-handed coordinate system. (I'm assuming here that OpenGL uses a right-handed system and Direct3D a left-handed one; that is the conventional default for each, but if I have it backwards, simply do everything 'the opposite way' from what I show here.)

When rendering with the OpenGL API you don't have to do anything, because it uses a right-handed system by default, so simply apply all the translations/rotations/etc. and then render your model.

When rendering with the Direct3D API, you should apply a scale of (1, 1, -1) to the world matrix (or the view matrix) every frame, before doing any transformations and/or rendering. Ideally this happens in your BeginFrame() function or equivalent (the one where you call d3d_device->BeginScene() etc.), so that all the translations and rotations the application performs afterwards are, in effect, performed in a right-handed coordinate system.
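As a minimal sketch of that step, assuming Direct3D 8 with the D3DX math helpers (the function and variable names here are illustrative, not from Onnel's code):

    #include <d3d8.h>
    #include <d3dx8math.h>

    // Called once per frame, right after BeginScene(), so that everything
    // rendered afterwards behaves as if the API were right-handed.
    void ApplyRightHandedBase(IDirect3DDevice8* device)
    {
        D3DXMATRIX mirrorZ;
        D3DXMatrixScaling(&mirrorZ, 1.0f, 1.0f, -1.0f);   // scale by (1, 1, -1)
        device->SetTransform(D3DTS_WORLD, &mirrorZ);

        // Note: SetTransform replaces the world matrix, so any per-object
        // world matrix set later in the frame needs the mirror folded in,
        // e.g. objWorld * mirrorZ with D3DX's row-vector convention.
    }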

If you don't apply this scale your models will appear mirrored (eg. an enemy wielding a sword in his right hand would appear correctly under one API, while the sword would appear in his left hand under the other API).

Of course there is also the possibility that I'm totally off and that both OpenGL and Direct3D use the same kind of coordinate system. If so, then just ignore what I've said (it's of course still good to know).

Umm.... Feels like I'm mostly ranting by now, so I'll stop...

[EDIT: I realised that I had given an incorrect scale vector]

Edited by - Dactylos on July 31, 2001 6:34:31 AM

stefu    120
This may be a little off the topic, but I think both OpenGL and D3D can use both RH and LH systems.

I use RH for both APIs. It's just a question of the projection matrix and face culling. To use RH in D3D, call D3DXMatrixPerspectiveRH (instead of D3DXMatrixPerspectiveLH).
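A minimal sketch of that setup, assuming Direct3D 8 with D3DX (using the field-of-view variant of the RH projection helper; the parameter values are just placeholders):

    #include <d3d8.h>
    #include <d3dx8math.h>

    void SetupRightHanded(IDirect3DDevice8* device, float aspect)
    {
        // Right-handed perspective projection instead of the usual LH one.
        D3DXMATRIX proj;
        D3DXMatrixPerspectiveFovRH(&proj, D3DX_PI / 4.0f, aspect, 1.0f, 1000.0f);
        device->SetTransform(D3DTS_PROJECTION, &proj);

        // The RH convention flips screen-space winding relative to D3D's
        // default, so swap the cull mode from D3DCULL_CCW to D3DCULL_CW.
        device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
    }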

Dactylos    122
That's a convenience function that does what I described above (in addition to setting up a perspective projection matrix). Though I claimed you should apply the scale to the world matrix; perhaps I was wrong, and the projection matrix is the one you should scale.

I was not aware such a function existed, since I mostly dabble in OpenGL. What I have described above is however the 'theory' behind using different coordinate systems in the two APIs. AFAIK there is no such convenience function in OpenGL and you must explicitly perform a mirror in the XY plane (if you want a nonstandard coord-system).

The culling mode can be changed in both APIs to use either CW or CCW.
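For the OpenGL side, a minimal fixed-function sketch of that explicit mirror (the function name is illustrative):

    #include <GL/gl.h>

    // Mirror in the XY plane so left-handed model data renders correctly in
    // OpenGL's conventional right-handed setup. Assumes the modelview matrix
    // is current and about to receive the camera/world transforms.
    void ApplyLeftHandedBase()
    {
        glScalef(1.0f, 1.0f, -1.0f);   // negate Z

        // The mirror flips triangle winding, so treat clockwise faces as
        // front faces instead of the default counter-clockwise.
        glFrontFace(GL_CW);
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);
    }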

Edited by - Dactylos on July 31, 2001 10:37:44 AM

