Converting OpenGL to D3D coords


I've written a D3D conversion of an OpenGL MD3 parser. The problem is, as I understand it, all OpenGL (and MD3) coords are left-handed and D3D's are right-handed (or is it vice-versa?). What do I need to do to convert an OpenGL vertex into a D3D one? I know this has been discussed before, but the search engine was of no use. Sorry. Should be an easy one! Thanks for the help, Onnel

I'm quite sure OpenGL can use left-handed AND right-handed coords. Just choose the one you want (I don't remember how to do this, check your doc).

I suppose D3D8 can do this too.

Simply negate the Z coordinate to transform between a left-handed and a right-handed coordinate system.
In a left handed coordinate system the positive Z axis points 'into' the screen, while in a right-handed coordinate system the positive Z axis 'comes out of' the screen.

The positive X axis always points to the right and the positive Y axis always points upwards.

Hold your right hand in front of your face, with your thumb (X axis) pointing to the right and your index finger (Y axis) pointing upwards, then point your middle finger (Z axis) towards your face. This is where the name 'right handed coordinate system' comes from.

Now hold your left hand in front of your face, similarly with your index finger (Y axis) pointing upwards and your thumb (X axis) pointing to the right, now your middle finger (Z axis) points away from your face. Hence the name 'left handed coordinate system'.

So, you see, there isn't really much difference between the two. IIRC OpenGL by default uses an RHS (Right handed system), while Direct3D uses an LHS. To make either one use the other system, simply scale your world matrix with (1, 1, -1) before applying your transformations and rotations etc...
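
To make that concrete, here is a minimal C++ sketch of the two equivalent approaches (flip each vertex while loading, or mirror at draw time); MD3Vertex, FlipHandedness and DrawMirrored are hypothetical names, not part of any real MD3 loader:

#include <GL/gl.h>

// Hypothetical vertex structure; a real MD3 loader will have its own layout.
struct MD3Vertex { float x, y, z; };

// Option 1: flip handedness once, while loading the data.
void FlipHandedness(MD3Vertex* verts, int count)
{
    for (int i = 0; i < count; ++i)
        verts[i].z = -verts[i].z;   // negating Z converts RH <-> LH
}

// Option 2: leave the data alone and mirror at draw time (OpenGL shown here).
void DrawMirrored()
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glScalef(1.0f, 1.0f, -1.0f);    // the (1, 1, -1) scale described above
    // ... apply the usual translations/rotations and draw the model ...
    glPopMatrix();
}

// Note: either approach also flips the apparent winding of the triangles.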

[EDIT: I realised that I had given an incorrect scale vector]

Edited by - Dactylos on July 31, 2001 6:33:29 AM

DAC,

Thanks for the good explanation! Does this truly apply to the way coordinates are stored, or does it only apply to transformations done on coords? If it doesn't affect the way coords are stored (and from how I understand what you wrote, it doesn't), then there is no problem loading the exact same vertex data into OpenGL or D3D.

Is this correct?

Are there any other issues that would cause problems loading index and vertex data which was originally stored for use with OpenGL into D3D? Winding issues (ordering of vertices CW vs. CCW), or anything else?

Thanks!
Onnel

If you load a model created for an LHS into an application that uses an RHS, then the model will appear mirrored in the XY plane (the Z coordinate is, in effect, implicitly negated, so to speak).

You should decide whether to use an RHS or an LHS in your engine and then always load the vertices etc. in the exact same way. Each frame you would automatically mirror the world matrix in the XY plane (in other words, scale by (1, 1, -1) as I mentioned above) if you are using the rendering API for which your chosen coordinate system isn't the default.

ex.

Say you choose to always work with a right-handed coordinate system (I'm assuming here that OpenGL uses a right-handed system and Direct3D uses a left-handed system. This might not be correct, however; please look this up, or try it out. If I have it backwards, then simply do it 'the opposite way' from what I show here):

When rendering with the OpenGL API you don't have to do anything, because it uses a right-handed system by default. So simply apply all the translations/rotations/etc. and then render your model.

When rendering with the Direct3D API, you should apply a scale(1, 1, -1) to the world matrix (or the view matrix) every frame, before doing any transformations and/or rendering. This should ideally be done in your BeginFrame() function or equivalent (the one where you call d3d_device->BeginScene() etc...), so that afterwards all the translations, rotations and whatever else the application using your engine performs are in effect done in a right-handed coordinate system.
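
A rough sketch of what that per-frame setup could look like under Direct3D 8; BeginFrame() and g_device are placeholder names for your own frame function and device pointer, not API calls:

#include <d3d8.h>
#include <d3dx8math.h>

extern IDirect3DDevice8* g_device;   // hypothetical device pointer owned by the engine

// Hypothetical per-frame setup: begin the scene and mirror the world in the
// XY plane so that everything rendered afterwards behaves right-handed.
void BeginFrame()
{
    g_device->BeginScene();

    D3DXMATRIX flip;
    D3DXMatrixScaling(&flip, 1.0f, 1.0f, -1.0f);   // scale(1, 1, -1)
    g_device->SetTransform(D3DTS_WORLD, &flip);

    // Per-object world matrices set later in the frame should be combined
    // with 'flip' (the exact multiplication order depends on your conventions)
    // rather than simply overwriting this transform.
}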

If you don't apply this scale your models will appear mirrored (eg. an enemy wielding a sword in his right hand would appear correctly under one API, while the sword would appear in his left hand under the other API).

Of course there is also the possibility that I'm totally off and that both OpenGL and Direct3D use the same kind of coordinate system. If so, then just ignore what I've said (it's of course still good to know).

Umm.... Feels like I'm mostly ranting by now, so I'll stop...

[EDIT: I realised that I had given an incorrect scale vector]

Edited by - Dactylos on July 31, 2001 6:34:31 AM

This may be a little off the topic, but I think both OpenGL and D3D can use both RH and LH systems.

I use RH for both APIs. It's just a question of the projection matrix and face culling. To use RH for D3D, call D3DXMatrixPerspectiveRH (instead of D3DXMatrixPerspectiveLH).
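
As a minimal sketch of that setup (reusing the hypothetical g_device pointer from the sketch above; the FOV, aspect ratio and clip-plane values are placeholders):

D3DXMATRIX proj;
// Right-handed perspective projection; the FOV, aspect and clip planes are placeholders.
D3DXMatrixPerspectiveFovRH(&proj, D3DX_PI / 4.0f, 4.0f / 3.0f, 1.0f, 1000.0f);
g_device->SetTransform(D3DTS_PROJECTION, &proj);

// Direct3D culls D3DCULL_CCW triangles by default; with right-handed matrices
// and CCW-wound (OpenGL-style) geometry you will typically want the opposite.
g_device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);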

That's a convenience function that performs what I have described above (in addition to setting up a perspective projection matrix). Though I claimed you should apply the scale to the world matrix; perhaps I was wrong, and the projection matrix is the one you should scale.

I was not aware such a function existed, since I mostly dabble in OpenGL. What I have described above is however the 'theory' behind using different coordinate systems in the two APIs. AFAIK there is no such convenience function in OpenGL and you must explicitly perform a mirror in the XY plane (if you want a nonstandard coord-system).

The culling mode can be changed in both APIs to use either CW or CCW.
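
For instance (a brief sketch; which winding should count as front-facing depends on how your model data was exported):

// OpenGL: declare which winding is front-facing, then cull back faces.
glFrontFace(GL_CCW);            // or GL_CW
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);

// Direct3D: choose which screen-space winding gets culled.
g_device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);   // or D3DCULL_CCW / D3DCULL_NONE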

Edited by - Dactylos on July 31, 2001 10:37:44 AM

