Writing an OpenGL rendering backend for a left-handed DX11 engine


Hey,

I recently started studying OpenGL with the intention of adding an OpenGL 4.x rendering backend to my currently DX11-only engine. Before I start, I want to get a better picture of what "DX is left-handed, OpenGL is right-handed" means in the modern APIs. To my understanding, modern OpenGL has done away with the LookAt() functions and similar functionality (which heavily depended on a certain handedness), in the same way that DX no longer has these things built in. I have built a very simple vector and matrix library that I use with DX11, and it assumes a left-handed coordinate system. I would like to use the same library with the OpenGL rendering backend. Where do I need to do any conversions? Does the modern OpenGL API still assume I am working with right-handed coordinates, or was that only the case in the old API?

Some things I do know (and correct me if I am wrong):

- My matrices are row-major and OpenGL wants them as column-major, so that's a transpose call. (I am using the FX framework for DX11, and I am going to get rid of that too. I actually think that DX11 wants them as column-major as well, but the FX framework has done the transpose for me, so that might be one difference that goes away once I ditch the FX framework?)

- The NDC z-axis goes from 0->1 in DX and -1->1 in OpenGL, so I have to take that into account, but that's just using a different projection matrix, right?

Anything else that needs to be done for this to work in OpenGL? For now, I am going to hand-write all shaders as separate HLSL and GLSL versions, i.e. no automatic translation or higher-level language. Is there something that needs to be reversed in the shader code with regard to the handedness of the coordinate space?

Cheers!


My matrices are row-major and OpenGL wants them as column-major, so that's a transpose call

HLSL matrices are column-major by default. You have to write row_major float4x4 blah; to get a row-major one. GLSL is the same -- there are keywords to force one array storage order or the other, but by default it will choose column-major storage order.
Note that row/column major array storage order and row/column major mathematics are two completely different things.

A mathematician might look at a matrix on paper that looks like:
Xx Xy Xz Xw
Yx Yy Yz Yw
Zx Zy Zz Zw
Wx Wy Wz Ww

And tell you that the basis vectors are stored in the rows of the matrix, therefore it's using the row-major convention...
But a computer scientist will say that this matrix is row-major if it's stored in memory as [Xx Xy Xz Xw Yx Yy Yz Yw Zx Zy Zz Zw Wx Wy Wz Ww] and column-major if it's stored in memory as [Xx Yx Zx Wx Xy Yy Zy Wy Xz Yz Zz Wz Xw Yw Zw Ww] (while the on-paper representation is unchanged).

The column_major / row_major keywords in HLSL (and layout(column_major) / layout(row_major) in GLSL) specify the computer science array ordering convention.
The mathematical convention is determined by whether you write mul( matrix, vector ) or mul( vector, matrix ).
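
To make that concrete, something like this (a sketch only -- the function and uniform names are made up, and it assumes a GL loader such as glad or GLEW is already initialized) hands a row-major-stored matrix straight to GL without touching your math library:

```cpp
// Sketch: `program` is a linked GLSL program, "u_worldViewProj" is a
// made-up uniform name in the default block, and `m` is a row-major
// float[16] straight out of your math library.
void setWorldViewProj(GLuint program, const float m[16])
{
    GLint loc = glGetUniformLocation(program, "u_worldViewProj");

    // GL_TRUE asks GL to transpose on upload, turning row-major storage
    // into the column-major storage GLSL defaults to. Alternatively,
    // transpose on the CPU and pass GL_FALSE, or declare the matrix
    // layout(row_major) inside a uniform block.
    glUniformMatrix4fv(loc, 1, GL_TRUE, m);
}
```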

In any case, there's no reason to do anything differently between GL and D3D here: both of them support both of the comp-sci majorness conventions and both of the mathematical majorness conventions, so you can/should use the same conventions everywhere.

The NDC z-axis goes from 0->1 in DX and -1->1 in OpenGL, so I have to take that into account, but that's just using a different projection matrix, right?

Correct - typically you'll construct your projection matrices differently for GL.
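
For example, only the two z-mapping coefficients differ (a sketch assuming your left-handed convention, where clip.w ends up as view-space z, and clip.z = a * z + b):

```cpp
// Sketch: depth coefficients for clip.z = a * z + b, assuming a
// left-handed view space where clip.w ends up as view-space z.
// n and f are the near and far plane distances.
struct DepthCoeffs { float a, b; };

DepthCoeffs d3dStyle(float n, float f) // clip.z / clip.w spans [0, 1]
{
    return { f / (f - n), -n * f / (f - n) };
}

DepthCoeffs glStyle(float n, float f)  // clip.z / clip.w spans [-1, 1]
{
    return { (f + n) / (f - n), -2.0f * n * f / (f - n) };
}
```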

Alternatively you can use glClipControl to use D3D's conventions in GL.
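
Something like this (needs GL 4.5, or the GL_ARB_clip_control extension):

```cpp
// Switch clip-space depth to D3D's [0, 1] range so the same projection
// matrix works in both backends. The origin parameter is independent;
// GL_LOWER_LEFT keeps GL's usual window-space convention.
glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
```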

Before I start, I want to get a better picture of what "DX is left-handed, OpenGL is right-handed" means in the modern APIs.


In the context of the modern versions of the APIs it means absolutely nothing since nothing in either API enforces either handedness on you.

The matrix library you use may enforce a handedness on you, but any good matrix library should be capable of supporting either. So far as the API is concerned it's just a matrix * vector (or vector * matrix) operation and the mathematics are exactly the same irrespective.

However, by convention, OpenGL sample code may still assume a right-handed coordinate system whereas D3D sample code may still assume left-handed. This is just a convention based on historical usage, so if you're using any sample code you will need to be aware of its handedness, and unless it's explicitly stated otherwise you can use this convention as a rule of thumb to guess which handedness it uses.
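
To illustrate (a minimal sketch with a made-up Vec3, not anyone's actual library): in a look-at view matrix, the whole handedness question comes down to a single sign choice.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

// Left-handed: the camera looks down +z.
Vec3 forwardLH(Vec3 eye, Vec3 at) { return normalize(sub(at, eye)); }

// Right-handed: the camera looks down -z. Everything else in the view
// matrix (the cross products for the side/up axes, the translation row)
// falls out of this one choice, which is why one library can offer both.
Vec3 forwardRH(Vec3 eye, Vec3 at) { return normalize(sub(eye, at)); }
```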


Cool, thanks guys! Seems like I am roughly on point then.

As others have said.

The things GL does differently from D3D that you need to watch out for mostly come down to two:

- the GL NDC has a depth range of -1..+1 rather than 0..+1, so you have to construct your projection matrices accordingly to get full precision. This one is easy to ignore but you shouldn't.

- GL has inverted texture coordinates from D3D, requiring textures either to be flipped when loaded or shaders to invert the access (a flip-on-load sketch follows at the end of this post). This is the most obvious issue you run into when porting, and the internet is filled with questions and answers about it.

Hypothetically there may be extensions to correct these issues, though there weren't any the last time I subjected myself to OpenGL.
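
If you do go the flip-on-load route, it's just a row swap before the upload (a sketch assuming tightly packed rows):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Flip an image vertically in place before handing it to glTexImage2D.
// Assumes tightly packed rows of `rowBytes` bytes (e.g. width * 4 for RGBA8).
void flipRows(uint8_t* pixels, size_t rowBytes, size_t height)
{
    for (size_t y = 0; y < height / 2; ++y)
        std::swap_ranges(pixels + y * rowBytes,
                         pixels + (y + 1) * rowBytes,
                         pixels + (height - 1 - y) * rowBytes);
}
```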


GL_ARB_clip_control exists for the NDC conventions. It doesn't affect viewports, scissor rects, etc., however, so that is something you need to watch out for. There is a rationale given for this in the extension spec, and I'm sure the reasons were important to someone somewhere sometime, but IMO it's Not Good Enough. Either do it fully and consistently across the whole API, or don't do it at all, whichever, but don't half-ass it so that sometimes we have to use one convention and sometimes the other. Rant over.

For texture coords it's not actually that bad. Because you also load textures from the bottom-left in GL, the two bottom-lefts cancel each other out, and you can use the same texture coords and image data in GL as in D3D, unmodified. Where you may need to adjust is texture coords for render targets (or any other similar source).
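
For example, a render-target read can just flip v at the sample site (a GLSL snippet kept in a C++ string; all names made up):

```cpp
// Made-up GLSL fragment-shader excerpt: flips v when sampling a texture
// that GL rendered bottom-up but that is addressed with D3D-style coords.
const char* kSampleRenderTargetGLSL = R"glsl(
    vec2 uv = vec2(v_texcoord.x, 1.0 - v_texcoord.y);
    vec4 color = texture(u_sceneColor, uv);
)glsl";
```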



Hmm, I don't quite understand what you mean. What is the 'also' referring to in "Because you also..."?

If I have a texture loaded into D3D using the [0,0 == top-left] convention, won't loading that texture into OpenGL make it appear upside down?

OK, this might be complicated.

glTexImage2D (other texture specification calls work the same) specifies the following with respect to its data pointer:

The first element corresponds to the lower left corner of the texture image. Subsequent elements progress left-to-right through the remaining texels in the lowest row of the texture image, and then in successively higher rows of the texture image. The final element corresponds to the upper right corner of the texture image.


So when you provide an array of data to glTexImage2D, data[0] is not actually the top-left of the texture image, it's the bottom-left of it.

So an image loaded through glTexImage2D will be upside-down by comparison to what you might expect.

However, OpenGL texture coords also work from bottom-left; i.e. {0,0} is the bottom-left corner of the texture. So this effectively undoes the upside-down load and everything comes out the same.


What this means in practical terms:

You need to change nothing in your engine when dealing with textures loaded from system memory pointers. You give the same data to glTexImage2D as you would give to your D3D11_SUBRESOURCE_DATA::pSysMem pointer, you use the same texture coordinates when drawing, and everything just works.
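
In code, something like this (a sketch; `pixels`, `width`, `height` stand in for whatever your image loader returns, and the two APIs are shown side by side purely for illustration):

```cpp
#include <d3d11.h>
#include <GL/gl.h>

// The same top-down RGBA8 pixel array feeds both APIs unmodified.
void uploadBoth(const void* pixels, UINT width, UINT height)
{
    // D3D11: pSysMem rows are read top-down.
    // (srd would be passed to ID3D11Device::CreateTexture2D.)
    D3D11_SUBRESOURCE_DATA srd = {};
    srd.pSysMem     = pixels;
    srd.SysMemPitch = width * 4;

    // GL: the first row lands at the *bottom* of the texture, but GL's
    // texture coords also start at the bottom-left, so samples match D3D.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```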


Thanks for the explanation :)

However, after reading your and Sean's posts I have seen several posts on the internet about this particular issue and many suggestions on "how to solve it", e.g. flip the image in memory before passing it to OpenGL, flip it beforehand as part of the asset pipeline, etc. If the texture coordinates are flipped too, I don't see why people would be upset about it. Is there something I am missing here?

You'll find that the people flipping the image are also flipping the texture coords.

In other words, they're looking at the two differences in isolation from each other rather than putting them together and realising that they cancel out.

