EbonySeraphim

Merits of writing a multi-API Renderer

I’m about to evolve my old 2D game engine and migrate to 3D. I am very serious about game development and was wondering what, if any, merits there are in writing a renderer that has implementations in both OpenGL and Direct3D.

To break down the question further:
1) Do any well-known games or engines use a renderer that has both a Direct3D and an OpenGL implementation? (other than, presumably, Unreal)
2) What is the level of difficulty in accomplishing that versus concentrating on one API?
3) What are the most significant challenges in supporting both, related to how the APIs differ specifically? (e.g. needing to learn both HLSL and GLSL; GL buffers are conceptually very different from Direct3D buffers)
4) What kind of performance hit is there in a renderer that can use multiple APIs?
5) How much of a benefit would it be to try to learn both OpenGL and Direct3D concepts at once?

This is somewhat of a sidebar question, but also: which versions of Direct3D and OpenGL reasonably match up with each other (from DirectX 9 and up)?

Unless you're a major game developer targeting multiple PC platforms and console systems, there isn't much merit in supporting both OpenGL and Direct3D, beyond wanting to learn both APIs and gain some experience working with interface patterns. Otherwise the choice comes down to whether you want your engine to be cross-platform or Windows-only: if the former, you need to go with OpenGL; if the latter, you can pick whichever API you're most comfortable with.

The real difficulty is designing an abstract interface that hides the details of both APIs yet still supports all the features you want in your graphics engine and can be implemented on top of each. Then there's the raw implementation work involved. Learning the APIs is straightforward, since there are plenty of books and articles out there, but learning to use them well is another story.

If you're new to 3D programming, learn the ropes with one API, then add support for the other if you really want to. This has the added bonus of first providing you with a concrete implementation from which you can build an interface.
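For a rough idea of what that abstraction might look like, here is a minimal sketch in C++; the class and method names are hypothetical, not taken from any particular engine:

// An API-neutral device interface; each concept must be expressible
// in both OpenGL and Direct3D.
class IRenderDevice
{
public:
    virtual ~IRenderDevice() {}
    virtual void Clear(float r, float g, float b, float a) = 0;
    virtual void SetViewport(int x, int y, int width, int height) = 0;
    virtual void DrawIndexed(unsigned int indexCount) = 0;
};

// Each API then supplies a concrete implementation:
class GLRenderDevice  : public IRenderDevice { /* OpenGL calls */ };
class D3DRenderDevice : public IRenderDevice { /* Direct3D calls */ };

Whether to dispatch through virtuals like this or to pick the implementation at compile time is discussed further down the thread.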

I had the same question when I started my engine, and I opted to go with OpenGL only. I'm glad I did, since when you are starting out you won't know what is worth abstracting.

You will also probably overlook painful issues like the differences in coordinate-system conventions between APIs, row- versus column-major matrices, different shading languages, etc.

It will be a huge pain in the ass.

I suggest first learning one API well and only after that making an engine that supports both. Doing exactly that myself now, I've learned to abstract away all platform-specific code and to keep a clean, well-documented API. OpenGL works on many platforms, but D3D drivers are sometimes more optimized and less buggy because of D3D's dominance in the gaming industry. Regarding question 4: it's possible to structure the code so that there is no performance hit at all.

Quote:
Original post by Triton
You will also probably overlook painful issues like the differences in coordinate-system conventions...
Whenever this comes up, I always feel compelled to point out that strictly speaking, there's no difference in coordinate system between OpenGL and D3D (aside from perhaps viewport or screen coordinates - I can't remember off the top of my head how those are handled between the two APIs).

Keep in mind that in both OpenGL and D3D you can construct the transform matrices yourself and upload them directly to the API. This, along with the fact that the DirectX math library includes both left- and right-handed versions of the functions for which handedness matters, means that you can set up both OpenGL and D3D to be either right- or left-handed, as you prefer. As such, coordinate system conversions shouldn't be an issue; you can use whichever convention you prefer with both APIs.
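To illustrate, here is a minimal fixed-function sketch of driving both APIs with the same right-handed convention; dev is assumed to be a valid IDirect3DDevice9 pointer, and the fov/aspect/plane variables are assumed to be defined elsewhere:

// Direct3D 9: D3DX provides an explicitly right-handed projection helper.
D3DXMATRIX proj;
D3DXMatrixPerspectiveFovRH(&proj, fovYRadians, aspect, zNear, zFar);
dev->SetTransform(D3DTS_PROJECTION, &proj);

// OpenGL: gluPerspective builds the equivalent right-handed projection
// (note it takes the field of view in degrees rather than radians).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fovYDegrees, aspect, zNear, zFar);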

Quote:
Original post by jyk
Whenever this comes up, I always feel compelled to point out that strictly speaking, there's no difference in coordinate system between OpenGL and D3D (aside from perhaps viewport or screen coordinates - I can't remember off the top of my head how those are handled between the two APIs).


The viewport transform is different. In GL, z in normalized device coordinates covers [-1, 1], while in D3D it covers [0, 1].
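That difference is easy to patch over when you build projection matrices yourself. A minimal sketch (my own, not a library function), assuming a column-major matrix as GL conventionally stores it:

// Remap clip-space z from GL's convention (z/w in [-1,1]) to D3D's
// (z/w in [0,1]) by replacing the z row with z' = 0.5*z + 0.5*w.
void GlProjectionToD3d(float m[16])
{
    for (int col = 0; col < 4; ++col)
    {
        float z = m[4 * col + 2];  // row 2: z (column-major indexing)
        float w = m[4 * col + 3];  // row 3: w
        m[4 * col + 2] = 0.5f * z + 0.5f * w;
    }
}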

Quote:
Original post by jyk
Quote:
Original post by Triton
You will also probably overlook painful issues like the differences in coordinate-system conventions...
Whenever this comes up, I always feel compelled to point out that strictly speaking, there's no difference in coordinate system between OpenGL and D3D (aside from perhaps viewport or screen coordinates - I can't remember off the top of my head how those are handled between the two APIs).

Keep in mind that in both OpenGL and D3D you can construct the transform matrices yourself and upload them directly to the API. This, along with the fact that the DirectX math library includes both left- and right-handed versions of the functions for which handedness matters, means that you can set up both OpenGL and D3D to be either right- or left-handed, as you prefer. As such, coordinate system conversions shouldn't be an issue; you can use whichever convention you prefer with both APIs.


Thanks for correcting me.

And it is indeed true that you can construct the transform matrices yourself (and you have to in non-deprecated OpenGL 3 and onwards), but if I recall correctly GLSL expects column-major matrices, and the calls that send matrix uniforms to the shaders can optionally transpose the data. So you still have to be extra careful with these details.
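Concretely, that's the transpose flag on the glUniformMatrix* calls; a minimal sketch, assuming program is a linked shader program and the two data pointers exist:

GLint loc = glGetUniformLocation(program, "worldViewProj");

// Data already column-major (GLSL's default layout): no transpose needed.
glUniformMatrix4fv(loc, 1, GL_FALSE, columnMajorData);

// Data row-major (e.g. shared with a D3D code path): let GL transpose it.
glUniformMatrix4fv(loc, 1, GL_TRUE, rowMajorData);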

Anyway, my point is that while it is possible to do it with some care, a beginner won't grasp these details and will have a huge pain in the ass making it all work.

Also, IIRC GL doesn't have the half-texel offset issue that DX9 has.

Quote:
1) Do any well-known games or engines use a renderer that has both a Direct3D and an OpenGL implementation? (other than, presumably, Unreal)
Most big games that support both Mac and Windows (D3D on Windows, GL on Mac).
Also console games (D3D9/10/11 on PC, GL on Mac, D3D9-and-a-half on the 360, PSGL/GCM on the PS3, GX on the Wii, ...)
Quote:
2) What is the level of difficulty in accomplishing that versus concentrating on one API?
Roughly double, plus a bit extra to find common abstractions that fit both APIs.
Quote:
3) What are the most significant challenges in supporting both, related to how the APIs differ specifically? (e.g. needing to learn both HLSL and GLSL; GL buffers are conceptually very different from Direct3D buffers)
If I were making a GL/D3D renderer, I would use HLSL on D3D and Cg on GL. HLSL and Cg are essentially the same language, so your shaders will work on both renderers.
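For example, a trivial vertex shader like this sketch should compile unchanged as either HLSL (e.g. vs_2_0) or Cg:

// One source file, two compilers: valid HLSL and valid Cg.
float4x4 worldViewProj;

float4 main(float4 position : POSITION) : POSITION
{
    return mul(position, worldViewProj);
}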
Quote:
4) What kind of performance hit is there in a renderer that can use multiple APIs?
There shouldn't be any as long as you use compile-time polymorphism over runtime polymorphism.
Quote:
This is somewhat of a sidebar question, but also: which versions of Direct3D and OpenGL reasonably match up with each other (from DirectX 9 and up)?
GL1 <-> DX6/7/8, GL2 <-> DX9, GL3 <-> DX10, GL4 <-> DX11

As an alternative to Cg, there is also a project called MojoShader, which I discovered the other day.

Quote:
MojoShader is a library to work with Direct3D shaders on alternate 3D APIs and non-Windows platforms. The primary motivation is moving shaders to OpenGL languages on the fly. The developer deals with "profiles" that represent various target languages, such as GLSL or ARB_*_program.


Google is also working on something similar, but for converting GLSL shaders to HLSL (for use in WebGL-enabled browsers). It's called the ANGLE project. You can find the source for the GLSL-to-HLSL compiler in the repository.

Hope it might be useful.

You can also map a lot of GLSL to HLSL code (and vice versa) with #defines... "#define float4 vec4" etc. I've done this with some success in the past. The main area where the two languages differ too greatly is how they handle vertex attributes and interpolated/varying values, so you'll end up writing the skeletons of the shaders separately in both languages. But you can easily have libraries of functions written in one language that compile in the other with #defines.
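A minimal sketch of that #define approach, assuming you define COMPILING_GLSL when feeding the source to a GLSL compiler:

#ifdef COMPILING_GLSL
    #define float2 vec2
    #define float3 vec3
    #define float4 vec4
    #define float4x4 mat4
    #define lerp mix
    #define frac fract
    #define saturate(x) clamp(x, 0.0, 1.0)
#endif

// Library code written once, compiling in either language:
float3 ApplyFog(float3 color, float3 fogColor, float t)
{
    return lerp(color, fogColor, saturate(t));
}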

Quote:
Original post by MJP
The viewport transform is different. In GL, z in normalized device coordinates covers [-1, 1], while in D3D it covers [0, 1].
Right, but that's not really a coordinate system issue, is it? In any case, I've never thought of it as such. To me, whether a system is left or right handed and which axes are considered to point in which directions (relative to some frame of reference) are coordinate system issues; the near clipping plane distance for the canonical view volume on the other hand would not be a coordinate system issue.

Also, the z clip range affects not only the viewport transform but also the projection transform, which needs to be built differently depending on the target range. Again though, I wouldn't consider that to be a coordinate system issue.

Hodgman, would you mind elaborating on this?

Quote:
Original post by Hodgman
Quote:
4) What kind of performance hit is there in a renderer that can use multiple APIs?
There shouldn't be any as long as you use compile-time polymorphism over runtime polymorphism.
I've been considering making a multi-API graphics plugin that could be used with either API, so I've been thinking about the issue a little bit.

In trying to abstract both Direct3D and OpenGL, let's say you wanted to create an API-independent class such as VertexBuffer. My first thought would be to define a common interface, and from this you could have two classes (glVertexBuffer and dxVertexBuffer) that both inherit from it.

This isn't very good though, because then you've got runtime polymorphism for every interaction with the graphics engine, and you'd also need an #ifdef anywhere you instantiate a buffer to decide whether it is really a glVertexBuffer or a dxVertexBuffer.

Another thought I had would be to create the two classes with identical interfaces, but also maintain a global file of typedefs:

#ifdef RENDER_OPENGL
typedef glVertexBuffer VertexBuffer;
#endif
#ifdef RENDER_DX11
typedef dxVertexBuffer VertexBuffer;
#endif

This would allow you to instantiate only the class VertexBuffer everywhere in your code, making it much cleaner.

Is this the best way you would go about it, or am I missing some important concept when you refer to 'compile time polymorphism'?

Yep, I'd use your #ifdef technique, or something like it. The point is to use #ifdef (or templates, if you're a Boost fanatic, j/k) over virtual.

Other variations are having the same interface in two headers:
//VertexBuffer.h
#ifdef RENDER_OPENGL
#include "gl2/VertexBuffer.h"
#endif
#ifdef RENDER_DX11
#include "dx11/VertexBuffer.h"
#endif
Or one interface with two implementation files:
//VertexBuffer.h
class VertexBuffer
{
...
};

//gl2VertexBuffer.cpp
#ifdef RENDER_OPENGL
VertexBuffer::...()
{
}
#endif

//dx11VertexBuffer.cpp
#ifdef RENDER_DX11
VertexBuffer::...()
{
}
#endif
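Either way, code that uses the buffer looks identical for both back-ends, and every call resolves at compile time. A minimal sketch; Create and Write are hypothetical method names standing in for whatever your interface exposes:

#include "VertexBuffer.h"  // resolves to the GL or D3D version at build time

struct Vertex { float x, y, z; };

void BuildTriangle()
{
    Vertex verts[3] = { {0,0,0}, {1,0,0}, {0,1,0} };

    VertexBuffer vb;               // concrete type chosen by the build config
    vb.Create(sizeof(verts));      // direct call - no virtual dispatch
    vb.Write(verts, sizeof(verts));
}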
