Deciding graphics interface on init

6 comments, last by haegarr 9 years, 5 months ago

So I've been trying to think of the best way to decide which graphics interface to use when my engine initializes. I really like how in C# you can create an interface, inherit from it, then declare a variable of that interface type and initialize it with one of the inheriting classes. I know that's not exactly possible in C++ because you can't create an object from an interface.

I know I'm missing something here, but I was hoping I might get a little insight from some of you. The way I've decided to go about it (although it's going to change once I find a better way) is to create an IGraphics interface, then create DirectX and OpenGL classes that inherit from that interface. In the app, I have one member for each of those inheriting classes. I load the options file, then initialize the graphics based on what's in it. To do this, I have a function which returns a pointer to one of those members, and then I initialize the graphics through it. Here's a code snippet of what I'm doing; let me know your thoughts and hopefully a better way of doing this.

The application class:


class App
{
private:
    // both of these inherit from the IGraphics interface
    static GraphicsD3D11_1 m_GraphicsD3D11_1;
    static GraphicsOpenGL m_GraphicsOpenGL;

    Settings m_Settings; // loaded from the options file

public:
    IGraphics *Graphics();

    // more stuff
};

The function which decides which API to use:


IGraphics *App::Graphics()
{
    switch (m_Settings.m_GraphicsAPI)
    {
        case GraphicsAPI_D3D11_1:
            return &m_GraphicsD3D11_1;
        case GraphicsAPI_OPENGL:
            return &m_GraphicsOpenGL;
        default:
            return NULL;
    }
}

Then, when I need to use the graphics API, I just call it like this:


Graphics()->Initialize(); // just an example of a call to initialize the graphics

Any suggestions on a better way to do this? My worry is that it's not the most performant solution, especially since I need to declare a member for each graphics API.

This is not a thing most real games do. Either you write your game using one API or you write it in the other.

You don't write a game where one run uses OpenGL, then the user closes the app and reruns it using Direct3D.

There was a time, many decades ago, where you needed to build your own graphics drivers for different classes of graphics cards. At install time you might choose between CGA and EGA graphics systems, or later, you needed to choose between several different VESA options. Thankfully those days are long past.

There are some experimental types of games that do that, but they are outliers. You can spend your valuable development time building a game, or you can spend your valuable development time trying to abstract away the graphics API.


As for C++ and interfaces, you can have an abstract interface where the functionality is provided by another system. DirectX itself is a great example of that. You request an instance. The OS provides an appropriate hardware abstraction object. You don't know the innermost details, nor should you care about them. You have something that satisfies the interface, and that is usually good enough.
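
For instance, a Direct3D 11 device is obtained roughly like this (sketch only, error handling trimmed): you get back pointers that satisfy the ID3D11Device / ID3D11DeviceContext interfaces and never see the concrete objects the driver provides behind them.


#include <d3d11.h>   // link against d3d11.lib

bool CreateDevice(ID3D11Device** outDevice, ID3D11DeviceContext** outContext)
{
    // Ask the runtime for a hardware device; whatever concrete object the driver
    // provides is hidden behind the COM interfaces we receive.
    HRESULT hr = D3D11CreateDevice(
        nullptr,                    // default adapter
        D3D_DRIVER_TYPE_HARDWARE,   // let the OS/driver pick the implementation
        nullptr, 0,                 // no software rasterizer, no creation flags
        nullptr, 0,                 // default feature levels
        D3D11_SDK_VERSION,
        outDevice, nullptr, outContext);
    return SUCCEEDED(hr);
}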

Yeah, that's a good answer. I was trying to make the engine as flexible as possible, but I'll just go with one.

You *can* write an engine that allows for using one API or the other, but it will only double the amount of work you have to do interfacing with the gfx API. You will need to either devise your own shader language, and convert to either API at runtime, or just hand-write all your shaders for both APIs.

This is very do-able, just not worthwhile.


This is not a thing most real games do. Either you write your game using one API or you write it in the other.

What about different versions of an API, e.g. DX9 vs DX11? I've seen quite a few games that support both.


I really like how in C# you can create an interface, inherit from it, then declare a variable of that interface type and initialize it with one of the inheriting classes. I know that's not exactly possible in C++ because you can't create an object from an interface.

They both work the same way:
C#: MyInterface foo = null; foo = new MyDerived();
C++: MyInterface* foo = nullptr; foo = new MyDerived();
The only difference is that C# hides pointer syntax from you (all class/interface variables in C# are always pointers/references).
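
To make that concrete, an "interface" in C++ is just an abstract class whose methods are pure virtual. Here's a sketch of what the IGraphics hierarchy from the original post might look like (the member functions are made up for illustration):


class IGraphics
{
public:
    virtual ~IGraphics() {}            // virtual destructor so delete-through-base is safe
    virtual void Initialize() = 0;     // "= 0" makes the method pure virtual
    virtual void Present() = 0;
};

// Concrete back-ends simply override the pure virtuals.
class GraphicsD3D11_1 : public IGraphics
{
public:
    void Initialize() override { /* create the D3D11.1 device, swap chain, ... */ }
    void Present() override    { /* present the swap chain */ }
};

class GraphicsOpenGL : public IGraphics
{
public:
    void Initialize() override { /* create the GL context, load extensions, ... */ }
    void Present() override    { /* SwapBuffers */ }
};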

In your example code, you're creating an instance of both D3D and GL, and then returning one or the other. It would probably be better to only create one or the other, e.g.


class App
{
private:
    std::unique_ptr<IGraphics> m_Graphics;
public:
    IGraphics *Graphics() { return m_Graphics.get(); }
};

App::App(const Settings& settings)
{
    switch (settings.m_GraphicsAPI)
    {
        case GraphicsAPI_D3D11_1:
            m_Graphics = std::make_unique<GraphicsD3D11_1>();
            break;
        case GraphicsAPI_OPENGL:
            m_Graphics = std::make_unique<GraphicsOpenGL>();
            break;
    }
}

What about different versions of an API, e.g. DX9 vs DX11? I've seen quite a few games that support both.

I do that by compiling two different EXEs for the game. A launcher program then selects the appropriate EXE to run.
The game selects the renderer using #ifdef's rather than interfaces.
Doing the selection via runtime interfaces is of course possible, it's just slower and more complex IMHO.
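
A rough sketch of the #ifdef approach (the RENDERER_* macros and header names below are made up; you'd define exactly one macro per build target, e.g. on the compiler command line):


#if defined(RENDERER_D3D11)
    #include "GraphicsD3D11_1.h"
    typedef GraphicsD3D11_1 Renderer;   // the D3D build's renderer type
#elif defined(RENDERER_OPENGL)
    #include "GraphicsOpenGL.h"
    typedef GraphicsOpenGL Renderer;    // the GL build's renderer type
#else
    #error "No renderer selected for this build"
#endif

// Game code uses 'Renderer' directly; no virtual calls, and each EXE contains exactly one backend.
Renderer g_Renderer;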

You will need to either devise your own shader language, and convert to either API at runtime, or just hand-write all your shaders for both APIs.

OpenGL is a real stick in the mud here... I can write HLSL shaders (with a few macros for portability) and then pre-compile the same source files into binary shaders for PC-D3D9 / PC-D3D11 / XB360 / XB1 / PS3 / PS4 / WiiU / etc... i.e. my shader source files work on every platform except Mac/Linux/Android/iOS/WebGL.
Unfortunately, yeah, when it comes to porting to GL, I have to use a dodgy HLSL->GLSL translator, then a GLSL->AST optimizing compiler (because you can't trust GL drivers to actually optimize your code), then an AST->GLSL converter (because GL requires you to ship ASCII source instead of binary shaders).
Really not worth the effort if you can avoid it!

My original thought was to have two EXEs and a loader; I was just trying something different here. Swapping out the API for another one would be simple, with only one change in the app other than adding the new graphics class. Shaders are a good point though: if I'm planning on supporting both OpenGL and DirectX, I'll have to create a shader for both APIs for every shader I need. Still, more thought needs to go into the design. Truthfully, I'm probably going to skip choosing an API on load for now and stick with the loader.

For me, the running executable has built in which technologies to try at all, e.g. OpenGL on Mac, D3D on Windows. The abstract interface class of the graphics sub-system and its rendering device class are implemented in the engine core, but the derived concrete classes are, depending on technology and version, implemented in their own dynamic libraries (so I have a DLL for OpenGL 3 and another for OpenGL 4, for example). The init process tries to load them one by one, where possible w.r.t. the version information fetched from the OS, in order of decreasing usefulness until the first success. The result is a concrete instance of the sub-system class, which acts as a factory for a concrete instance of the rendering class. The API of the latter is small (method-wise) because it is data driven, i.e. rendering jobs are sent to it, so the number of virtual function calls is insignificant.

Although the above approach would allow all runtime-bindable graphics sub-systems to be loaded side by side, in normal operation only one is actually loaded, and only one is instantiated.
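
A minimal sketch of that kind of runtime binding on a POSIX system (dlopen/dlsym; on Windows it would be LoadLibrary/GetProcAddress instead), with made-up library and export names:


#include <dlfcn.h>

class IGraphicsSubsystem;   // abstract interface, defined in the engine core

// Each graphics library exports a plain C factory with this (made-up) signature.
typedef IGraphicsSubsystem* (*CreateSubsystemFn)();

IGraphicsSubsystem* LoadBestSubsystem()
{
    // Candidates in order of decreasing usefulness; stop at the first success.
    const char* candidates[] = { "libgfx_gl4.so", "libgfx_gl3.so" };
    for (const char* name : candidates)
    {
        void* lib = dlopen(name, RTLD_NOW);
        if (!lib)
            continue;   // not present or not loadable on this machine

        CreateSubsystemFn create =
            reinterpret_cast<CreateSubsystemFn>(dlsym(lib, "CreateGraphicsSubsystem"));
        if (create)
        {
            if (IGraphicsSubsystem* subsystem = create())
                return subsystem;   // the library stays loaded for the engine's lifetime
        }
        dlclose(lib);               // this candidate didn't work out; try the next one
    }
    return nullptr;                 // no usable graphics sub-system found
}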

