# OpenGL Tips on abstracting rendering interfaces for multiple renderers?

## Recommended Posts

Greetings!

I've used D3D9 directly in my code for years, and thought I'd undertake learning D3D11 — and what better way to do that than to work out a small game idea I've been toying with? But even though XP is on its way out, I still want my friends and others on XP to be able to run my projects, so I'd like to abstract away the actual rendering calls so I can use either API in my code without specifically targeting one or the other. I've been programming for years now, but to be honest I'm still a bit green when it comes to situations like this, and D3D9 and 11 seem different enough that I'm not sure how to do it efficiently. I'd also be interested in taking what I've learned and applying it to OpenGL, so that some day in the (far) future I could consider cross-platform releases.

I stumbled across this page [url="http://troylawlor.com/content/writing-an-abstract-renderer-part-1/"]http://troylawlor.co...enderer-part-1/[/url] -- it seemed to be everything I was hoping for, but to my dismay it appears the author either hasn't finished the next installment or has abandoned it, given that several months have passed since part 1.

Does anyone have any tips, resources, articles, or anything else they can share on this sort of thing — abstracting away the rendering API? I do a lot of work through SlimDX, but I'm not afraid of C/C++ (I used it for years before I got in bed with C#), so the language used in any articles isn't a problem. Anything would help. Thanks!

##### Share on other sites
D3D11 is a stricter API than D3D9 (or OpenGL for that matter), as instead of a large, free-form state machine you have a rather limited set of state objects. Therefore, to make sure you're using D3D11 performantly and can take full advantage of its features in the future I'd recommend basing your abstract API on the D3D11 model (state objects, constant buffers) and emulating it on D3D9 and OpenGL as needed, instead of the other way around.
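To make the suggestion concrete, here is a minimal sketch of what a D3D11-style state-object abstraction might look like. All of the names (`BlendStateDesc`, `Renderer`, etc.) are made up for illustration; the point is the pattern — immutable descriptions created up front, opaque handles bound as a whole — which a D3D9 or GL backend would emulate by diffing against the currently bound state:

```cpp
#include <cassert>
#include <map>
#include <memory>

// Immutable description, mirroring D3D11_BLEND_DESC in spirit.
struct BlendStateDesc {
    bool blendEnable = false;
    int  srcBlend    = 1;   // e.g. ONE
    int  destBlend   = 0;   // e.g. ZERO
    bool operator<(const BlendStateDesc& o) const {
        if (blendEnable != o.blendEnable) return blendEnable < o.blendEnable;
        if (srcBlend    != o.srcBlend)    return srcBlend    < o.srcBlend;
        return destBlend < o.destBlend;
    }
};

// Opaque handle the renderer hands back; a backend would store whatever it
// needs here (an ID3D11BlendState*, or the individual glBlendFunc arguments).
struct BlendState { BlendStateDesc desc; };

class Renderer {
public:
    // Create once at load time; duplicate descriptions return the cached
    // object, much as the D3D11 runtime itself de-duplicates state objects.
    const BlendState* CreateBlendState(const BlendStateDesc& d) {
        auto it = cache_.find(d);
        if (it == cache_.end())
            it = cache_.emplace(d, std::make_unique<BlendState>(BlendState{d})).first;
        return it->second.get();
    }
    // Bind the whole state object; a D3D9/GL backend would diff against the
    // previously bound state and issue only the calls that actually changed.
    void SetBlendState(const BlendState* s) { current_ = s; }
    const BlendState* CurrentBlendState() const { return current_; }
private:
    std::map<BlendStateDesc, std::unique_ptr<BlendState>> cache_;
    const BlendState* current_ = nullptr;
};
```

The design choice here is the one AgentC describes: because D3D11's model is the most restrictive, it is easy to emulate on the looser APIs, whereas going the other way (emulating free-form state changes on D3D11) forces you to build and cache state objects behind the caller's back.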

Here's one example of an abstracted rendering API which provides implementations on D3D11, OpenGL 3 and OpenGL ES 2 (the implementation is not open source, though):

[url="http://clb.demon.fi/gfxapi/"]http://clb.demon.fi/gfxapi/[/url]

##### Share on other sites
Take a look at Tesla Engine: http://www.tesla-engine.net/

The guy there (I think he's Starnick from around the forums here) has a great design for a multi API renderer that is really interesting to look at.

##### Share on other sites
[quote name='MJP' timestamp='1343673126' post='4964575']
Ugh, abstract bases classes. Not a fan.

For the most part I prefer low-level implementation functions and simple data structs, with the implementation of both being determined at compile time based on the platform I'm building for. So there might be a Texture.h with a function "CreateTexture", then a Texture_win.cpp that creates a D3D11 ID3D11Texture2D, then a Texture_ps3.cpp that does the PS3 equivalent, and so on. Then if you want you can build high-level classes on top of those functions.

You can actually use the same approach for more than just graphics, if you want. For instance file IO, threads, and other system-level stuff.
[/quote]

That's actually an interesting idea.

Thank you everyone for your replies! I'm going to go over what I've got in front of me and if I have any more questions, I'll come back.
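For readers skimming the thread, the compile-time approach MJP describes might look roughly like this. It is condensed into one file for illustration — in practice `Texture_win.cpp` and `Texture_ps3.cpp` would be separate translation units chosen by the build system, and every name here is made up:

```cpp
#include <cassert>
#include <cstring>

// --- Texture.h: the public, platform-neutral interface ---------------------
struct Texture;                       // opaque to calling code
Texture*    CreateTexture(int width, int height);
void        DestroyTexture(Texture* t);
const char* TextureBackend(const Texture* t);

// --- Texture_win.cpp / Texture_ps3.cpp: exactly one of these is compiled ---
// --- in, selected by the build system rather than by virtual dispatch. -----
#if defined(_WIN32)
struct Texture { /* ID3D11Texture2D* d3dTexture; */ int w, h; };
Texture* CreateTexture(int w, int h) { return new Texture{w, h}; }
const char* TextureBackend(const Texture*) { return "d3d11"; }
#else // stand-in for the PS3 (or any other platform) translation unit
struct Texture { /* platform texture handle */ int w, h; };
Texture* CreateTexture(int w, int h) { return new Texture{w, h}; }
const char* TextureBackend(const Texture*) { return "ps3"; }
#endif

void DestroyTexture(Texture* t) { delete t; }
```

Because the calling code only ever sees the declarations, there is no virtual call on the hot path; the "polymorphism" is resolved by the linker.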

##### Share on other sites
[quote name='MJP' timestamp='1343673126' post='4964575']
Ugh, abstract bases classes. Not a fan.

For the most part I prefer low-level implementation functions and simple data structs, with the implementation of both being determined at compile time based on the platform I'm building for. So there might be a Texture.h with a function "CreateTexture", then a Texture_win.cpp that creates a D3D11 ID3D11Texture2D, then a Texture_ps3.cpp that does the PS3 equivalent, and so on. Then if you want you can build high-level classes on top of those functions.

You can actually use the same approach for more than just graphics, if you want. For instance file IO, threads, and other system-level stuff.
[/quote]

I've made a platform-agnostic renderer using both your method and abstract base classes. With the compile-time solution I found it was a giant pain managing all the platform defines to make sure the proper helper structures get included, and it was really difficult to abstract around all of the strange features of each renderer. I've since started working on a new project using abstract base classes. Why do you prefer compile time to abstract base classes, and how do you handle platform scaling, like D3D11 feature levels or OGL levels?

##### Share on other sites
[quote name='Seabolt' timestamp='1343841022' post='4965247']
I've made a platform agnostic renderer using your method and abstract base classes, I found that it was a giant pain managing all the platform defines to make sure that the proper helper structures get included
[/quote]

We have our structures in one header file, with one other header file that includes the right header based on the platform. I can't imagine why you'd need more than that.
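The two-header setup described above amounts to a single switching header; something like the following fragment, with all file names purely illustrative:

```cpp
// RenderStructs.h -- the only header game code includes; it pulls in the
// right platform-specific definitions at compile time.
#if defined(_WIN32)
    #include "RenderStructs_d3d11.h"
#elif defined(__PS3__)
    #include "RenderStructs_ps3.h"
#else
    #error "No renderer implementation for this platform"
#endif
```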

[quote name='Seabolt' timestamp='1343841022' post='4965247']
and I found that it was really difficult to abstract around all of the strange features of each renderer using the compile time solution.
[/quote]

How does compile-time polymorphism at all limit you in terms of your ability to abstract out higher-level features? You can do all of the same things you can do with abstract base classes (if not more), the only difference is you don't eat a virtual function call every time you need to do something. I mentioned dealing with the small, low-level building blocks of a renderer but you can also have different platform implementations of higher-level features.

[quote name='Seabolt' timestamp='1343841022' post='4965247']
Why do you prefer compile time to abstract base classes, and
[/quote]

Like I already mentioned, I prefer not having virtual function calls and indirections all over the place.

[quote name='Seabolt' timestamp='1343841022' post='4965247']
how do you handle platform scaling, like D3D11 feature levels or OGL levels?
[/quote]

I don't, because I don't care about them. I mainly deal with consoles, which obviously skews my preferences quite a bit.

##### Share on other sites
Ah, I see. I had my helper structures in their own namespaces in their own files; texture, for example, would be in TextureDX11.h, which would define a structure and any DX11-specific functionality, so I had multiple files.

On your second point, I see what you mean. My problem was the way I had the actual architecture structured, not the compile-time vs. abstract-base-class approach, so oops.

And on the third point; fair enough.

##### Share on other sites
With DX9 you can run at most pixel shader model 3.0. If you are fine with that, there is no strong reason for you to migrate. I myself am using a deferred renderer, which does not need a higher shader model: my shaders have few instructions but run very often before the target is rendered. Actually, with deferred rendering, your shaders are very strict but make many passes. I had been thinking I would need to move to a higher pixel shader model, but since I moved to deferred rendering, I can do miracles without long shaders.

##### Share on other sites
Wrote a D3D8/9/10 + GL2/3 engine years ago, and a GLES2/3 + D3D11 one recently.
In both instances I decided to follow the D3D10/11 API, as it made the most sense, and like MJP I just include the right files for a given configuration to avoid virtuals.

The benefit of doing that over getting to a higher level of abstraction on top of the API is that I can write the abstraction layer for all of them at once, with very little API specific code.
(Of course if you want to write something that runs on all the API then you need to make sure that you use features common to them all.)

The only thing I've not taken care of is converting D3D11 HLSL to GLSL (HLSL to GLSL seems to be the better direction, as HLSL shaders carry more meaning than GLSL ones).

##### Share on other sites
[quote name='JohnnyCode' timestamp='1343871190' post='4965369']
With DX9 you can run at most pixel shader model 3.0. If you are fine with that, there is no strong reason for you to migrate.
[/quote]
Except the part about being left behind and becoming obsolete.

The difference between DirectX 11 and DirectX 9 is far more than “shader model 3”.
DirectX 11 is better designed and provides vastly superior performance even without taking advantage of its extra features such as multi-threaded rendering. My engine runs identically in DirectX 9 and DirectX 11, not taking advantage of any special DirectX 11 feature to gain performance, and even so the DirectX 11 path is literally twice as fast.

Then of course you get more texture slots/stages/units in DirectX 11, you can read the depth buffer directly (without hacks), 8 simultaneous render targets instead of 4, and a more efficient pipeline allowing you to actually [i]use[/i] that extra transfer bandwidth.

Then of course geometry shaders to allow faster environment mapping (and a few other things), compute shaders to open up a world of who-knows-what to you, etc.
But I guess you don’t need that if you are fine with shader model 3.0.

Then there are features levels and guaranteed feature sets, instead of each individual feature being potentially unsupported causing you to have to write fall-back code everywhere.
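The feature-level idea reduces to: ask for the best level from an ordered list, read back what the device granted, and scale content off that single value. A platform-free sketch — the enum mirrors `D3D_FEATURE_LEVEL` in spirit only, and with real D3D11 you would instead pass the ordered array to `D3D11CreateDevice` and read back the chosen level:

```cpp
#include <cassert>
#include <vector>

// Illustrative stand-in for D3D_FEATURE_LEVEL_* values.
enum class FeatureLevel { FL9_1, FL9_3, FL10_0, FL11_0 };

// Pick the highest level from `requested` (ordered highest-first) that the
// device supports; fall back to the lowest requested level otherwise.
FeatureLevel PickFeatureLevel(const std::vector<FeatureLevel>& requested,
                              FeatureLevel deviceMax) {
    for (FeatureLevel fl : requested)
        if (fl <= deviceMax)
            return fl;
    return requested.back();
}

// Each level guarantees a whole feature set, so one check replaces dozens of
// per-feature capability queries (the D3D9 caps-bits situation). These RT
// counts match the documented per-level limits.
int MaxSimultaneousRenderTargets(FeatureLevel fl) {
    if (fl >= FeatureLevel::FL10_0) return 8;
    if (fl >= FeatureLevel::FL9_3)  return 4;
    return 1;
}
```

The payoff is exactly the point made above: fallback code collapses to a handful of `if (level >= X)` branches instead of being scattered through every feature's setup path.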

Did I mention obsolete?
The future is DirectX 11. PC games are already heading there and the next generation of consoles will have the same (Xbox 720 uses 64-bit DirectX 11) or similar-level custom APIs.

As for the original topic, I posted an article on the subject and explained a clean way to do this. Similar to what MJP proposed.
[url="http://lspiroengine.com/?p=49"]Organization for Multi-platform Support[/url]

L. Spiro

##### Share on other sites
[quote name='JohnnyCode' timestamp='1343871190' post='4965369']
since I moved to deferred rendering, I can do miracles without long shaders.
[/quote]

And with regards to the state of the art you are behind the curve.

Most people are now refocusing to hybrid solutions such as those presented in AMD's Leo demo where deferred and forward lighting combine to give you the best of both worlds; many lights and complicated BRDFs with good performance.
