How to abstract OpenGL or DirectX specific things from classes specific to rendering

16 comments, last by metsfan 11 years, 1 month ago

For example:

Let's say there is a Texture2D class, which is used to refer to textures stored on the GPU. The problem is that storing the specific GLuint (or whatever the DX equivalent is) directly within the Texture2D object would make my code very dependent on OpenGL.

RED ALERT.

The very first step to abstracting across APIs is being familiar with several of them. You apparently are not. The very first step to writing abstract code is understanding what is being abstracted and at what layer the abstraction needs to exist. It also requires some perspective on common functionality within the APIs and from the point of view of the client code. Reusability and abstraction DO NOT exist in a vacuum and you can't simply make them up based on some general ideas of what constitutes an API's objects.

More generally speaking, creating an abstracted layer that can do D3D or OGL internally isn't about papering over whether it's a GLuint or an IDirect3DTexture* underneath. Those things are trivial, stupid issues. Some judicious use of the preprocessor and API-specific files will solve them. No, the hard part in creating these abstractions is behavior and implied assumptions. There is a lot of devil-in-the-details stuff that can take some careful stepping to navigate. Simple example: OpenGL has the vertex array object (VAO), which encapsulates client state (enabled attrib bits and bound VBOs). You'll have one per object, more or less. That doesn't exist in D3D 9. Instead there is a separate set of stream sources, and a vertex declaration which is typically shared heavily. It can get messier if you want to handle constant buffer APIs (D3D 11, GL 4.2). Render states are handled differently in GL than in D3D9 than in D3D11. And a nasty one for OpenGL-based engines: D3D requires everything to be funneled through a device pointer, which is not necessary in GL. If your GL abstraction assumes it can change pipeline states without an external reference, you'll wind up patching in globals to support D3D. And this is all just a small slice.
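To put a rough shape on that last point - every name here is made up purely for illustration, it's not from any real engine - the idea is that pipeline changes always go through an explicit context object, so the D3D backend can own its device pointer and the GL backend never has to assume global state is reachable:

struct BlendState { bool blendEnabled; }; // placeholder state block
struct IDirect3DDevice9;                  // forward declaration; the real type lives in d3d9.h

class RenderContext
{
public:
  virtual ~RenderContext() {}
  virtual void SetBlendState(const BlendState &state) = 0;
  virtual void Draw(unsigned int vertexCount) = 0;
};

class GLRenderContext : public RenderContext
{
public:
  // GL works on the thread's current context, so nothing is stored here,
  // but callers are still forced to route every state change through an object.
  void SetBlendState(const BlendState &state) { /* glEnable(GL_BLEND), glBlendFunc(...) */ }
  void Draw(unsigned int vertexCount)         { /* glDrawArrays(...) */ }
};

class D3D9RenderContext : public RenderContext
{
public:
  explicit D3D9RenderContext(IDirect3DDevice9 *device) : m_device(device) {}
  void SetBlendState(const BlendState &state) { /* m_device->SetRenderState(...) */ }
  void Draw(unsigned int vertexCount)         { /* m_device->DrawPrimitive(...) */ }
private:
  IDirect3DDevice9 *m_device; // every call above funnels through this pointer
};

The GL implementation doesn't need the extra reference, but forcing it through the interface anyway means a D3D backend can slot in later without patching in globals.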

It IS doable, and it is even doable with low level wrapping. But the lower level you get, the more intricate the behavioral details and implied assumptions become. More familiarity with the potential APIs (GL 2.x, GL 3.x, GL 4.x, D3D9, D3D11, D3D Xbox, PS3, etc) is needed. The advice to do the abstraction with very high level objects is a good one. It yields a fair bit of control to the underlying API and can create an annoying amount of duplicate or similar code across implementations. But it is more likely to be robust in the face of many different hardware APIs.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

Phew, after reading what you posted, I feel like the way I did it would handle all of those cases.

It IS doable, and it is even doable with low level wrapping. But the lower level you get, the more intricate the behavioral details and implied assumptions become. More familiarity with the potential APIs (GL 2.x, GL 3.x, GL 4.x, D3D9, D3D11, D3D Xbox, PS3, etc) is needed. The advice to do the abstraction with very high level objects is a good one. It yields a fair bit of control to the underlying API and can create an annoying amount of duplicate or similar code across implementations. But it is more likely to be robust in the face of many different hardware APIs.

Hm... it seems I'm in over my head about this situation. Perhaps instead of thinking too much about it, I'll just do it and see from experience the limitations of the design I use.

I don't know a lot about D3D, but I assumed (from what I've heard) that the concepts are very similar to OpenGL, i.e. texture objects for DX. I did know about the D3D device, which I could easily just pass in if I create a renderer, could I not?

anax - An open source C++ entity system

At a high level, sure there are a lot of similarities. In practice they are vastly different implementations.

Perception is when one imagination clashes with another

Both possess texture objects for sure, but there are two major differences and a third more subtle difference.

First major difference is that D3D has no equivalent whatsoever to the glActiveTexture call. In GL you select a currently active texture unit and all subsequent GL calls that affect texture state will affect the state for that unit. This model doesn't exist at all in D3D - instead the texture stage to be affected is passed as a parameter to all texture state calls.

Second one is that GL has texture binding points - GL_TEXTURE_2D, GL_TEXTURE_3D, etc. Each texture unit has a full set of binding points and multiple textures can be bound to a single texture unit by binding one to each binding point. Again, state changes must specify the binding point to be affected. D3D doesn't have these binding points at all; it's one texture stage, one texture.
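To make those two differences concrete, binding a diffuse texture to unit/stage 2 looks something like this in each API (the handles and function names are just placeholders, and the GL side assumes a loader that exposes glActiveTexture):

// GL: latch the active unit, then make calls that implicitly target it
void BindDiffuseGL(GLuint diffuseTex)
{
  glActiveTexture(GL_TEXTURE0 + 2);          // unit 2 becomes "current"
  glBindTexture(GL_TEXTURE_2D, diffuseTex);  // fills unit 2's 2D binding point
  // a cube map could sit on the same unit via its own binding point:
  // glBindTexture(GL_TEXTURE_CUBE_MAP, envTex);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // affects whatever is bound
}

// D3D9: the stage index is an explicit parameter on every call; one texture per stage
void BindDiffuseD3D9(IDirect3DDevice9 *device, IDirect3DTexture9 *diffuseTex)
{
  device->SetTexture(2, diffuseTex);
  device->SetSamplerState(2, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
}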

The more subtle one is that if you want to create or subsequently modify a texture in GL you must glBindTexture it first. glTexImage/glTexSubImage calls only affect the currently bound texture. D3D doesn't have this requirement - instead you operate directly on the texture object itself without needing a SetTexture call before doing so. State changes for drawing are going to affect modification/creation and vice-versa in GL but not in D3D. (This also applies to buffer objects in GL vs D3D too).
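In code the difference looks roughly like this (tex, pixels, width, height and pTexture are placeholders; error handling trimmed):

// GL: the texture must be bound before it can be modified, which also
// disturbs whatever draw state was using that binding point
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// D3D9: operate on the texture object directly; no SetTexture call needed,
// and the device's draw state is left alone
D3DLOCKED_RECT lr;
if (SUCCEEDED(pTexture->LockRect(0, &lr, NULL, 0)))
{
  // copy 'pixels' into lr.pBits row by row, respecting lr.Pitch
  pTexture->UnlockRect(0);
}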

So by going to too low a level, these API differences are all things that your program is going to have to be aware of, which means that the abstraction bubbles up into your main program code. This kind of difference exists throughout the two APIs - you'll have similar problems with shaders and shader uniforms, for example, and again the abstraction will bubble up to your main program code. So your main program code will end up having to be aware of which API it's using, what it can and cannot do in each, what the differences are for handling objects, etc. At that point you've no longer got an abstraction, you've got a set of thin wrappers with your main program code doing all the heavy work.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I would check out Humus' "Framework 3". http://www.humus.name/index.php?page=3D

+1 on this. His engine is a great example of how to abstract the rendering pipeline. He has different pipelines for DX, OpenGL, and even for handheld devices.

Wow that actually looks pretty awesome. I'll have to take a good look at that.

How does your Texture class handle 1D, 2D, 3D, and cube textures? Do you use multiple sub-classes?

For this, what I've done is create an enumeration for the texture types, arguments, filters, etc. that I want to support. Then, in each API-specific texture class, I define an array of values which maps to my custom enumeration, and put -1 where the value is unsupported. It seems to work pretty well for me at the moment.

For instance, for texture type I would have an enum:


enum TextureType
{
  TextureType1D,
  TextureType2D,
  TextureType3D
};

Then my mapping for OpenGL would be


static const int GLTextureType[3] =
{
  GL_TEXTURE_1D,
  GL_TEXTURE_2D,
  -1 // 3D textures are not supported
};
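Looking a value up is then just an array index plus a check for the -1 marker. Something like this (GLTexture::Bind is only an illustration, not my actual class):

// Translate the engine-side enum to the GL constant and refuse
// texture types this backend doesn't support.
bool GLTexture::Bind(TextureType type, GLuint handle)
{
  const int glTarget = GLTextureType[type];
  if (glTarget == -1)
    return false; // e.g. 3D textures on this backend

  glBindTexture((GLenum)glTarget, handle);
  return true;
}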

