Abstraction layer between Direct3D9 and OpenGL 2.1

Started by
11 comments, last by Squared'D 11 years ago

Hi, I'm currently working on an abstraction layer between Direct3D9 and OpenGL 2.1.

Basically there is an abstract base class (the rendering interface) that can be implemented by a D3D9 or a GL 2.1 class. The problem I have is that I don't know how to store and send the vertices. D3D9 provides FVF, where every vertex contains position, texture UV and a color packed into a DWORD value. OpenGL keeps the individual vertex elements in different locations in memory (an array for positions, an array for texture UVs and an array for colors in vec4 float format). Basically I want an API-independent way to create vertices: the application should store them in a block of memory and then send them all at once to the render interface, but how?

I know that OpenGL can handle the vertices in a D3D-compatible way (with glVertexAttribPointer, managing the stride and pointer values), which seems to be slower than the native way, but how do I manage the colors, for example? D3D9 accepts a single 32-bit integer value, while OpenGL manages it in a more flexible (but heavier) way, storing each color channel in a 32-bit floating-point value. I could pass the 32-bit integer to the vertex shader and convert it to a vec4 there, right? But all these operations seem too heavy.

In the future I want this abstraction layer to be good enough to support other rendering back-ends such as Direct3D11, OpenGL ES 2.0 and others. So my main question is: is there a nice way to abstract these two rendering APIs without having to change the rest of the code that uses the rendering interface?
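For reference, the kind of API-independent vertex description being asked about could look something like this minimal sketch (every type and field name here is made up for illustration; each backend would translate it into a D3D9 vertex declaration or a set of glVertexAttribPointer calls):

#include <vector>

// One vertex attribute, described without reference to either API.
enum AttribType { ATTRIB_FLOAT2, ATTRIB_FLOAT3, ATTRIB_FLOAT4, ATTRIB_UBYTE4_NORM };

struct VertexElement
{
    int         stream;    // which vertex buffer this attribute is read from
    int         offset;    // byte offset of the attribute inside one vertex
    AttribType  type;      // storage format; ATTRIB_UBYTE4_NORM suits a packed 32-bit color
    const char* semantic;  // "POSITION", "TEXCOORD", "COLOR", ...
};

struct VertexLayout
{
    std::vector<VertexElement> elements;
    int strideBytes;       // size of one interleaved vertex
};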

I'd recommend not supporting the fixed-function pipeline at all, and instead requiring pixel and vertex shaders.
In D3D9, this means that you'd be using vertex declarations instead of FVFs.
Both D3D and GL allow you to store arrays of individual attributes or strided structures of attributes. The optimal memory layout depends on the GPU.
I've not heard of a 'D3D compatibility mode' in GL...
Both APIs let you have 4-byte vertex attributes (e.g. for colors).
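To make the vertex-declaration and 4-byte-color points concrete, a rough, untested D3D9 sketch (the SpriteVertex layout and the 'device' pointer are assumptions, not anything from the thread) could be:

#include <d3d9.h>

struct SpriteVertex            // hypothetical interleaved layout
{
    float x, y, z;             // POSITION
    float u, v;                // TEXCOORD0
    DWORD color;               // COLOR0: a single packed 32-bit value, no floats needed
};

const D3DVERTEXELEMENT9 spriteElements[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT2,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    { 0, 20, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* createSpriteDecl(IDirect3DDevice9* device)
{
    IDirect3DVertexDeclaration9* decl = NULL;
    device->CreateVertexDeclaration(spriteElements, &decl);
    return decl;               // bind before drawing with device->SetVertexDeclaration(decl)
}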

How are you creating your vertices anyway? I would create an API-independent vertex layout that fits your needs, and have the actual rendering implementation translate this into the correct vertex layout for each API. I don't know if this is the optimal way, but it should cause the fewest problems, since the actual GPU vertices are then guaranteed to fit the current API's needs. Creation could take a bit longer, but that all depends on how intelligently you implement the translation from your generic vertex format to the API-specific one. Pseudocode might go like this:


modelFile = LoadFile("Model.myext");

std::vector<GenericVertexFmt> vertices(modelFile.numVertices);

for (size_t i = 0; i < modelFile.numVertices; ++i)
{
    vertices[i] = modelFile.vertex(i); // trivial copy if your file format is API-independent, too
}

renderClass.createVertexObject(vertices); // creates a vertex object for the currently active API

That's at least how I'd do it; again, there might be a better approach. I haven't done this myself, but it appears to be a good possibility.
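Purely as an illustration of that translation step (the class, handle and method names below are invented, not from any existing engine), the interface side might be shaped roughly like this:

#include <vector>

struct GenericVertexFmt       // example fields only
{
    float         pos[3];
    float         uv[2];
    unsigned char rgba[4];
};

typedef unsigned int VertexObjectHandle;   // hypothetical opaque handle

class IRenderer
{
public:
    virtual ~IRenderer() {}

    // Each backend converts the generic vertices into whatever the API wants:
    // an IDirect3DVertexBuffer9 plus vertex declaration, or a GL buffer object
    // plus glVertexAttribPointer setup.
    virtual VertexObjectHandle createVertexObject(const std::vector<GenericVertexFmt>& vertices) = 0;
    virtual void draw(VertexObjectHandle handle) = 0;
};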

Yes, I find vertex declarations more flexible than FVF. How does D3D support individual attributes? And how can I tell whether the GPU prefers packed or individual vertex attributes? It seems that OpenGL handles it better one way and Direct3D9 the other. I found the D3D9 compatibility mode here.


This can obviously be a solution, but it can be heavyweight when there are a lot of vertices. Currently I'm creating many vertices at runtime for sprites, each with XYZ coordinates, UV coordinates, a color and a value that points to a palette index (the value is processed by the fragment shader).

If I understood Hodgman correctly, there is no big difference in the actual layout of the vertices between the two APIs. Using your own format would just ensure you can plug in any API you want. While you might need to create a format struct that is aligned to certain byte boundaries (just an assumption, this isn't really my area, sorry), it would be a simple memcpy (or similar) operation, which shouldn't be too costly as far as I can tell. Given that you already do little more than write your vertices into an array and copy that into the vertex buffer, unless there really is a huge difference in how the two APIs expect their vertices, it could cost virtually nothing.
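For what it's worth, the copy described above is essentially this on the D3D9 side (a sketch with error checking omitted; 'device' and the interleaved vertex array are assumed to exist already):

#include <cstring>
#include <d3d9.h>

void uploadVertices(IDirect3DDevice9* device,
                    const void* vertices, UINT sizeBytes,
                    IDirect3DVertexBuffer9** vbOut)
{
    device->CreateVertexBuffer(sizeBytes, D3DUSAGE_WRITEONLY, 0 /* no FVF */,
                               D3DPOOL_DEFAULT, vbOut, NULL);

    void* dst = NULL;
    (*vbOut)->Lock(0, sizeBytes, &dst, 0);
    std::memcpy(dst, vertices, sizeBytes);
    (*vbOut)->Unlock();
}

// The GL 2.1 equivalent, once a buffer object is bound, is a single call:
//   glBufferData(GL_ARRAY_BUFFER, sizeBytes, vertices, GL_STATIC_DRAW);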

You should be able to use exactly the same binary vertex data in both Direct3D9 and OpenGL. When using shaders, and generic vertex attributes on the OpenGL side, vertex colors, for example, can be defined as 4 normalized unsigned bytes in both APIs.
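As a rough GL 2.1 sketch of that, assuming an interleaved vertex whose packed 4-byte color sits at byte offset 20 ('vbo', 'colorLocation' and 'strideBytes' are placeholders):

// Expose the packed color bytes as a generic attribute; GL_TRUE normalizes the
// unsigned bytes to 0..1, so the shader still receives a vec4 without the vertex
// having to store four floats.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(colorLocation);
glVertexAttribPointer(colorLocation, 4, GL_UNSIGNED_BYTE, GL_TRUE,
                      strideBytes, (const void*)20);

One caveat: D3DCOLOR is stored as BGRA in memory, so to share the exact same bytes you either swap R and B when filling the buffer, or rely on an extension such as GL_ARB_vertex_array_bgra, which a plain 2.1 context is not guaranteed to have.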

That wiki doesn't describe a "D3D compatibility mode"; it just has some tips on how to do some operations that are easy in D3D but confusing in GL.

When it comes to fewer buffers with large strides vs. more buffers with small strides, no, there is no preferred way for either API. They both cope with either approach equally well.

In D3D, the 'stream' param of the declaration is the index of the vertex buffer that will be read from. Before drawing, you must bind a vertex buffer to this slot with SetStreamSource.
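As a sketch of what that looks like in practice with two streams (the buffer names are made up; a position-only pass could then bind just stream 0 with a matching declaration):

#include <d3d9.h>

// Stream 0 holds positions only; stream 1 holds UV + packed color.
const D3DVERTEXELEMENT9 twoStreamElements[] =
{
    { 0, 0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 1, 0, D3DDECLTYPE_FLOAT2,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    { 1, 8, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    D3DDECL_END()
};

void bindStreams(IDirect3DDevice9* device,
                 IDirect3DVertexBuffer9* positionVB,
                 IDirect3DVertexBuffer9* uvColorVB)
{
    // stream index, buffer, byte offset, stride of one vertex in that stream
    device->SetStreamSource(0, positionVB, 0, 3 * sizeof(float));
    device->SetStreamSource(1, uvColorVB,  0, 2 * sizeof(float) + sizeof(DWORD));
}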

As for the optimal data layout, it depends on whether multiple different vertex shaders use the same data (e.g. a shadow-mapping shader only needs positions, not normals, etc.), and on the actual GPU's memory-fetching hardware. This operates like any CPU cache: it over-fetches whole 'cache lines' from RAM, and usually you'd want your vertex stride to match this size, which means putting multiple attributes together. Back in this era of GPUs, a 32-byte stride was the recommendation, but it depends on the GPU. However, as above, if you pack position and normal together, then the shadow-mapping shader will load normals into the cache for no reason, so maybe it would be preferable to have one buffer with just positions and another with all the other attributes.
Really though, don't worry too much about it. It doesn't matter if your GPU usage isn't 100% optimal.

I personally think that this is too low-level an abstraction. You shouldn't go to a lower level than handing a model's binary data over to the API layers, leaving the details of vertex formats, buffer setup, layouts, etc. to the individual API layers instead. One key difference between the APIs here is that D3D decouples the vertex format specification from the data specification, whereas OpenGL does not (in 2.1; 4.3 resolves this), and this is going to cause you further problems as you work more on it. Other differences, such as OpenGL's bind-to-modify model, are also going to cause you trouble if you ever need to update a dynamic buffer.

This is far from the only API difference that will cause you problems; one I can quite confidently predict you'll have trouble with is OpenGL tying uniform values to the currently selected program object, while D3D has them as global state. You'll also have trouble with the lack of separate shader objects in OpenGL, with any render-to-texture setup (no FBOs in core GL 2.1), and with the lack of an equivalent to glActiveTexture in D3D (the texture stage to operate on is an additional parameter to the relevant API calls). Not to mention sampler states vs. glTexParameter.
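A tiny illustration of that uniform-versus-constant difference (a sketch; 'spriteProgram', 'matrix' and the register number are arbitrary placeholders):

// GL 2.1: uniform storage belongs to the program object, so it must be current first.
glUseProgram(spriteProgram);
GLint loc = glGetUniformLocation(spriteProgram, "worldViewProj");
glUniformMatrix4fv(loc, 1, GL_FALSE, matrix);

// D3D9: vertex shader constants are device-global registers, set independently of
// whichever shader happens to be bound.
device->SetVertexShaderConstantF(0, matrix, 4);   // 4 float4 registers starting at c0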

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

To some degree, you can pick your "favorite API" and then make the others emulate it. As mhagain warned, actually doing this may require going through some contortions, and even if your program works correctly, you may run into non-optimal performance, usually on the OpenGL side.

For example, I picked D3D as my favorite and wanted to emulate the simple SetRenderTarget / SetDepthStencil API with OpenGL FBOs. The initial, simple solution was to create a single FBO and attach different color buffers and depth buffers according to the currently set targets. This worked correctly, but exposed a performance loss (to add insult to injury, it was Linux-only; the Windows drivers were fine). The solution to avoid the performance loss was to create several FBOs, sorted by the resolution and format of the first color buffer, and what was only a few lines of code in D3D became hundreds in OpenGL :)
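A very rough sketch of that "several FBOs" idea (the key choice and names are purely illustrative; GL 2.1 gets FBOs from the EXT_framebuffer_object extension):

#include <map>

struct FboKey
{
    int    width;
    int    height;
    GLenum format;                       // resolution and format of the first color target
    bool operator<(const FboKey& o) const
    {
        if (width  != o.width)  return width  < o.width;
        if (height != o.height) return height < o.height;
        return format < o.format;
    }
};

std::map<FboKey, GLuint> fboCache;       // hypothetical member of the GL backend

GLuint getOrCreateFbo(const FboKey& key)
{
    std::map<FboKey, GLuint>::iterator it = fboCache.find(key);
    if (it != fboCache.end())
        return it->second;

    GLuint fbo = 0;
    glGenFramebuffersEXT(1, &fbo);       // attachments still change per SetRenderTarget call
    fboCache[key] = fbo;
    return fbo;
}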

Additionally, to be able to utilize D3D11 effectively in the future you probably need to make your abstract API resemble D3D11 with its state objects, constant buffers etc. and emulate those in the other APIs.
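For instance, a D3D11-flavoured state object in the abstract API might be shaped roughly like this (hypothetical names; the other backends would replay it as individual state calls):

// Hypothetical immutable blend state, in the spirit of ID3D11BlendState.
struct BlendStateDesc
{
    bool blendEnable;
    int  srcFactor;    // abstract enum values, mapped per backend
    int  dstFactor;
};

typedef unsigned int BlendStateHandle;   // hypothetical opaque handle

// Added to the abstract renderer interface:
//   BlendStateHandle createBlendState(const BlendStateDesc& desc);
//   void             setBlendState(BlendStateHandle state);
//
// D3D11 backend: wraps an ID3D11BlendState created once up front.
// D3D9 backend:  replays SetRenderState(D3DRS_ALPHABLENDENABLE, ...) and friends.
// GL 2.1 backend: replays glEnable(GL_BLEND) + glBlendFunc(...).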

I've been working on making my project able to use multiple APIs, albeit for now just DirectX 9 and 11. My first attempt was to emulate DirectX 11 and make DirectX 9 fit it. Later, because I wanted it to be easier for others to use my code, I decided on a more high-level solution. I've found that both have their pluses and minuses.

Low Level (pick your favorite API and emulate it)
Good
* It's easier to build higher level functionality because everything has the same base
Bad
* As AgentC said, you can run into some non-optimal situations that could take some "hacks" to fix.

High Level
Good
* You can code things in the most optimal way for the API.
Bad
* Without adequate planning it could almost be like coding two engines. You must be sure you isolate similarities or you'll end up coding many things twice.

Those are the issues that I've noticed so far. I'm sure there are more.


