
Abstraction layer between Direct3D9 and OpenGL 2.1


12 replies to this topic

#1 Retsu90   Members   -  Reputation: 208


Posted 07 April 2013 - 06:28 AM

Hi, I'm currently working on an abstraction layer between Direct3D9 and OpenGL 2.1.

Basically there is a virtual class (the rendering interface) that can be implemented by a D3D9 or a GL21 class. The problem is that I don't know how to store and send the vertices. D3D9 provides FVF, where every vertex contains position, texture UV and a color packed in a DWORD value, while OpenGL handles the individual vertex elements in different memory locations (an array for positions, an array for texture UVs and an array for colors in vec4 float format). Basically I want an API-independent way to create vertices: the application should store them in a block of memory, then send them all at once to the render interface, but how? I know that OpenGL can handle the vertices in a D3D-compatible way (with glVertexAttribPointer, managing the stride and pointer values), which seems slower than the native way, but how do I manage, for example, the colors? D3D9 accepts a single 32-bit value in integer format, while OpenGL manages it in a more flexible (but heavier) way, storing each color channel as a 32-bit floating-point value. I could pass the 32-bit integer value to the vertex shader and convert it to a vec4 there, right? But all these operations seem too heavy. In the future I want this abstraction layer to be good enough to support other rendering back ends like Direct3D11, OpenGL ES 2.0 and others. So my main question is: is there a clean way to abstract these two rendering APIs without changing the rest of the code that uses the rendering interface?




#2 Hodgman   Moderators   -  Reputation: 28447


Posted 07 April 2013 - 06:57 AM

I'd recommend not supporting the fixed-function pipeline at all, and instead requiring pixel and vertex shaders.
In D3D9, this means that you'd be using vertex declarations instead of FVFs.
Both D3D and GL allow you to store arrays of individual attributes or strided structures of attributes. The optimal memory layout depends on the GPU.
I've not heard of a 'D3D compatibility mode' in GL...
Both APIs let you have 4-byte vertex attributes (e.g. for colors).
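To make the "strided structure" option concrete, here is a minimal sketch (struct and constant names are my own, not from either API) of one interleaved vertex whose stride and offsets could feed either a D3D9 vertex declaration (FLOAT3 / FLOAT2 / D3DCOLOR) or glVertexAttribPointer:

```cpp
#include <cstdint>
#include <cstddef>

// One interleaved vertex as both APIs can consume it: position,
// texture coordinates, and a packed 4-byte color.
struct Vertex {
    float    x, y, z;   // position
    float    u, v;      // texture coordinates
    uint32_t color;     // 4 normalized unsigned bytes
};

// The same numbers would go into a D3D9 vertex declaration's
// Offset/stream stride and into glVertexAttribPointer's stride
// and pointer arguments.
static const size_t kStride      = sizeof(Vertex);          // 24 bytes
static const size_t kPosOffset   = offsetof(Vertex, x);     // 0
static const size_t kUvOffset    = offsetof(Vertex, u);     // 12
static const size_t kColorOffset = offsetof(Vertex, color); // 20
```

Because there is no padding here (five floats plus one uint32_t, all 4-byte aligned), the layout is identical no matter which back end consumes it.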

#3 Juliean   GDNet+   -  Reputation: 2345


Posted 07 April 2013 - 07:11 AM

How are you creating your vertices anyway? I would create an API-independent vertex layout that fits your needs, and have the actual rendering implementation translate this into the correct vertex layout for each API. I don't know if this is the optimal way, but at least it will cause the fewest problems, since the actual GPU vertices are guaranteed to fit the current API's needs. Creation could take a bit longer, but that all depends on how intelligently you implement the translation of your generic vertex format to the API-specific one. Pseudocode might go like this.

 

modelFile = LoadFile("Model.myext");

GenericVertexFmt vertices[modelFile.numVertices];

for(i = 0; i < modelFile.numVertices; i++)
{
    vertices[i] = modelFile.vertex[i]; // optimal if your file format is API independent, too
}

renderClass.createVertexObject(vertices, modelFile.numVertices); // creates a vertex object for the current API

That's at least how I'd do it; again, there might be a better approach. I haven't done this myself, but it appears to be a good possibility.



#4 Retsu90   Members   -  Reputation: 208


Posted 07 April 2013 - 07:36 AM

I'd recommend not supporting the fixed-function pipeline at all, and instead requiring pixel and vertex shaders.
In D3D9, this means that you'd be using vertex declarations instead of FVFs.
Both D3D and GL allow you to store arrays of individual attributes or strided structures of attributes. The optimal memory layout depends on the GPU.
I've not heard of a 'D3D compatibility mode' in GL...
Both APIs let you have 4-byte vertex attributes (e.g. for colors).

Yes, I found vertex declarations more flexible than FVF. How does D3D support individual attributes? And how can I tell if the GPU prefers packed or individual vertex attributes? It seems that OpenGL manages better one way and Direct3D9 another. I found the D3D9 compatibility mode here

 


How are you creating your vertices anyway? I would create an API-independent vertex layout that fits your needs, and have the actual rendering implementation translate this into the correct vertex layout for each API. I don't know if this is the optimal way, but at least it will cause the fewest problems, since the actual GPU vertices are guaranteed to fit the current API's needs. Creation could take a bit longer, but that all depends on how intelligently you implement the translation of your generic vertex format to the API-specific one. Pseudocode might go like this.

 

modelFile = LoadFile("Model.myext");

GenericVertexFmt vertices[modelFile.numVertices];

for(i = 0; i < modelFile.numVertices; i++)
{
    vertices[i] = modelFile.vertex[i]; // optimal if your file format is API independent, too
}

renderClass.createVertexObject(vertices, modelFile.numVertices); // creates a vertex object for the current API

That's at least how I'd do it; again, there might be a better approach. I haven't done this myself, but it appears to be a good possibility.

This can obviously be a solution, but it can be heavyweight in cases with a lot of vertices. Currently I'm creating a lot of vertices at runtime for sprites, with XYZ coords, UV coords, colors and a value that points to the palette index (the value is processed by the fragment shader).



#5 Juliean   GDNet+   -  Reputation: 2345


Posted 07 April 2013 - 07:48 AM

This can obviously be a solution, but it can be heavyweight in cases with a lot of vertices. Currently I'm creating a lot of vertices at runtime for sprites, with XYZ coords, UV coords, colors and a value that points to the palette index (the value is processed by the fragment shader).

 

If I understood Hodgman correctly, there is no big difference in the actual layout of the vertices between the two APIs. Using your own format would just assure that you can plug in any API you want. While you might need to create a format struct that is aligned to certain byte boundaries (just assuming, not really my field, sorry), it would be a simple memcpy (or similar) operation, which shouldn't be too costly, as far as I can say. Granted that you already don't do things much differently than writing your vertices into an array and copying that into the vertex buffer, unless there really is a huge difference in how the two APIs expect their vertices, it could cost virtually nothing.



#6 AgentC   Members   -  Reputation: 1253


Posted 07 April 2013 - 03:01 PM

You should be able to feed exactly the same binary vertex data to both Direct3D9 and OpenGL. When using shaders, and generic vertex attributes on the OpenGL side, vertex colors for example can be defined as 4 normalized unsigned bytes in both APIs.
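For instance, packing and unpacking such a 4-byte normalized color on the CPU might look like this (helper names are made up; the byte order shown is RGBA, so a D3DCOLOR-style BGRA layout would need the channels reordered):

```cpp
#include <cstdint>

// Pack four float channels (0..1) into the single 4-byte normalized
// color value both APIs accept.
static uint32_t pack_rgba(float r, float g, float b, float a)
{
    auto to_byte = [](float f) -> uint32_t {
        if (f < 0.0f) f = 0.0f;          // clamp to the normalized range
        if (f > 1.0f) f = 1.0f;
        return static_cast<uint32_t>(f * 255.0f + 0.5f); // round to nearest
    };
    return to_byte(r) | (to_byte(g) << 8) | (to_byte(b) << 16) | (to_byte(a) << 24);
}

// Unpack one channel the way the GPU's fixed normalization does
// (byte value / 255).  i = 0..3 selects R, G, B, A.
static float channel(uint32_t c, int i)
{
    return ((c >> (8 * i)) & 0xFFu) / 255.0f;
}
```

The point is that the vertex buffer then stores one uint32_t per color in both back ends, and the shader simply receives a normalized vec4; no per-vertex float conversion is needed at draw time.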


Every time you add a boolean member variable, God kills a kitten. Every time you create a Manager class, God kills a kitten. Every time you create a Singleton...

Urho3D (engine)  Hessian (C64 game project)


#7 Hodgman   Moderators   -  Reputation: 28447


Posted 07 April 2013 - 03:41 PM

That wiki doesn't describe a "D3D compatibility mode"; it just has some tips on how to do some operations that are easy in D3D but confusing in GL.

When it comes to fewer buffers with large strides vs more buffers with small strides, no, there is no preferred way for each API. They both cope with either approach equally well.

In D3D, the 'stream' param of the declaration is the index of the vertex buffer that will be read from. Before drawing, you must bind a vertex buffer to this slot with SetStreamSource.

As for the optimal data layout, it depends on whether multiple different vertex shaders use the same data (e.g. a shadow-mapping shader only needs positions, not normals, etc.), and on the actual GPU's memory-fetching hardware. This will operate like any CPU cache, where it will over-fetch whole 'cache lines' from RAM - usually you'd want your vertex stride to match this size, which means putting multiple attributes together. Back in this era of GPUs, a 32-byte stride was the recommendation, but this depends on the GPU... However, as above, if you pack pos/normal together, then the shadow-mapping shader will be loading normals into the cache for no reason, so maybe it would be preferable to have one buffer with just positions, and another with all the other attributes...
Really though, don't worry too much about it. It doesn't matter if your GPU usage isn't 100% optimal.

#8 mhagain   Crossbones+   -  Reputation: 7565


Posted 07 April 2013 - 03:43 PM

I personally think that this is too low-level an abstraction. You shouldn't be going to a lower level than handing over a model's binary data to the API layers, leaving the details of vertex formats, buffer setup, layouts, etc. to the individual API layers. One key difference between the APIs here is that D3D decouples the vertex format specification from the data specification, whereas OpenGL does not (in 2.1; 4.3 resolves this), and this is going to cause you further problems as you work more on it. Other differences, such as OpenGL's bind-to-modify behaviour, are also going to cause you trouble if you ever need to update a dynamic buffer.

 

This is far from the only API difference that will cause you problems; one I can quite confidently predict you'll have trouble with is OpenGL tying uniform values to the currently selected program object while D3D has them as global state. You'll also have trouble with the lack of separate shader objects in OpenGL, with any render-to-texture setup (no FBOs in GL2.1), and with the lack of an equivalent to glActiveTexture in D3D (the texture stage to operate on is an additional param to the relevant API calls). Not to mention sampler states vs glTexParameter.
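One hedged sketch of how the uniform-state difference could be hidden: keep a CPU-side shadow copy of the values and flush them just before each draw, whichever program happens to be bound. Class and method names here are invented; a real flush would call SetVertexShaderConstantF or glUniform* instead of just reporting dirtiness:

```cpp
#include <map>
#include <string>

// Shadow store for uniforms: the engine sets values whenever it
// likes, and the backend uploads only what changed, right before a
// draw call.  This gives D3D9-style "global" semantics on top of
// GL's per-program uniforms.
class UniformCache {
public:
    void Set(const std::string& name, float value)
    {
        auto it = values_.find(name);
        if (it == values_.end() || it->second != value) {
            values_[name] = value;   // record the new value
            dirty_ = true;           // remember an upload is pending
        }
    }

    // Called just before drawing.  Returns whether anything actually
    // needed uploading (the real version would do the API calls here).
    bool Flush()
    {
        bool wasDirty = dirty_;
        dirty_ = false;
        return wasDirty;
    }

private:
    std::map<std::string, float> values_;
    bool dirty_ = false;
};
```

Setting the same value twice costs nothing, which also works around redundant state changes on both APIs.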


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#9 AgentC   Members   -  Reputation: 1253


Posted 07 April 2013 - 04:26 PM

To some degree, you can pick your "favorite API" and then make the others emulate it. As mhagain warned, actually doing this may require going through some contortions, and even though your program works correctly, you may run into non-optimal performance, usually on the OpenGL side.

 

For example, I picked D3D as my favorite and wanted to emulate the simple SetRenderTarget / SetDepthStencil API with OpenGL FBOs. The initial, simple solution was to create a single FBO and attach different color buffers and depth buffers according to the currently set targets. This worked correctly, but caused a performance loss (to add insult to injury, it was Linux-only; the Windows drivers were fine). The solution was to create several FBOs, sorted by the resolution and format of the first color buffer, and what was only a few lines of code in D3D became hundreds in OpenGL :)

 

Additionally, to be able to utilize D3D11 effectively in the future, you probably need to make your abstract API resemble D3D11, with its state objects, constant buffers etc., and emulate those in the other APIs.
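A stripped-down illustration of that FBO-cache idea (handles are fake integers here; a real version would call glGenFramebuffers and set up attachments on a cache miss):

```cpp
#include <map>
#include <tuple>

// Hand out one framebuffer object per (width, height, format)
// combination instead of reattaching buffers to a single FBO,
// which is the pattern that caused the driver slowdown above.
class FboCache {
public:
    int Get(int width, int height, int format)
    {
        auto key = std::make_tuple(width, height, format);
        auto it = fbos_.find(key);
        if (it != fbos_.end())
            return it->second;       // reuse the existing FBO

        int handle = next_++;        // stand-in for glGenFramebuffers
        fbos_[key] = handle;
        return handle;
    }

private:
    std::map<std::tuple<int, int, int>, int> fbos_;
    int next_ = 1;
};
```

The D3D-style SetRenderTarget call then just looks up the matching FBO and binds it, so render-target switches never reconfigure attachments.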




#10 Squared'D   Members   -  Reputation: 2222


Posted 09 April 2013 - 02:07 AM

I've been working on making my project able to use multiple APIs, albeit for now just DirectX 9 and 11. My first attempt was to emulate DirectX 11 and make DirectX 9 fit to it. Later, because I wanted it to be easier for others to use my code, I decided on a more high-level solution. I've found that both have their pluses and minuses.

Low Level (pick your favorite API and emulate it)
Good
* It's easier to build higher level functionality because everything has the same base
Bad
* As AgentC said, you can run into some non-optimal situations that could take some "hacks" to fix.

High Level
Good
* You can code things in the most optimal way for the API.
Bad
* Without adequate planning it could almost be like coding two engines. You must be sure you isolate similarities or you'll end up coding many things twice.

Those are the issues that I've noticed so far. I'm sure there are more.

Learn all about my current projects and watch some of the game development videos that I've made.

Squared Programming Home

New Personal Journal

 


#11 Juliean   GDNet+   -  Reputation: 2345


Posted 09 April 2013 - 12:18 PM

I've been working on making my project able to use multiple APIs, albeit for now just DirectX 9 and 11. My first attempt was to emulate DirectX 11 and make DirectX 9 fit to it. Later, because I wanted it to be easier for others to use my code, I decided on a more high-level solution. I've found that both have their pluses and minuses.

 

At first, it all depends on whether you are fine with a compile-time dependency on one of the APIs or actually want to be able to switch between them at run time. For the latter, you are going to need a very complicated system, designing an API-agnostic layer atop all your other abstractions, but that's another topic. I just wanted to add that you don't really have to choose between those two - I'd say you fare well if you sort of combine them. I (before going API-agnostic) used a two-layer abstraction approach: one lower, and one higher level. That said, I have to disagree with some of your points:

 

Low Level (pick your favorite API and emulate it)
Good
* It's easier to build higher level functionality because everything has the same base
Bad
* As AgentC said, you can run into some non-optimal situations that could take some "hacks" to fix.

 

Why is it easier to build higher-level functionality that way? It may be that, if you don't follow the Dependency Inversion principle, it is harder to build your functionality the other way round, but to generalize that the low-level approach is easier to build upon is just... wrong, at least IMHO.

 

High Level
Good
* You can code things in the most optimal way for the API.
Bad
* Without adequate planning it could almost be like coding two engines. You must be sure you isolate similarities or you'll end up coding many things twice.

 

Just one word: interfaces! Whether a direct approach (Java, ...) or the language-abstracted equivalent (C++, ...), if you design your API abstraction layer with those, and most importantly follow all the SOLID principles, it isn't really any "harder" than using a low-level approach only. But you are right in a way: for an inexperienced programmer it might be hard, given that they will more easily meet the condition of "no adequate planning". But going "low-level" isn't going to be easier for them in any way, and then again, a programming beginner shouldn't be aiming to make a multi-API "engine" anyway. At least in my opinion.

 

Once again talking about my own way: I actually have a low-level encapsulation around the actual API I'm using (not interchangeable; each abstraction is completely independent of any other API's abstraction) combined with a high-level abstraction that supplies interfaces like ITexture, IMesh, ISprite, etc. From those you would inherit your actual API-dependent implementations, like DX9Texture, DX9Sprite, etc. Now, truly like you said, if one wasn't using interfaces properly, he might end up having to rewrite huge parts of his engine just to fit the specific API. But that way, I'm taking advantage of both the low-level-ish encapsulation and the high-level abstraction.
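In miniature, that interface layering might look like this (all names illustrative, not from any real engine; the backend is chosen once at startup and the rest of the engine only ever sees the interface):

```cpp
#include <memory>
#include <string>

// High-level interface the engine codes against.
struct ITexture {
    virtual ~ITexture() = default;
    virtual std::string Backend() const = 0;
};

// One concrete class per API, each wrapping that API's own
// low-level encapsulation (omitted here).
struct DX9Texture : ITexture {
    std::string Backend() const override { return "d3d9"; }
};

struct GLTexture : ITexture {
    std::string Backend() const override { return "gl21"; }
};

// Factory selected once at initialization; swapping APIs means
// swapping what this returns, nothing else.
static std::unique_ptr<ITexture> CreateTexture(bool useD3D)
{
    if (useD3D) return std::make_unique<DX9Texture>();
    return std::make_unique<GLTexture>();
}
```

The Dependency Inversion point above is exactly this: high-level systems depend on ITexture, never on DX9Texture or GLTexture directly.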



#12 Retsu90   Members   -  Reputation: 208


Posted 09 April 2013 - 12:58 PM

I read carefully all the replies, thanks for the tips.
I learned OpenGL 1.1 and 2.1 some time ago, but I don't know many things about Direct3D9 (for example the possibility to specify a vertex declaration without using FVF, the global shader uniforms (thanks mhagain for this information) and others). Fortunately the framework will be pretty tiny: it will support a single vertex and fragment shader (compiled at initialization and used until the end of the application), set the view matrix only during window resizing, and have no world matrix (I'm translating all the vertices on the CPU side for specific reasons); the only things the framework should do are initialize the rendering device, set the view, create and/or load textures, and accept indexed vertices. I think the "preferred API" approach is a better choice for applications/games that require a lot of performance with heavy 3D models and effects.
So until now, I have a virtual class with these members:
GetDeviceName(), Initialize(), SetClearColor(float, float, float), RenderBegin() (handles glBegin or pd3d9->BeginScene if it's the fixed pipeline), RenderEnd() (glEnd or pd3d9->EndScene() with the swap of the back buffer), SendVertices(data, count, primitiveType), CreateTexture(id&, width, height, format), UseTexture(id) and SetView(width, height). All the work of the render device (like shader creation, uniform assignment, storing various textures in a single big texture, vertex caching, texture filtering, render to texture (glCopyTexImage2D should work with GL 2.1+), etc.) will be transparent, like a black box. The framework works fine with rotated sprites, cubes, etc.; the only weird thing I noticed is that Direct3D9 handles the colors as ABGR and OpenGL as ARGB, but I can imagine that shaders will cover this issue.
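If the two channel orders really do differ only in where red and blue sit, a small CPU-side swizzle (a sketch, not from the framework above) avoids touching the shaders at all:

```cpp
#include <cstdint>

// Swap the red and blue channels of a packed 32-bit color, keeping
// green and alpha in place - enough to move between channel orders
// that differ only in R/B position.
static uint32_t swap_red_blue(uint32_t c)
{
    uint32_t ag = c & 0xFF00FF00u;            // alpha and green stay put
    uint32_t rb = ((c & 0x00FF0000u) >> 16)   // move one channel down...
                | ((c & 0x000000FFu) << 16);  // ...and the other up
    return ag | rb;
}
```

Applying it twice gives back the original value, so the same helper converts in both directions.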
 
So with big projects, a preferred API is chosen, and during porting those APIs are wrapped, right? This reminds me that Final Fantasy 1 for Windows Phone is a port of the Android version, which is itself a port of the iOS version, which is finally a port of the PSP version; in fact the game is... horrible. In the case of powerful engines that promise high performance, like Unity, UDK etc., do they program directly against the low-level details and, during porting, change most of the code without wrapping the original API?


#13 Squared'D   Members   -  Reputation: 2222


Posted 09 April 2013 - 04:36 PM

Why is it easier to build higher-level functionality that way? It may be that, if you don't follow the Dependency Inversion principle, it is harder to build your functionality the other way round, but to generalize that the low-level approach is easier to build upon is just... wrong, at least IMHO.

 

In my lower-level implementation, I have about 4 or 5 basic interfaces (textures, vertex buffers, shaders, etc.). Personally I found it quite easy to build higher-level systems from this initial low-level abstraction. I also have another interface for handling states and draw calls. Now all I need to do is call these things and I'm fine. I still use this approach myself. I can now just switch out implementations based on which API I want to use.

 

The higher-level approach was for code that I'd like to share, because the interface is easier. (High-level systems should have a cleaner interface, in my opinion, or what's the point?) I guess in the end you are correct: I really shouldn't say either way is easier, and I probably misspoke a little.



 




