Xeeynamo

OpenGL
Abstraction layer between Direct3D9 and OpenGL 2.1


Hi, I'm currently working on an abstraction layer between Direct3D9 and OpenGL 2.1.

Basically there is a virtual class (the rendering interface) that can be implemented by a D3D9 or a GL21 class. The problem is that I don't know how to store and send the vertices. D3D9 provides FVF, where each vertex contains the position, the texture UVs, and the color packed into a DWORD value. OpenGL keeps the individual vertex elements in separate locations in memory (an array for positions, an array for texture UVs, and an array for colors as vec4 floats). Basically I want an API-independent way to create vertices: the application should store them in a block of memory, then send them all at once to the render interface, but how? I know that OpenGL can handle the vertices in a D3D-compatible way (with glVertexAttribPointer, managing the stride and pointer values), which seems slower than the native way, but how do I manage, for example, the colors? D3D9 accepts a single 32-bit integer value, while OpenGL manages it in a more flexible (but heavier) way, storing each color channel in a 32-bit floating-point value. In this case I could handle the 32-bit integer value inside the vertex shader and then convert it to a vec4, right? But all these operations seem too heavy. In the future I want this abstraction layer to be good enough to implement other rendering back ends like Direct3D11, OpenGL ES 2.0, and others. So my main question is: is there a nice way to abstract these two rendering APIs without changing the rest of the code that uses the rendering interface?


How are you creating your vertices anyway? I would create an API-independent vertex layout that fits your needs, and have the actual rendering implementation translate this into the correct vertex layout for each API. I don't know if this is the optimal way, but at least it should cause the fewest problems, since the actual GPU vertices are guaranteed to fit the current API's needs. Creation could take a bit longer, but that all depends on how intelligently you implement the translation from your generic vertex format to the API-specific one. Pseudocode might go like this:

 

modelFile = LoadFile("Model.myext");

GenericVertexFmt vertices[modelFile.numVertices];

for (i = 0; i < modelFile.numVertices; ++i)
{
    vertices[i] = modelFile.vertex[i]; // optimal if your file format is API independent, too
}

// creates a vertex object for the currently implemented API
renderClass.createVertexObject(vertices, modelFile.numVertices);

That's at least how I'd do it; there might be a better approach, and I haven't done this myself, but it seems like a good option.


I'd recommend not supporting the fixed-function pipeline at all, and instead requiring pixel and vertex shaders.
In D3D9, this means that you'd be using vertex declarations instead of FVFs.
Both D3D and GL allow you to store arrays of individual attributes or strided structures of attributes. The optimal memory layout depends on the GPU.
I've not heard of a 'D3D compatibility mode' in GL...
Both APIs let you have 4-byte vertex attributes (e.g. for colors).

Yes, I find vertex declarations more flexible than FVF. How does D3D support individual attributes? And how can I tell whether the GPU prefers packed or individual vertex attributes? It seems that OpenGL handles it better one way, and Direct3D9 another. I found the D3D9 compatibility mode here

 


How are you creating your vertices anyway? I would create an API-independent vertex layout that fits your needs, and have the actual rendering implementation translate this into the correct vertex layout for each API. I don't know if this is the optimal way, but at least it should cause the fewest problems, since the actual GPU vertices are guaranteed to fit the current API's needs. Creation could take a bit longer, but that all depends on how intelligently you implement the translation from your generic vertex format to the API-specific one. Pseudocode might go like this:

 

modelFile = LoadFile("Model.myext");

GenericVertexFmt vertices[modelFile.numVertices];

for (i = 0; i < modelFile.numVertices; ++i)
{
    vertices[i] = modelFile.vertex[i]; // optimal if your file format is API independent, too
}

// creates a vertex object for the currently implemented API
renderClass.createVertexObject(vertices, modelFile.numVertices);

That's at least how I'd do it; there might be a better approach, and I haven't done this myself, but it seems like a good option.

This could obviously be a solution, but it can get heavy when a lot of vertices are involved. Currently I'm creating many vertices at runtime for sprites, with XYZ coordinates, UV coordinates, colors, and a value that points to the palette index (that value is processed by the fragment shader).


This could obviously be a solution, but it can get heavy when a lot of vertices are involved. Currently I'm creating many vertices at runtime for sprites, with XYZ coordinates, UV coordinates, colors, and a value that points to the palette index (that value is processed by the fragment shader).

 

If I understood Hodgman correctly, there is no big difference in the actual layout of the vertices between the two APIs. Using your own format would just ensure you can plug in any API you want. While you might need to create a format struct that is aligned to certain byte boundaries (just an assumption, not really my area, sorry), it would be a simple memcpy (or similar) operation, which shouldn't be too costly as far as I can tell. Given that you already don't do things much differently than writing your vertices into an array and copying that into the vertex buffer, then unless there really is a huge difference in how the two APIs expect their vertices, it could cost virtually nothing.
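To illustrate, the copy itself could be as small as this (just a sketch: GenericVertex and the function names are made up, and it assumes <d3d9.h> plus a GL header and extension loader such as GLEW are available):

#include <cstring> // memcpy

// Made-up interleaved format shared by both back ends.
struct GenericVertex
{
    float        x, y, z;  // position
    float        u, v;     // texture coordinates
    unsigned int color;    // packed 4 x 8-bit color
};

// D3D9 side: lock the vertex buffer and copy the generic vertices straight in.
void FillVertexBufferD3D9(IDirect3DVertexBuffer9* vb, const GenericVertex* src, size_t count)
{
    void* dst = 0;
    vb->Lock(0, UINT(count * sizeof(GenericVertex)), &dst, 0);
    std::memcpy(dst, src, count * sizeof(GenericVertex));
    vb->Unlock();
}

// GL 2.1 side: upload exactly the same bytes into a VBO.
void FillVertexBufferGL(GLuint vbo, const GenericVertex* src, size_t count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(GenericVertex), src, GL_DYNAMIC_DRAW);
}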


You should be able to use exactly the same binary vertex data in both Direct3D9 and OpenGL. When using shaders, and generic vertex attributes on the OpenGL side, vertex colors, for example, can be defined as 4 normalized unsigned bytes in both APIs.
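For example, for an interleaved vertex of float3 position + float2 UV + packed color (24 bytes), the setup could look roughly like this on each side (a sketch only; device, vertexDecl, vbo and the attribute indices are placeholders, not code from any existing engine):

// D3D9: declaration for the 24-byte vertex (float3 position, float2 UV, 4-byte color).
const D3DVERTEXELEMENT9 elements[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT2,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    { 0, 20, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    D3DDECL_END()
};
device->CreateVertexDeclaration(elements, &vertexDecl);

// OpenGL 2.1: the same bytes, with the color read as 4 normalized unsigned bytes,
// so the shader receives a vec4 in [0,1] without manual unpacking.
// (Attribute indices assume glBindAttribLocation was used, and each index also
// needs glEnableVertexAttribArray. D3DCOLOR is stored BGRA in memory, so one
// side may need a manual swizzle in the shader to match the other.)
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT,         GL_FALSE, 24, (void*)0);
glVertexAttribPointer(1, 2, GL_FLOAT,         GL_FALSE, 24, (void*)12);
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE,  24, (void*)20);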

That wiki doesn't describe a "D3D compatibility mode"; it just has some tips on how to do some operations that are easy in D3D but confusing in GL.

When it comes to fewer buffers with large strides vs. more buffers with small strides, no, there is not a preferred way for each API. They both cope with either approach just as well as each other.

In D3D, the 'stream' param of the declaration is the index of the vertex buffer that will be read from. Before drawing, you must bind a vertex buffer to this slot with SetStreamSource.
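For example, a declaration reading positions from stream 0 and colors from stream 1 might look like this (a sketch; decl, positionBuffer, colorBuffer and triangleCount are placeholder names):

// Stream, Offset, Type, Method, Usage, UsageIndex
const D3DVERTEXELEMENT9 elements[] =
{
    { 0, 0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 1, 0, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    D3DDECL_END()
};
device->CreateVertexDeclaration(elements, &decl);

device->SetVertexDeclaration(decl);
// Every stream index used by the declaration needs a buffer bound to it.
device->SetStreamSource(0, positionBuffer, 0, sizeof(float) * 3); // stride of stream 0
device->SetStreamSource(1, colorBuffer,    0, sizeof(DWORD));     // stride of stream 1
device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triangleCount);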

As for the optimal data layout, it depends on whether multiple different vertex shaders use the same data (e.g. a shadow-mapping shader only needs positions, not normals, etc.), and on the actual GPU's memory-fetching hardware. This operates like any CPU cache, overfetching whole 'cache lines' from RAM - usually you'd want your vertex stride to match this size, which means putting multiple attributes together - back in this era of GPUs, a 32-byte stride was the recommendation, but this depends on the GPU... However, as above, if you pack position/normal together, then the shadow-mapping shader will be loading normals into the cache for no reason, so maybe it would be preferable to have one buffer with just positions, and another with all the other attributes...
Really though, don't worry too much about it. It doesn't matter if your GPU usage isn't 100% optimal.

I personally think that this is too low-level an abstraction.  You shouldn't go any lower than handing a model's binary data over to the API layers, leaving the details of vertex formats, buffer setup, layouts, etc. to the individual API layers.  One key difference between the APIs here is that D3D decouples the vertex format specification from the data specification, whereas OpenGL does not (in 2.1; 4.3 resolves this), and this is going to cause you further problems as you work more on it.  Other differences, such as OpenGL's bind-to-modify behaviour, are also going to cause you trouble if you ever need to update a dynamic buffer.

 

This is far from the only API difference that will cause you problems; one I can quite confidently predict you'll have trouble with is OpenGL tying uniform values to the currently selected program object while D3D has them as global state.  You'll also have trouble with the lack of separate shader objects in OpenGL, with any render-to-texture setup (no core FBOs in GL 2.1, only the extension), and with the lack of an equivalent to glActiveTexture in D3D (the texture stage to operate on is an additional parameter to the relevant API calls).  Not to mention sampler states vs. glTexParameter.
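For example, a texture-binding wrapper ends up doing roughly this on each side (a sketch with made-up class and method names, not a full implementation):

// Hypothetical wrapper: the abstract call always takes an explicit stage/unit index.
void D3D9Device::BindTexture(unsigned stage, IDirect3DTexture9* tex)
{
    // In D3D9 the stage is just a parameter, and filtering is per-stage sampler state.
    d3dDevice->SetTexture(stage, tex);
    d3dDevice->SetSamplerState(stage, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
    d3dDevice->SetSamplerState(stage, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
}

void GL21Device::BindTexture(unsigned unit, GLuint tex)
{
    // In GL you first select the active unit, and filtering lives on the texture object.
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}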


To some degree, you can pick your "favorite API" and then make the others emulate it. Like mhagain warned, actually doing this may require going through some contortions, and even though your program works correctly, you may run into non-optimal performance, usually on the OpenGL side.

 

For example, I picked D3D as my favorite and wanted to emulate the simple SetRenderTarget / SetDepthStencil API with OpenGL FBOs. The initial, simple solution was to create a single FBO and attach different color buffers and depth buffers according to the currently set targets. This worked correctly, but showed a performance loss (to add insult to injury, it was Linux-only; Windows drivers were fine). The solution to avoid the performance loss was to create several FBOs sorted by the resolution and format of the first color buffer, and what was only a few lines of code in D3D became hundreds in OpenGL :)
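The naive first version was essentially this (a stripped-down sketch; GLGraphics, fbo and the parameter names are placeholders, and all error checking plus the per-resolution FBO caching described above is omitted):

// One FBO kept around (member 'fbo'), re-attached on every SetRenderTarget call.
// GL 2.1 has no core FBOs, so this relies on EXT_framebuffer_object.
void GLGraphics::SetRenderTarget(GLuint colorTexture, GLuint depthRenderbuffer)
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, colorTexture, 0);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depthRenderbuffer);
    // Constantly re-attaching like this is what caused the slowdown; the fix was
    // to keep several FBOs keyed by the resolution/format of the first color buffer.
}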

 

Additionally, to be able to utilize D3D11 effectively in the future you probably need to make your abstract API resemble D3D11 with its state objects, constant buffers etc. and emulate those in the other APIs.
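In other words, the abstract API could expose immutable state objects the way D3D11 does, roughly like this (a hypothetical interface, just to show the shape):

// Hypothetical shape of a D3D11-style abstraction: state objects created once
// and set as a whole; the D3D9/GL back ends translate the stored description
// into individual SetRenderState / glEnable + glBlendFunc calls.
struct BlendStateDesc { bool blendEnable; /* blend factors, ops, ... */ };

class IBlendState { public: virtual ~IBlendState() {} };

class IGraphicsDevice
{
public:
    virtual IBlendState* CreateBlendState(const BlendStateDesc& desc) = 0;
    virtual void SetBlendState(IBlendState* state) = 0;
};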

I've been working on making my project able to use multiple APIs, albeit for now just DirectX 9 and 11. My first attempt was to emulate DirectX 11 and make DirectX 9 fit to it. Later, because I wanted it to be easier for others to use my code, I decided on a more high-level solution. I've found that both have their pluses and minuses.

Low level (pick your favorite API and emulate it)
Good
* It's easier to build higher-level functionality because everything has the same base.
Bad
* As AgentC said, you can run into some non-optimal situations that could take some "hacks" to fix.

High level
Good
* You can code things in the most optimal way for the API.
Bad
* Without adequate planning it could almost be like coding two engines. You must be sure you isolate the similarities or you'll end up coding many things twice.

Those are the issues that I've noticed so far. I'm sure there are more.

I've been working on making my project able to use multiple APIs, albeit for now just DirectX 9 and 11. My first attempt was to emulate DirectX 11 and make DirectX 9 fit to it. Later, because I wanted it to be easier for others to use my code, I decided on a more high-level solution. I've found that both have their pluses and minuses.

 

First of all, it all depends on whether you are fine with a compile-time dependency on one of the APIs or actually want to be able to switch between them at run time. For the latter, you are going to need a fairly complicated system to design an API-agnostic layer on top of all your other abstractions, but that's another topic. I just wanted to add that you don't really have to choose between the two - I'd say you fare well if you sort of combine them. I (before going API-agnostic) used a two-layer abstraction approach - one lower and one higher level. That said, I have to disagree with some of your points:

 

Low level (pick your favorite API and emulate it)
Good
* It's easier to build higher-level functionality because everything has the same base.
Bad
* As AgentC said, you can run into some non-optimal situations that could take some "hacks" to fix.

 

Why is it easier to build higher-level functionality that way? It's probably that, if you don't follow the Dependency Inversion principle, it might be harder to build your functionality the other way round, but to generalize and say that the low-level approach is easier to build upon is just... wrong, at least IMHO.

 

High level
Good
* You can code things in the most optimal way for the API.
Bad
* Without adequate planning it could almost be like coding two engines. You must be sure you isolate the similarities or you'll end up coding many things twice.

 

Just one word: interfaces! Whether in the direct form (Java, ...) or the language-abstracted equivalent (C++, ...), if you design your API abstraction layer with those, and most importantly follow the SOLID principles, it isn't really any "harder" than using a low-level approach only.  But you are right in a way: for an inexperienced programmer it might be hard, given that they will more easily meet the condition of "no adequate planning". But going "low-level" isn't going to be any easier for them, and then again, a programming beginner shouldn't be aiming to make a multi-API "engine" anyway. At least in my opinion.

 

Once again talking about my own approach, I actually have a low-level encapsulation around the actual API I'm using (not interchangeable; each encapsulation is completely independent of any other API's) combined with a high-level abstraction that supplies interfaces like ITexture, IMesh, ISprite, etc. From those you would inherit your actual API-dependent implementations, like DX9Texture, DX9Sprite, etc. Now, truly, like you said, if one wasn't using interfaces properly, he might end up having to rewrite huge parts of his engine just to fit the specific API. But this way I'm taking advantage of both the low-level encapsulation and the high-level abstraction.
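Roughly sketched (not actual code from any engine, just to show the idea), the two layers could look like this:

// A high-level interface plus an API-specific implementation that uses the
// low-level DX9 encapsulation inside. Names and members are illustrative only.
class ITexture
{
public:
    virtual ~ITexture() {}
    virtual unsigned GetWidth() const = 0;
    virtual unsigned GetHeight() const = 0;
    virtual void Bind(unsigned stage) = 0;
};

class DX9Texture : public ITexture
{
public:
    unsigned GetWidth() const  { return m_width; }
    unsigned GetHeight() const { return m_height; }
    void Bind(unsigned stage)  { m_device->SetTexture(stage, m_texture); }

private:
    IDirect3DDevice9*  m_device;
    IDirect3DTexture9* m_texture;
    unsigned           m_width, m_height;
};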

I read all the replies carefully, thanks for the tips.
I learned OpenGL 1.1 and 2.1 some time ago, but I didn't know many things about Direct3D9 (for example the possibility to specify a vertex declaration without using FVF), the global shader uniforms (thanks mhagain for this information), and more. Fortunately the framework will be pretty tiny: it will support a single vertex and fragment shader (compiled during initialization and used until the end of the application), set the view matrix only during window resizing, and have no world matrix (I'm translating all the vertices on the CPU side for specific reasons). The only things the framework needs to do are initialize the rendering device, set the view, create and/or load textures, and accept indexed vertices. I think a "preferred API" makes more sense for applications/games that require a lot of performance with heavy 3D models and effects.
So far I have a virtual class with these members:
GetDeviceName(), Initialize, SetClearColor(float, float, float), RenderBegin (handles glBegin or pd3d9->BeginScene if it's the fixed pipeline), RenderEnd (glEnd or pd3d9->EndScene() plus the back buffer swap), SendVertices(data, count, primitiveType), CreateTexture(id&, width, height, format), UseTexture(id), and SetView(width, height). All the work done by the render device (shader creation, uniform assignment, packing various textures into a single big texture, vertex caching, texture filtering, render to texture (glCopyTexImage2D should work with GL 2.1 and later), etc.) will be transparent, like a black box. The framework works fine with rotated sprites, cubes, etc.; the only weird thing I noticed is that Direct3D9 handles colors as ABGR and OpenGL as ARGB, but I imagine the shaders will cover this issue.
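Written out as a header it would look roughly like this (just a sketch; PrimitiveType, TextureId and PixelFormat are placeholder types, and the exact parameters are guesses):

class IRenderDevice
{
public:
    virtual ~IRenderDevice() {}

    virtual const char* GetDeviceName() const = 0;
    virtual bool Initialize() = 0;
    virtual void SetClearColor(float r, float g, float b) = 0;

    virtual void RenderBegin() = 0;   // BeginScene / frame setup
    virtual void RenderEnd() = 0;     // EndScene + back buffer swap
    virtual void SendVertices(const void* data, unsigned count, PrimitiveType type) = 0;

    virtual bool CreateTexture(TextureId& id, unsigned width, unsigned height,
                               PixelFormat format) = 0;
    virtual void UseTexture(TextureId id) = 0;
    virtual void SetView(unsigned width, unsigned height) = 0;
};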
 
So with big projects, a preferred API is chosen, and during porting those APIs are wrapped, right? This reminds me that Final Fantasy 1 for Windows Phone is a port of the Android version, which is itself a port of the iOS version, which in turn is a port of the PSP version; in fact, the game is... horrible. In the case of powerful engines that promise high performance, like Unity, UDK, etc., do they program directly against the low-level APIs and, when porting, change most of the code rather than wrapping the original API?

Why is it easier to build higher-level functionality that way? It's probably that, if you don't follow the Dependency Inversion principle, it might be harder to build your functionality the other way round, but to generalize and say that the low-level approach is easier to build upon is just... wrong, at least IMHO.

 

In my more low-level implementation, I have about 4 or 5 basic interfaces (textures, vertex buffers, shaders, etc.). Personally I found it quite easy to build higher-level systems from this initial low-level abstraction. I also have another interface for handling states and draw calls. Now all I need to do is call these things and I'm fine. I still use this approach for myself; I can now just switch out implementations based on which API I want to use.

 

The more high-level approach was for code that I'd like to share, because the interface is easier. (High-level systems should have a cleaner interface, in my opinion, or what's the point?) I guess in the end you are correct; I really shouldn't say either way is easier, and I probably misspoke a little.

