Abstracting away DirectX, OpenGL, etc.


NathanRidley    1092

I'm a bit torn. I'm just learning graphics programming, but I want to make sure I use good practices as I go. Typically, when writing a web/business application, I would separate the presentation layer from the business layer (typical 3-tier design) and ensure that the different layers are loosely coupled, with a proper separation of concerns between the components and so forth. That is all old hat to me, but game programming is new, so I'm trying to work out what might be different from how I'd normally approach things, due to performance concerns and other game-specific problem areas.

 

In my case, I'm using SharpDX, which exposes DirectX via a managed wrapper and provides a bunch of really useful classes, such as SharpDX.Vector3, which has a lot of handy utility methods, mainly for performing various kinds of transforms.

 

Despite its usefulness, I feel like I shouldn't use this in my game, because it will force me to reference SharpDX, which will sting me if I decide to swap in OpenGL for rendering. Or am I thinking too low-level? Should I really be wrapping the actual vertex structures I use as well, so that anything dealing directly with a vertex becomes a rendering-engine detail?
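
To make the question concrete, here's roughly the kind of seam I have in mind, as a minimal sketch; the VertexPositionColor and IRenderer names are just made up for illustration, not SharpDX types.

// My own vertex structure: plain fields, no SharpDX types referenced.
public struct VertexPositionColor
{
    public float X, Y, Z;    // position
    public float R, G, B, A; // color
}

// The only surface the rest of the game would talk to. A SharpDX-backed
// implementation (or, later, an OpenGL-backed one) would live behind it.
public interface IRenderer
{
    void CreateMesh(string id, VertexPositionColor[] vertices, ushort[] indices);
    void DrawMesh(string id);
    void Present();
}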

 

Any tips, advice, pointers to good articles, etc. would be appreciated.

Radikalizm    4807

I think it's important at this stage to decide if supporting OpenGL in the future is something you actually want to do.

 

A lot of people start out with the idea of supporting OpenGL (or DirectX if they start out with OpenGL) at a later point in time in their rendering engines; this results in them trying to write an abstraction layer on top of their original graphics API which doesn't really do the underlying API any justice.

 

Designing a proper wrapper exposing the functionality of both APIs to their fullest extent is HARD. It's going to require a complete understanding of both APIs, their designs and their quirks. On top of that, you'll have to design a system which unifies these sometimes completely different beasts into a single API without introducing too much overhead and without sacrificing functionality.

 

If you really do want to support both, you could always check out some third-party utility libraries for vector math and such; I'm sure there'll be plenty of those out there.

 

If support for multiple rendering APIs isn't an absolute must, I just wouldn't worry about it and would work with the tools provided by SharpDX; there's no need to put a lot of effort into something which ultimately won't have that much of an impact on your end result (i.e. your game).

satanir    1452


Quoting NathanRidley: "I'm just learning graphics programming..."

Not to discourage you, but don't worry about abstraction if you're just a beginner. You'll probably change your design a couple of times even if you're focusing on only one API.

As your graphics understanding improves, so will your design and code quality. 

NathanRidley    1092

Quoting Radikalizm: "I think it's important at this stage to decide if supporting OpenGL in the future is something you actually want to do. [...] If support for multiple rendering APIs isn't an absolute must, I just wouldn't worry about it and would work with the tools provided by SharpDX."

 

Thanks for the advice, and I think you're right. Back when I was learning to program, I often made the mistake of trying to abstract away too many things without having a proper understanding of the APIs and frameworks with which I was dealing, which usually led to a very poor abstraction and the inferior reinvention of multiple different wheels that already existed. The Windows market is plenty big enough that I should be happy to service that market for my first few games while learning. I can worry about engine abstraction at a later date when I have some decent experience writing games.

 

 


Quoting satanir: "Not to discourage you, but don't worry about abstraction if you're just a beginner. [...] As your graphics understanding improves, so will your design and code quality."

 

 

Thanks, and I agree. I'm going to stick with SharpDX and Windows exclusively for now.

uglycoyote    117
Hi. Professional game developer with ten years' experience here. I totally agree with the previous posters who say that trying to write a good-quality abstraction layer between OpenGL and DirectX will be an uphill battle and probably not the best use of your time.

Having said that, in your original post you mentioned that you were finding SharpDX's vector math package handy. Even though you have decided not to use an abstraction layer for graphics, that does not mean you need to tie the non-rendering parts of the game (e.g. your AI code) to SharpDX's math library. While writing your own rendering abstraction layer might be a formidable challenge, writing your own vector math routines is not that hard, and you could probably find good ones that are not tied to a particular rendering engine. If you avoid tying your entire game to DirectX math libraries, then if you do decide to port to another platform you can simply replace your rendering code; the rest of your code will not need to be gutted everywhere it uses math routines.

On the other hand, the SharpDX math library might be separate enough from the rest of SharpDX that it could be used alongside some other rendering library based on OpenGL. I'm not familiar enough with SharpDX to know whether that's the case, but if the math types live in a separate DLL, that would be a good sign.
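
As a rough illustration of how little is involved, here's a minimal sketch of an engine-agnostic vector type in C#. The Vec3 and ToSharpDX names are invented for this example; the only SharpDX-specific piece is the single conversion helper at the rendering boundary, which assumes the usual SharpDX.Vector3(float, float, float) constructor.

using System;

// Engine-owned vector math: no rendering API referenced here.
public struct Vec3
{
    public float X, Y, Z;

    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
    public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
    public static Vec3 operator *(Vec3 v, float s) => new Vec3(v.X * s, v.Y * s, v.Z * s);

    public static float Dot(Vec3 a, Vec3 b) => a.X * b.X + a.Y * b.Y + a.Z * b.Z;

    public static Vec3 Cross(Vec3 a, Vec3 b) =>
        new Vec3(a.Y * b.Z - a.Z * b.Y,
                 a.Z * b.X - a.X * b.Z,
                 a.X * b.Y - a.Y * b.X);

    public float Length() => (float)Math.Sqrt(Dot(this, this));

    public Vec3 Normalized()
    {
        float len = Length();
        return len > 0f ? this * (1f / len) : this;
    }
}

// The only place that knows about SharpDX: one conversion helper kept next to
// the rendering code. Swap this helper if the rendering API ever changes.
public static class RenderConversions
{
    public static SharpDX.Vector3 ToSharpDX(this Vec3 v)
    {
        return new SharpDX.Vector3(v.X, v.Y, v.Z);
    }
}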

NathanRidley    1092

Quoting uglycoyote: "While writing your own rendering abstraction layer might be a formidable challenge, writing your own vector math routines is not that hard. [...] If you avoid tying your entire game to DirectX math libraries, then if you do decide to port to another platform you can simply replace your rendering code."

 

Cool, thanks for the advice. One concern I had was that in order to interoperate with DirectX I'd have to use its own math structures, which would mean copying from my own types to the DX ones before going any further, and that seemed like a wasted step. I actually now think I was mistaken: it looks like I really can copy any data into a buffer; it doesn't need to originate as a DX Vector3 or whatever.

While learning, though, I'll probably stick with the SharpDX math structures for now and switch to my own as soon as I'm ready. That will have the added benefit of forcing me to rehash the linear algebra I've been learning and to mentally associate the different, seemingly abstract operations with their practical applications in 3D programming, which will help drum that knowledge in. All the sources I've been reading and watching so far seem to separate the mathematical principles from their application, so while I'm trying to learn the relevant prerequisites in, say, matrix algebra, I'm not absorbing them as well as I should: the theory doesn't say much about how a given operation actually applies in the real world, which means all I see is a bunch of abstract operations, many of which don't seem to have any obvious purpose.
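
To check my own understanding, here's roughly what I mean by copying my own data into a buffer. MyVertex is just a made-up struct of mine, and I'm assuming I've remembered the SharpDX Buffer.Create overload correctly; I realise I'd still need to describe the layout to the input assembler separately, but the point is that the buffer contents are just bytes.

using System.Runtime.InteropServices;
using SharpDX.Direct3D11;
using D3DBuffer = SharpDX.Direct3D11.Buffer;

// My own vertex layout; nothing here is a SharpDX math type.
[StructLayout(LayoutKind.Sequential)]
public struct MyVertex
{
    public float X, Y, Z;    // position
    public float R, G, B, A; // color
}

public static class VertexUpload
{
    // SharpDX copies the raw bytes of whatever blittable struct array it is
    // given, so the data never has to start life as a SharpDX.Vector3.
    public static D3DBuffer CreateVertexBuffer(Device device, MyVertex[] vertices)
    {
        return D3DBuffer.Create(device, BindFlags.VertexBuffer, vertices);
    }
}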

LorenzoGatti    4450
You shouldn't worry now about the low-risk worst case of a very unlikely event. You aren't actually planning to replace SharpDX with OpenGL-based rendering, or to make them coexist; if you ever do, the SharpDX math library might or might not turn out to be unsuitable or an inappropriate dependency; and even if you really did need to switch to another math library, the resulting round of editing, however large, would be easy (the mathematical abstractions are fundamentally identical) and a negligible effort compared to porting the graphical side to OpenGL.
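
For what it's worth, one reason such a switch tends to be mechanical in C# is that a per-file using alias can localize the dependency. This is only an illustration of the idea, not something you need to set up now, and it assumes SharpDX.Vector3 supports the usual arithmetic operators (it does, as far as I recall).

// Game code refers to "Vector3" through an alias. If the math library ever
// changes, only this alias line (plus any genuinely incompatible calls) needs editing.
using Vector3 = SharpDX.Vector3;

public class Projectile
{
    public Vector3 Position;
    public Vector3 Velocity;

    public void Update(float dt)
    {
        // Plain arithmetic like this looks identical in any reasonable vector library.
        Position += Velocity * dt;
    }
}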

Jason Z    6436

As the others have mentioned several times, I would also suggest tackling one API at a time. If you can write a renderer in one API, then another renderer using the other API, you will be moderately well equipped to design a system for putting the two together. Also, keep in mind that your designs will surely evolve as you tackle different challenges, so don't be afraid to experiment and shake things up. Especially if this is a hobby project, don't limit yourself right out of the gate; challenge yourself to make some big steps in the design!

Squared'D    2427
I was in a very similar position to you. When I first started using DirectX 7, I wanted to create an abstraction layer so I'd later be able to support other things, maybe even other operating systems. While I still think this is a good idea, at the time I was new to graphics programming, and just learning DX7 was a huge task for me. I never ended up supporting any other API, and when I moved to DX8 I scrapped most of the code anyway. (FYI, first attempts usually suck.)

Now that I've written five or six different 3D engines, I finally think I have enough knowledge to do a multi-platform framework. It's working well, but after I finish this game, I'll probably redesign it yet again.

NathanRidley    1092

Quoting Jason Z: "If you can write a renderer in one API, then another renderer using the other API, you will be moderately well equipped to design a system for putting the two together. [...] Especially if this is a hobby project, don't limit yourself right out of the gate; challenge yourself to make some big steps in the design!"

 

Agreed, and I have sort of come to this conclusion as well. I wouldn't explicitly call what I'm doing a hobby project, as I'd like to do this commercially, full time, in the future, but I have a lot of learning to do first.

 

Quoting Squared'D: "I never ended up supporting any other API, and when I moved to DX8 I scrapped most of the code anyway. [...] Now that I've written five or six different 3D engines, I finally think I have enough knowledge to do a multi-platform framework."

 

They say, figuratively, that you do everything at least three times: once to learn how to do it, once to do it, and once to do it properly. The third stage is probably more of an endless series of improvements and rewrites, but such is life in the world of development. I've been through this process repeatedly in web and application development, but 3D graphics programming is a whole new ball game, so to a large degree it's back to the drawing board.
