Is wrapping DirectX and OpenGL a good thing?

Quote:Original post by DadleFish
Well, the examples I gave you as well as the cutdown of Ogre shown in the thread are full featured ones. All I'm saying is that you shouldn't fear it, as we're not talking about something nearly in the magnitude of 100K of lines.


Yes, thanks to you and swiftcoder I'm now convinced that it's possible to have a full-featured wrapper with a reasonable amount of code. However, I'm still concerned about flexibility. I see my engine also as a place for experiments, so what happens if the newest nVidia or ATI GPU offers a new OpenGL extension and I want to experiment with it? If I build my renderer on top of a DX/GL wrapper I don't have access to it. Maybe my wrapper should have an extension or capability system.
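Something along these lines is what I have in mind; a minimal sketch, with all of the names hypothetical:

```cpp
// Hypothetical capability enum and query interface; a real wrapper would
// map these onto GL extension strings or D3D caps bits internally.
enum class Capability {
    FragmentPrograms,
    FloatTextures,
    OcclusionQueries,
};

class IRenderer {
public:
    virtual ~IRenderer() {}

    // True if the underlying API/driver supports the feature.
    virtual bool HasCapability(Capability cap) const = 0;

    // Escape hatch: hand back the raw API object, so I can still
    // experiment with brand-new extensions the wrapper doesn't cover yet.
    virtual void* GetNativeHandle() = 0;
};
```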


Quote:Original post by DadleFish
I can't understand why you would use a visitor pattern.


Because if the scenegraph doesn't know about the renderer, the only solution is to have the renderer visit the scenegraph. The renderer is the visitor, the scenegraph is the visited.
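Roughly like this; just a sketch with made-up names:

```cpp
#include <vector>

class SceneNode;

// The scene graph only knows this abstract visitor, never the renderer.
class ISceneVisitor {
public:
    virtual ~ISceneVisitor() {}
    virtual void Visit(SceneNode& node) = 0;
};

class SceneNode {
public:
    // The graph's only job: hand each node to whoever is visiting.
    void Accept(ISceneVisitor& visitor) {
        visitor.Visit(*this);
        for (SceneNode* child : m_children)
            child->Accept(visitor);
    }
private:
    std::vector<SceneNode*> m_children;
};

// The renderer is just one possible visitor; physics or AI could be others.
class Renderer : public ISceneVisitor {
public:
    void Visit(SceneNode& node) override { /* draw the node's contents */ }
};
```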


Quote:Original post by DadleFish
Perhaps I wasn't too clear, sorry. The whole idea of abstraction is that the upper layer doesn't know which API is used by the renderer object(s). It only knows the renderer object(s) interfaces, and works with them. My intention was to hide the API from the scenegraph, for example; you would call "m_pRenderer->ClearScreen()" and you wouldn't know or care how it's done.

However, I do not think that the scenegraph should be ignorant of the actual existence of the renderer (as in the Doc/View pattern you've mentioned). I can't really understand why you would do it. After all, it is probable that you would have a single renderer object in your system, just like you have std::cout, and it's unreasonable IMHO that stdout would search for relevant information. Besides, it's the scenegraph's job to actually feed the renderer with meshes, textures and so on, in order for the renderer to send them as polygons to the video adapter.
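In code, the abstraction described in the quote would look something like this minimal sketch (not anyone's actual engine code; the class names are made up):

```cpp
#include <iostream>

// The caller sees only the interface, never the API behind it.
class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual void ClearScreen() = 0;
};

class GLRenderer : public IRenderer {
public:
    void ClearScreen() override {
        // would call glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        std::cout << "GL clear\n";
    }
};

int main() {
    IRenderer* pRenderer = new GLRenderer; // picked by a factory in practice
    pRenderer->ClearScreen();              // the caller doesn't know or care how
    delete pRenderer;
}
```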


That's the way it is generally done, but I want to be even more radical and completely hide the renderer interface from the scene code, because the scene code doesn't really need to know it.
Why should the scene graph feed the renderer with meshes and textures? The job of the scene graph is to store and organize data, nothing more. The renderer can perfectly well read the coordinates of textures and meshes stored in the scenegraph nodes and load them without further external help. Moreover, the scenegraph can't even know what resources the renderer needs at a particular moment unless it implements some visibility algorithm; but again, visibility is not something it should be concerned with. That's a task for the renderer.
Obviously some parts of the high-level code will know the renderer interface: to be precise, the parts that instantiate it (getting the object from a factory), call its render method every frame, and destroy it on cleanup.

I know this might sound odd from an object-oriented perspective, but it's a common solution in generic programming. Think about the STL: you have data structures and algorithms, and the data structures know nothing about the algorithms; they just do their job of representing data. In my case the data structure is a graph (a DAG, to be precise), while the renderer uses the appropriate algorithms to visit it and gather information. I'm really considering the possibility of using BGL (the Boost Graph Library) for this.
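With BGL that could look something like the following toy sketch (NodeData and the mesh names are made up for illustration):

```cpp
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/depth_first_search.hpp>
#include <iostream>
#include <string>

// Hypothetical per-node payload; the graph just stores it.
struct NodeData { std::string meshPath; };

using SceneGraph = boost::adjacency_list<
    boost::vecS, boost::vecS, boost::directedS, NodeData>;

// The renderer's side: a DFS visitor that gathers data from the graph.
// The graph itself knows nothing about rendering.
struct RenderVisitor : boost::default_dfs_visitor {
    template <typename Vertex, typename Graph>
    void discover_vertex(Vertex v, const Graph& g) const {
        std::cout << "would draw: " << g[v].meshPath << '\n';
    }
};

int main() {
    SceneGraph g(3);
    g[0].meshPath = "root";
    g[1].meshPath = "house.mesh";
    g[2].meshPath = "tree.mesh";
    boost::add_edge(0, 1, g);
    boost::add_edge(0, 2, g);
    boost::depth_first_search(g, boost::visitor(RenderVisitor()));
}
```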



Quote:Original post by DadleFish
As for physics and AI - these are indeed higher than the scenegraph and they manipulate it. The renderer isn't such a case. Even if you think in docview terms, you wouldn't say the physics/AI are views manipulating the doc. They are more of the logic.


That's correct. I mentioned the Doc/View pattern just as an example, but I'm not strictly following it. What I have in mind is a central data structure (the scene graph) with the various engine subsystems accessing and modifying it. Ideally the data structure will never know about the subsystems, and the subsystems will depend only on the data structure (ignoring each other).


Quote:Original post by DadleFish
The idea behind abstraction isn't minimizing the code. The idea is to get the core systems of your application (game) ignorant to their actual environment. Something like "hey, read that house from a file and display it, I don't care how you do it"; so you'd abstract your OS and your rendering API. The idea is to separate the logic from the actual tidbits of DOING stuff, so you can later replace the API with anything else.


I agree completely, but I would add that if abstraction is good, orthogonality (as implemented in the STL, BGL and other generic libraries) is even better: it adds an even higher degree of abstraction and true separation of concerns, and it generally also minimizes code.

Maybe my solution will turn out to be overkill, but after all this is not a real project; I'm not going to create a new Unreal Engine or some sort of killer application. It's just a way to experiment and learn.

Thanks again for your comments and suggestions.
Interesting topic.
Perhaps a combination of both could be a solution: a RenderDevice that abstracts the API with very few functions (you don't need many), and a pluggable Renderer that iterates over the scene graph and abstracts the rendering algorithm.
I agree with using a high-level material definition, perhaps something more flexible like Doom 3's, and letting the Renderer generate shader code from it in the form of an abstract syntax tree (CodeDOM) that the RenderDevice then uses to create the HLSL/GLSL/Cg or even ARB_fp shader.
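Something like this toy sketch of the CodeDOM idea, perhaps (all names hypothetical): the Renderer builds an expression tree from the material, and each backend serializes it into its own shading language.

```cpp
#include <memory>
#include <string>

// One node type per operation the material system understands.
struct ShaderExpr {
    virtual ~ShaderExpr() {}
    virtual std::string EmitGLSL() const = 0;
    virtual std::string EmitHLSL() const = 0;
};

struct TextureSample : ShaderExpr {
    std::string sampler, uv;
    TextureSample(std::string s, std::string t)
        : sampler(std::move(s)), uv(std::move(t)) {}
    std::string EmitGLSL() const override { return "texture2D(" + sampler + ", " + uv + ")"; }
    std::string EmitHLSL() const override { return "tex2D(" + sampler + ", " + uv + ")"; }
};

struct Multiply : ShaderExpr {
    std::unique_ptr<ShaderExpr> lhs, rhs;
    Multiply(std::unique_ptr<ShaderExpr> l, std::unique_ptr<ShaderExpr> r)
        : lhs(std::move(l)), rhs(std::move(r)) {}
    std::string EmitGLSL() const override { return lhs->EmitGLSL() + " * " + rhs->EmitGLSL(); }
    std::string EmitHLSL() const override { return lhs->EmitHLSL() + " * " + rhs->EmitHLSL(); }
};
```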
Before I started coding my game engine (which I now only expand when my current game project needs new features), I went and downloaded several open-source engines off the web, and dissected them minutely.

The two rendering engines that most influenced my renderer abstraction were IrrLicht and OGRE, each of which takes a completely different approach. However, it is definitely worth noting that each of these is primarily a rendering engine; neither is a full-blown game engine (i.e. no sound, no AI, etc.).
A useful game engine to browse is the Crystal Space engine, although it is not quite complete, and I personally regard it as a little over-designed.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Quote:Original post by will75
However, I'm still concerned about flexibility. I see my engine also as a place for experiments, so what happens if the newest nVidia or ATI GPU offers a new OpenGL extension and I want to experiment with it? If I build my renderer on top of a DX/GL wrapper I don't have access to it. Maybe my wrapper should have an extension or capability system.


Don't worry too much about the future; like I said, if you worry too much about it, you won't have anything in the present.

Look at it this way: if the whole concept changes dramatically, then you'll probably have to change critical parts of your engine to adhere to these changes. If the changes are less dramatic, you will be able to port your engine in a reasonable time.

Anyway, I really think you're getting ahead of yourself here. Start with SOMETHING. Even if you make mistakes (and it's quite probable that you will), you will learn from them and be able to move up and forward.

Quote:Original post by will75
That's the way it is generally done, but I want to be even more radical and completely hide the renderer interface from the scene code, because the scene code doesn't really need to know it.


Now here's an interesting idea. Let's look deeper into it. First of all, correct; the scenegraph is merely a database. However, SOMETHING has to know about both the rendering API and the scenegraph in order to get from point A to point B. This "something" can reside in the SG, in the Renderer, or in an external place. The way I do it, for example, there is another manager (which I call the scene manager) that is familiar with both the SG and the Renderer.
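In rough code, that arrangement might look like this sketch (with stub types standing in for the real things; none of this is my actual engine):

```cpp
#include <vector>

// Stub types; in a real engine these are the full-blown classes.
struct Camera {};
struct SceneNode {};
struct SceneGraph {
    std::vector<SceneNode*> GetVisibleNodes(const Camera&) { return {}; }
};
struct IRenderer {
    virtual ~IRenderer() {}
    virtual void DrawNode(SceneNode& node) = 0;
};

// Only the manager knows both parties; the SG and the Renderer
// never reference each other.
class SceneManager {
public:
    SceneManager(SceneGraph& graph, IRenderer& renderer)
        : m_graph(graph), m_renderer(renderer) {}

    void RenderFrame() {
        // Ask the graph for what's visible, feed the survivors to the renderer.
        for (SceneNode* node : m_graph.GetVisibleNodes(m_camera))
            m_renderer.DrawNode(*node);
    }
private:
    SceneGraph& m_graph;
    IRenderer&  m_renderer;
    Camera      m_camera;
};
```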

Visiting may or may not be a good idea. You should look carefully into performance issues that may arise. Think about temporal optimizations the SG can do that the renderer cannot (being ignorant of the actual data). And this is just off the top of my head; I'm sure we can find other issues with all the alternatives. Maybe some sequence diagrams would be in order.

Quote:Original post by will75
Ideally the data structure will never know about the subsystems, and the subsystems will depend only on the data structure (ignoring each other).


Keep in mind that being very generic also keeps you away from specific problem-domain optimizations. Eberly gives a nice example in his book 3D Game Engine Architecture. He contemplates why a self-made vector class may be better than std::vector, and he does have some interesting and convincing points there.
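The gist of his argument, sketched (this is not Eberly's actual code):

```cpp
// A fixed-size vector lives on the stack, costs no allocation, and the
// compiler can unroll everything; a general std::vector<float> of size 3
// carries a heap block plus pointer/capacity bookkeeping.
struct Vector3 {
    float x, y, z;

    Vector3 operator+(const Vector3& v) const { return { x + v.x, y + v.y, z + v.z }; }
    float Dot(const Vector3& v) const { return x * v.x + y * v.y + z * v.z; }
};
// sizeof(Vector3) == 12 bytes, no heap traffic, trivially copyable.
```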

Quote:Original post by will75
I agree completely, but I would add that if abstraction is good, orthogonality (as implemented in the STL, BGL and other generic libraries) is even better: it adds an even higher degree of abstraction and true separation of concerns, and it generally also minimizes code.


One thing I've learned from many years of designing large-scale systems: just try not to pursue any one concept too fanatically. Don't throw yourself at a good concept (like abstraction) so deeply that you lose yourself in the details. Beautiful code is really a nice thing, but the bottom line is that it should work :-) If your generalization costs you in performance, memory, etc., it may just not cut it.

Quote:Original post by will75
Maybe my solution will turn out to be overkill, but after all this is not a real project; I'm not going to create a new Unreal Engine or some sort of killer application. It's just a way to experiment and learn.


By all means, this is a good, sane approach :-) You will no doubt learn from the experience.

Eldad
Dubito, Cogito ergo sum.
I remember I tried that some years ago. I ended up with the insight that wrapping the APIs would cost me more than it's worth. If the project is Windows-only, DirectX is fine. If not, OpenGL is a portable API that doesn't need to be wrapped; every graphics card comes with a good OpenGL driver now. Console hardware may be a different story, as it might not have OpenGL support, but IIRC the only console without OpenGL is the Xbox. So it might be easier to code an OpenGL version first and an Xbox version later on. Of course this doesn't mean scattering API-dependent code all over the project, but I question the necessity of strict wrapping in a lot of projects.

Quote:Original post by will75
One thing that still concerns me is shaders: (..) The only solution I can think of is creating an abstraction of GPU shaders: a material system that procedurally generates low-level GPU shader code, completely hiding the process from the user and instead exposing a high-level modular interface (and possibly also a graphical editor).

I'm trying to work with something similar at the moment... effectively, the gfx module is expected to provide certain 'effects' -- either texture-based or procedurally generated maps for colour, diffuse, bump, luminosity, glossiness, reflection, whatever. These effects are basic layers which, combined in user-defined order, form complete materials that are then applied to geometry. Exactly how these effects are generated is left to the specific implementation of the rendering module. In the end it's not very different from how 3D packages allow the artist to define the exact appearance of their creations... and as such hopefully more intuitive for content creators.
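In rough code, the layer idea might look like this (a sketch; all names hypothetical):

```cpp
#include <memory>
#include <string>
#include <vector>

// Each effect is a layer; a material is an ordered stack of them.
// How a layer is realised (texture fetch, procedural noise, ...) is
// up to the specific rendering module.
struct IEffectLayer {
    virtual ~IEffectLayer() {}
    virtual std::string Channel() const = 0; // "diffuse", "bump", "gloss", ...
};

struct TextureLayer : IEffectLayer {
    std::string channel, file;
    TextureLayer(std::string c, std::string f)
        : channel(std::move(c)), file(std::move(f)) {}
    std::string Channel() const override { return channel; }
};

struct Material {
    // Order matters: layers combine top to bottom, as in a 3D package.
    std::vector<std::unique_ptr<IEffectLayer>> layers;
};
```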
