

This topic is now archived and is closed to further replies.


Encapsulating the different systems in an engine, and the accompanying problems.

This topic is 5382 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

So, I've "successfully" isolated the graphics engine from the rest of the game engine in my game. I have abstracted all of the functions of D3D that my game needs, and added an extra model-loading interface. Now, however, I am struck by the lack of flexibility my game has in terms of graphics: I can't change the color of models after I load them, I can't change the texture, I can't draw a line on the screen without loading a model for it, and I can't draw a user interface.

My idea had been to separate the gameplay from the graphics, so I could change one without changing the other. I was also planning to do the same for sound, input, network, etc. The problem I'm running into is that gameplay is so intrinsically mixed with those different subsystems, particularly sound and graphics, that it seems impossible to separate them and still get a good result.

I *could* add a huge rack of accessor functions: changeColor(modelID), drawRectangle(coords, color), addDopplerEffect(soundID), etc. But it seems like by the time I finish adding all the accessors I need, the different components will basically be specialized for my game anyway, and I might as well have just smashed them all together so I don't have to bother following virtual function pointers.

Do you see my problem? Has anyone else here struggled with this and overcome it in an elegant and fast way? Thanks for any advice... I'm scratching my head here.

Riley

First you have to decide what you want from the different subsystems. You should gather all the objects (entities) that are used in your system; then everything becomes obvious. For example, if you want to change the color or texture of your object, then you should separate the two entities, so you will have something like:

class CBody
{
    CSurface *Surface; // the object that represents the texture
    CMesh *MeshData;   // the geometry
};

If you want to change the texture:

MyObject.AssignSurface(/* the newly generated surface */);

I think you got the idea (I hope).

Yeah, that makes a lot of sense, and is sort of what I've settled on doing. I was just hoping that maybe there was some less-specific way to do it, because you lose generality when you make functions like AddSurface, etc. Thanks for your help!

Original post by rileyriley
I was just hoping that maybe there was some less-specific way to do it, because you lose generality when you make functions like AddSurface, etc.

Going for genericity is not always the best idea anyway. How many people scrap their first engine to write another, then do it again, and yet again? All of that effort they went through to make it as generic as possible was wasted. After working on this project, you'll very likely have new insights when it is finished and will consider a different way of structuring things. Each iteration adds to your knowledge pool, sure. But don't try to support what you aren't making yet. Get the first project completed first.

That being said, it is possible to maintain a generic design and still provide the functionality you need for this first project. You have to get your head around the whole rendering process and how it is handled by D3D and/or OpenGL. When you understand it completely, then you can structure a rendering system to do what you want.

Changing the color or texture of a model is not tied into the game at all. It is very much a part of the renderer. You can encapsulate render states to handle this. The game just tells the renderer to change the render state at a high level, then the renderer handles the details at a lower level. Why do you need to load a model to draw a line? Why can't you draw your user interface? All of this can be handled with setting projection matrices or changing render states.

So it sounds to me like the design you have so far is not generic at all. Most likely, you've encapsulated complex processes into a limited number of methods/functions. You've put a high level abstract interface directly on top of the low level API. This is not a bad thing in and of itself, but it certainly restricts how flexible you can be.

The key to achieving flexibility lies in the knowledge of how 3D APIs operate. The first step is to make an API abstraction layer. Model the functionality of D3D, but don't encapsulate complex processes into a limited number of method calls. For example, don't make a renderGlowingSphere method that sets five render states, or a loadModel method that sets up the vertex buffers and sends the texture to the card. At this level, you should be providing methods to set a single render state at a time, and separate methods for loading a model, creating a vertex buffer, and sending a texture to the card. In the ideal case, this abstraction layer could be implemented for D3D, OpenGL, a software renderer, PS2, GameCube, or anything else. If you are restricting yourself to Windows and D3D then you don't need a generic abstraction layer so much, but you still may want a thin wrapper layer here.

The abstraction layer could be used itself by your game, but you may find it useful to write one more layer on top of it. The abstraction layer is generic, the new higher-level layer is specific to a game style (FPS, RTS, etc...). This is where you load models, create the vertex buffers, and send the texture to the card in one call, or set 5 render states in a call to renderGlowingSphere. But you still want to give the game access to the abstraction layer, so that if you do want to use some functionality you didn't put in the higher-level it will be available to you.

See the difference here? By just coding your high-level interface on top of D3D, you are restricting the capabilities of your engine to what you implement. For your game to do more than that, you will have to access D3D directly from the game. The concept of using the abstraction layer is exactly the same (the high level is still restricted to what you implement and the game will still need to access the abstraction layer to do more), but it frees you from being tied to a specific API/platform. Plus, without the abstraction layer all of your code is tied up in one high-level interface, so if you decide to start over with the next project, it will feel like rewriting the whole engine. With an abstraction layer, it feels different and it *is* different (provided the abstraction layer does everything you want it to).

[edited by - aldacron on January 24, 2004 1:17:24 AM]

Thanks for your reply aldacron, it really opened up my thought-process. I think you are right that I have been focusing too much on making things general - it didn't occur to me that more generality might not be what I want. Stupid brainwashing CS classes..

I see the benefit of the intermediate abstraction layer you describe, but I don't think I'm going to write one. My reasoning is this:

1 (minor) - I don't want to spend more time on it than I have to

2 (major) - it seems to me that writing an abstraction layer that is just a wrapper, function-for-function, does not make sense in my situation. I am one developer working on my own, and I have no REAL plans to port this to different engines... I just thought it would be neat if it was portable. I see the benefit in other situations, and I can see how it would be nicer to have that abstraction layer if I re-write this at some point, but as a project I really am just using as a proof-of-concept for myself, I think it's a little over the top.

I really appreciate how much time you must have put into that reply - I'm sure a lot of other people will too. I feel like I've read what you wrote before, but it never quite got to me. Thanks!

Aldacron has a very valid point; when I started out on Manta-X, I was aiming to make a totally general game engine that I could do anything with. After two completely failed attempts (e.g. the generalisation led to eventual inflexibility or overcomplexity), I decided to start again with a new paradigm. In the latest Manta-X, I'm putting all the general code, stuff I *know* I'll reuse, into the engine library; the game logic, and increasingly the drawing code, is kept in a separate DLL. By 'increasingly the drawing code' I mean that after abstracting the render interface, I initially threw the logic that passes the triangles to the render interface into the engine base. I found that whilst many render tasks can be generic, there are a few that are obviously game specific. How I am now tackling this is to have UserSceneInit/UserSceneRender/UserSceneEnd methods in my game interface that are called at relevant places in the generic code.

I'm finding that while maybe not 100% perfect, it does help things and save rewriting the entire rendering logic. Things are a bit messy, but it seems to be serving its purpose.
