How to Separate the Rendering Code from the Game Code


I've been writing games as a hobby for about 25 years now, and in my day job I'm a lead C++ developer writing realtime, performance-critical, highly multithreaded code. However, there is one thing I've always struggled with: how to separate the rendering code from the rest of the game code. Lots of games books suggest that you should do it, but they don't really explain how.


For example, let's imagine you have a Command & Conquer style RTS game. Internally you might have classes representing the terrain, objects on the terrain, units (player, remote and/or AI), and bullets/rockets/lasers etc.


These classes contain all the information relating to the object. For a player unit, for example, you might have the unit's position, speed, health, waypoint list etc. You may also have rendering data such as models/meshes, textures etc.


In this simple model (which has worked fine for me for many years) you can have standard methods on the objects such as Update() and Draw(). It all works great.


The problem is, what if I want to port my DirectX game onto another platform that uses, say, OpenGL? My rendering code is so embedded in the game code that porting would be a major job.


The books seem to suggest that you should have separate classes for your game objects and your rendering objects. So, for example, you might have a Unit class that supports an Update() method, and a separate UnitRenderer class that supports a Draw() method. Whilst this makes sense and would indeed make porting much easier, the problem is that the UnitRenderer would need to know so much about the internals of the Unit class that Unit would be forced to expose methods, attributes and data that we otherwise wouldn't want to expose. In other words, we might have a nice clean interface on the Unit class that encapsulates the internal implementation and is sufficient for the rest of the game, but the need to expose internal information to the UnitRenderer means that the nice clean Unit class becomes a mess, leaking its implementation details to anything that can access it.
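One common way to soften this coupling (just a sketch of the idea, not something from a specific book, and all names below are illustrative) is for Unit to publish a small, read-only snapshot of exactly the data rendering needs. The renderer then depends on the snapshot type, not on Unit's internals:

```cpp
#include <cassert>

// Hypothetical sketch: instead of exposing Unit's internals wholesale,
// Unit publishes a small value-type snapshot that the renderer consumes.
struct UnitRenderState {
    float x, y, z;      // world position
    int   modelId;      // handle into the renderer's asset store
    float healthRatio;  // e.g. for drawing a health bar
};

class Unit {
public:
    void Update(float dt) { x_ += speed_ * dt; }

    // The only rendering-facing method: a copy of just what Draw() needs.
    UnitRenderState GetRenderState() const {
        return { x_, y_, z_, modelId_, health_ / maxHealth_ };
    }

private:
    float x_ = 0, y_ = 0, z_ = 0, speed_ = 1.0f;
    float health_ = 50.0f, maxHealth_ = 100.0f;
    int   modelId_ = 7;
};

class UnitRenderer {
public:
    // Depends only on the snapshot, never on Unit itself.
    void Draw(const UnitRenderState& s) { (void)s; /* API-specific draw calls */ }
};
```

The snapshot is cheap to copy, keeps Unit's interface clean for the rest of the game, and means a DirectX and an OpenGL UnitRenderer can both consume the same data.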


Plus of course there are performance implications of doing this.


How do people typically separate their game code from their rendering code?


Thanks in advance


Edited by BenS1


Thanks both for your replies,


GK, I'm actually reading Game Engine Architecture 2nd Edition right now, but I haven't got to that part of the book yet. 


Previously I'd read the otherwise excellent Game Coding Complete, but this was the one area of that book I found confusing.





I've just read the two articles on Component Entity Systems. They're pretty interesting, and I can see how the memory layout is efficient from a cache point of view, but I'm not sure how well it would work in my case.


Let's imagine an RTS game with potentially thousands of units in play at any one time. You might use a quadtree to partition the space and update only the units in view every frame; objects just outside of view are updated every other frame (some on odd frames and some on even frames to balance the load), and units even further away are updated even less frequently. I'm not sure how this fits in with the CES idea.
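The staggered-update scheme described above can be sketched independently of any CES details. Tiers, counters and function names here are made up purely for illustration:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of staggered updates: units are bucketed by distance
// tier, and each tier is updated every 1, 2, or 4 frames. Offsetting by
// index spreads tier-1/tier-2 work across odd and even frames.
struct Unit {
    int tier;         // 0 = in view, 1 = near, 2 = far
    int updates = 0;  // stand-in for real per-update work
};

bool ShouldUpdate(const Unit& u, unsigned frame, unsigned index) {
    switch (u.tier) {
        case 0:  return true;                      // every frame
        case 1:  return (frame + index) % 2 == 0;  // every other frame, load-balanced
        default: return (frame + index) % 4 == 0;  // every fourth frame
    }
}

void Tick(std::vector<Unit>& units, unsigned frame) {
    for (std::size_t i = 0; i < units.size(); ++i)
        if (ShouldUpdate(units[i], frame, static_cast<unsigned>(i)))
            ++units[i].updates;  // would call the real Update() here
}
```

In a CES, ShouldUpdate() would simply become a filter the relevant system applies while iterating its component array, so the two ideas are not mutually exclusive.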


Also, imagine I have a world that can potentially support a theoretical 100,000 entities (maybe 2,000 units plus loads of bullets, rockets, missiles and other objects). In reality most of the time you'd only have an average of 1,000 entities, but in the heat of a massive battle this could increase significantly. Having your systems walk all 100,000 entities every time, when most of them will be empty, seems inefficient.


One final comment: for worlds with a large number of potential entities, it may be better for createEntity and destroyEntity to maintain a free list of entity indices, so that creating a new entity can be done in constant time rather than potentially having to walk up to 100,000 items in the array.
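A minimal sketch of that free-list idea (illustrative names, not taken from any particular ECS implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: destroyed entity slots are remembered on a free list, so
// creating an entity is O(1) instead of scanning the whole array.
struct Entity { bool alive = false; };

class EntityPool {
public:
    explicit EntityPool(std::size_t capacity) : entities_(capacity) {}

    int Create() {
        int id;
        if (!freeList_.empty()) {                 // reuse a destroyed slot
            id = freeList_.back();
            freeList_.pop_back();
        } else if (next_ < entities_.size()) {    // otherwise take a fresh slot
            id = static_cast<int>(next_++);
        } else {
            return -1;                            // pool exhausted
        }
        entities_[id].alive = true;
        return id;
    }

    void Destroy(int id) {
        entities_[id].alive = false;
        freeList_.push_back(id);                  // remember the slot for reuse
    }

private:
    std::vector<Entity> entities_;
    std::vector<int>    freeList_;
    std::size_t         next_ = 0;
};
```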


Interesting concept though, and it definitely looks like a good idea for many applications, but it potentially doesn't work for everything; then again, nothing ever does.


Thanks again



The main selling point of ECS is code reuse and separation of concerns. By breaking up your typical classes into small self-contained components you can easily build more complex behaviours by combining them.

Cache coherency is a nice side effect that *can* occur if you design your systems right.


As for the mostly empty list, you could simply sort your objects so that unused ones get appended to the end of the list. Then you can trivially stop iterating once you hit the first unused element.
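One common way to implement that packing is "swap-and-pop": on removal, overwrite the dead slot with the last live element, so live objects stay contiguous at the front and iteration never visits unused slots. A sketch, with illustrative names:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of keeping live objects packed at the front of the array.
// Removal is O(1) but does not preserve ordering.
struct Obj { int value; };

class PackedArray {
public:
    void Add(int v) { objs_.push_back({v}); }

    void Remove(std::size_t i) {
        objs_[i] = objs_.back();  // overwrite with the last live element
        objs_.pop_back();         // shrink; only live elements remain
    }

    std::size_t LiveCount() const { return objs_.size(); }

    int Sum() const {             // iteration touches only live elements
        int s = 0;
        for (const Obj& o : objs_) s += o.value;
        return s;
    }

private:
    std::vector<Obj> objs_;
};
```

The trade-off is that stored indices are invalidated by removal, which is why ECS designs usually pair this with a handle or ID indirection table.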


Just to kind of echo Promit, that is how I separate the two.


A render lib can essentially be as simple as this:


	function render(material, geometry)
	{
	    // rendering done here
	}
Then your game/app is just a system whereby you organise a series of calls to the renderer.render function by whichever means suits your purpose.


The renderer knows nothing of how you structure your game/app; it just sits there waiting for things to draw.

What you draw, the order you draw it, where you draw it... all this information is constructed outside of the renderer.


(Obviously this is just high-level speak and things can get a little more complex, but in general you can make things work like this, and it is very flexible.)
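That boundary might be sketched in C++ like this (types and names are illustrative; CountingRenderer stands in for a real D3D or OpenGL backend):

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of the interface described above: the renderer only
// sees (material, geometry) pairs; the game never touches a graphics API.
struct Material { std::string shader; };
struct Geometry { int vertexCount; };

class Renderer {                       // the abstract backend boundary
public:
    virtual ~Renderer() = default;
    virtual void Render(const Material& m, const Geometry& g) = 0;
};

// An interchangeable backend; a D3D or OpenGL renderer would subclass
// Renderer the same way, and the game code would not change at all.
class CountingRenderer : public Renderer {
public:
    void Render(const Material&, const Geometry& g) override {
        ++drawCalls;
        vertices += g.vertexCount;
    }
    int drawCalls = 0, vertices = 0;
};

void DrawScene(Renderer& r) {          // game-side code, backend-agnostic
    Material mat{"terrain"};
    r.Render(mat, Geometry{300});
    r.Render(mat, Geometry{60});
}
```

Swapping backends is then just constructing a different Renderer subclass and handing it to the same game code.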


For instance, I wrote a quick space shooter game using a bitmap-blitting renderer, and once it was done I was able to swap the renderer out like-for-like for a GPU-based one. It took next to no time at all because there were so few connections between the two systems.


Thanks again all, I think I'm starting to get it now. 


I think the heart of my problem is that I'm not encapsulating the details of the renderer, so for example my Terrain class doesn't have a simple Material and Geometry but instead has much lower level information, such as:

	// The Vertex and Index Buffers describing a tile
	ID3D11Buffer*					m_TerrainVB;
	ID3D11Buffer*					m_TerrainIB;

	ID3D11Buffer*					m_SeaVB;
	ID3D11Buffer*					m_SeaIB;

	ID3D11Buffer*					m_BoxVB;
	ID3D11Buffer*					m_BoxIB;

	ID3D11VertexShader*				m_TerrainVS;
	ID3D11PixelShader*				m_TerrainPS;

	ID3D11VertexShader*				m_SeaVS;
	ID3D11PixelShader*				m_SeaPS;

	ID3D11VertexShader*				m_BoxVS;
	ID3D11PixelShader*				m_BoxPS;

	// The input layout for terrain vertices
	ID3D11InputLayout*				m_TerrainInputLayout;
	ID3D11InputLayout*				m_SeaInputLayout;

	// Constant Buffers
	ID3D11Buffer*					m_PerFrameCB;		// Per frame constant buffer
	ID3D11Buffer*					m_TerrainObjectCB;		// Per object constant buffer

I guess I need to encapsulate these better, so that I just have higher-level Material and Geometry classes, and then I can use the above advice to just send the Material and Geometry to the renderer.
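That encapsulation might look something like this sketch (names are illustrative, and ApiBuffer/ApiShader stand in for the ID3D11 types so the snippet is self-contained):

```cpp
#include <cassert>

// Stand-ins for the API types; in the real code these would be
// ID3D11Buffer, ID3D11VertexShader, etc.
struct ApiBuffer {};
struct ApiShader {};

class Geometry {
public:
    void Bind() const { /* set vertex/index buffers on the device context */ }
    int IndexCount() const { return indexCount_; }
private:
    ApiBuffer* vertexBuffer_ = nullptr;  // was m_TerrainVB
    ApiBuffer* indexBuffer_  = nullptr;  // was m_TerrainIB
    int        indexCount_   = 0;
};

class Material {
public:
    void Bind() const { /* set shaders, input layout, constant buffers */ }
private:
    ApiShader* vertexShader_ = nullptr;  // was m_TerrainVS
    ApiShader* pixelShader_  = nullptr;  // was m_TerrainPS
};

// Terrain then shrinks to two members and a Draw() that just hands
// (material, geometry) to whatever renderer is in use.
class Terrain {
public:
    void Draw(/* Renderer& r */) { /* r.Render(material_, geometry_); */ }
private:
    Material material_;
    Geometry geometry_;
};
```

With all the ID3D11* members hidden behind Geometry and Material, only those two classes need rewriting for an OpenGL port.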


It's all starting to make sense, and should help to simplify and tidy up the code too (I hate messy code, so I've not been happy with this for a while now).





I use an abstraction layer over D3D called the Z3D API / library. All graphics calls in the game code call the Z3D API, and it translates Z3D API calls into DirectX calls. Since it's an abstraction layer, it could be modified to translate Z3D API calls into OpenGL (or any other 2D/3D graphics package), for example.


The game places drawing information for an object into a drawinfo struct:


	struct Zdrawinfo
	{
	    int type,       // 0=mesh, 1=model, 2=2d billboard, 3=3d billboard
	        meshID,     // for models: modelID
	        texID;      // for models: aniID
	    float sx,sy,sz,x,y,z,rx,ry,rz,range;   // range is rng to camera. not currently used.
	};
This is how the game passes information to the graphics engine. The game then calls a graphics engine function such as:

	Zd(&drawinfo)      // draws a mesh, model, sprite, 2d billboard, or 3d billboard
	Zdraw(&drawinfo)   // draws a mesh using eulers
	Zdraw2(&drawinfo)  // draws a mesh using a world matrix

These in turn add the info to the render queue. Assets are owned by the Z3D library and referenced via ID numbers (array indices). Drawing a sprite gets translated into the appropriate D3DX_Sprite call. When it's time to render and present, the info in the render queue gets turned into calls to draw indexed primitive, so the game code contains no DirectX calls at all.


Drawing text gets translated into D3DX_Sprite calls that use a custom font for speed. tx2() also transforms to the current screen resolution on the fly!

There are also Z3D API calls for using (slower) D3DX_Fonts.


In OO terms, you'd have a renderer object with methods like add_to_queue, draw_immediate (draw indexed primitive, bypassing the render queue), showscene (process the render queue and present), etc.

And you'd probably use some sort of struct to pass all the drawing parameters from the game to the renderer object.
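That queue-based renderer object might be sketched like this (DrawInfo, Submit and ShowScene are illustrative names, not the actual Z3D API):

```cpp
#include <cassert>
#include <vector>

// Sketch: the game fills a struct of draw parameters, queues it, and the
// renderer flushes the queue once per frame.
struct DrawInfo {
    int meshId = 0;
    float x = 0, y = 0, z = 0;
};

class Renderer {
public:
    // Equivalent to add_to_queue: record the request, draw nothing yet.
    void Submit(const DrawInfo& d) { queue_.push_back(d); }

    // Equivalent to showscene: process the queue and "present".
    // Returns the number of queued draws for illustration.
    int ShowScene() {
        int drawn = static_cast<int>(queue_.size());
        // ...sort by state, issue draw-indexed-primitive calls here...
        queue_.clear();
        return drawn;
    }

private:
    std::vector<DrawInfo> queue_;
};
```

Deferring the actual draws to ShowScene() is also what makes state-sorting (by shader, texture, etc.) cheap, since the whole frame's work is visible at once.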
