How to Separate the Rendering Code from the Game Code

12 comments, last by Norman Barrows 9 years, 6 months ago

I've been writing games as a hobby for about 25 years now, and in my day job I'm a lead C++ developer writing real-time, performance-critical, highly multithreaded code. However, there is one thing I've always struggled with: how to separate the rendering code from the rest of the game code. Lots of games books suggest that you should do it, but they don't really explain how.

For example, let's imagine you have a Command & Conquer style RTS game. Internally you might have classes representing the terrain, objects on the terrain, units (player, remote and/or AI) and bullets/rockets/lasers etc.

These classes contain all the information relating to the object; for a player unit, for example, you might have the unit's position, speed, health, waypoint list etc. You may also have rendering data such as models/meshes, textures etc.

In this simple model (which has worked fine for me for many years) you can have standard methods on the objects such as Update() and Draw(). It all works great.

The problem is, what if I want to port my DirectX game onto another platform that uses, say, OpenGL? My rendering code is so embedded in the game code that porting it would be a major job.

The books seem to suggest that you should have separate classes for your game objects and your rendering objects. So, for example, you might have a Unit class that supports an Update method, and a separate UnitRenderer class that supports a Draw method. Whilst this makes sense and would indeed make porting much easier, the problem is that the UnitRenderer needs to know so much about the internals of the Unit class that Unit is forced to expose methods, attributes and data that we otherwise wouldn't want to expose. In other words, we might have a nice clean interface on the Unit class that encapsulates the internal implementation and is sufficient for the rest of the game, but the need to expose internal information to the UnitRenderer means that the nice clean Unit class becomes a mess and leaks its implementation details to anything that can access it.
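For illustration, a rough sketch of that split and the accessors it forces onto Unit (all of the names and members here are made up):


// Hypothetical sketch of the split the books suggest - not from any particular book.
struct Vector3 { float x = 0, y = 0, z = 0; };

class Unit
{
public:
    void Update(float dt);

    // Accessors that exist purely so the renderer can do its job,
    // leaking what used to be internal state:
    const Vector3& GetPosition() const       { return m_position; }
    float          GetFacing() const         { return m_facing; }
    int            GetAnimationFrame() const { return m_animFrame; }

private:
    Vector3 m_position;
    float   m_facing    = 0.0f;
    int     m_animFrame = 0;
    // speed, health, waypoint list, ...
};

class UnitRenderer
{
public:
    void Draw(const Unit& unit);   // needs read access to most of Unit's internals
};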

Plus of course there are performance implications of doing this.

How do people typically separate their game code from their rendering code?

Thanks in advance

Ben



The problem is, what if I want to port my DirectX game onto another platform that uses, say, OpenGL? My rendering code is so embedded in the game code that porting it would be a major job.

The keywords to a possible solution here are interfaces and aggregation.

The way I'm handling this in my engine is that I have a lot of interfaces for my graphics classes, which are then implemented for the actual renderer under the hood:


// this is used in the game code
class ITexture
{
public:
    virtual ~ITexture() = default;

    // just an example method
    virtual void Resize(int width, int height) = 0;
};

// these are two possible implementations

class TextureDX11 : public ITexture
{
public:
    void Resize(int width, int height) override;

private:

    ID3D11Texture2D* m_pTexture;
};

class TextureGL4 : public ITexture
{
public:
    void Resize(int width, int height) override;

private:

    GLuint m_texture;
};

Whenever you create a texture, you do it via a factory, which creates either the DX11 or the GL4 variant. All code uses the ITexture interface, so it doesn't matter which backend you actually use. Note that using an actual interface might not even be the optimal solution due to virtual-function overhead; you could just use typedefs etc. instead.
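A minimal sketch of what such a factory might look like, assuming the ITexture implementations above (the class and function names are just examples):


#include <memory>

// Sketch of a backend-selecting factory - names are illustrative only.
enum class RenderBackend { DX11, GL4 };

class GraphicsFactory
{
public:
    explicit GraphicsFactory(RenderBackend backend) : m_backend(backend) {}

    std::unique_ptr<ITexture> CreateTexture()
    {
        switch (m_backend)
        {
        case RenderBackend::DX11: return std::make_unique<TextureDX11>();
        case RenderBackend::GL4:  return std::make_unique<TextureGL4>();
        }
        return nullptr;
    }

private:
    RenderBackend m_backend;
};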

Secondly, with such a system (or any similar one) you can have your game objects determine only what is being rendered, not how. For example, you can have your game object use a mesh object and texture objects:


class GameObject
{
	void Draw(Renderer& renderer)
	{
		for(auto pTexture : m_pTextures)
		{
			renderer.BindTexture(pTexture);
		}
		
		renderer.BindMesh(*m_pMesh);
		
		renderer.DrawInstance();
	}
	
private:
	
	IMesh* m_pMesh;
	std::vector<ITexture*> m_pTextures;
};

This way, you only have very limited rendering information in your game objects. You can separate things further if you want or feel it is warranted.

I would suggest this book:

http://amzn.com/1466560010

Game Engine Architecture, Second Edition by Jason Gregory (August 15, 2014)

Before I read this book (the first edition) I would have suggested doing it as shown above. But after reading it, I would do things differently.

First, you can use composition to create game objects. If it is drawn, add a Renderable component. If it moves, add a Movable component. If it hits stuff, add a collision component.

You can read more about it here:

http://www.gamedev.net/page/resources/_/technical/game-programming/understanding-component-entity-systems-r3013
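As a rough sketch of the composition idea (the component names here are illustrative, not taken from the book or the article):


#include <memory>

// Illustrative component types - placeholders, not from the book or the article.
using MeshId    = int;
using TextureId = int;
struct Vector2D { float x = 0, y = 0; };

struct RenderableComponent { MeshId mesh; TextureId texture; };
struct MovableComponent    { Vector2D velocity; };
struct CollisionComponent  { float radius; };

struct Entity
{
    Vector2D position;

    // Optional aspects: a null pointer means the entity doesn't have that behaviour.
    std::unique_ptr<RenderableComponent> renderable;
    std::unique_ptr<MovableComponent>    movable;
    std::unique_ptr<CollisionComponent>  collision;
};

A data-oriented component-entity system would typically keep these components in per-system arrays rather than inside the entity itself, but the composition idea is the same.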

The reason you don't want game objects to draw themselves is that you lose the ability to speed things up.

In the above example, each object does this:

Bind textures
Bind mesh
Draw

If all objects draw themselves, you have no way to sort and batch the rendering. For example, if you have a Renderer object that contains all the Renderable objects, you could do this:



bind texture (once)
bind mesh (once)
for each renderable that uses these
  render

State changes like these tend to take a lot of time. If you have objects draw themselves, you limit your options when things slow down. For a small game it won't matter, but when things get larger you may need some extra speed.
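For instance, a renderer that owns all the renderables could sort them by resource before drawing; a sketch (the Renderable struct and integer IDs are illustrative):


#include <algorithm>
#include <tuple>
#include <vector>

// Sketch only - resource IDs and the Renderable struct are placeholders.
struct Renderable
{
    int textureId;
    int meshId;
    // transform, per-instance data, ...
};

void RenderAll(std::vector<Renderable>& renderables)
{
    // Sort so that renderables sharing a texture/mesh end up adjacent.
    std::sort(renderables.begin(), renderables.end(),
              [](const Renderable& a, const Renderable& b)
              { return std::tie(a.textureId, a.meshId) < std::tie(b.textureId, b.meshId); });

    int lastTexture = -1;
    int lastMesh    = -1;
    for (const Renderable& r : renderables)
    {
        if (r.textureId != lastTexture) { /* bind texture once per group */ lastTexture = r.textureId; }
        if (r.meshId    != lastMesh)    { /* bind mesh once per group */    lastMesh    = r.meshId;    }
        // issue the draw call for this instance
    }
}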

I think, therefore I am. I think? - "George Carlin"
My Website: Indie Game Programming

My Twitter: https://twitter.com/indieprogram

My Book: http://amzn.com/1305076532

Thanks both for your replies,

GK, I'm actually reading Game Engine Architecture 2nd Edition right now, but I haven't got to that part of the book yet.

Previously I'd read the otherwise excellent Game Coding Complete book, but this was the one area that I found confusing in it.

Thanks

Ben

I've just read the two articles on Component Entity Systems, and they're pretty interesting; I can see how the memory layout is efficient from a cache point of view, but I'm not sure how well it would work in my case.

Let's imagine an RTS game with potentially thousands of units in play at any one time. You might use a quadtree to partition the space and only update the units in view every frame, while objects just outside of view are updated every other frame (some on the odd frames and some on the even frames to balance the load) and units even further away are updated even less frequently. I'm not sure how this fits in with the CES idea.
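For concreteness, the kind of staggering I mean looks roughly like this (the Unit struct and distance thresholds are just placeholders):


#include <cstddef>
#include <vector>

// Sketch of the staggered-update idea described above; types and constants are placeholders.
struct Unit { float distToCamera = 0.0f; void Update() { /* simulation step */ } };

void UpdateUnits(std::vector<Unit>& units, unsigned frame)
{
    const float NEAR_RANGE = 100.0f;   // "in view" - placeholder value
    const float MID_RANGE  = 300.0f;   // "just outside of view" - placeholder value

    for (std::size_t i = 0; i < units.size(); ++i)
    {
        Unit& u = units[i];
        if (u.distToCamera < NEAR_RANGE)          u.Update();   // every frame
        else if (u.distToCamera < MID_RANGE)
        {
            if ((frame + i) % 2 == 0)             u.Update();   // alternate frames, load-balanced
        }
        else if ((frame + i) % 8 == 0)            u.Update();   // far away: much less often
    }
}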

Also, imagine I have a world that can theoretically support 100,000 entities (maybe 2,000 units plus loads of bullets, rockets, missiles and other objects). In reality you'd only have an average of 1,000 entities most of the time, but in the heat of a massive battle this could increase significantly. Having your systems walk all 100,000 entities every time, when most of them will be empty, seems inefficient.

One final comment: for worlds with a large number of potential entities it may be better for createEntity and destroyEntity to maintain a free list of entity indices, so that creating a new entity can be done in constant time rather than potentially having to walk up to 100,000 items in the array.
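Something along these lines is what I have in mind (a sketch only, reusing the createEntity/destroyEntity names from the article):


#include <cstdint>
#include <vector>

// Sketch: O(1) entity create/destroy using a free list of array indices.
constexpr std::size_t MAX_ENTITIES = 100000;

struct EntityManager
{
    std::vector<uint32_t> freeIndices;   // indices not currently in use

    EntityManager()
    {
        freeIndices.reserve(MAX_ENTITIES);
        for (uint32_t i = 0; i < MAX_ENTITIES; ++i)
            freeIndices.push_back(static_cast<uint32_t>(MAX_ENTITIES) - 1 - i);   // index 0 handed out first
    }

    uint32_t createEntity()              // constant time: pop a free slot
    {
        uint32_t id = freeIndices.back();
        freeIndices.pop_back();
        return id;
    }

    void destroyEntity(uint32_t id)      // constant time: return the slot
    {
        freeIndices.push_back(id);
    }
};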

It's an interesting concept and definitely looks like a good idea for many applications, but it potentially doesn't work for everything; then again, nothing ever does.

Thanks again

Ben

The main selling point of ECS is code reuse and separation of concerns. By breaking up your typical classes into small self-contained components you can easily build more complex behaviours by combining them.

Cache coherency is a nice side effect that *can* occur if you design your systems right.

As for the mostly empty list, you could simply sort your objects so that unused ones get appended to the end of the list. Then you can trivially stop iterating once you hit the first unused element.
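A rough sketch of that (Entity stands in for whatever your entity data is):


#include <cstddef>
#include <utility>
#include <vector>

struct Entity { /* whatever the entity holds */ };

// Sketch: keep live entities packed at the front of the array.
struct EntityArray
{
    std::vector<Entity> entities;   // slots [0, liveCount) are live, the rest are unused
    std::size_t liveCount = 0;

    void destroy(std::size_t index)
    {
        // Swap the dead entity with the last live one and shrink the live range.
        std::swap(entities[index], entities[liveCount - 1]);
        --liveCount;
    }

    template <typename Fn>
    void forEachLive(Fn fn)
    {
        for (std::size_t i = 0; i < liveCount; ++i)
            fn(entities[i]);        // iteration never touches the unused tail
    }
};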

You guys are all overcomplicating things drastically. I've found that the best solutions are also the simplest. The aforementioned interfaces are appealing to OO addicts but ultimately serve very little practical purpose, and tend to create more strong couplings and implementation assumptions than you initially realize. (Mike Acton would call them "typical C++ bullshit".) Entity component systems have their uses, but ultimately if you don't understand how to solve this problem in the first place, then ECS just adds another mess on top and serves to obscure the failure to tackle the actual problem.

Let's take a simple 2D RTS. What data is needed by just the renderer to compose the main scene?


struct Tile
{
    struct Texture* tileTexture;
    Vector2D position; //in whatever coordinates you want
};

struct GameUnit
{
    struct Sprite* sprite;
    int animationFrame;
    Vector2D position;
    float rotation;
    //add properties like color tint, alpha, etc as needed
};

That is the only communication you need between the renderer and the game. The game's render function doesn't call any low-level rendering functions; it assembles arrays of these structures. The renderer's job is to accept these structures and put them on screen. That's it. You may wish to elaborate on classes like Sprite and Texture, or leave them opaque and configure them through some kind of overarching Graphics object. But this is, as written, very simple to express in pure C, and it's not necessary to add much more complexity even for modern shader-based 3D games.

Notice a key difference in how I'm thinking about the problem and how you're thinking about it: the game code does not contain the information necessary to render the scene; holding it there is what creates coupling between components. Instead, the game code builds the information necessary to render the scene and passes it on. The change in perspective may seem slight, but it's fundamental to how modern graphics systems are designed. If you do this right, then the renderer can completely recreate a frame from its input data with no outside interference, including after releasing the entirety of the game objects' memory, or without ever creating them in the first place. And with some judicious save/load code, you can capture, replay, and debug graphics frames in a tool that has NO game code in it.
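In code, that boundary might look something like this (a sketch building on the structs above; World and the function bodies are placeholders):


#include <vector>

// World stands in for whatever the simulation owns - terrain, units, projectiles, ...
struct World { /* ... */ };

// Everything the renderer needs for one frame, and nothing else.
struct FrameData
{
    std::vector<Tile>     tiles;
    std::vector<GameUnit> units;
};

// Game side: translate simulation state into render data. No draw calls here.
FrameData BuildFrame(const World& world)
{
    FrameData frame;
    // walk the visible objects and append Tile / GameUnit entries
    return frame;
}

// Renderer side: knows nothing about waypoints, health or AI.
void RenderFrame(const FrameData& frame)
{
    // bind, batch and draw purely from the data in 'frame'
}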

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

Just to kind of echo Promit, that is how I separate the two.

A render lib can essentially be as simple as this:

function render(material, geometry)
{
    // rendering done here
}

Then your game/app is just a system whereby you organise a series of calls to the renderer.render function by whichever means suits your purpose.

The renderer knows nothing of how you structure your game/app; it just sits there waiting for things to draw.

What you draw, the order you draw it, where you draw it... all this information is constructed outside of the renderer.

(Obviously this is just high-level speak and things can get a little more complex, but in general you can make things work like this and it is very flexible.)

For instance, I wrote a quick space shooter game using a bitmap-blitting renderer, and once it was done I was able to swap the renderer out like for like for a GPU-based one. It took next to no time at all because there were so few connections between the two systems.

Thanks again all, I think I'm starting to get it now.

I think the heart of my problem is that I'm not encapsulating the details of the renderer. For example, my Terrain class doesn't have a simple Material and Geometry, but instead holds much lower-level information, such as:


	// The Vertex and Index Buffers describing a tile
	ID3D11Buffer*					m_TerrainVB;
	ID3D11Buffer*					m_TerrainIB;

	ID3D11Buffer*					m_SeaVB;
	ID3D11Buffer*					m_SeaIB;

	ID3D11Buffer*					m_BoxVB;
	ID3D11Buffer*					m_BoxIB;

	ID3D11VertexShader*				m_TerrainVS;
	ID3D11PixelShader*				m_TerrainPS;

	ID3D11VertexShader*				m_SeaVS;
	ID3D11PixelShader*				m_SeaPS;

	ID3D11VertexShader*				m_BoxVS;
	ID3D11PixelShader*				m_BoxPS;

	// The input layout for terrain vertices
	ID3D11InputLayout*				m_TerrainInputLayout;
	ID3D11InputLayout*				m_SeaInputLayout;


	// Constant Buffers
	ID3D11Buffer*					m_PerFrameCB;		// Per frame constant buffer
	ID3D11Buffer*					m_TerrainObjectCB;		// Per object constant buffer

I guess I need to encapsulate these better, so that I just have higher-level Material and Geometry classes; then I can use the above advice and just send the Material and Geometry to the renderer.
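Something like this is what I'm picturing as a first step (a rough sketch only; exactly what Material and Geometry should own is still up for grabs):


#include <d3d11.h>

// Sketch: hide the raw D3D11 objects behind higher-level classes owned by the renderer side.
class Geometry
{
public:
    // creation/upload of buffers handled inside the renderer module
private:
    ID3D11Buffer*      m_vertexBuffer = nullptr;
    ID3D11Buffer*      m_indexBuffer  = nullptr;
    ID3D11InputLayout* m_inputLayout  = nullptr;
    UINT               m_indexCount   = 0;
};

class Material
{
public:
    // shader parameters set through renderer-agnostic methods
private:
    ID3D11VertexShader* m_vertexShader   = nullptr;
    ID3D11PixelShader*  m_pixelShader    = nullptr;
    ID3D11Buffer*       m_constantBuffer = nullptr;
};

// The game-facing Terrain class would then only hold these:
class Terrain
{
private:
    Geometry m_terrainGeometry, m_seaGeometry, m_boxGeometry;
    Material m_terrainMaterial, m_seaMaterial, m_boxMaterial;
};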

It's all starting to make sense, and it should help to simplify and tidy up the code too (I hate messy code, so I've not been happy with this for a while now).

Thanks

Ben

I use an abstraction layer over D3D called the Z3D API / library. All graphics calls in the game code go through the Z3D API, which translates them into DirectX calls. Since it's an abstraction layer, it could be modified to translate Z3D API calls to OpenGL (or any other 2D/3D graphics package), for example.

The game places drawing information for an object into a drawinfo struct:

struct Zdrawinfo
{
    int type,       // 0=mesh, 1=model, 2=2d billboard, 3=3d billboard
        meshID,     // for models: modelID
        texID,      // for models: aniID
        alphatest, cull, clamp, materialID, rad, cliprng, data[5];
    float sx, sy, sz, x, y, z, rx, ry, rz, range;   // range is rng to camera. not currently used.
    D3DXMATRIX mWorld;
};
This is how the game passes information to the graphics engine.

The game then calls a graphics engine function such as:

Zd(&drawinfo)      // draws a mesh, model, sprite, 2d billboard, or 3d billboard
Zdraw(&drawinfo)   // draws a mesh using eulers
Zdraw2(&drawinfo)  // draws a mesh using a world matrix

These in turn add the info to the render queue. Assets are owned by the Z3D library and referenced via ID numbers (array indices).

Drawing a sprite looks like:

Zdrawsprite(texID, x, y, x_scale, y_scale)

This gets translated into the appropriate D3DX_Sprite call. When it's time to render and present, the info in the render queue gets turned into calls to draw indexed primitive, so the game code contains no DirectX calls at all.

Drawing text looks like:

tx2(x, y, "string")

and gets translated into D3DX_Sprite calls that use a custom font for speed. tx2() also transforms to the current screen resolution on the fly!

There are also Z3D API calls for using (slower) D3DX_Fonts.

In OO terms, you'd have a renderer object with methods like add_to_queue, draw_immediate (draw indexed primitive, bypassing the render queue), showscene (process the render queue and present), etc., and you'd probably use some sort of struct to pass all the drawing parameters from the game to the renderer object.
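A rough sketch of that shape (illustrative only; this is not the actual Z3D API):


#include <vector>

// Illustrative queued renderer - not the actual Z3D API.
struct DrawInfo
{
    int   meshID    = 0;
    int   textureID = 0;
    float x = 0, y = 0, z = 0;      // position
    float rx = 0, ry = 0, rz = 0;   // rotation (eulers)
    // scale, material, clip range, etc.
};

class Renderer
{
public:
    void AddToQueue(const DrawInfo& info) { m_queue.push_back(info); }

    void DrawImmediate(const DrawInfo& info) { /* draw now, bypassing the queue */ }

    void ShowScene()    // process the render queue and present
    {
        // sort m_queue by state, issue draw-indexed-primitive calls, present the frame
        m_queue.clear();
    }

private:
    std::vector<DrawInfo> m_queue;
};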

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

