Game engine architecture

Started by
14 comments, last by EarthBanana 9 years, 9 months ago

I want to start implementing a game engine, but I don't have a clear understanding of how it should be designed. I've read a lot on the topic, and it seems the best approach is to create standalone, generic components that are then combined to create various objects. Would it look something like this?


class CMaterial
{
    // texture, diffuse color...
};

class CMesh
{
    CMaterial mMaterial;
    // vertex buffer
    // index buffer
    // each mesh has exactly 1 material when using assimp (I think)
};

class CModel
{
    // constant buffer per model
    std::vector<CMesh> mMeshes;
};

Then one would create a generic model for a humanoid:


class CHumanoid
{
CModel mModel;
CPhysics mPhysics;
CInputHandler mInputHandler;
};

Which would include draw, create, and destroy functions.

You see, I'm not sure how I could implement the different components that I could use to make up some sort of complete structure, such as a human (it needs to be rendered, physics needs to affect it, it needs to be able to move, wield weapons, ...).

Then it would need a rendering class that uses DX11 to draw it. Would I make this a static/global class, or something else?

I'm very lost here as to how one should create a game engine from generic components that handle themselves and are as decoupled as possible.


Firstly, get ready to refactor... a lot.

Secondly, keep things simple to begin with.

Many opt for an entity component system as it is flexible; you could take a look at Unity as an example of one. Each game object has a number of components, such as meshes, transforms, renderers, etc.

It might give you some ideas anyway.
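To make the idea concrete, here is a minimal sketch of a Unity-style game object holding a bag of components. All names (`GameObject`, `Component`, `Transform`) are illustrative, not a real engine API:

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Base class every component derives from; the game object updates
// each of its components once per frame.
struct Component {
    virtual ~Component() = default;
    virtual void Update(float dt) = 0;
};

// Example component: a position in the world.
struct Transform : Component {
    float x = 0, y = 0, z = 0;
    void Update(float) override {}  // nothing to do per frame
};

struct GameObject {
    std::string name;
    std::vector<std::unique_ptr<Component>> components;

    // Construct a component in place and return a typed pointer to it.
    template <typename T, typename... Args>
    T* AddComponent(Args&&... args) {
        components.push_back(std::make_unique<T>(std::forward<Args>(args)...));
        return static_cast<T*>(components.back().get());
    }

    void Update(float dt) {
        for (auto& c : components) c->Update(dt);
    }
};
```

A renderer, physics body, or input component would just be further `Component` subclasses added the same way.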

But start small! Don't try and solve every problem at once or you will struggle!

The idea of an engine is generally to run through a set of objects created by the user, update them based on some rules (such as physics), then draw them to the screen.

Get that working first, then flesh it out as you go along exposing more control to the user as you go. Let them tweak positions, velocities, material textures... simple things like that to start... get that working and you will have a nice starting point!
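That update-then-draw loop can be sketched in a few lines. `CObject`, the gravity rule, and `Draw()` here are placeholders standing in for real engine systems:

```cpp
#include <vector>

// Placeholder game object: a vertical position, a velocity, and a
// flag recording that it was drawn this frame.
struct CObject {
    float y = 0, velY = 0;
    bool  drawn = false;
    void Draw() { drawn = true; }  // would issue real render calls
};

// One frame of the engine: update every object by some rule
// (here, simple gravity), then draw every object.
void RunFrame(std::vector<CObject>& objects, float dt) {
    const float gravity = -9.8f;
    for (CObject& o : objects) {   // update pass
        o.velY += gravity * dt;
        o.y    += o.velY * dt;
    }
    for (CObject& o : objects)     // draw pass
        o.Draw();
}
```

Everything else (components, culling, state wrappers) hangs off this basic skeleton.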

Good luck

I would suggest starting off by just creating a renderer, take a basic program that uses DX11 and slowly refactor from there.

As a start, I would follow something like the tutorials on rastertek.com; you will see how parts of the functionality slowly get refactored into their own classes for use by higher-level systems. After a while following those tutorials you may find that you want to do things slightly differently, maybe put different functionality in different classes, and so on, but as a starter just do something simple like that to get the gist of how an engine is slowly built up over time.

It will be a very tedious task, but if you set yourself really short term goals you can achieve quite a lot :)

Thanks for the answers; I am quite patient and fine with taking it slow. I want to do things right from the start, so that when the project grows, adding new features will be as clean and simple as possible. Currently, though, I don't even know how I would begin implementing a graphics engine. Do I create a wrapper around DX11 to ensure I don't make any unnecessary calls (like setting a state that is already set)? Where would I begin? That is, I'm not sure what the bare-minimum things are that I'd need to implement for a graphics engine.

A single mesh may have multiple materials/shaders on it and thus may require multiple renders, each with a different set of vertices and materials, in order to render the whole thing.
So a mesh does not contain index/vertex buffers directly. A “mesh subset” has the material, shader, index buffer, and vertex buffer, and a mesh is an array of mesh subsets.
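A sketch of that mesh/sub-mesh split, with `Material` and the buffer members standing in for real GPU resources:

```cpp
#include <cstdint>
#include <vector>

// Stand-in for a real material (textures, shaders, render states).
struct Material { int textureId = -1; };

// One draw call's worth of data: its own material plus its own
// vertex and index data.
struct SubMesh {
    Material              material;
    std::vector<float>    vertices;  // would be a GPU vertex buffer
    std::vector<uint32_t> indices;   // would be a GPU index buffer
};

// A mesh is just an array of sub-meshes; it holds no buffers itself.
struct Mesh {
    std::vector<SubMesh> subMeshes;
};

// Rendering the mesh means one draw call per sub-mesh, switching
// material between them.
int CountDrawCalls(const Mesh& m) {
    return static_cast<int>(m.subMeshes.size());
}
```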


I won’t talk about physics etc. because that is the distant future, but every mesh should have an AABB and a bounding sphere in order to assist in frustum culling, octree insertion, etc., and these can later be used in physics when the time comes.
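Computing the AABB is a one-pass min/max over the mesh's vertex positions. A minimal sketch, assuming positions are packed as x,y,z triples:

```cpp
#include <algorithm>
#include <vector>

// Axis-aligned bounding box: per-axis minimum and maximum.
struct AABB { float min[3], max[3]; };

// Scan all vertex positions once, tracking the min and max on each
// axis. `positions` holds packed x,y,z triples.
AABB ComputeAABB(const std::vector<float>& positions) {
    AABB box{{0, 0, 0}, {0, 0, 0}};
    if (positions.size() < 3) return box;
    for (int a = 0; a < 3; ++a)
        box.min[a] = box.max[a] = positions[a];   // seed with vertex 0
    for (size_t i = 3; i + 2 < positions.size(); i += 3) {
        for (int a = 0; a < 3; ++a) {
            box.min[a] = std::min(box.min[a], positions[i + a]);
            box.max[a] = std::max(box.max[a], positions[i + a]);
        }
    }
    return box;
}
```

A bounding sphere can be derived the same way (e.g. centered on the box, radius to the farthest vertex), and both feed directly into frustum culling later.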


A model can only ever be loaded once and then what gets added to the game world is a model instance, which has a pointer to the model.
Every model instance has mesh instances that point to the model’s meshes, which is where it finds its vertex buffers etc. for rendering. Since each model instance can have individual properties change such as skin color, full-body alpha, lighting on/off, shadow-casting on/off, etc., these properties get copied to every model instance—in the case of rendering states and materials, the source model only acts as a default layout.
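A sketch of that model/model-instance split. The shared `Model` here is reduced to a mesh count and a default color; in a real engine it would hold the meshes, buffers, and default materials:

```cpp
#include <memory>
#include <vector>

// Loaded once and shared by every instance; acts only as the
// default layout for per-instance properties.
struct Model {
    int   meshCount = 0;
    float defaultColor[3] = {1, 1, 1};  // default "material" value
};

// What actually gets added to the game world. It points at the
// shared model for geometry, but copies mutable properties so each
// instance can change them independently.
struct ModelInstance {
    const Model* model;    // shared, never owned
    float        color[3]; // per-instance copy of the default

    explicit ModelInstance(const Model* m) : model(m) {
        for (int i = 0; i < 3; ++i) color[i] = m->defaultColor[i];
    }
};
```

Changing one instance's color (skin tone, alpha, etc.) leaves every other instance, and the source model, untouched.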


do I create a wrapper around DX11 to ensure I don't make any unnecessary calls (like setting a state that is already set)?

Yes.


Input is a huge subject and I have discussed it in many posts, which you can easily find on your own:
https://www.google.co.jp/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=site%3Agamedev.net%20Spiro%20Input
Of extreme note is: http://www.gamedev.net/topic/650640-is-using-static-variables-for-input-engine-evil/#entry5113267
Characters do not handle inputs.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Since I'm considering using assimp, if I understood it correctly, then: a model is a car; the car is broken up into meshes (each body panel, if they're different; the glass of the windows), where each mesh has a different material (that includes a texture, diffuse color, blending states, etc.). Quote from the official page: "Each aiMesh refers to one material by its index in the array." and "A mesh represents a geometry or model with a single material." If there are better alternatives to assimp, please point me in the right direction. I have tried implementing an .obj loader, but it was very slow :(.

I have never written a wrapper, so would this be the way:

pseudocode:

class CDirectX11States
{
public:
    void SetInputLayout(ID3D11InputLayout* inputLayout)
    {
        if (mCurrentInputLayout != inputLayout)
        {
            mCurrentInputLayout = inputLayout;
            mDevCon->IASetInputLayout(inputLayout);
        }
    }

private:
    ID3D11InputLayout*   mCurrentInputLayout = nullptr;
    ID3D11DeviceContext* mDevCon = nullptr;
};

class CDirectX11
{
    // device
    // device context
    // swap chain
    // etc.
};

Thanks for everyones patience!

Firstly, get ready to refactor... a lot.

I just had to comment on that... It is true, "mahn"... You will prototype something simple. Then you will expand on that. Then you will see a better way to do things or organise your code. You will get ideas, come up with smarter things. You will learn new things. Every step of the way you will refactor over and over and over again. Sometimes I feel that refactoring is the biggest part of coding, lol, but slowly you will get there, and things will get better. You will never get away from the refactoring thing, though. :)

PS: It is actually a good feeling having completed a tiny (or a massive) amount of refactoring and seeing that your product works, hopefully with fewer bugs and better quality overall. Although refactoring in itself doesn't seem to move your code forward, having a well-written code base will actually increase your efficiency in the long run too.

I tend to have problems knowing exactly how to structure my code right away, and that is why I end up doing a lot of refactoring. That is probably why we have the design phase in software development in the first place. Anyway, lots of people are using Scrum nowadays, and in my experience that creates just as much refactoring...

"A mesh represents a geometry or model with a single material."

While it is true that below the level of a “mesh” there are no common or official terms, a single mesh can definitely have more than 1 material and render call.
I outlined this on my site, where I referred to the sub-mesh level as “render part” for lack of a better term at the time (since then I have found “sub-mesh” to be better).
[Image: LSModelBreakdown1 — a single mesh broken down into red, green, and blue sub-meshes, each with its own material]

In such a simple case as this you could possibly draw the red and green parts in one draw call (though it would be non-trivial in any generic sense), but the fact is that the red and green areas are 2 different materials on the same mesh and it is meant to illustrate that in a more advanced case you will have no choice but to draw the red, switch materials and shaders, and then draw the green.

If Assimp says that the red, green, and blue are 3 different meshes, it’s wrong.
If it says there are 2 meshes but a mesh can have only 1 material, it’s wrong.

I have never used Assimp and don’t know its limitations, but if it says a mesh can have only 1 material then that sounds like a pretty bad limitation.


would this be the way:

Yes, but that’s the active way.
A better way is the last-minute way.
Basic Graphics Implementation
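The last-minute idea can be sketched without any DX11 types: `Set*` calls only record what the caller wants, and the single real API call happens just before a draw is issued. The integer "layout" and the call counter below are stand-ins for real DX11 objects and calls:

```cpp
// Deferred ("last-minute") state setting: Set calls are cheap
// bookkeeping; the real device call is made once, right before a
// draw, and only if the state actually changed.
struct LastMinuteStates {
    int pendingLayout = -1;  // what the caller most recently asked for
    int currentLayout = -1;  // what the device actually has bound
    int apiCalls      = 0;   // counts simulated real API calls

    void SetInputLayout(int layout) { pendingLayout = layout; }

    void ApplyBeforeDraw() {
        if (pendingLayout != currentLayout) {
            currentLayout = pendingLayout;
            ++apiCalls;      // the one real IASetInputLayout-style call
        }
    }
};
```

The advantage over the active wrapper: any amount of redundant `Set` churn between two draw calls costs nothing, because only the final value is ever applied.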


L. Spiro


I like how Unity handles mesh and material binding. Meshes don't carry material information; they just specify slots for how many materials the mesh supports. There is a separate class called a MeshRenderer that takes a mesh and an array of materials as input to render the geometry. This way, it is easy to specify an object with the same geometric shape but with different materials.
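A rough sketch of that split, with `Mesh`, `Material`, and `MeshRenderer` as illustrative stand-ins rather than Unity's actual classes:

```cpp
#include <string>
#include <vector>

// Stand-in material; in practice this would hold textures/shaders.
struct Material { std::string name; };

// The mesh knows only how many material slots it exposes; it does
// not own any materials itself.
struct Mesh {
    int materialSlots = 1;
};

// Pairs a shared mesh with a concrete material array at render time.
struct MeshRenderer {
    const Mesh*           mesh;
    std::vector<Material> materials;  // one per slot

    bool CanRender() const {
        return mesh &&
               static_cast<int>(materials.size()) == mesh->materialSlots;
    }
};
```

Two renderers can share one mesh but supply different material arrays, which is exactly the "same shape, different materials" case described above.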

EDIT: Thought of another piece of advice

do I create a wrapper around dx11 to ensure I dont make any unnecessary calls


Another reason to make a wrapper is to allow you to change graphics APIs. Your wrapper can supply a consistent interface to your graphics engine, while different subclasses of the wrapper implement OpenGL, Mantle, DX12, Metal, or even game-console graphics APIs.
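A minimal sketch of that interface/subclass shape; the methods and backend classes here are placeholders, not real DX11/OpenGL calls:

```cpp
#include <memory>
#include <string>

// Consistent interface the engine codes against.
struct IRenderer {
    virtual ~IRenderer() = default;
    virtual std::string Name() const = 0;
    virtual void Clear() = 0;
};

struct DX11Renderer : IRenderer {
    std::string Name() const override { return "DX11"; }
    void Clear() override { /* would call ClearRenderTargetView */ }
};

struct GLRenderer : IRenderer {
    std::string Name() const override { return "OpenGL"; }
    void Clear() override { /* would call glClear */ }
};

// The engine only ever sees IRenderer, so backends can be swapped
// at startup (or compile time) without touching engine code.
std::unique_ptr<IRenderer> MakeRenderer(bool useGL) {
    if (useGL) return std::make_unique<GLRenderer>();
    return std::make_unique<DX11Renderer>();
}
```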
My current game project Platform RPG

If Assimp says that the red, green, and blue are 3 different meshes, it’s wrong.
If it says there are 2 meshes but a mesh can have only 1 material, it’s wrong.

I have never used Assimp and don’t know its limitations, but if it says a mesh can have only 1 material then that sounds like a pretty bad limitation.

I have used assimp for my loader, and I also have a similar approach to L. Spiro's, in that I have Mesh and Submesh classes where only the Submesh part holds the rendering data.

It is purely the naming differences that make things confusing: an assimp aiScene is analogous to a Mesh, and an aiMesh is essentially the Submesh. There are a few other things to work around, but as a whole assimp is great for model loading, and I wouldn't use anything else at the moment.

This topic is closed to new replies.
