Game Engine Layout


Out of curiosity, I have my renderer implementation creating meshes/textures, which are then attached to the scene-objects' 'models' and 'materials'. If the renderer should only create the buffers, what kind of module should be responsible for knitting them together into textures (and then materials) and meshes?

I don't see a direct reason for the additional abstraction, hence my question.


They aren’t responsible for just the buffer parts.

The CVertexBuffer class also has an interface for filling in the vertices, normals, etc. It has to be very dynamic and the implementation is non-trivial, since it must be as flexible as possible for all possible use-cases, so in this topic, since it is his first engine, I simplified my explanation.

For textures though a small distinction should be made.

A .PNG image could be loaded from disk, scaled up to the nearest power-of-2, and saved as a .BMP or your own custom format, all while never having been a texture seen by the GPU.

So in that case don’t confuse a CImage class with a CTexture2D (these are just example class names).

There are in fact so many things you can do to images specifically without them ever being used as textures or seen by the GPU that in my engine there is an entire library just for image loading and manipulating (LSImageLib).

My DXT compression tool links to LSImageLib.lib and not to LSGraphicsLib.lib, even though it is clearly meant only for making textures specifically for the GPU.

Just because images often get used by the GPU, do not conceptually stumble over the different responsibilities between images and textures. A CImage has the task of loading and manipulating image data, possibly converting it to DXT etc., but has no idea what a GPU or texture is. A CTexture2D’s job is specifically to take that CImage’s texel information and send that to the GPU. It is a bridge between image data and GPU texture data.

That is how textures get knit together and vertex buffers do a little more than just handle the OpenGL ID given to them.
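
As a rough illustration of that split (a minimal sketch; the class layouts below are placeholders, not the actual LSImageLib/LSGraphicsLib classes):

#include <cstdint>
#include <vector>

// Loads and manipulates image data; has no idea what a GPU or a texture is.
class CImage {
public:
    bool LoadPng( const char *path )  { (void)path; return true; }   // PNG decoding would go here.
    void ScaleToNearestPo2()          { /* pure CPU-side resampling, never touches the GPU */ }
    bool SaveBmp( const char *path ) const { (void)path; return true; }
    const uint8_t *Texels() const { return m_texels.data(); }
    uint32_t Width() const  { return m_width; }
    uint32_t Height() const { return m_height; }
private:
    std::vector<uint8_t> m_texels;    // Raw (or DXT-compressed) texel data.
    uint32_t m_width = 0, m_height = 0;
};

// The bridge: takes a CImage's texel data and sends it to the GPU.
class CTexture2D {
public:
    bool CreateFromImage( const CImage &image ) {
        (void)image;
        // glTexImage2D()/equivalent would go here; only this class knows the graphics API.
        return true;
    }
private:
    unsigned int m_apiId = 0;          // GLuint/API texture handle in real code.
};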

As far as meshes go, a “mesh” doesn’t even necessarily relate to everything that can be rendered and as such has no business being a concept of which the renderer is aware.

GeoClipmap terrain for example is not a mesh, nor is volumetric fog.

In fact, at most meshes are related to only a handful of different types of renderables.

Each type of renderable knows how it itself is supposed to be rendered, thus they are above the graphics library/module. If in a hierarchy of engine modules/libraries the graphics module/library is higher than models, terrain, etc., it is exactly backwards.

I am not sure which abstraction you are referring to.

The graphics library exposes primitive types such as vertex buffers and textures that can be created at will by anything.

Above that models, terrain, and volumetric fog create the resources they need from those exposed by the graphics library by themselves.

Above that a system for orchestrating an entire scene’s rendering, including frustum culling, render-target swapping, sorting of objects, post-processing, etc., needs to exist.

This higher-level system requires cooperation between other sub-systems.

The only library in your engine that actually knows about every other library and class is the main engine library itself. Due to its all-encompassing scope (and indeed its very job is to bring all of the smaller sub-systems together and make them work together) it is the only place where a scene manager can exist, since that is necessarily a high-level and broad-scoped set of knowledge and responsibilities.

All objects in the scene are held within the scene manager. It’s the only thing that understands that you have a camera and from it you want to render all the objects, perform post-processing, and end up with a final result that could either be sent to the display or further modified by another sub-system (what happens after it does its own job of rendering to the requested back buffer is none of its concern; it only knows that it needs to orchestrate the rendering process).

People are usually not confused about how logical updates happen, only about how rendering happens, but they are exactly the same. In logical updates, the scene manager uses its all-empowering knowledge to gather generic physics data on each object in the scene and hands that data off to the physics engine, which will do its math, update positions, and perform a few callbacks on objects to handle events, updated positions, etc.

The physics engine doesn’t know what a model or terrain is. It only knows about primitive types such as AABB’s, vectors, etc.

Nothing changes when we move over to rendering.

The scene manager does exactly the same thing. It culls out-of-view objects with the help of another class (again, a process that only requires 6 planes for a camera frustum, an octree, and AABB’s—nothing specific to any type of scene object) and gathers relevant information on the remaining objects for rendering.
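
For illustration, the plane-versus-AABB test that culling boils down to might look like this (a minimal sketch; the struct layouts are assumptions):

#include <cmath>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 normal; float d; };          // normal . p + d >= 0 means "on the inside".
struct Aabb  { Vec3 center, halfExtents; };

// True if the box lies entirely behind one of the 6 frustum planes (so it can be culled).
bool AabbOutsideFrustum( const Aabb &box, const Plane planes[6] )
{
    for ( int i = 0; i < 6; ++i ) {
        const Plane &p = planes[i];
        // Projected "radius" of the box onto the plane normal.
        float r = box.halfExtents.x * std::fabs( p.normal.x )
                + box.halfExtents.y * std::fabs( p.normal.y )
                + box.halfExtents.z * std::fabs( p.normal.z );
        float dist = p.normal.x * box.center.x
                   + p.normal.y * box.center.y
                   + p.normal.z * box.center.z + p.d;
        if ( dist < -r ) {
            return true;    // Entirely behind this plane: cull it.
        }
    }
    return false;           // Touches or intersects the frustum: keep it.
}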

The graphics library can assist with rendering by sorting the objects, but that does not mean it knows what a model or terrain is.

It’s exactly like a physics library. You gather from the models and terrain the small part of information necessary for the graphics library to sort objects in a render queue. That means an array of texture ID’s, a shader ID, and depth. It doesn’t pull any strings. It is very low-level and it works with only abstract data.
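
For illustration, a render-queue entry along those lines might look like this (a sketch; the exact fields and sort order are assumptions, but note that nothing in it is specific to models or terrain):

#include <algorithm>
#include <cstdint>
#include <vector>

struct RenderQueueItem {
    uint32_t shaderId;
    uint32_t textureIds[8];   // Textures bound for this draw.
    float    depth;           // Distance from the camera.
    void    *userData;        // Opaque pointer back to whatever submitted it.
};

// The graphics library can sort this without knowing what a model or terrain is.
void SortOpaqueQueue( std::vector<RenderQueueItem> &queue )
{
    std::sort( queue.begin(), queue.end(),
        []( const RenderQueueItem &a, const RenderQueueItem &b ) {
            if ( a.shaderId != b.shaderId ) { return a.shaderId < b.shaderId; }             // Minimize shader swaps.
            if ( a.textureIds[0] != b.textureIds[0] ) { return a.textureIds[0] < b.textureIds[0]; } // Then texture swaps.
            return a.depth < b.depth;                                                        // Then front-to-back.
        } );
}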

Why?

How could Bullet Physics be implemented into so many projects and games if it knew what terrain and models were?

The physics library and graphics library are at exactly the same level in the overall engine hierarchy: Below models, terrain, and almost everything else.

L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid



Norman Barrows, on 15 Jul 2013 - 09:51 AM, said:
what if the ID is the offset into an array of shared texture pointers?

It isn’t bad on performance, but what is the purpose?
This could be just a matter of personal taste so I won’t really say it is wrong etc., but I personally would not go this way, mostly for reasons a bit higher-level than this.

My preference is not to have a global texture manager in the first place. Textures used on models are almost never the same as those used on buildings and terrain, so giving each of these systems its own manager to detect and avoid redundant texture loads makes sense, and it improves performance (searches take place on smaller lists) while offering a bit of flexibility.
Each manager is also tuned specifically for that purpose. The model library’s texture manager has just the basic set of features for preventing the same texture from being loaded into memory twice. The building library uses image packages similar to .WAD files (a collection of all the images for that building in one file) and its texture manager is designed to work with those specifically.
Finally, the terrain library’s texture manager is well equipped for streaming textures etc.
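
For illustration, the "basic" model-library manager amounts to little more than this (a sketch with assumed names; its only job is to stop the same file from being loaded twice):

#include <map>
#include <memory>
#include <string>

class CTexture2D { /* created via the graphics library */ };

class TextureManager {
public:
    std::shared_ptr<CTexture2D> GetTexture( const std::string &path ) {
        auto it = m_textures.find( path );
        if ( it != m_textures.end() ) {
            if ( auto existing = it->second.lock() ) {
                return existing;               // Already resident: share it.
            }
        }
        auto tex = LoadTextureFromDisk( path ); // First request (or it was released): load once.
        m_textures[path] = tex;
        return tex;
    }

private:
    std::shared_ptr<CTexture2D> LoadTextureFromDisk( const std::string &path ) {
        // Real code would decode the image and ask the graphics library for a GPU texture.
        (void)path;
        return std::make_shared<CTexture2D>();
    }
    // weak_ptr so textures are released once nothing references them any more.
    std::map<std::string, std::weak_ptr<CTexture2D>> m_textures;
};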


Ultimately that is why I prefer to just have a pointer to the texture itself. No limits that way.
But it isn’t a big deal if you want to use indices in an array.


L. Spiro

i tend to take a CSG approach to drawing things ( i used to use POVRAY to create game graphics before DirectX was invented, hence the CSG approach).. i have textures, meshes, models, and animations. models are simply a collection of meshes and textures. models can be static or animated. everything is interchangeable. you can use any mesh with any texture. any model can use any mesh and any texture. any model can use any animation designed for any model with a similar limb layout.

in the previous version of my current project, i still associated all textures with specific models. there i used a texture file name to texture ID (index of array of d3dxtextures) lookup at load time to avoid duplicate loads, similar to what you do for character textures. array indices were assigned in the order textures were loaded, and the model's texture IDs were set at load time. so in the model the texture was specified by filename, but in RAM it was specified as D3DXtexture[n], as in settex(tex[n]).
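
roughly like this (a sketch with assumed names; load_texture_file() stands in for the real D3DXCreateTextureFromFile()-based loader):

#include <cstring>

#define MAX_TEXTURES 300

struct TextureSlot {
    char  name[260];      // file name the texture was loaded from
    void *d3dtex;         // LPDIRECT3DTEXTURE9 (or similar) in real code
};

TextureSlot tex[MAX_TEXTURES];   // global pool, zero-initialized
int         numtex = 0;

void *load_texture_file( const char *name );   // implemented elsewhere; wraps the D3DX load call

// returns the texture ID (index into tex[]) for a file name,
// loading the file only the first time it is requested.
int get_texture_id( const char *name )
{
    for ( int i = 0; i < numtex; i++ ) {
        if ( std::strcmp( tex[i].name, name ) == 0 ) {
            return i;                           // already loaded: reuse the slot
        }
    }
    std::strncpy( tex[numtex].name, name, sizeof( tex[numtex].name ) - 1 );
    tex[numtex].d3dtex = load_texture_file( name );
    return numtex++;
}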

with the current version, i went to fully shared global resources. there's a pool of 300+ meshes and 300+ textures to work with to draw anything you want, or to make a model with. if you need a new mesh or texture, you just add it to the pool. then it will also be available for use for drawing other things or improving existing graphics. needless to say i can get a lot of reuse and mileage out of a given mesh or texture. as an example, the current project CAVEMAN 3.0 is a fps/rpg with a paleolithic setting. yet i do the entire game with just 2 rock meshes! <g>. reuse keeps the total meshes and textures low, so no paging or load screens, just one load of everything at game start, like it should be. now if we could just figure out how to eliminate graphics load times altogether...<g>

for a traditional non-CSG approach, splitting it based on data type makes sense. skinned meshes, memory images of static models, and streaming terrain chunks. 3 different types of data, 3 different texture managers (loaders).

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Out of curiosity, I have my renderer implementation creating meshes/textures, which are then attached to the scene-objects' 'models' and 'materials'. If the renderer should only create the buffers, what kind of module should be responsible for knitting them together into textures (and then materials) and meshes?

I don't see a direct reason for the additional abstraction, hence my question.

Basically people do this to separate different concerns in the engine and make things more modular by reducing the dependencies between them. In the long run this helps with the maintainability of the code base and the ability to swap things in and out based on the different platforms the game runs on. In general the concerns about loading a resource are:

  1. How does an asset get loaded from disk
  2. How is the asset data from disk transformed into something the engine can use (let's call the end result a resource)
  3. How are resources aggregated together

Most people will put #1 and #2 together, and some put all three together. At my job #1 and #2 are separate but #2 and #3 are sometimes put together.

For #1 we have an asset loading system that handles the platform specifics of loading things off of disk/disc/memory cards/etc. It also tracks if we already loaded the resource, how many references there are to the asset, etc. It does not know how to take the raw asset data and convert it to something use-able by the game.

For #2 there is the idea of a translator that gets called by the asset loading system that will get the raw asset data and translates/transforms the raw data into the actual item that was being requested (a texture, a mesh, a scene, etc). The translator can ask the graphics system to create the resource or could pass the data to a factory object that can then build the requested resource. The translators are separate from the asset loading system and are registered with it.

The factory system we use for scenes, materials, models, and meshes does both #2 and #3. Once they have the raw data given to them by the translator, the factories go about parsing the data and asking the graphics system to create the necessary resources (mainly buffers) or requesting the asset system to load the asset for them (such as textures and shaders), which handles concern #2. Then the factory calls setters on the appropriate class (a Mesh, Model, Material, or Scene class) to store the resources, handling #3.

The translator does not reference a factory directly but instead will create a class of the given type (a Scene for example) and will call a method on that class to load itself using the raw data passed in. The class internally uses the factory to do the work of parsing the data and everything else.
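
For illustration, the registration idea might be sketched like this (ITranslator, AssetSystem, and the method names are assumptions, not our actual interfaces):

#include <cstdint>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Turns raw asset bytes into the requested engine object (a texture, mesh, scene, ...).
class ITranslator {
public:
    virtual ~ITranslator() {}
    virtual std::shared_ptr<void> Translate( const std::vector<uint8_t> &rawData ) = 0;
};

class AssetSystem {
public:
    // Translators are registered with the asset system, not hard-wired into it.
    void RegisterTranslator( const std::string &assetType, std::shared_ptr<ITranslator> translator ) {
        m_translators[assetType] = std::move( translator );
    }

    std::shared_ptr<void> Load( const std::string &assetType, const std::string &name ) {
        std::vector<uint8_t> raw = ReadBytes( name );            // Platform-specific disk/disc/memory-card read.
        return m_translators.at( assetType )->Translate( raw );  // Hand the raw data to the registered translator.
    }

private:
    std::vector<uint8_t> ReadBytes( const std::string &name ) {
        (void)name;
        return {};   // Real code goes through the platform file layer and tracks reference counts.
    }
    std::map<std::string, std::shared_ptr<ITranslator>> m_translators;
};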

Here is an example flow for loading a material.

A scene will have in its binary data the material definitions it uses. So the scene factory will get the material chunk and ask the material factory to handle it. The material factory will parse the material data and figure out the names of the textures that the material is using, along with the various parameters that will be set on the shader. To load a texture, the material factory requests the asset system to load the texture by name and gets a handle to the asset in return. The material can block until the texture is loaded by checking the asset handle (a bad idea) or check it periodically until it is loaded (which we do).

In the meantime the asset loading system is getting the data and calls the texture translator. The texture translator will request the graphics system to create the texture (which returns an interface to a texture so that we can hide differences between different graphics APIs behind it) from the raw binary data and then puts a reference to the texture interface into the asset handle. This is the same handle that the material has a reference to.

When the material sees that the asset has been loaded, it gets the texture interface from the handle and adds it to its internal structure that holds references to the textures it needs.

Not all of our assets use factories. Textures and shaders are two examples where the translator will ask the graphics system to create the requested type instead of going through a factory. So the flow for loading these items is slightly different than the example. For example a render object that wants to use a particular shader will ask the asset loading system to load the shader. The render object will then periodically check if the shader is loaded by asking the asset handle. Once it is loaded and translated the render object will get the shader from the asset handle and use it. No factories involved in this case.
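
For illustration, the handle-polling pattern might be sketched like this (AssetHandle and its methods are assumed names):

#include <memory>

template <typename T>
class AssetHandle {
public:
    bool IsLoaded() const { return static_cast<bool>( m_resource ); }
    std::shared_ptr<T> Get() const { return m_resource; }
    // Called by the translator once it has built the resource.
    void SetResource( std::shared_ptr<T> resource ) { m_resource = std::move( resource ); }
private:
    std::shared_ptr<T> m_resource;
};

// A render object polling its shader handle once per frame might look like:
//
//     if ( !m_shader && m_shaderHandle->IsLoaded() ) {
//         m_shader = m_shaderHandle->Get();
//     }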

So that is one way of putting things together without having the graphics system put it all together and associate it with the scene objects. I hope this gives you a different perspective on how things can be loaded and associated together in a game engine.

Not saying this is the best or right way, just a different way. :)

in my case, things are divided up more like this:

there is currently a library which manages low-level image loading/saving stuff, and also things like DXTn encoding, ...

this was originally part of the renderer, but ended up being migrated into its own library, mostly because the ability to deal with a lot of this also turned out to be relevant to batch tools, and because the logic for one of my formats (my customized and extended JPEG variant) started getting a bit bulky.

another library, namely the "low-level renderer" mostly manages loading textures from disk into OpenGL textures (among other things, like dealing with things like fonts and similar, ...).

then there is the "high level renderer", which deals with materials ("shaders") as well as real shaders, 3D models, and world rendering.

internally, they are often called "shaders" partly as the system was originally partly influenced by the one in Quake3 (though more half-assed).

calling them "materials" makes more sense though, and creates less confusion with the use of vertex/fragment shaders, which the system also manages.

materials are kind of ugly, as partly they seem to belong more in the low-level renderer than in the high-level renderer.

also, the need to deal with them partly over on the server-end has led to a partially-redundant version of the system existing over in "common", which has also ended up with redundant copies of some of the 3D model loaders (as sometimes this is also relevant to the server-end).

this is partly annoying as currently the 2 versions of the material system use different material ID numbers, and it is necessary to call a function to map from one to another prior to binding the active material (lame...).

worse still, the material system has some hacking into dealing with parts of the custom-JPEG variants' layer system (in ways which expose format internals), and with making the whole video-texture thing work, which is basically an abstraction failure.

this would imply a better architecture being:

dedicated (renderer-independent) image library for low-level graphics stuff;

* and probably separating layer management from the specific codecs, and providing a clean API for managing multi-layer images, and also providing for I/P Frame images (it will also need to deal with video, and possibly animation *1). as-is, a lot of this is tied to the specific codecs. note: these are not standard codecs (*2).

common, which manages low-level materials and basic 3D model loading, ...

* as well as all the other stuff it does (console, cvars, voxel terrain, ...).

low-level renderer, which manages both textures, and probably also rendering materials, while still keeping textures and materials separate.

* sometimes one needs raw textures and not a full material.

high-level renderer, which deals with scene rendering, and provides the drawing logic for various kinds of models, ...

probably with some separation between the specific model file-formats and how they are drawn (as-is, there is some fair bit of overlap, better would probably be if a lot of format-specific stuff could be moved out of the renderer, say allowing the renderer to more concern itself with efficiently culling and drawing raw triangle meshes).

the uncertainty then is how to best separate game-related rendering concerns (world-structure, static vs skeletal models, ...) from the more general rendering process (major rendering passes, culling, drawing meshes, ...). as-is, all this is a bit of an ugly and tangled mess.

*1: basically, the whole issue of using real-time CPU-side layer composition, vs using multiple drawing passes on the GPU, vs rendering down the images offline and leaving them mostly as single-layer video frames (note: may still include separate layers for normal/bump/specular/luminance/... maps, which would be rendered down separately). some of this mostly came up during the initial (still early) development of a tool to handle 2D animation (vs the current strategy of doing most of this stuff manually frame-at-a-time in Paint.NET), where generally doing all this offline is currently the only really viable option.

*2: currently, I am still mostly still using my original expanded format (BTJ / "BGBTech JPEG"), mostly as my considered replacement "BTIC1" wasn't very good (speed was good, but size/quality was pretty bad, and the design was problematic, it was based on additional compression of DXTn). another codec was designed and partly implemented which I have called "BTIC2B", which is designed mostly to hopefully outperform BTJ regarding decode speed, but is otherwise (vaguely) "similar" to BTJ. some format tweaks were made, but I am not really sure it will actually be significantly faster.

more so, there are "bigger fish" as far as the profiler goes. ironically, my sound-mixing currently uses more time in the profiler than currently is used by video decoding (mostly dynamic reverb). as well as most of the time going into the renderer and voxel code, ... (stuff for culling, skeletal animation, checking for block-updates, ...).

Thanks for all the advice, I've come up with a second and hopefully better layout, though I still have a few questions:

For general models:

class VertexBuffer
{
    public:
        unsigned int VBO_ID;
        float *vertexes, *normals, *uvs;
        int* indexes;
        //I was thinking of moving this to a separate class, but a class just for this single variable feels like overdoing it. I don't think I'll be playing with the index order for models after the initial loading
};
class Material
{
    public:
        float diffuse[4], ambient[4], specular[4];
        unsigned int TEXTURE_ID;
};
//This enum will define which shader to use for a specific Mesh
enum MeshType
{
    Normal, Glass, Etc
};
class Mesh
{
    public:
        VertexBuffer* VBO;
        Material* material;
        MeshType meshType;
};
class Model
{
    public:
        Mesh* mesh;
        int meshCount;
        void Load(); //comment below
        void Draw(); //comment below
};

I'd like to make Models load and draw themselves, but without giving them direct access to the disk or the reader, and I can't figure out how. I'm thinking about which approach would be better to use when developing, something like this:

void GameState::Initialize()
{
    //For this to work, I'd have to pass a pointer to my resource module to the model somehow
    Model* playerModel = new Model(&resourceModule);
    playerModel->Load("assetName");
 
    //Another way is to use a Model Manager, who has a pointer to the resource and can receive the reading stream
    Model* playerModel = modelManager.NewModel("assetName");    
}

The same for Draw(), as suggested, but I'm not sure how it would work:

void GameState::Draw()
{
    vector<GameObjects*> onScreenObjects = gameScene.GetOnScreenObjectsSorted();
    for(int i = 0; i < onScreenObjects.size(); i++)
    {
        //For this to work, the objects must have a pointer to my graphic module and call the functions
        onScreenObjects[i]->Draw(&gameEngine.graphicModule);
 
        //Or I call it from the module itself
        gameEngine.graphicModule.Draw(onScreenObjects[i]);
    }
}

For the models to be able to draw themselves they have to "see" the graphic module somehow, but I'm not sure if it's okay to send the pointer for each Draw() call. The models would be tightly knotted to the module, which I think would be nasty to upgrade/evolve, as a single change in the graphic module interface could mean lots of rewriting everywhere.

I know since it's my first engine I expect this to happen quite a lot, so I'm trying to make it easier for when it does.

For the Engine & Game:

class GameEngine
{
    public:
        virtual void Initialize() = 0; //This will call all initialize functions on the modules that are required for running
        virtual void Destroy();
        //Functions to load VBOs, activate Textures, and other draw calls
        GraphicModule graphics;
        //Functions to read/write to disk and find the proper stream for an asset by name or ID in my resource format files
        ResourceModule resources;
        //Still working on this
        SoundModule sounds;
        //Will just receive events from the OS and store them for use later, responsible for managing inputs be it keyboard & mouse, touch, etc
        InputModule inputs;
        //other modules to be added as needed
 
        //Responsible for creating window and managing input events depending on the OS, and other OS related stuff such as getting time
        OSModule oSystem;
};
class MyEngine : public GameEngine
{
    //implementation here
};
class Game
{
    public:
        //Using this engine interface I hope to be able to develop my engine further as I add new features and such, and still keep the games I'm currently developing working correctly with it.
        GameEngine* gameEngine;
        //Using LSpiro's post about Time
        GameTime* gameTime;
        //Using LSpiro's blog post about managing game state, this will be the current gameState
        GameState* gameState;
        //This will read from the inputModule only the inputs that are linked to a Game Action, time stamped, and will keep them in a buffer to be read from the Update() method
        GameInput* gameInput;
};

I know it isn't very specific, but I just want to get the general layout done in a good way so I don't bang my head later thinking "Why did I make it this way?"

Again, many thanks for the advice; I feel the idea is much more robust and organized now.

For general models:

Much better than before, but a few major problems remain.
#1: Don’t use “unsigned int” for your VBO ID. OpenGL/OpenGL ES give you a GLuint, and that is what you use. Never assume the typedef for GLuint will be “unsigned int” no matter how much you “know” it is.

#2: Don’t put indices on a vertex buffer. An index buffer is not the responsibility of the VertexBuffer class. That goes to the IndexBuffer class. Your comment mentions it will just be a single variable. Why won’t it be another IBO? Why use VBO’s but not IBO’s? There are plenty of things to add to an IndexBuffer class, not just a single pointer. Besides, the same index buffer can be used with many vertex buffers.
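
For illustration, an IndexBuffer along those lines holds more than a single pointer (a minimal sketch; the member names are assumptions):

#include <cstdint>
#include <vector>

class IndexBuffer {
public:
    void SetIndices( const uint32_t *indices, size_t count ) {
        m_indices.assign( indices, indices + count );
    }
    size_t IndexCount() const { return m_indices.size(); }

    // Whether the GPU buffer can use 16-bit indices instead of 32-bit ones.
    bool Fits16Bit() const {
        for ( uint32_t i : m_indices ) {
            if ( i > 0xFFFF ) { return false; }
        }
        return true;
    }
    // A CreateApiIndexBuffer() would upload m_indices into an IBO / Direct3D index buffer.

private:
    std::vector<uint32_t> m_indices;   // CPU-side copy; the GPU handle lives in the API-specific subclass.
};

The same IndexBuffer object can then be bound alongside any compatible VertexBuffer at draw time instead of being baked into one of them.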

#3: If you are considering later adding support for DirectX (as you mentioned you might) then the GLuint VBO_ID member does not belong on the VertexBuffer class.
Organize your classes this way:
VertexBufferBase // Holds all data and methods that are the same for all implementations of VertexBuffer classes across all API’s you ever want to support.  This would be the in-memory form of the vertex buffer data before it has been sent to the GPU.
VertexBufferOpenGl : public VertexBufferBase // Holds the GLuint ID from OpenGL and provides implementations for methods related directly to OpenGL.  I showed you an example of this with my COpenGlStandardTexture::CreateApiTexture() method.
VertexBuffer : public VertexBufferOpenGl // All high-level functionality of a vertex buffer common to all vertex buffers on any API.  Its CreateVertexBuffer() method prepares a few things and then calls CreateApiVertexBuffer(), which will be implemented by VertexBufferOpenGl.

In short, if the class name does not have “OpenGl” in it, that class has no relationship whatsoever to OpenGL.
Later when you add DirectX support, you can do this:
VertexBufferDirectX9 : public VertexBufferBase

#ifdef DX9
VertexBuffer : public VertexBufferDirectX9
#elif defined( OGL )
VertexBuffer : public VertexBufferOpenGl
#else
#error "Unrecognized API."
#endif
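
Sketched out (the method bodies are placeholders, not actual engine code), the layering looks something like this:

#include <cstdint>
#include <vector>

class VertexBufferBase {
public:
    virtual ~VertexBufferBase() {}
protected:
    std::vector<uint8_t> m_rawVerts;           // In-memory vertex data, shared by every API back end.
    virtual bool CreateApiVertexBuffer() = 0;  // Implemented per API; uploads m_rawVerts to the GPU.
};

class VertexBufferOpenGl : public VertexBufferBase {
protected:
    unsigned int m_glId = 0;                   // GLuint in real code; plain unsigned int here only so the sketch needs no GL headers.
    bool CreateApiVertexBuffer() override {
        // glGenBuffers()/glBufferData() would go here.
        return true;
    }
};

class VertexBuffer : public VertexBufferOpenGl {
public:
    bool CreateVertexBuffer() {
        // API-agnostic preparation (validating the vertex layout, interleaving, etc.),
        // then delegate the actual upload to whichever API class we inherited.
        return CreateApiVertexBuffer();
    }
};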


I'd like to make Models load and draw themselves, but without giving them direct access to the disk or the reader, and I can't figure out how.

As I mentioned in my initial post, use streams. They are also fairly easy to roll on your own.
Once you are using streams, the model can load its data from memory, files, networks, whatever, all while not knowing the source of the object.
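
A minimal sketch of the stream idea, with assumed names (IStream, FileStream, MemoryStream):

#include <cstdint>
#include <cstdio>
#include <cstring>

class IStream {
public:
    virtual ~IStream() {}
    virtual size_t Read( void *dst, size_t bytes ) = 0;
};

class FileStream : public IStream {
public:
    explicit FileStream( const char *path ) : m_file( std::fopen( path, "rb" ) ) {}
    ~FileStream() { if ( m_file ) { std::fclose( m_file ); } }
    size_t Read( void *dst, size_t bytes ) override {
        return m_file ? std::fread( dst, 1, bytes, m_file ) : 0;
    }
private:
    std::FILE *m_file;
};

class MemoryStream : public IStream {
public:
    MemoryStream( const uint8_t *data, size_t size ) : m_data( data ), m_size( size ) {}
    size_t Read( void *dst, size_t bytes ) override {
        size_t n = ( bytes < m_size - m_pos ) ? bytes : ( m_size - m_pos );
        std::memcpy( dst, m_data + m_pos, n );
        m_pos += n;
        return n;
    }
private:
    const uint8_t *m_data;
    size_t m_size;
    size_t m_pos = 0;
};

Model::Load() then takes an IStream & and never knows whether the bytes came from a file, an archive, memory, or the network.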


I'm thinking about which approach would be better to use when developing, something like this:

void GameState::Initialize()
{
    //For this to work, I'd have to pass a pointer to my resource module to the model somehow
    Model* playerModel = new Model(&resourceModule);
    playerModel->Load("assetName");
 
    //Another way is to use a Model Manager, who has a pointer to the resource and can receive the reading stream
    Model* playerModel = modelManager.NewModel("assetName");    
}

I have an answer for this question, but before I can even get to it you need to fix another major problem.
How do you plan to handle a game where there are, for example, a bunch of enemy spaceships, but only 3 different kinds?
There are 50 of type A, 35 of type B, and 3 of type C.
Are you really going to load the model data for type A 50 times?

A Model class is supposed to be shared data, referenced by a higher-level class called ModelInstance.
Model needs ModelManager to prevent the same model file from being loaded more than once, so your “Another way” is the only way.
Then you create all of your ModelInstance objects. ModelManager hands out shared pointers to the Model class that was either created for that instance or already created for another instance previously.
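
For illustration, that relationship might be sketched like this (ModelManager's interface is an assumption; shared_ptr stands in for whatever sharing scheme you actually use):

#include <map>
#include <memory>
#include <string>

class Model { /* shared mesh, material, and skeleton data, loaded once per file */ };

class ModelManager {
public:
    std::shared_ptr<Model> GetModel( const std::string &assetName ) {
        auto it = m_models.find( assetName );
        if ( it != m_models.end() ) {
            return it->second;                    // Already loaded: hand out the shared Model.
        }
        auto model = std::make_shared<Model>();   // Real code loads the model data from a stream here.
        m_models[assetName] = model;
        return model;
    }
private:
    std::map<std::string, std::shared_ptr<Model>> m_models;
};

class ModelInstance {
public:
    explicit ModelInstance( std::shared_ptr<Model> model ) : m_model( std::move( model ) ) {}
    // Per-instance data (position, orientation, animation time, etc.) lives here.
private:
    std::shared_ptr<Model> m_model;   // All 50 ships of type A point at the same Model.
};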




The same for Draw(), as suggested, but I'm not sure how it would work:

void GameState::Draw()
{
    vector<GameObjects*> onScreenObjects = gameScene.GetOnScreenObjectsSorted();
    for(int i = 0; i < onScreenObjects.size(); i++)
    {
        //For this to work, the objects must have a pointer to my graphic module and call the functions
        onScreenObjects[i]->Draw(&gameEngine.graphicModule);
 
        //Or I call it from the module itself
        gameEngine.graphicModule.Draw(onScreenObjects[i]);
    }
}

Why would your GameState class make the final calls on the objects?
void GameState::Draw()
{
    gameScene.Draw(&gameEngine.graphicModule);
}
Done and done.
The scene manager can handle everything related to the objects in the scene. That is exactly its job.
Inside GameScene::Draw() it handles culling and orchestrates the entire rendering process.
And be 100% sure that you are not doing this:

vector<GameObjects*> onScreenObjects = gameScene.GetOnScreenObjectsSorted();
No reason to slowly copy a whole vector every frame.
Do this:
vector<GameObjects*> & onScreenObjects = gameScene.GetOnScreenObjectsSorted();
Or this:
vector<GameObjects*> onScreenObjects;
gameScene.GetOnScreenObjectsSorted(onScreenObjects);


For the models to be able to draw themselves they have to "see" the graphic module somehow, but I'm not sure if it's okay to send the pointer for each Draw() call. The models would be tightly knotted to the module, which I think would be nasty to upgrade/evolve, as a single change in the graphic module interface could mean lots of rewriting everywhere.

If the graphics module instead knew what a model was, that would be ridiculously tight knotting.
If you have models and you have a graphics module and you want the models to be able to be drawn to your screen then you have to have some level of knotting no matter what.

Changing and evolving interfaces is a necessary part of the pain you are required to endure to join the ranks of a programmer.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid


Changing and evolving interfaces is a necessary part of the pain you are required to endure to join the ranks of a programmer.

Damn, that's a horribly accurate statement.

