
Game Engine Layout


Danicco    449

I'm finishing up my first engine layout for OpenGL, and this is what I've got so far:

 

//This has info that's needed to draw images in the shader

[Material]

.char[3] diffuse, specular, ambient; //rgb that will be used in the shader

.char* textureData; //the rgba data from an image

.char* normalData; //the normal map from an image

.int textureID; //the textureID that will be filled when it's loaded in the GPU

 

//A mesh is the data needed to draw an object on screen

[Mesh]

.float* vertexes;

.float* normals;

.float* uvs;

.int* vertexIndexes;

.Material material; //A mesh can only have a single material

.MeshType meshType; //this will be an enum specifying which shader to use, in case it's glass, water, or something else that needs a specific shader effect

.int vboID; //the ID that will be filled when it's loaded in the GPU

 

//This resource object will contain all data about a single object, such as a "House"

[ResourceObject]

.string name; //a name to identify it, not sure if it'll be needed

.Mesh* meshes; //the resource object can have multiple meshes, such as a "door", "window", "walls", etc

 

//This is the class that will be responsible for loading and unloading objects from disk into RAM

[ResourceEngine]

.list<ResourceObject> listObjects; //a list of all objects that are currently loaded

.list<Material> listMaterials; //a list of all materials that are currently loaded

.LoadObject(int count, ...); //load X resources in the list, will use "string" to identify them

.LoadMaterial(int count, ...); //same as above, but for materials

.UnloadObject(int objectID); //unload an object from the list

.UnloadMaterial(int materialID); //unload a material from the list

 

//This is the class responsible for loading objects into the GPU and managing shaders

[GraphicEngine]

.list<Shader> listShaders; //a list of shaders, not sure if this will be needed

.LoadObject(ResourceObject* object); //load an object in the GPU, will fill the "vboID"

.LoadMaterial(Material* material); //load a material in the GPU, will fill the "textureID"

.Unload(int vboID); //unload a VBO

.Unload(int textureID); //unload a texture

.CompileShaders(string shaderCodes); //this will compile the shader, fill the list, and assign it an ID that an object can refer to if needed

 

//This class has functions specific to each OS I'll be using, such as the current time or file read/write, and all other classes will access it

[OSFunctions]

.GetCurrentTime(); //returns the current time as a float

//Other functions that I'll be filling as the game requires

 

//This class will keep track of the game's ACTIONs and playerInput

[InputEngine]

.keyboard[256]; //For each key on the keyboard, a bool to check whether it's pressed or not

.mouse[8]; //Same for mouse, and mousewheel

//I'm thinking of moving these two to OS, since mobiles have different controls and it'll depend on the game's target platform

.list<Actions> listActions; //A list of all actions in the game, loaded from a resourceFile as strings such as "PLAYER_MOVERIGHT"

.list<Actions> playerInput; //A list of all actions input by the player, with timestamps

 

//This is an actual on-screen object; it uses a ResourceObject's data, and there can be multiple of them on screen

[GameObject]

.ResourceObject* resourceObject; //the data of this object

.float positionX, positionY, positionZ;

.float directionX, directionY, directionZ;

.float speedX, speedY, speedZ;

//other variables needed depending on the game

 

//This is the main class where most of the code will be

[GameEngine]

.SceneGraph sceneGraph; //basically a list of all GameObjects currently on the level

.Initialize(); //empty function that's called on startup, will be filled differently for each game

.Update(); //empty function that's called every frame

.Draw(); //function that gets a list of objects that "can be seen" from the sceneGraph, and draws them
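Translated into actual C++, the listing above might look something like this (a sketch only; the names mirror the listing, `unsigned char` replaces the plain `char` for color/pixel data, and `std::vector` stands in for the raw pointers so sizes and lifetimes are tracked automatically):

```cpp
#include <string>
#include <vector>

// Sketch of the layout above in concrete C++; names mirror the listing.
enum class MeshType { Standard, Glass, Water };

struct Material {
    unsigned char diffuse[3], specular[3], ambient[3]; // rgb used by the shader
    std::vector<unsigned char> textureData;            // rgba pixels from an image
    std::vector<unsigned char> normalData;             // normal-map pixels
    unsigned int textureID = 0;                        // filled when uploaded to the GPU
};

struct Mesh {
    std::vector<float> vertices, normals, uvs;
    std::vector<int>   vertexIndices;
    Material*          material = nullptr; // a mesh can only have a single material
    MeshType           meshType = MeshType::Standard;
    unsigned int       vboID = 0;          // filled when uploaded to the GPU
};

struct ResourceObject {
    std::string       name;   // e.g. "House"
    std::vector<Mesh> meshes; // e.g. "door", "window", "walls"
};
```

One practical advantage over the `float*`/`int*` members in the listing: the vectors know their own element counts, which the draw code will need anyway when issuing `glDrawElements`-style calls.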

 

So, my game would be something like this:

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    OSFunctions.Initialize(); //this will start the window based on which OS I'm in
    GameEngine gameEngine;
    gameEngine.Initialize(); //this function will load all stuff needed and set the default values for everything

    MSG msg;
    while(gameEngine.isRunning)
    {
        if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        gameEngine.Update(); //the main update function
        gameEngine.Draw();   //and draw
    }
    return 0;
}
//The message proc function here with all input calling the InputEngine functions to update each key's states

And my GameEngine's Update() would have something like this:

void GameEngine::Update()
{
    switch(gameState)
    {
        case GameState::Startup_Loading:
        {
            //Loading from the disk to the resourceList
            ResourceEngine.Load(2, "logo", "title");

            //Now filling the objects' data before adding to the sceneGraph
            GameObject* gameLogo = new GameObject();
            gameLogo->resourceObject = listObjects[0];
            gameLogo->positionX = 10; //etc

            sceneGraph.add(gameLogo);

            gameState = GameState::Startup;
            break;
        }
        case GameState::Startup:
        {
            //displays the logo, counts a time, then changes the gameState to something else
            gameState = GameState::Title_Screen;
            break;
        }
        case GameState::MainGame_Loading:
        {
            resourceList.Clear();
            ResourceEngine.Load(1, "playerCharacter");

            GameObject* playerCharacter = new GameObject();
            playerCharacter->resourceObject = listObjects[0];
            //fill positions and such

            sceneGraph.add(playerCharacter);

            gameState = GameState::MainGame;
            break;
        }
        case GameState::MainGame:
        {
            for(int i = 0; i < sceneGraph.GetVisibleObjectsCount(); i++)
            {
                //each object has its own update, which will check for input, change its own state, and manage its own animations
                sceneGraph.GetVisibleObject(i)->Update();
            }
            break;
        }
    }
}

This is how I think it'll work: I'll be coding lots of GameObjects for each game, using enums for flags and changing them according to in-game events, among other things.

This is my first engine/game, and I'd like to ask for opinions, advice, and comments on how this can be done better: whether there's room for optimization, whether there's anything wrong, etc.

 

I have little to no experience with engines, so any input is highly appreciated!

CRYP7IK    1327

I have a question: Is this supposed to be used in one game? If so this engine seems fine, minus having no alpha which might be on purpose.

 

However, if you then tried to adapt it to some other game, the GameEngine class leaves much to be desired, especially if the update function gets too large with too many different states. You can fix that with a slightly more advanced state machine to look after the states (which would then be a separate class).

 

I was writing up a StateMachine example class when I remembered the large number of gdnet articles on the subject; check these out:

 

http://www.gamedev.net/blog/812/entry-2256166-state-machines-in-games-%E2%80%93-part-1/

http://www.gamedev.net/blog/812/entry-2256167-state-machines-in-games-%E2%80%93-part-2/

http://www.gamedev.net/blog/812/entry-2256173-state-machines-in-games-%E2%80%93-part-3/

http://www.gamedev.net/blog/812/entry-2256179-state-machines-in-games-%E2%80%93-part-4/

http://www.gamedev.net/blog/812/entry-2256190-state-machines-in-games-%E2%80%93-part-5/

 

http://www.gamedev.net/page/resources/_/technical/game-programming/state-machines-in-games-r2982
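A minimal version of the separate state-machine class CRYP7IK describes might look like this (hypothetical names; each state owns its own Update instead of one big switch, and a state signals a transition by returning the next state):

```cpp
#include <memory>
#include <utility>

// Sketch: each game state is an object; the machine just delegates Update().
struct State {
    virtual ~State() = default;
    // Returns the next state, or nullptr to stay in the current one.
    virtual std::unique_ptr<State> Update() = 0;
};

class StateMachine {
public:
    explicit StateMachine(std::unique_ptr<State> initial)
        : m_current(std::move(initial)) {}

    void Update() {
        if (auto next = m_current->Update()) {
            m_current = std::move(next); // transition requested by the state
        }
    }

    State* Current() const { return m_current.get(); }

private:
    std::unique_ptr<State> m_current;
};
```

Each of the `GameState::Startup_Loading` / `GameState::MainGame` cases in the Update() switch would become its own `State` subclass, so adding a new state no longer means growing one monolithic function.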

 

Although, again, your engine is very nice; if it works for your purpose, don't over-engineer it.

 

 

 

Danicco    449

I have a question: Is this supposed to be used in one game? If so this engine seems fine, minus having no alpha which might be on purpose.

 

It's my first engine, and I hope to start developing all my games with it.

By alpha you mean transparency? I forgot to add it to the texture/object.

 

Thank you for the links; those will be useful for flag events, since I think there will be a lot of them by the time I finish a game, and I was looking for a better way to manage them besides the states.

 

 

You would be better off just using a texture pointer directly, or better still a shared pointer to a texture. Textures may have ID’s (and they very well should, to prevent redundant texture loads), but using that ID rather than a direct pointer wastes precious cycles, as you then need to find that texture via binary searches, hash tables, etc.

 

 

Sorry, but I didn't understand; what do you mean by using the pointer instead of the ID?

ResourceObject will have a pointer to a Material, which has the textureID; that's the unsigned int that gets its value from the glGenTextures(count, unsigned int*) function. I have that so I can keep track of whether the texture's been loaded (value > 0) or not (value of 0), and when I have to draw I call glBindTexture(GL_TEXTURE_2D, object->material.textureID).

Is there a better way to do this?

 

 

 

Those are usually just characters, ammunition/weapons, and various other props. Other things, such as terrain, vegetation, buildings (especially interiors), skyboxes, etc., are all made efficient by utilizing rendering pipelines unique to themselves.

 

 

 

Oh, I totally forgot about those... would creating a specific object for each of these types be the best solution? Something like a TerrainObject which doesn't need to have a mesh, but materials only and the height data?

 

 

 

When you break it down like this it starts to make sense that models should actually just load themselves. Models know what models are. They know what their binary format is. There is no reason to push the responsibility of the loading process onto any other module or class.

 

 

Wouldn't this make it harder for different games? I went with the ResourceEngine idea because I'll be building my resource binary files from another program, my map editor, which will probably be different for each game, and I'd only have the code for writing/reading from disk in that class, so it's easier to change for the game I'm writing.

 

 

 

The main problem you have here is that the graphic module knows what a “ResourceObject” (a model) is when it should be entirely the other way around.

 

 

I remember that at first my idea was to have the models load & draw themselves (they'd have access to the draw functions), but I chose to have a single class responsible for all the drawing, with the models only being objects storing their data, so it's easier to change the draw functions if I ever need to.

I'm using OpenGL for now, so all the OpenGL functions are in this GraphicEngine class and I just need to know how I should display an object's data on screen; if I were to change to Direct3D or something else platform-specific (mobiles that don't use OpenGL), I would only have to rewrite this part.

 

I don't know if this ever happens, but is there a better way to deal with this?

 

 

 

This is one of those cases where I can only say, “You’re doing it wrong,” but will let you learn from personal experience. You will need that experience to understand why this is wrong even if I told you, and, besides, it would require a major restructure of your engine.

 

 

Oh, but that's exactly what I'm trying to avoid... I'm fine with having to rewrite a lot of my engine code as I get more experienced, but I'm trying to get at least the basics right (and properly understand them) so I don't go fully wrong from the start. I had another post about handling player input, and that's what I could get from it. I've seen some other posts and tutorials, but they're still very "high level": lots of concept and little to no code or explanation that I can understand, so I'm quite lost in this part.

 

I'm sorry about the questions; I very much appreciate these replies, but I'd just like to ask the "why"s, to understand why one thing is better than another and such.

 

Thank you very much for the information, I'm still reading the last 2 topics on Time and Orientation, and everything has been a great help so far!

cr88192    1570

 


or it returns a shared pointer to that model.

Don’t use ID’s for everything, as, as I already mentioned, it wastes precious cycles.
 

 

what if the ID is the offset into an array of shared texture pointers?

 

 

this is pretty much what my engine is doing.

typically, integer values are used.

 

a person could probably even get along ok with a short, working under the assumption that it is unlikely there will ever be more than 32k textures.
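That scheme, an integer handle that is just an offset into a table of shared texture pointers, can be sketched like this (hypothetical names; `Texture` is a stand-in for a real texture wrapper class):

```cpp
#include <cstdint>
#include <memory>
#include <vector>

struct Texture { unsigned int glId = 0; }; // stand-in for a real texture wrapper

// Sketch of the ID-as-offset idea: handles index a shared-pointer table,
// so lookup is a single array access rather than a search through a map
// or a sorted list. A 16-bit handle assumes well under 64k live textures.
class TextureTable {
public:
    std::uint16_t Add(std::shared_ptr<Texture> tex) {
        m_slots.push_back(std::move(tex));
        return static_cast<std::uint16_t>(m_slots.size() - 1);
    }
    const std::shared_ptr<Texture>& Get(std::uint16_t handle) const {
        return m_slots[handle]; // O(1); caller must pass a handle it was given
    }
private:
    std::vector<std::shared_ptr<Texture>> m_slots;
};
```

The trade-off, as discussed later in the thread, is that the handle is only meaningful relative to one table, whereas a shared pointer carries its own identity and lifetime.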

 

 

in a few cases, I have used bytes for color and normal arrays, mostly because in some places RAM use is an issue (*1), and using bytes I only need about 1/4 as much space. this mostly then just leaves floats for XYZ and ST coordinate arrays.

 

*1: my engine currently is operating fairly close to the limits of a 32-bit address space, so involves some amount of "trying to avoid wasting memory where possible".

 

this is mostly done with things like the voxel terrain geometry, though ironically the geometry for the terrain tends to be a fair bit smaller than the actual raw voxel data (most of the voxels don't actually end up needing geometry). better probably would have been using "voxel indices" instead, but sadly this would require some non-trivial changes and introduce new possible issues (need to re-index voxels on block-updates, ...). (it is a hard choice between the issues of indices and currently needing to spend 8-bytes for every voxel, when many/most chunks only have a small number of unique voxel types...).

 

 

also considered a few times was possibly separating the high-level object types from their rendering (using generic "Mesh" objects for nearly everything), but sadly this hasn't been done.

 

as-is, there are a few semi-generic cases though:

Mesh: generally represents a piece of static geometry;

Brush: pretty much anything that looks like a world brush (may include patches and meshes as well as plane-bound brushes);

Model: pretty much anything that is instantiated and placed in the scene, uses vtables for sub-types (may hold Mesh or other more specialized model types, such as skeletal models or sprites);

Region/Chunk: main systems used by the terrain system (a Chunk is basically an elaborate case of a mesh, with more involved rendering logic to deal with the various cases, and may potentially hold raw voxel data, and potentially also Mesh or Light objects, *2);

...

 

 

*2: visible chunks currently optionally hold voxel data. it may be absent in cases where the engine has decided to store it in an RLE-packed format.

 

the Region object may also hold Brush or Entity/SEntity objects (as an alternative to storing them in World), but this still isn't really fully developed (spawned entities are not persistent, ...). note that these are stored using Region-local coordinates. I chose region for holding this stuff (vs Chunk) mostly due to it being a slightly more convenient size and involving probably less overhead.

 

 

granted, a lot of this may be N/A depending on the type of game.

L. Spiro    25622

You would be better off just using a texture pointer directly, or better still a shared pointer to a texture. Textures may have ID’s (and they very well should, to prevent redundant texture loads), but using that ID rather than a direct pointer wastes precious cycles, as you then need to find that texture via binary searches, hash tables, etc.

 
 
Sorry but I didn't understand it, what do you mean by not using the ID, but the pointer instead?
ResourceObject will have a pointer to a Material which has the textureID, that's the unsigned int that gets the value from the glGenTextures(count, unsigned int*) function, and I have that so I can keep track if the texture's been loaded in the RAM (has a value > 0) or not (any value > 0), and when I have to draw I call glBindTexture(GL_TEXTURE_2D, object->material.textureID).
Is there a better way to do this?

OpenGL may be a state machine, but you need to think of each object you get from OpenGL (textures, vertex buffers, etc.) as…objects. In other words, a class called CTexture2D (use whatever naming convention you like) completely wraps around and manages the lifetime of the GLuint that OpenGL gives you.
That GLuint does not exist in the eyes of any other class in your whole engine. Nothing needs to know what that thing is, because that is the sole responsibility of the CTexture2D class.
Here are actual examples from my own engine:
	// == Various constructors.
	LSE_CALLCTOR COpenGlStandardTexture::COpenGlStandardTexture() :
		m_uiTexture( 0 ) {
	}
	LSE_CALLCTOR COpenGlStandardTexture::~COpenGlStandardTexture() {
		ResetApi();
	}
	/**
	 * Activate this texture in a given slot.
	 *
	 * \param _ui32Slot Slot in which to place this texture.
	 * \return Returns true if the texture is activated successfully.
	 */
	LSBOOL LSE_CALL COpenGlStandardTexture::Activate( LSUINT32 _ui32Slot ) {
		if ( !m_uiTexture ) { return false; }
		if ( m_ui32LastTextures[_ui32Slot] != m_ui32Id ) {
			m_ui32LastTextures[_ui32Slot] = m_ui32Id;
			COpenGl::ActivateTexture( _ui32Slot );
			::glBindTexture( GL_TEXTURE_2D, m_uiTexture );
		}

		return true;
	}
	/**
	 * Reset everything related to OpenGL textures.
	 */
	LSVOID LSE_CALL COpenGlStandardTexture::ResetApi() {
		if ( m_uiTexture ) {
			::glDeleteTextures( 1, &m_uiTexture );
			m_uiTexture = 0;
		}
	}
	/**
	 * Create an OpenGL texture and fill it with our texel data.  Mipmaps are generated if necessary.
	 *
	 * \return Returns true if the creation and filling of the texture succeeds.  False indicates a resource error.
	 */
	LSBOOL LSE_CALL COpenGlStandardTexture::CreateApiTexture() {
		ResetApi();

		// Generate a new texture.
		::glGenTextures( 1, &m_uiTexture );
		glWarnError( "Uncaught" );	// Clear error flag.
		::glBindTexture( GL_TEXTURE_2D, m_uiTexture );

		[…] LOTS OF PREPARATION CODE OMITTED.


		// Set filters etc. first (improves performance in many drivers).
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, iMagFilter[ui32FilterIndex] );
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, iMinFilter[ui32FilterIndex] );
		if ( m_ui32Usage & LSG_TFT_ANISOTROPIC ) {
			::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, CStd::Max<LSUINT32>( 1UL, CFnd::GetMetrics().ui32MaxAniso >> 1UL ) );
		}

		static const GLint iClamp[] = {
			GL_CLAMP_TO_EDGE,
			GL_REPEAT,
			GL_MIRRORED_REPEAT,
		};
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, iClamp[m_wmWrap] );
		::glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, iClamp[m_wmWrap] );

		if ( glWarnError( "Error setting filters on standard texture" ) ) {
			ResetApi();
			COpenGlStandardTexture::ClearLastSetTexAndUnbindAll();
			return false;
		}


		// Now fill in the texture data.
		switch ( m_iTexels.GetFormat() ) {
			case LSI_PF_DXT1 : {}
			case LSI_PF_DXT3 : {}
			case LSI_PF_DXT5 : {
				COpenGl::glCompressedTexImage2D( GL_TEXTURE_2D, 0, GetOpenGlInternalFormatStandard( m_iTexels.GetFormat() ),
					piUseMe->GetWidth(), piUseMe->GetHeight(), 0,
					CDds::GetCompressedSize( piUseMe->GetWidth(), piUseMe->GetHeight(), CDds::DdsBlockSize( m_iTexels.GetFormat() ) ),
					piUseMe->GetBufferData() );
				break;
			}
			case LSI_PF_DXT2 : {}
			case LSI_PF_DXT4 : {
				return false;
			}
			default : {
				::glTexImage2D( GL_TEXTURE_2D, 0, GetOpenGlInternalFormatStandard( m_iTexels.GetFormat() ),
					piUseMe->GetWidth(), piUseMe->GetHeight(), 0, GetOpenGlFormatStandard( m_iTexels.GetFormat() ),
					GetOpenGlTypeStandard( m_iTexels.GetFormat() ), piUseMe->GetBufferData() );
			}
		}
		

		if ( glWarnError( "Error creating standard texture" ) ) {
			ResetApi();
			return false;
		}
		return true;
	}
Notice that the class manages the lifetime of m_uiTexture and correctly destroys it when the class itself is destroyed. This ensures no resource leaks are possible, at least at this level.
Every resource you get from OpenGL should be managed similarly.

That means that your material will never ever have the GLuint ID of an OpenGL texture. It will have a (hopefully shared) pointer to CTexture2D or such.
You asked about DirectX below. This is also heavily related to that, and I will explain below.
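Stripped of the engine-specific details, the core of the pattern above is an RAII wrapper: the GLuint is created in one place, destroyed in the destructor, and never escapes the class. A compact sketch (hypothetical names; the actual GL calls are left as comments so the ownership logic stands on its own without a GL context):

```cpp
#include <utility>

// Sketch: RAII ownership of a GL texture name. A real class would call
// glGenTextures/glDeleteTextures where the comments are; here the calls are
// stubbed so only the ownership rules are demonstrated.
class Texture2D {
public:
    Texture2D() = default;
    ~Texture2D() { Reset(); }

    // Non-copyable: two owners of one GL name would double-delete it.
    Texture2D(const Texture2D&) = delete;
    Texture2D& operator=(const Texture2D&) = delete;

    // Movable: ownership transfers and the source is left empty.
    Texture2D(Texture2D&& other) noexcept
        : m_id(std::exchange(other.m_id, 0u)) {}

    void Create() {
        Reset();
        // ::glGenTextures(1, &m_id);
        m_id = 1u; // placeholder: pretend GL handed us a name
    }
    void Reset() {
        if (m_id) {
            // ::glDeleteTextures(1, &m_id);
            m_id = 0u;
        }
    }
    bool IsValid() const { return m_id != 0u; }

private:
    unsigned int m_id = 0; // the GLuint no other class ever sees
};
```

A material then holds a (preferably shared) pointer to `Texture2D`, never the raw GLuint, which is exactly the arrangement the post above describes.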

 

Those are usually just characters, ammunition/weapons, and various other props. Other things, such as terrain, vegetation, buildings (especially interiors), skyboxes, etc., are all made efficient by utilizing rendering pipelines unique to themselves.

 
 
Oh, I totally forgot about those... would creating a specific object for each of these types be the best solution? Something like a TerrainObject which doesn't need to have a mesh, but materials only and the height data?

Probably.


 

When you break it down like this it starts to make sense that models should actually just load themselves. Models know what models are. They know what their binary format is. There is no reason to push the responsibility of the loading process onto any other module or class.

 
 
Wouldn't this make it harder for different games? I was going for the ResourceEngine with the idea that I'll be building my resource binary files from another program, my map editor, which will probably be different for each game, and I'd only have code to writing/reading from disk in that class so it's easier to change for the game I'm writing.

If you change your model’s format you will have to change loading code somewhere either way. It is no more difficult to open and edit “Model.cpp” than “ResourceManager.cpp”.
But it will be much cleaner and easier to browse to the location you want to edit if you keep the loading code with each resource type, because ResourceManager.cpp will not become a monolithic 10,000-line file.

 

The main problem you have here is that the graphic module knows what a “ResourceObject” (a model) is when it should be entirely the other way around.

 
 
I remember that at first, my idea was to have the models load & draw themselves, they'd have access to the draw functions, but I choose to have a single class responsible for all the drawing and the models only being objects with their data stored so it's easier to change the draw functions if I ever need it.
I'm using OpenGL for now, so all the OpenGL functions are in this GraphicEngine class and I just need to know how should I display an object's data on screen, and if I were to change to Direct3D or something else specific (mobiles that don't use OpenGL) I would only have to rewrite this part.
 
I don't know if this ever happens, but is there a better way to deal with this?

A model being able to draw itself, and all of the rendering commands being neatly located inside the graphics module and only the graphics module, are not mutually exclusive.

When I said a model would draw itself, I didn’t mean that it would call OpenGL functions directly. The graphics module should hide away all the calls to OpenGL/DirectX and provide a single unified interface that the models can use to set blending modes and render.

In my engine, the LSGraphicsLib library has a CFnd class which is, of course, the foundation for all graphics. Here are examples.
OpenGL:
	/**
	 * Sets culling to on or off.
	 *
	 * \param _bVal Whether culling is enabled or disabled.
	 */
	LSE_INLINE LSVOID LSE_FCALL COpenGl::SetCulling( LSBOOL _bVal ) {
		_bVal = _bVal != false;
		if ( m_bCulling != _bVal ) {
			m_bCulling = _bVal;
			if ( _bVal ) { ::glEnable( GL_CULL_FACE ); }
			else { ::glDisable( GL_CULL_FACE ); }
		}
	}

	/**
	 * Sets the cull winding order.
	 *
	 * \param _cmMode The cull winding.
	 */
	LSE_INLINE LSVOID LSE_FCALL COpenGl::SetCullMode( LSG_CULL_MODES _cmMode ) {
		assert( _cmMode == LSG_CM_CW || _cmMode == LSG_CM_CCW );
		if ( m_cmCullMode != _cmMode ) {
			m_cmCullMode = _cmMode;
			static const GLenum eModes[] = {
				GL_CW,
				GL_CCW
			};
			::glFrontFace( eModes[_cmMode] );
		}
	}
DirectX 9:
	/**
	 * Sets culling to on or off.
	 *
	 * \param _bVal Whether culling is enabled or disabled.
	 */
	LSE_INLINE LSVOID LSE_FCALL CDirectX9::SetCulling( LSBOOL _bVal ) {
		_bVal = _bVal != false;
		if ( m_bCulling != _bVal ) {
			m_bCulling = _bVal;
			if ( !_bVal ) {
				m_pd3dDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_NONE );
			}
			else {
				static D3DCULL cModes[] = {
					D3DCULL_CCW,
					D3DCULL_CW
				};
				m_pd3dDevice->SetRenderState( D3DRS_CULLMODE, cModes[m_cmCullMode] );
			}
		}
	}

	/**
	 * Sets the cull winding order.
	 *
	 * \param _cmMode The cull winding.
	 */
	LSE_INLINE LSVOID LSE_FCALL CDirectX9::SetCullMode( LSG_CULL_MODES _cmMode ) {
		assert( _cmMode == LSG_CM_CW || _cmMode == LSG_CM_CCW );
		if ( m_cmCullMode != _cmMode ) {
			m_cmCullMode = _cmMode;
			if ( m_bCulling ) {
				static const D3DCULL cCull[] = {
					D3DCULL_CCW,
					D3DCULL_CW
				};
				m_pd3dDevice->SetRenderState( D3DRS_CULLMODE, cCull[_cmMode] );
			}
		}
	}
Then in LSFFnd.h (the .h for the CFnd class), I do this:
#ifdef LSG_OGL
	typedef COpenGl						CBaseApi;
#elif defined( LSG_OGLES )
	typedef COpenGlEs					CBaseApi;
#elif defined( LSG_DX9 )
	typedef CDirectX9					CBaseApi;
#elif defined( LSG_DX10 )
	typedef CDirectX10					CBaseApi;
#elif defined( LSG_DX11 )
	typedef CDirectX11					CBaseApi;
#endif	// #ifdef LSG_OGL

[…]

	/**
	 * Sets culling to on or off.
	 *
	 * \param _bVal Whether culling is enabled or disabled.
	 */
	LSE_INLINE LSVOID LSE_FCALL CFnd::SetCulling( LSBOOL _bVal ) {
		CBaseApi::SetCulling( _bVal );
	}

	/**
	 * Sets the cull winding order.
	 *
	 * \param _cmMode The cull winding.
	 */
	LSE_INLINE LSVOID LSE_FCALL CFnd::SetCullMode( LSG_CULL_MODES _cmMode ) {
		CBaseApi::SetCullMode( _cmMode );
	}
Now the graphics module is doing its job. My models and terrain can call CFnd::SetCulling() and CFnd::SetCullMode() without having to worry about the underlying API.
To finish the example of my OpenGL texture class, here are the same methods on the DirectX 9 side of things:
	// == Various constructors.
	LSE_CALLCTOR CDirectX9StandardTexture::CDirectX9StandardTexture() :
		m_pd3dtTexture( NULL ) {
	}
	CDirectX9StandardTexture::~CDirectX9StandardTexture() {
		ResetApi();
	}
	/**
	 * Activate this texture in a given slot.
	 *
	 * \param _ui32Slot Slot in which to place this texture.
	 * \return Returns true if the texture is activated successfully.
	 */
	LSBOOL LSE_CALL CDirectX9StandardTexture::Activate( LSUINT32 _ui32Slot ) {
		if ( !m_pd3dtTexture ) { return false; }
		if ( m_ui32LastTextures[_ui32Slot] != m_ui32Id ) {
			m_ui32LastTextures[_ui32Slot] = m_ui32Id;
			if ( FAILED( CDirectX9::GetDirectX9Device()->SetTexture( _ui32Slot, m_pd3dtTexture ) ) ) { return false; }
			CDirectX9::SetMinMagMipFilters( _ui32Slot, m_tftDirectX9FilterType, m_tftDirectX9FilterType, m_bMipMaps ? D3DTEXF_LINEAR : D3DTEXF_NONE );
			CDirectX9::SetAnisotropy( _ui32Slot, m_bMipMaps ? (CFnd::GetMetrics().ui32MaxAniso >> 1UL) : 0UL );
			
			static const D3DTEXTUREADDRESS taAddress[] = {
				D3DTADDRESS_CLAMP,
				D3DTADDRESS_WRAP,
				D3DTADDRESS_MIRROR,
			};
			CDirectX9::SetTextureWrapModes( _ui32Slot, taAddress[m_wmWrap],
				taAddress[m_wmWrap] );
		}
		return true;
	}
	/**
	 * Reset everything related to Direct3D 9 textures.
	 */
	LSVOID LSE_CALL CDirectX9StandardTexture::ResetApi() {
		CDirectX9::SafeRelease( m_pd3dtTexture ); // Sets m_pd3dtTexture to NULL by itself.
	}
	/**
	 * Create a DirectX 9 texture and fill it with our texel data.  Mipmaps are generated if necessary.
	 *
	 * \return Returns true if the creation and filling of the texture succeeds.  False indicates a resource error.
	 */
	LSBOOL LSE_CALL CDirectX9StandardTexture::CreateApiTexture() {
		CDirectX9::SafeRelease( m_pd3dtTexture );

		[…] LOTS OF SETUP CODE OMITTED.

		// Reroute DXT textures.
		switch ( piUseMe->GetFormat() ) {
			case LSI_PF_DXT1 : {}
			case LSI_PF_DXT2 : {}
			case LSI_PF_DXT3 : {}
			case LSI_PF_DXT4 : {}
			case LSI_PF_DXT5 : {
				return CreateApiTextureDxt( (*piUseMe), ui32Usage, pPool );
			}
		}


		// Create a blank texture.
		HRESULT hRes = CDirectX9::GetDirectX9Device()->CreateTexture( piUseMe->GetWidth(), piUseMe->GetHeight(),
			m_bMipMaps ? 0UL : 1UL,
			ui32Usage,
			d3dfTypes[piUseMe->GetFormat()], pPool,
			&m_pd3dtTexture, NULL );
		if ( FAILED( hRes ) ) {
			return false;
		}
		

		[…] LOTS OF MIPMAP ETC. CODE OMITTED.

		return true;
	}
 

I'm sorry about the questions, I appreciate very much these replies, but I'd just like to ask the "why"s to understand why something is better than another and such.

If you really want to tackle full and smooth input systems, I have described much of the process here.
 
 

what if the ID is the offset into an array of shared texture pointers?

It isn’t bad on performance, but what is the purpose?
This could be just a matter of personal taste so I won’t really say it is wrong etc., but I personally would not go this way, mostly for reasons a bit higher-level than this.

My preference is not to have a global texture manager in the first place. Textures used on models are almost never the same as those used on buildings and terrain, so using a manager to detect and avoid redundant texture loads between these systems separately makes sense, and improves performance (searches will take place on smaller lists) while offering a bit of flexibility.
Each manager is also tuned specifically for that purpose. The model library’s texture manager has just the basic set of features for preventing the same texture from being loaded into memory twice. The building library uses image packages similar to .WAD files (a collection of all the images for that building in one file) and its texture manager is designed to work with those specifically.
Finally, the terrain library’s texture manager is well equipped for streaming textures etc.


Ultimately that is why I prefer to just have a pointer to the texture itself. No limits that way.
But it isn’t a big deal if you want to use indices in an array.


L. Spiro

KaiserJohan    2317

Out of curiosity: I have my renderer implementation creating meshes/textures, which are then attached onto the scene objects' 'models' and 'materials'. If the renderer should only create the buffers, what kind of module should be responsible for knitting them together into textures (and then materials) and meshes?

I don't see a direct reason for the additional abstraction, hence my question.

L. Spiro    25622

They aren’t responsible for just the buffer parts.

 

The CVertexBuffer class also has an interface for filling in the vertices, normals, etc.  It has to be very dynamic and the implementation is non-trivial, since it must be as flexible as possible for all possible use-cases, so in this topic, for his first engine, I simplified my explanation.

 

For textures though a small distinction should be made.

A .PNG image could be loaded from disk, scaled up to the nearest power-of-2, and saved as a .BMP or your own custom format, all while never having been a texture seen by the GPU.

So in that case don’t confuse a CImage class with a CTexture2D (these are just example class names).

 

There are in fact so many things you can do to images specifically without them ever being used as textures or seen by the GPU that in my engine there is an entire library just for image loading and manipulating (LSImageLib).

My DXT compression tool links to LSImageLib.lib and not to LSGraphicsLib.lib, even though it is clearly meant only for making textures specifically for the GPU.

 

Just because images often get used by the GPU, do not conceptually stumble over the different responsibilities between images and textures.  A CImage has the task of loading and manipulating image data, possibly converting it to DXT etc., but has no idea what a GPU or texture is.  A CTexture2D’s job is specifically to take that CImage’s texel information and send that to the GPU.  It is a bridge between image data and GPU texture data.
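To make that split concrete, here is a minimal sketch of the two responsibilities. The class names follow the example names above, but the members and the stubbed GPU upload are purely illustrative:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// CImage: CPU-side image data and manipulation; knows nothing about the GPU.
class CImage {
public:
    CImage(uint32_t w, uint32_t h) : width(w), height(h), texels(w * h * 4, 0) {}
    uint32_t Width() const { return width; }
    uint32_t Height() const { return height; }
    const uint8_t* Texels() const { return texels.data(); }
private:
    uint32_t width, height;
    std::vector<uint8_t> texels; // RGBA8, ready for manipulation or conversion
};

// CTexture2D: the bridge; takes a CImage's texel data and sends it to the GPU.
// The actual upload is stubbed out here because it is API-specific
// (glGenTextures/glTexImage2D in OpenGL, for example).
class CTexture2D {
public:
    bool CreateFromImage(const CImage& img) {
        apiId = 1; // stand-in for the handle a real graphics API would return
        width = img.Width();
        height = img.Height();
        return apiId != 0;
    }
    uint32_t ApiId() const { return apiId; }
private:
    uint32_t apiId = 0;
    uint32_t width = 0, height = 0;
};
```

The point of the split is that CImage can be used by offline tools (such as the DXT compressor mentioned above) without ever linking a graphics library.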

 

 

That is how textures get knit together and vertex buffers do a little more than just handle the OpenGL ID given to them.

As far as meshes go, a “mesh” doesn’t even necessarily relate to everything that can be rendered and as such has no business being a concept of which the renderer is aware.

GeoClipmap terrain for example is not a mesh, nor is volumetric fog.

In fact, at most meshes are related to only a handful of different types of renderables.

 

Each type of renderable knows how it itself is supposed to be rendered, thus they are above the graphics library/module.  If in a hierarchy of engine modules/libraries the graphics module/library is higher than models, terrain, etc., it is exactly backwards.

 

I am not sure to what abstraction you are referring.

The graphics library exposes primitive types such as vertex buffers and textures that can be created at will by anything.

Above that models, terrain, and volumetric fog create the resources they need from those exposed by the graphics library by themselves.

Above that a system for orchestrating an entire scene’s rendering, including frustum culling, render-target swapping, sorting of objects, post-processing, etc., needs to exist.

 

This higher-level system requires cooperation between other sub-systems.

The only library in your engine that actually knows about every other library and class is the main engine library itself.  Due to its all-encompassing scope (and indeed its very job is to bring all of the smaller sub-systems together and make them work together) it is the only place where a scene manager can exist, since that is necessarily a high-level and broad-scoped set of knowledge and responsibilities.

 

All objects in the scene are held within the scene manager.  It’s the only thing that understands that you have a camera and from it you want to render all the objects, perform post-processing, and end up with a final result that could either be sent to the display or further modified by another sub-system (what happens after it does its own job of rendering to the requested back buffer is none of its concern—it only knows that it needs to orchestrate the rendering process).

 

People are usually not confused about the way logical updates happen, only about how renders happen, but they are exactly the same.  In logical updates, the scene manager uses its all-empowering knowledge to gather generic physics data on each object in the scene and hands that data off to the physics engine, which will do its math, update positions, and perform a few callbacks on objects to handle events, updated positions, etc.

The physics engine doesn’t know what a model or terrain is.  It only knows about primitive types such as AABB’s, vectors, etc.

 

Nothing changes when we move over to rendering.

The scene manager does exactly the same thing.  It culls out-of-view objects with the help of another class (again, a process that only requires 6 planes for a camera frustum, an octree, and AABB’s—nothing specific to any type of scene object) and gathers relevant information on the remaining objects for rendering.

 

The graphics library can assist with rendering by sorting the objects, but that does not mean it knows what a model or terrain is.

It’s exactly like a physics library.  You gather from the models and terrain the small part of information necessary for the graphics library to sort objects in a render queue.  That means an array of texture ID’s, a shader ID, and depth.  It doesn’t pull any strings.  It is very low-level and it works with only abstract data.
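As a sketch of that kind of abstract sorting, a render queue can hold nothing but small keys and order them by the cost of the state changes they imply (the particular fields and their priority here are one common convention, not the only one):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// A render-queue entry holds only the abstract data the sorter needs:
// a shader ID, a texture ID, and depth. No models or terrain in sight.
struct RenderKey {
    uint32_t shaderId;
    uint32_t textureId;
    float depth; // eye-space depth, for front-to-back ordering
};

// Sort by shader first (usually the most expensive state change),
// then by texture, then by depth.
static void SortQueue(std::vector<RenderKey>& q) {
    std::sort(q.begin(), q.end(), [](const RenderKey& a, const RenderKey& b) {
        if (a.shaderId != b.shaderId) return a.shaderId < b.shaderId;
        if (a.textureId != b.textureId) return a.textureId < b.textureId;
        return a.depth < b.depth;
    });
}
```

Models, terrain, and fog each fill in keys like these for themselves; the sorter never needs to know which kind of renderable produced a key.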

 

Why?

How could Bullet Physics be implemented into so many projects and games if it knew what terrain and models were?

The physics library and graphics library are at exactly the same level in the overall engine hierarchy: Below models, terrain, and almost everything else.

 

 

L. Spiro

Edited by L. Spiro

Norman Barrows    7179


 
Norman Barrows, on 15 Jul 2013 - 09:51 AM, said:
what if the ID is the offset into an array of shared texture pointers?

 

 

It isn’t bad on performance, but what is the purpose?
This could be just a matter of personal taste so I won’t really say it is wrong etc., but I personally would not go this way, mostly for reasons a bit higher-level than this.

My preference is not to have a global texture manager in the first place. Textures used on models are almost never the same as those used on buildings and terrain, so giving each of these systems its own manager to detect and avoid redundant texture loads makes sense, and improves performance (searches take place on smaller lists) while offering a bit of flexibility.
Each manager is also tuned specifically for that purpose. The model library’s texture manager has just the basic set of features for preventing the same texture from being loaded into memory twice. The building library uses image packages similar to .WAD files (a collection of all the images for that building in one file) and its texture manager is designed to work with those specifically.
Finally, the terrain library’s texture manager is well equipped for streaming textures etc.


Ultimately that is why I prefer to just have a pointer to the texture itself. No limits that way.
But it isn’t a big deal if you want to use indices in an array.


L. Spiro

 

i tend to take a CSG approach to drawing things ( i used to use POVRAY to create game graphics before DirectX was invented, hence the CSG approach).. i have textures, meshes, models, and animations. models are simply a collection of meshes and textures.  models can be static or animated. everything is interchangeable. you can use any mesh with any texture. any model can use any mesh and any texture. any model can use any animation designed for any model with a similar limb layout.

 

in the previous version of my current project, i still associated all textures with specific models. there i used a texture file name to texture ID (index into an array of D3DX textures) lookup at load time to avoid duplicate loads, similar to what you do for character textures. array indices were assigned in the order textures were loaded, and the model's texture IDs were set at load time. so in the model the texture was specified by filename, but in RAM it was specified as D3DXtexture[n], as in settex(tex[n]).
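that load-time lookup can be sketched in a few lines (names here are invented for illustration; the real version would actually load the file and create the D3DX texture at the new index):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Maps texture file names to array indices so the same file is never loaded twice.
class TextureCache {
public:
    // Returns the index for this file, loading it only on the first request.
    int GetOrLoad(const std::string& fileName) {
        auto it = indexByName.find(fileName);
        if (it != indexByName.end()) return it->second; // already loaded
        int id = loadCount++; // real code: load the file, fill texture[id]
        indexByName[fileName] = id;
        return id;
    }
    int LoadCount() const { return loadCount; }
private:
    std::unordered_map<std::string, int> indexByName;
    int loadCount = 0;
};
```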

 

 

with the current version, i went to fully shared global resources. there's a pool of 300+ meshes and 300+ textures to work with to draw anything you want, or to make a model with. if you need a new mesh or texture, you just add it to the pool. then it will also be available for use for drawing other things or improving existing graphics. needless to say i can get a lot of reuse and mileage out of a given mesh or texture. as an example, the current project CAVEMAN 3.0 is a fps/rpg with a paleolithic setting. yet i do the entire game with just 2 rock meshes! <g>. reuse keeps the total meshes and textures low, so no paging or load screens, just one load of everything at game start, like it should be. now if we could just figure out how to eliminate graphics load times altogether... <g>

 

for a traditional non-CSG approach, splitting it based on data type makes sense. skinned meshes, memory images of static models, and streaming terrain chunks. 3 different types of data, 3 different texture managers (loaders). 

corliss    138

Out of curiosity: I have my renderer implementation creating meshes/textures, which are then attached to the scene objects' 'models' and 'materials'. If the renderer should only create the buffers, what kind of module should be responsible for knitting them together into textures (and then materials) and meshes?

I don't see a direct reason for the additional abstraction, hence my question.

 

Basically, people do this to separate different concerns in the engine to make things more modular by reducing the dependencies between things. In the long run this helps with maintainability of the code base and the ability to swap things in and out based on the different platforms the game is running on. In general the concerns about loading a resource are:

  1. How does an asset get loaded from disk 
  2. How is the asset data from disk transformed into something the engine can use (let's call the end result a resource)
  3. How are resources aggregated together

 

Most people will put #1 and #2 together, and some put all three together. At my job #1 and #2 are separate but #2 and #3 are sometimes put together.

 

For #1 we have an asset loading system that handles the platform specifics of loading things off of disk/disc/memory cards/etc. It also tracks if we already loaded the resource, how many references there are to the asset, etc. It does not know how to take the raw asset data and convert it to something usable by the game.

 

For #2 there is the idea of a translator that gets called by the asset loading system that will get the raw asset data and translates/transforms the raw data into the actual item that was being requested (a texture, a mesh, a scene, etc). The translator can ask the graphics system to create the resource or could pass the data to a factory object that can then build the requested resource. The translators are separate from the asset loading system and are registered with it.
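A rough sketch of that registration scheme, with all names invented for illustration (a real translator would receive typed binary data and produce a real resource rather than a bool):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// The asset loader knows how to fetch raw bytes; translators registered
// per asset type turn those bytes into engine resources.
class AssetLoader {
public:
    using Translator = std::function<bool(const std::string& rawData)>;

    // Translators are registered with the loader, keeping them separate from it.
    void RegisterTranslator(const std::string& type, Translator t) {
        translators[type] = std::move(t);
    }

    // Fetch the raw data (elided here) and hand it to the matching translator.
    bool Load(const std::string& type, const std::string& rawData) {
        auto it = translators.find(type);
        return it != translators.end() && it->second(rawData);
    }
private:
    std::unordered_map<std::string, Translator> translators;
};
```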

 

The factory system we use for scenes, materials, models and meshes handles both #2 and #3. Once they have the raw data given to them by the translator, the factories go about parsing the data and asking the graphics system to create the necessary resources (mainly buffers) or requesting the asset system to load the asset for it (such as textures and shaders), which handles question/concern #2. Then the factory will call setters on the appropriate class (a Mesh, Model, Material, Scene class) to store the resources, handling #3.

 

The translator does not reference a factory directly but instead will create a class of the given type (a Scene for example) and will call a method on that class to load itself using the raw data passed in. The class internally uses the factory to do the work of parsing the data and everything else.

 

Here is an example flow for loading a material.

 

A scene will have in its binary data the material definitions it uses. So the scene factory will get the material chunk and ask the material factory to handle it. The material factory will parse the material data and figure out the names of the textures that the material is using, along with the various parameters that will be set on the shader. To load a texture, the material factory will request the asset system to load it by name and gets a handle to the asset in return. The material can block until the texture is loaded by spinning on the asset handle (a bad idea) or check the handle periodically until it is loaded (which we do).

 

In the meantime the asset loading system is getting the data and calls the texture translator. The texture translator will request the graphics system to create the texture (which returns an interface to a texture so that we can hide differences between different graphics APIs behind it) from the raw binary data and then puts a reference to the texture interface into the asset handle. This is the same handle that the material has a reference to.

 

When the material sees that the asset has been loaded, it gets the texture interface from the handle and adds it to its internal structure that holds references to the textures it needs.
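The polling pattern described above could be sketched like this (the types and fields are hypothetical stand-ins for the real asset handle and texture interface):

```cpp
#include <cassert>
#include <memory>

// A shared asset handle: the loader fills it in when the texture is ready;
// the material polls it once per frame instead of blocking.
struct AssetHandle {
    bool loaded = false;
    int textureApiId = 0; // stand-in for a reference to a texture interface
};

struct Material {
    std::shared_ptr<AssetHandle> pending;
    int textureApiId = 0;

    // Called periodically; returns true once the texture has been picked up.
    bool Poll() {
        if (!pending || !pending->loaded) return false;
        textureApiId = pending->textureApiId; // store the texture reference
        pending.reset();                      // done with the handle
        return true;
    }
};
```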

 

Not all of our assets use factories. Textures and shaders are two examples where the translator will ask the graphics system to create the requested type instead of going through a factory, so the flow for loading these items is slightly different than the example. For example, a render object that wants to use a particular shader will ask the asset loading system to load the shader. The render object will then periodically check if the shader is loaded by asking the asset handle. Once it is loaded and translated, the render object will get the shader from the asset handle and use it. No factories involved in this case.

 

So that is one way of putting things together without having the graphics system put it all together and associate it with the scene objects. I hope this gives you a different perspective on how things can be loaded and associated together in a game engine.

 

Not saying this is the best or right way, just a different way. :)

cr88192    1570

in my case, things are divided up more like this:

 

there is currently a library which manages low-level image loading/saving stuff, and also things like DXTn encoding, ...

this was originally part of the renderer, but ended up being migrated into its own library, mostly because the ability to deal with a lot of this also turned out to be relevant to batch tools, and because the logic for one of my formats (my customized and extended JPEG variant) started getting a bit bulky.

 

another library, namely the "low-level renderer" mostly manages loading textures from disk into OpenGL textures (among other things, like dealing with things like fonts and similar, ...).

 

then there is the "high level renderer", which deals with materials ("shaders") as well as real shaders, 3D models, and world rendering.

internally, they are often called "shaders" partly as the system was originally partly influenced by the one in Quake3 (though more half-assed).

calling them "materials" makes more sense though, and creates less confusion with the use of vertex/fragment shaders, which the system also manages.

 

materials are kind of ugly, as partly they seem to belong more in the low-level renderer than in the high-level renderer.

also, the need to deal with them partly over on the server-end has led to a partially-redundant version of the system existing over in "common", which has also ended up with redundant copies of some of the 3D model loaders (as sometimes this is also relevant to the server-end).

 

this is partly annoying as currently the 2 versions of the material system use different material ID numbers, and it is necessary to call a function to map from one to another prior to binding the active material (lame...).

 

worse still, the material system has some hacking into dealing with parts of the custom-JPEG variants' layer system (in ways which expose format internals), and with making the whole video-texture thing work, which is basically an abstraction failure.

 

 

this would imply a better architecture being:

dedicated (renderer-independent) image library for low-level graphics stuff;

* and probably separating layer management from the specific codecs, and providing a clean API for managing multi-layer images, and also providing for I/P Frame images (it will also need to deal with video, and possibly animation *1). as-is, a lot of this is tied to the specific codecs. note: these are not standard codecs (*2).

 

common, which manages low-level materials and basic 3D model loading, ...

* as well as all the other stuff it does (console, cvars, voxel terrain, ...).

 

low-level renderer, which manages both textures, and probably also rendering materials, while still keeping textures and materials separate.

* sometimes one needs raw textures and not a full material.

 

high-level renderer, which deals with scene rendering, and provides the drawing logic for various kinds of models, ...

probably with some separation between the specific model file-formats and how they are drawn (as-is, there is some fair bit of overlap, better would probably be if a lot of format-specific stuff could be moved out of the renderer, say allowing the renderer to more concern itself with efficiently culling and drawing raw triangle meshes).

 

the uncertainty then is how to best separate game-related rendering concerns (world-structure, static vs skeletal models, ...) from the more general rendering process (major rendering passes, culling, drawing meshes, ...). as-is, all this is a bit of an ugly and tangled mess.

 

 

*1: basically, the whole issue of using real-time CPU-side layer composition, vs using multiple drawing passes on the GPU, vs rendering down the images offline and leaving them mostly as single-layer video frames (note: may still include separate layers for normal/bump/specular/luminance/... maps, which would be rendered down separately). some of this mostly came up during the initial (still early) development of a tool to handle 2D animation (vs the current strategy of doing most of this stuff manually frame-at-a-time in Paint.NET), where generally doing all this offline is currently the only really viable option.

 

*2: currently, I am still mostly using my original expanded format (BTJ / "BGBTech JPEG"), mostly as my considered replacement "BTIC1" wasn't very good (speed was good, but size/quality was pretty bad, and the design was problematic; it was based on additional compression of DXTn). another codec was designed and partly implemented, which I have called "BTIC2B", which is designed mostly to hopefully outperform BTJ regarding decode speed, but is otherwise (vaguely) "similar" to BTJ. some format tweaks were made, but I am not really sure it will actually be significantly faster.

 

more so, there are "bigger fish" as far as the profiler goes. ironically, my sound-mixing currently uses more time in the profiler than currently is used by video decoding (mostly dynamic reverb). as well as most of the time going into the renderer and voxel code, ... (stuff for culling, skeletal animation, checking for block-updates, ...).

Danicco    449

Thanks for all the advice, I've come up with a second and hopefully better layout, though I still have a few questions:

 

For general models:

class VertexBuffer
{
    public:
        unsigned int VBO_ID;
        float *vertexes, *normals, *uvs;
        int* indexes;
        //I was thinking of moving this to a separate class, but a class just for this single variable feels like overdoing it. I don't think I'll be playing with the index order for models after the initial loading
};
class Material
{
    public:
        float diffuse[4], ambient[4], specular[4];
        unsigned int TEXTURE_ID;
};
//This enum will define which shader to use for a specific Mesh
enum MeshType
{
    Normal, Glass, Etc
};
class Mesh
{
    public:
        VertexBuffer* VBO;
        Material* material;
        MeshType meshType;
};
class Model
{
    public:
        Mesh* mesh;
        int meshCount;
        void Load(); //comment below
        void Draw(); //comment below
};

 

I'd like to make Models load and draw themselves, but without giving them direct access to the disk or the reader, and I can't figure out how. I'm thinking about what would be most convenient to use when developing, something like this:

 

void GameState::Initialize()
{
    //For this to work, I'd have to pass a pointer to my resource module to the model somehow
    Model* playerModel = new Model(&resourceModule);
    playerModel->Load("assetName");
 
    //Another way is to use a Model Manager, who has a pointer to the resource and can receive the reading stream
    Model* playerModel = modelManager.NewModel("assetName");    
}

 

The same for Draw(), as suggested, but I'm not sure how it would work:

 

void GameState::Draw()
{
    vector<GameObjects*> onScreenObjects = gameScene.GetOnScreenObjectsSorted();
    for(int i = 0; i < onScreenObjects.size(); i++)
    {
        //For this to work, the objects must have a pointer to my graphic module and call the functions
        onScreenObjects[i]->Draw(&gameEngine.graphicModule);
 
        //Or I call it from the module itself
        gameEngine.graphicModule.Draw(onScreenObjects[i]);
    }
}

 

For the models to be able to draw themselves they have to "see" the graphic module somehow, but I'm not sure if it's okay to send the pointer for each Draw() call. The models would be tightly knotted to the module, which I think would be nasty to upgrade/evolve, as a single change in the graphic module interface could mean lots of rewriting everywhere.

I know since it's my first engine I expect this to happen quite a lot, so I'm trying to make it easier for when it does.

 

For the Engine & Game:

class GameEngine
{
    public:
        virtual Initialize() = 0; //This will call all initialize functions on the modules that are required for running
        virtual Destroy();
        //Functions to load VBOs, activate Textures, and other draw calls
        GraphicModule graphics;
        //Functions to read/write to disk and find the proper stream for an asset by name or ID in my resource format files
        ResourceModule resources;
        //Still working on this
        SoundModule sounds;
        //Will just receive events from the OS and store them for use later, responsible for managing inputs be it keyboard & mouse, touch, etc
        InputModule inputs;
        //other modules to be added as needed
 
        //Responsible for creating window and managing input events depending on the OS, and other OS related stuff such as getting time
        OSModule oSystem;
};
class MyEngine : public GameEngine
{
    //implementation here
}
class Game
{
    public:
        //Using this engine interface I hope to be able to develop my engine further as I add new features and such, and still keep the games I'm currently developing working correctly with it.
        GameEngine* gameEngine;
        //Using LSpiro's post about Time
        GameTime* gameTime;
        //Using LSpiro's blog post about managing game state, this will be the current gameState
        GameState* gameState;
        //This will read from the inputModule only the inputs that are linked to a Game Action, time stamped, and will keep them in a buffer to be read from the Update() method
        GameInput* gameInput;
};

 

I know it isn't very specific, but I just want to get the general layout idea done in a good way so I don't bang my head later thinking "Why did I make it this way?"

 

Again, many thanks for the advice; I feel the idea is much more robust and organized now.

L. Spiro    25622

For general models:

Much better than before, but a few major problems remain.
#1: Don’t use “unsigned int” for your VBO ID. OpenGL/OpenGL ES give you a GLuint, and that is what you use. Never assume the typedef for GLuint will be “unsigned int” no matter how much you “know” it is.

#2: Don’t put indices on a vertex buffer. An index buffer is not the responsibility of the VertexBuffer class. That goes to the IndexBuffer class. Your comment mentions it will just be a single variable. Why won’t it be another IBO? Why use VBO’s but not IBO’s? There are plenty of things to add to an IndexBuffer class, not just a single pointer. Besides, the same index buffer can be used with many vertex buffers.

#3: If you are considering later adding support for DirectX (as you mentioned you might) then the GLuint VBO_ID member does not belong on the VertexBuffer class.
Organize your classes this way:
VertexBufferBase // Holds all data and methods that are the same for all implementations of VertexBuffer classes across all API’s you ever want to support.  This would be the in-memory form of the vertex buffer data before it has been sent to the GPU.
VertexBufferOpenGl : public VertexBufferBase // Holds the GLuint ID from OpenGL and provides implementations for methods related directly to OpenGL.  I showed you an example of this with my COpenGlStandardTexture::CreateApiTexture() method.
VertexBuffer : public VertexBufferOpenGl // All high-level functionality of a vertex buffer common to all vertex buffers on any API.  Its CreateVertexBuffer() method prepares a few things and then calls CreateApiVertexBuffer(), which will be implemented by VertexBufferOpenGl.

In short, if the class name does not have “OpenGl” in it, that class has no relationship whatsoever to OpenGL.
Later when you add DirectX support, you can do this:
VertexBufferDirectX9 : public VertexBufferBase

#ifdef DX9
VertexBuffer : public VertexBufferDirectX9
#elif defined( OGL )
VertexBuffer : public VertexBufferOpenGl
#else
#error "Unrecognized API."
#endif
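A compilable sketch of that three-layer split might look like the following. The class names follow the convention described above, but the bodies are illustrative, and the GL upload is stubbed out since creating a context is platform code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// VertexBufferBase: API-agnostic data and logic; no OpenGL anywhere.
class VertexBufferBase {
public:
    void SetVertices(std::vector<float> v) { vertices = std::move(v); }
    std::size_t VertexFloatCount() const { return vertices.size(); }
protected:
    std::vector<float> vertices; // in-memory form before any GPU upload
};

// VertexBufferOpenGl: the only layer that would touch GLuint and gl* calls.
class VertexBufferOpenGl : public VertexBufferBase {
protected:
    bool CreateApiVertexBuffer() {
        apiId = 1; // real code: glGenBuffers/glBufferData, storing the GLuint
        return apiId != 0;
    }
    unsigned int apiId = 0; // stands in for GLuint in this sketch
};

// VertexBuffer: high-level interface common to all APIs; it prepares a few
// things, then defers the actual creation to the API layer beneath it.
class VertexBuffer : public VertexBufferOpenGl {
public:
    bool CreateVertexBuffer() {
        if (vertices.empty()) return false; // validation shared by all APIs
        return CreateApiVertexBuffer();
    }
};
```

Swapping in a VertexBufferDirectX9 base via the #ifdef above leaves VertexBuffer and everything built on it untouched.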


I'd like to make Models load and draw themselves, but without giving them direct access to the disk or the reader, and I can't figure out how.

As I mentioned in my initial post, use streams.  They are also fairly easy to roll on your own.
Once you are using streams, the model can load its data from memory, files, networks, whatever, all while not knowing the source of the object.
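A minimal stream interface along those lines might look like this (a sketch, not a full implementation; real code would add seeking, writing, and a file-backed variant):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// The model reads from IStream without knowing whether the bytes came
// from a file, memory, the network, or anywhere else.
class IStream {
public:
    virtual ~IStream() = default;
    virtual bool Read(void* dst, std::size_t bytes) = 0;
};

// One concrete source: an in-memory buffer.
class MemoryStream : public IStream {
public:
    explicit MemoryStream(std::vector<unsigned char> d) : data(std::move(d)) {}
    bool Read(void* dst, std::size_t bytes) override {
        if (pos + bytes > data.size()) return false; // not enough data left
        std::memcpy(dst, data.data() + pos, bytes);
        pos += bytes;
        return true;
    }
private:
    std::vector<unsigned char> data;
    std::size_t pos = 0;
};
```

Model::Load() would take an IStream&, so a FileStream can be swapped in later without touching the model code.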

 

I'm thinking about what would be most convenient to use when developing, something like this:
 
 

void GameState::Initialize()
{
    //For this to work, I'd have to pass a pointer to my resource module to the model somehow
    Model* playerModel = new Model(&resourceModule);
    playerModel->Load("assetName");
 
    //Another way is to use a Model Manager, who has a pointer to the resource and can receive the reading stream
    Model* playerModel = modelManager.NewModel("assetName");    
}

I have an answer for this question, but before I can even get to it you need to fix another major problem.
How do you plan to handle a game where there are, for example, a bunch of enemy spaceships, but only 3 different kinds?
There are 50 of type A, 35 of type B, and 3 of type C.
Are you really going to load the model data for type A 50 times?

A Model class is supposed to be shared data, by a higher-level class called ModelInstance.
Model needs ModelManager to prevent the same model file from being loaded more than once, so your “Another way” is the only way.
Then you create all of your ModelInstance objects. ModelManager hands out shared pointers to the Model class that was either created for that instance or already created for another instance previously.
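One possible sketch of that sharing scheme, assuming C++11 shared_ptr/weak_ptr (class and member names are illustrative):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

struct Model {};       // the shared mesh/material data, loaded once per file

struct ModelInstance { // per-object state pointing at the shared data
    std::shared_ptr<Model> model;
};

// Hands out shared pointers; the same file is only ever loaded once
// while at least one instance is still using it.
class ModelManager {
public:
    std::shared_ptr<Model> GetModel(const std::string& name) {
        if (auto existing = models[name].lock()) return existing; // reuse
        auto fresh = std::make_shared<Model>(); // real code: load from a stream
        models[name] = fresh;
        return fresh;
    }
private:
    std::unordered_map<std::string, std::weak_ptr<Model>> models;
};
```

Holding weak_ptr inside the manager means a Model becomes eligible for unloading automatically once the last ModelInstance releases it.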




The same for Draw(), as suggested, but I'm not sure how it would work:
 
 

void GameState::Draw()
{
    vector<GameObjects*> onScreenObjects = gameScene.GetOnScreenObjectsSorted();
    for(int i = 0; i < onScreenObjects.size(); i++)
    {
        //For this to work, the objects must have a pointer to my graphic module and call the functions
        onScreenObjects[i]->Draw(&gameEngine.graphicModule);
 
        //Or I call it from the module itself
        gameEngine.graphicModule.Draw(onScreenObjects[i]);
    }
}

Why would your GameState class make the final calls on the objects?
void GameState::Draw()
{
    gameScene.Draw(&gameEngine.graphicModule);
}
Done and done.
The scene manager can handle everything related to the objects in the scene. That is exactly its job.
Inside GameScene::Draw() it handles culling and orchestrates the entire rendering process.
And be 100% sure that you are not doing this:

vector<GameObjects*> onScreenObjects = gameScene.GetOnScreenObjectsSorted();
No reason to slowly copy a whole vector every frame.
Do this:
vector<GameObjects*> & onScreenObjects = gameScene.GetOnScreenObjectsSorted();
Or this:
vector<GameObjects*> onScreenObjects;
gameScene.GetOnScreenObjectsSorted(onScreenObjects);


For the models to be able to draw themselves they have to "see" the graphic module somehow, but I'm not sure if it's okay to send the pointer for each Draw() call. The models would be tightly knotted to the module, which I think would be nasty to upgrade/evolve, as a single change in the graphic module interface could mean lots of rewriting everywhere.

If the graphics module instead knew what a model was, that would be ridiculously tight knotting.
If you have models and you have a graphics module and you want the models to be able to be drawn to your screen then you have to have some level of knotting no matter what.

Changing and evolving interfaces is a necessary part of the pain you are required to endure to join the ranks of a programmer.


L. Spiro

socky    135


Changing and evolving interfaces is a necessary part of the pain you are required to endure to join the ranks of a programmer.

 

Damn, that's a horribly accurate statement.

