
Game engine architecture

15 replies to this topic

#1 eee   Members   -  Reputation: 118

Posted 16 July 2014 - 03:34 AM

I want to start implementing a game engine, but I don't have a clear understanding of how it should be designed. I've read a lot on the topic, and it seems the best approach is to create standalone, generic components that are then composed into various objects. Would it look something like this?

class CMaterial
{
    // texture, diffuse color...
};

class CMesh
{
    CMaterial mMaterial;
    // vertex buffer
    // index buffer
    // each mesh has exactly 1 material when using Assimp (I think)
};

class CModel
{
    // constant buffer per model
    std::vector<CMesh> mMeshes;
};

Then one would create a generic model for a humanoid:

class CHumanoid
{
CModel mModel;
CPhysics mPhysics;
CInputHandler mInputHandler;
};

Which would include draw, create, and destroy functions.

 

You see, I'm not sure how I could implement the different components that make up a complete entity such as a human: it needs to be rendered, physics needs to affect it, it needs to be able to move, wield weapons, and so on.

 

Then it would need a rendering class that uses DX11 to draw it. Would I make this a static/global class, or what?

 

I'm very lost here as to how one should create a game engine from generic components that handle themselves and are as decoupled as possible.




#2 bwhiting   Members   -  Reputation: 654


Posted 16 July 2014 - 04:35 AM

Firstly, get ready to refactor... a lot.

Secondly, keep things simple to begin with.

Many opt for an entity component system because it is flexible; you could take a look at Unity as an example of one. Each game object has a number of components such as meshes, transforms, renderers, etc.

It might give you some ideas anyway.

 

But start small! Don't try to solve every problem at once or you will struggle!

 

The idea of an engine is generally to run through a set of objects created by the user, update them based on some rules (such as physics), and then draw them to the screen.

Get that working first, then flesh it out as you go, exposing more control to the user along the way. Let them tweak positions, velocities, material textures... simple things like that to start. Get that working and you will have a nice starting point!
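That update-then-draw loop over a set of objects can be sketched in a few lines. Everything here (the `GameObject`/`Engine` names, the velocity rule) is hypothetical and only illustrates the shape of the idea:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical base class for things the user places in the world.
struct GameObject {
    float x = 0.0f, vx = 0.0f;                        // position and velocity
    virtual ~GameObject() = default;
    virtual void Update(float dt) { x += vx * dt; }   // apply a simple "physics" rule
    virtual void Draw() const {}                      // would submit to the renderer
};

class Engine {
public:
    GameObject& Add(std::unique_ptr<GameObject> obj) {
        mObjects.push_back(std::move(obj));
        return *mObjects.back();
    }
    // One frame: update every object by the rules, then draw them all.
    void Tick(float dt) {
        for (auto& o : mObjects) o->Update(dt);
        for (auto& o : mObjects) o->Draw();
    }
private:
    std::vector<std::unique_ptr<GameObject>> mObjects;
};
```

From here, "exposing more control" just means adding more fields and rules to the objects the loop already runs over.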

 

Good luck



#3 JordanBonser   Members   -  Reputation: 329


Posted 16 July 2014 - 05:08 AM

I would suggest starting off by just creating a renderer: take a basic program that uses DX11 and slowly refactor from there.

 

As a start, I would follow something like the tutorials on rastertek.com; you will see how parts of the functionality slowly get refactored into their own classes for use by higher-level systems. After a while following those tutorials you may find that you want to do things slightly differently, maybe put different functionality in different classes, and so on; but as a starter, just do something simple like that to get the gist of how an engine is slowly built up over time.

 

It will be a very tedious task, but if you set yourself really short term goals you can achieve quite a lot :) 



#4 eee   Members   -  Reputation: 118


Posted 16 July 2014 - 05:39 AM

Thanks for the answers. I am quite patient and fine with taking it slow. I want to do things right from the start, so that when the project grows, adding new features will be as clean and simple as possible. Currently, though, I don't even know how I would begin implementing a graphics engine. Do I create a wrapper around DX11 to ensure I don't make any unnecessary calls (like setting a state that is already set)? Where would I begin? That is, I am not sure what the bare minimum things are that I'd need to implement for a graphics engine.


Edited by eee, 16 July 2014 - 05:40 AM.


#5 L. Spiro   Crossbones+   -  Reputation: 12334


Posted 16 July 2014 - 05:57 AM

A single mesh may have multiple materials/shaders on it and thus may require multiple renders, each with a different set of vertices and materials, in order to render the whole thing.
So a mesh does not contain index/vertex buffers directly.  A “mesh subset” has the material, shader, index buffer, and vertex buffer, and a mesh is an array of mesh subsets.
 
 
I won’t talk about physics etc. because that is the distant future, but every mesh should have an AABB and a bounding sphere in order to assist in frustum culling, octree insertion, etc., and these can later be used in physics when the time comes.
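A bounding volume of the kind mentioned above is small to add. This AABB sketch (names are illustrative, not from any particular engine) shows the two operations culling and octree insertion typically need: growing the box around vertices at load time, and testing overlap:

```cpp
#include <algorithm>
#include <cassert>

// Minimal 3D vector and axis-aligned bounding box, purely for illustration.
struct Vec3 { float x, y, z; };

struct AABB {
    Vec3 mn, mx;   // minimum and maximum corners

    // Grow the box to include a point (call per vertex while loading a mesh).
    void Expand(const Vec3& p) {
        mn.x = std::min(mn.x, p.x); mn.y = std::min(mn.y, p.y); mn.z = std::min(mn.z, p.z);
        mx.x = std::max(mx.x, p.x); mx.y = std::max(mx.y, p.y); mx.z = std::max(mx.z, p.z);
    }

    // Separating-axis test: boxes overlap iff they overlap on all three axes.
    bool Overlaps(const AABB& o) const {
        return mn.x <= o.mx.x && mx.x >= o.mn.x &&
               mn.y <= o.mx.y && mx.y >= o.mn.y &&
               mn.z <= o.mx.z && mx.z >= o.mn.z;
    }
};
```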
 
 
A model can only ever be loaded once and then what gets added to the game world is a model instance, which has a pointer to the model.
Every model instance has mesh instances that point to the model’s meshes, which is where it finds its vertex buffers etc. for rendering.  Since each model instance can have individual properties change such as skin color, full-body alpha, lighting on/off, shadow-casting on/off, etc., these properties get copied to every model instance—in the case of rendering states and materials, the source model only acts as a default layout.
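The model/model-instance split described above can be sketched roughly like this; the class names and the single `alpha` property are hypothetical stand-ins for the per-instance properties mentioned (skin color, full-body alpha, etc.):

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch of the load-once model vs. per-object instance split.
struct Mesh { /* vertex buffer, index buffer, subsets... */ };

struct Model {                       // loaded once, shared by all instances
    std::vector<Mesh> meshes;
    float defaultAlpha = 1.0f;       // source model acts as a default layout
};

struct ModelInstance {               // one per object placed in the game world
    const Model* source;             // points at the shared model's geometry
    float alpha;                     // per-instance copy; may diverge later
    explicit ModelInstance(const Model& m)
        : source(&m), alpha(m.defaultAlpha) {}
};
```

Changing one instance's properties leaves every other instance, and the source model, untouched, while the heavy geometry is never duplicated.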
 
 

do I create a wrapper around dx11 to ensure I dont make any unnecessary calls (like setting a state that is already set).

Yes.


Input is a huge subject and I have discussed it in many posts, which you can easily find on your own:
https://www.google.co.jp/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=site%3Agamedev.net%20Spiro%20Input
Of extreme note is: http://www.gamedev.net/topic/650640-is-using-static-variables-for-input-engine-evil/#entry5113267
Characters do not handle inputs.
 
 
L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#6 eee   Members   -  Reputation: 118


Posted 16 July 2014 - 07:11 AM

Since I'm considering using Assimp: if I understood it correctly, a model is a car, and the car is broken up into meshes (each body panel, if they're different; the glass of the windows), where each mesh has a different material (that includes a texture, diffuse color, blending states, etc.). Quote from the official page: "Each aiMesh refers to one material by its index in the array." and "A mesh represents a geometry or model with a single material." If there are better alternatives to Assimp, please point me in the right direction. I have tried implementing an .obj loader, but it was very slow :(.

 

I have never written a wrapper, so would this be the way:

pseudocode:

 

class CDirectX11States
{
    void SetInputLayout(ID3D11InputLayout* inputLayout)
    {
        if (mCurrentInputLayout != inputLayout)
        {
            mCurrentInputLayout = inputLayout;
            mDevCon->IASetInputLayout(inputLayout);
        }
    }

    ID3D11InputLayout* mCurrentInputLayout;
};

class CDirectX11
{
    // device
    // device context
    // swap chain
    // etc.
};

 

Thanks for everyone's patience!



#7 aregee   Members   -  Reputation: 795


Posted 16 July 2014 - 10:28 AM

Firstly get ready to re-factor... a lot.

 

 

I just had to comment on that... It is true. You will prototype something simple. Then you will expand on that. Then you will see a better way to do things or organise your code. You will get ideas and come up with smarter approaches. You will learn new things. Every step of the way you will refactor over and over and over again. Sometimes I feel that refactoring is the biggest part of coding, but slowly you will get there, and things will get better. You will never get away from refactoring, though. :)

 

PS: It is actually a good feeling to have completed a tiny (or a massive) amount of refactoring and to see your product working, hopefully with fewer bugs and better quality overall. Although refactoring in itself doesn't seem to move your code forward, having a well-written code base will increase your efficiency in the long run.

 

I tend to have trouble knowing exactly how to structure my code right away, and that is why I end up doing a lot of refactoring. That is probably why we have the design phase in software development in the first place? Anyway, lots of people use SCRUM nowadays, and in my experience that creates just as much refactoring...


Edited by aregee, 16 July 2014 - 10:29 AM.


#8 L. Spiro   Crossbones+   -  Reputation: 12334


Posted 16 July 2014 - 03:00 PM

"A mesh represents a geometry or model with a single material."

While it is true that below the level of a “mesh” there are no common or official terms, a single mesh can definitely have more than 1 material and render call.
I outlined this on my site, where I referred to the sub-mesh level as “render part” for lack of a better term at the time (since then I have found “sub-mesh” to be better).
[Image: LSModelBreakdown1.png]

In a case as simple as this you could possibly draw the red and green parts in one draw call (though it would be non-trivial in any generic sense), but the fact is that the red and green areas are 2 different materials on the same mesh. The image is meant to illustrate that in a more advanced case you will have no choice but to draw the red, switch materials and shaders, and then draw the green.
 
If Assimp says that the red, green, and blue are 3 different meshes, it’s wrong.
If it says there are 2 meshes but a mesh can have only 1 material, it’s wrong.

I have never used Assimp and don’t know its limitations, but if it says a mesh can have only 1 material then that sounds like a pretty bad limitation.
 
 

would this be the way:

Yes, but that’s the active way.
A better way is the last-minute way.
Basic Graphics Implementation


L. Spiro

Edited by L. Spiro, 16 July 2014 - 03:01 PM.


#9 HappyCoder   Members   -  Reputation: 2286


Posted 16 July 2014 - 03:33 PM

I like how Unity handled mesh and material binding. Meshes don't carry material information; they just specify slots for how many materials the mesh supports. There is a separate class called a MeshRenderer that takes a mesh and an array of materials as input to render the geometry. This way, it is easy to specify an object with the same geometric shape but different materials.
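A rough sketch of that split, with hypothetical names (this is not Unity's actual API, just the shape of the idea):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative only: the mesh declares how many material slots it has;
// a separate renderer object pairs it with concrete materials.
struct Material { int textureId = -1; };

struct Mesh {
    std::size_t materialSlots = 1;   // how many materials this mesh supports
};

struct MeshRenderer {
    const Mesh* mesh = nullptr;
    std::vector<const Material*> materials;   // one entry per slot

    // Renderable only when every slot has a material bound.
    bool CanRender() const {
        return mesh && materials.size() == mesh->materialSlots;
    }
};
```

Two renderers can point at the same `Mesh` with different `materials` arrays, which is exactly the "same shape, different look" case described above.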

EDIT: Thought of another piece of advice

do I create a wrapper around dx11 to ensure I dont make any unnecessary calls


Another reason to make a wrapper is that it allows you to change graphics APIs. Your wrapper can supply a consistent interface to your graphics engine, while different subclasses of the wrapper implement OpenGL, Mantle, DX12, Metal, or even game console graphics APIs.
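One possible shape for such a wrapper, with hypothetical names; each backend would subclass the interface (a do-nothing backend is included so the sketch is self-contained):

```cpp
#include <cassert>
#include <string>

// Hypothetical interface layer: engine code talks to IRenderDevice,
// while concrete subclasses wrap a specific API (D3D11, OpenGL, ...).
class IRenderDevice {
public:
    virtual ~IRenderDevice() = default;
    virtual std::string Name() const = 0;
    virtual void Clear(float r, float g, float b) = 0;
    virtual void Present() = 0;
};

// Stand-in backend that only counts calls; a D3D11Device subclass would
// forward these to the real device context instead.
class NullDevice : public IRenderDevice {
public:
    std::string Name() const override { return "null"; }
    void Clear(float, float, float) override { ++clears; }
    void Present() override { ++presents; }
    int clears = 0, presents = 0;
};

// Engine code depends only on the interface, never on a concrete API.
void RenderFrame(IRenderDevice& dev) {
    dev.Clear(0.0f, 0.0f, 0.0f);
    dev.Present();
}
```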

Edited by HappyCoder, 16 July 2014 - 03:39 PM.


#10 JordanBonser   Members   -  Reputation: 329


Posted 17 July 2014 - 02:04 AM

 

If Assimp says that the red, green, and blue are 3 different meshes, it’s wrong.
If it says there are 2 meshes but a mesh can have only 1 material, it’s wrong.

I have never used Assimp and don’t know its limitations, but if it says a mesh can have only 1 material then that sounds like a pretty bad limitation.
 

 

I have used Assimp for my loader, and I take a similar approach to L. Spiro in that I have Mesh and Submesh classes where only the Submesh holds the rendering data.

 

It is purely the naming differences that make things confusing: an Assimp aiScene is analogous to a Mesh, and an aiMesh is essentially a Submesh. There are a few other things to work around, but as a whole Assimp is great for model loading, and I wouldn't use anything else at the moment.



#11 JETeran   Members   -  Reputation: 265


Posted 17 July 2014 - 02:14 AM

I'm sure you can find useful help/tips in Jason Gregory's Game Engine Architecture book. It's a good way to begin and even to master the subject.


Success with courage.

José Eduardo Terán

Project Manager & Game Programming Professor. DSK International Campus

Indie Game Developer


#12 eee   Members   -  Reputation: 118


Posted 17 July 2014 - 05:49 AM

 

Yes, but that’s the active way.
A better way is the last-minute way.
Basic Graphics Implementation


L. Spiro

 

I read your site and liked it; however, I couldn't fully understand how to implement what you call the last-minute way. For example, I don't understand what CStd::Min( LSG_MAX_TEXTURE_UNITS, CFndBase::m_mMetrics.ui32MaxTexSlot ); does.

 

Am I right in understanding that you have two arrays: an array of last active states (for example m_psrvLastActiveTextures[i]) and the states that are active at the time of the render call (for example m_psrvActiveTextures[i])?

And then you set the last active to the current one if they are not already equal?

 

I am still fuzzy on the details, though. It seems similar to the active way because you are swapping a current state and a last one? What's the difference?

 

Thanks!



#13 L. Spiro   Crossbones+   -  Reputation: 12334


Posted 17 July 2014 - 08:22 AM

For example, I don't understand what CStd::Min( LSG_MAX_TEXTURE_UNITS, CFndBase::m_mMetrics.ui32MaxTexSlot ); does.

The API may support 128 texture slots (CFndBase::m_mMetrics.ui32MaxTexSlot), but the engine only supports 32 (LSG_MAX_TEXTURE_UNITS).

 

Am I right in understanding you have two arrays

Not directly.

I have a structure which encapsulates all state information.
/** The render-state copy that allows us to know which states have changed since the last system update. */
struct LSG_RENDER_STATE {
	/** Shader resources. */
	LSG_SHADER_RESOURCE *				psrShaderResources[LSG_MAX_TEXTURE_UNITS];

	/** Sampler states. */
	LSG_SAMPLER_STATE *				pssSamplers[LSG_MAX_SHADER_SAMPLERS];

	/** Render targets. */
	LSG_RENDER_TARGET *				prtRenderTargets[LSG_MAX_RENDER_TARGET_COUNT];

	/** Input layouts. */
	const LSG_INPUT_LAYOUT *			pilInputLayout;

	/** Vertex buffers. */
	LSG_VERTEX_BUFFER				pvbVertexBuffers[LSG_MAX_VERTEX_STREAMS];

	/** Depth/stencil buffer. */
	LSG_DEPTH_STENCIL *				pdsDepthStencil;

	/** Rasterizer state. */
	LSG_RASTERIZER_STATE *				prsRasterState;

	/** Blend state. */
	LSG_BLEND_STATE *				pbsBlendState;

	/** Depth/stencil state. */
	LSG_DEPTH_STENCIL_STATE *			pdssDepthStencilState;

	/** Viewports. */
	LSG_VIEWPORT					vViewports[LSG_MAX_VIEWPORTS_SCISSORS];

	/** Scissors. */
	LSG_RECT					rScissors[LSG_MAX_VIEWPORTS_SCISSORS];

	/** Blend factors. */
	LSGREAL						fBlendFactors[4];

	/** Clear color. */
	LSGREAL						fClearColor[4];

	/** Clear depth. */
	LSGREAL						fClearDepth;

	/** Stencil reference value. */
	LSUINT32					ui32StencilRef;

	/** 32-bit sample coverage for the output-merger state. */
	LSUINT32					ui32SampleMask;

	/** Total active vertex buffers. */
	LSUINT32					ui32TotalVertexBuffers;

	/** Stencil clear value. */
	LSUINT8						ui8ClearStencil;

	/** Total viewports/scissors. */
	LSUINT8						ui8TotalViewportsAndScissors;

	/** Total render targets. */
	LSUINT8						ui8TotalRenderTargets;

};
Since my engine is cross-platform, LSG_SHADER_RESOURCE, LSG_SAMPLER_STATE, etc., are all typedefs.
LSGDirectX11.h:
	/** Sampler description. */
	typedef D3D11_SAMPLER_DESC			LSG_SAMPLER_DESC;

	/** Rasterizer description. */
	typedef D3D11_RASTERIZER_DESC			LSG_RASTERIZER_DESC;

	/** Render-target blend description. */
	typedef D3D11_RENDER_TARGET_BLEND_DESC		LSG_RENDER_TARGET_BLEND_DESC;

	/** Blend description. */
	typedef D3D11_BLEND_DESC			LSG_BLEND_DESC;

	/** Depth/stencil operation description. */
	typedef D3D11_DEPTH_STENCILOP_DESC		LSG_DEPTH_STENCILOP_DESC;

	/** Depth/stencil description. */
	typedef D3D11_DEPTH_STENCIL_DESC		LSG_DEPTH_STENCIL_DESC;

	/** The 1D texture type. */
	typedef ID3D11Texture1D				LSG_TEXTURE_1D;

	/** The 2D texture type. */
	typedef ID3D11Texture2D				LSG_TEXTURE_2D;

	/** The 3D texture type. */
	typedef ID3D11Texture3D				LSG_TEXTURE_3D;

	/** The sampler type. */
	typedef ID3D11SamplerState			LSG_SAMPLER_STATE;

	/** The shader resource view type. */
	typedef ID3D11ShaderResourceView		LSG_SHADER_RESOURCE;

	/** The render target type. */
	typedef ID3D11RenderTargetView			LSG_RENDER_TARGET;

	/** The depth/stencil type. */
	typedef ID3D11DepthStencilView			LSG_DEPTH_STENCIL;

	/** The rasterizer state type. */
	typedef ID3D11RasterizerState			LSG_RASTERIZER_STATE;

	/** The blend state type. */
	typedef ID3D11BlendState			LSG_BLEND_STATE;

	/** The depth/stencil state. */
	typedef ID3D11DepthStencilState			LSG_DEPTH_STENCIL_STATE;

	/** The viewport type. */
	typedef D3D11_VIEWPORT				LSG_VIEWPORT;

	/** The scissor rectangle type. */
	typedef D3D11_RECT				LSG_RECT;

	/** The input layout type. */
	typedef ID3D11InputLayout			LSG_INPUT_LAYOUT;

	/** The vertex buffer type. */
	typedef struct LSG_VERTEX_BUFFER {
		const CVertexBuffer *			pvbVertBuffer;			/**< The vertex buffer pointer. */
		LSUINT32				ui32Offset;			/**< Offset of the vertex buffer. */
	} * LPLSG_VERTEX_BUFFER, * const LPCLSG_VERTEX_BUFFER;


The graphics device has 2 LSG_RENDER_STATE objects.
		/** The last render state. */
		LSG_RENDER_STATE CDirectX11::m_rsLastRenderState;

		/** The current render state. */
		LSG_RENDER_STATE CDirectX11::m_rsCurRenderState;
When you want to set a viewport (for example):
	LSE_INLINE LSVOID LSE_CALL CDirectX11::SetViewport( LSREAL _fX, LSREAL _fY,
		LSREAL _fWidth, LSREAL _fHeight, LSUINT32 _ui32Target ) {
		assert( _ui32Target < LSG_MAX_VIEWPORTS_SCISSORS );

		assert( static_cast<FLOAT>(_fX) >= static_cast<FLOAT>(LSG_VIEWPORT_MIN) && static_cast<FLOAT>(_fX) <= static_cast<FLOAT>(LSG_VIEWPORT_MAX) );
		assert( static_cast<FLOAT>(_fY) >= static_cast<FLOAT>(LSG_VIEWPORT_MIN) && static_cast<FLOAT>(_fY) <= static_cast<FLOAT>(LSG_VIEWPORT_MAX) );

		assert( static_cast<FLOAT>(_fWidth) >= 0.0f && static_cast<FLOAT>(_fX + _fWidth) <= static_cast<FLOAT>(LSG_VIEWPORT_MAX) );
		assert( static_cast<FLOAT>(_fHeight) >= 0.0f && static_cast<FLOAT>(_fY + _fHeight) <= static_cast<FLOAT>(LSG_VIEWPORT_MAX) );

		m_rsCurRenderState.vViewports[_ui32Target].TopLeftX = static_cast<FLOAT>(_fX);
		m_rsCurRenderState.vViewports[_ui32Target].TopLeftY = static_cast<FLOAT>(_fY);
		m_rsCurRenderState.vViewports[_ui32Target].Width = static_cast<FLOAT>(_fWidth);
		m_rsCurRenderState.vViewports[_ui32Target].Height = static_cast<FLOAT>(_fHeight);
	}
m_rsCurRenderState is the current state and you can modify each part of it all you want. Redundant or not, it just updates the local copy.


At some point you are going to want to make a draw call. This is when you check for redundancies.

	LSVOID LSE_CALL CDirectX11::ApplyRenderStates( LSBOOL _bForce ) {
		const LSG_RENDER_STATE & rsCurState = m_rsCurRenderState;
		LSG_RENDER_STATE & rsLastState = m_rsLastRenderState;

#define LSG_CHANGED( MEMBER )	((rsCurState.MEMBER != rsLastState.MEMBER) || _bForce)
#define LSG_UPDATE( MEMBER )	rsLastState.MEMBER = rsCurState.MEMBER
…

		// Rasterizer state.
		if ( LSG_CHANGED( prsRasterState ) ) {
			GetDirectX11Context()->RSSetState( rsCurState.prsRasterState );
			LSG_UPDATE( prsRasterState );
		}

		// Blend state.
		if ( LSG_CHANGED( pbsBlendState ) ||
			LSG_CHANGED( fBlendFactors[0] ) || LSG_CHANGED( fBlendFactors[1] ) || LSG_CHANGED( fBlendFactors[2] ) || LSG_CHANGED( fBlendFactors[3] ) ||
			LSG_CHANGED( ui32SampleMask ) ) {
			GetDirectX11Context()->OMSetBlendState( rsCurState.pbsBlendState,
				rsCurState.fBlendFactors,
				rsCurState.ui32SampleMask );
			LSG_UPDATE( pbsBlendState );
			LSG_UPDATE( fBlendFactors[0] );
			LSG_UPDATE( fBlendFactors[1] );
			LSG_UPDATE( fBlendFactors[2] );
			LSG_UPDATE( fBlendFactors[3] );
			LSG_UPDATE( ui32SampleMask );
		}

		// Depth/stencil state.
		if ( LSG_CHANGED( pdssDepthStencilState ) || LSG_CHANGED( ui32StencilRef ) ) {
			GetDirectX11Context()->OMSetDepthStencilState( rsCurState.pdssDepthStencilState,
				rsCurState.ui32StencilRef );
			LSG_UPDATE( pdssDepthStencilState );
			LSG_UPDATE( ui32StencilRef );
		}

…

		// Viewports.
		LSBOOL bSetViewports = LSG_CHANGED( ui8TotalViewportsAndScissors );
		LSBOOL bSetScissors = bSetViewports;
		if ( !bSetViewports ) {
			// Check for changes in each viewport.
			for ( LSUINT32 I = m_rsCurRenderState.ui8TotalViewportsAndScissors; I--; ) {
				if ( LSG_CHANGED( vViewports[I].TopLeftX ) || LSG_CHANGED( vViewports[I].TopLeftY ) ||
					LSG_CHANGED( vViewports[I].Width ) || LSG_CHANGED( vViewports[I].Height ) ||
					LSG_CHANGED( vViewports[I].MinDepth ) || LSG_CHANGED( vViewports[I].MaxDepth ) ) {
						bSetViewports = true;
						break;
				}
			}
		}
		if ( bSetViewports ) {
			GetDirectX11Context()->RSSetViewports( rsCurState.ui8TotalViewportsAndScissors,
				rsCurState.vViewports );
			// Update our copies of what we just sent to the hardware.
			for ( LSUINT32 I = m_rsCurRenderState.ui8TotalViewportsAndScissors; I--; ) {
				LSG_UPDATE( vViewports[I] );
			}
		}

…

#undef LSG_UPDATE
#undef LSG_CHANGED
}
	LSVOID LSE_CALL CGfx::Render( void * _pibIndexBuffer,
		LSUINT32 _ui32StartVertex,
		LSUINT32 _ui32PrimitiveCount ) {
…

		// Apply render states.
		ApplyRenderStates();

		// Render.
		psShader->PreRender();

…

		if ( _pibIndexBuffer ) {
			VertexBuffers()[0].pvbVertBuffer->RenderApi( 0, 0 );		// Makes it apply certain necessary settings before rendering.
			pibIndexBuffer->RenderApi( _ui32StartVertex, _ui32PrimitiveCount );
		}
		else {
			VertexBuffers()[0].pvbVertBuffer->RenderApi( _ui32StartVertex, _ui32PrimitiveCount );
		}

…
}

Redundancies are checked at the last minute, just before rendering. Hence it is last-minute.


L. Spiro

#14 eee   Members   -  Reputation: 118


Posted 18 July 2014 - 03:59 AM

In order to create a model system, I need to choose a file format to primarily support. I couldn't find much information on which ones are used in the game industry, but it would seem .fbx is a good choice? Would it be possible to export models as compiled vertex and index buffers and then load them directly into the engine's buffers? That would be very fast and efficient, and it would make loading quite simple on the engine side (albeit not on the exporter side; FBX is quite underdocumented and I don't know where to start).

 

What should my hierarchy look like?

 

CModel
{
    std::vector<CMesh*> mMeshes;
    XMFLOAT3 mPos;
    // what else?
};

CMesh
{
    std::vector<CSubMesh*> mSubMeshes;
    // what else?
};

CSubMesh
{
    CMaterial* mMaterial;
    // vertex buffer
    // index buffer
    // pixel shader
    // vertex shader
    // what else?
};

 

Thank you!



#15 TiagoCosta   Crossbones+   -  Reputation: 1880


Posted 18 July 2014 - 09:09 AM

I couldn't find too much information on which ones are used in the game industry, but it would seem .fbx is a good choice?

 

Most engines use custom formats to store and load meshes at runtime. FBX is an interchange file format, not one optimized for runtime loading. Try to create a (binary) format specific to the needs of your engine that makes loading ultra fast, something like <N><array of N vertices><...>, so when you load the model you can simply read the number of vertices and memcpy N*sizeof(Vertex) directly to the correct place (e.g. a graphics API vertex buffer).
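A minimal sketch of that <N><array of N vertices> layout, serializing to a byte buffer rather than a file for brevity (the Vertex layout and function names are made up for illustration):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative fixed-layout vertex; trivially copyable, so memcpy is safe.
struct Vertex { float x, y, z; };

// Writes: <uint32 N><N * Vertex>.
std::vector<std::uint8_t> SaveMesh(const std::vector<Vertex>& verts) {
    std::uint32_t n = static_cast<std::uint32_t>(verts.size());
    std::vector<std::uint8_t> out(sizeof n + n * sizeof(Vertex));
    std::memcpy(out.data(), &n, sizeof n);
    std::memcpy(out.data() + sizeof n, verts.data(), n * sizeof(Vertex));
    return out;
}

// Reads the count, then copies all vertices in one memcpy;
// no per-vertex text parsing like an .obj loader would need.
std::vector<Vertex> LoadMesh(const std::vector<std::uint8_t>& in) {
    std::uint32_t n = 0;
    std::memcpy(&n, in.data(), sizeof n);
    std::vector<Vertex> verts(n);
    std::memcpy(verts.data(), in.data() + sizeof n, n * sizeof(Vertex));
    return verts;
}
```

In a real engine the load-side copy would target a graphics-API vertex buffer instead of a std::vector, but the principle is the same.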

 

My engine uses a hierarchy like this:

Actor
{
    Model
    Position
    Actor specific constants
}

Model
{
    Mesh
    vector<Subset>
}

Mesh
{
    Vertex Buffer
    Index Buffer
}

Subset
{
    Start index
    Number of indices
    Material
}

Material
{
    Textures
    Shader
    Other data
}

This way you can have multiple Actors at different positions using the same model, and multiple models using the same mesh but with different materials, etc.


Edited by TiagoCosta, 18 July 2014 - 09:11 AM.

Tiago Costa
Aqua Engine - my DirectX 11 game "engine" - In development

#16 EarthBanana   Members   -  Reputation: 810


Posted 18 July 2014 - 11:48 AM

Assimp does not limit you to 1 material per mesh. Very roughly speaking, what Assimp calls a Scene is what L. Spiro is referring to as a mesh, and what Assimp calls a Mesh is what L. Spiro is calling a subset.

 

Assimp loads the "Scene" as an array of "Meshes", each of which has an index into the array of Materials for the scene.

 

This translates, in my engine, to a Mesh that contains an array of Submeshes, each with an integer handle to its material. Each Submesh can refer to only 1 material. Each Material then contains an integer handle to the Shader responsible for drawing it.

 

When using components, don't call your resources components. Anything that you have to load can be considered a resource: textures, meshes, animations, audio clips, shaders, etc.

You only want to load these things once, and pass around handles or pointers to them within the engine.

 

I.e., if you want to have a Render component, it should contain a pointer or handle to a mesh rather than the mesh itself.
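One possible sketch of the load-once, hand-out-handles idea, with invented names; a real loader would parse the file instead of just recording its path:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative resource cache: loading the same file twice returns the
// same integer handle, so components store handles, not mesh data.
using MeshHandle = std::size_t;

struct MeshData { std::string path; /* buffers, submeshes... */ };

class MeshCache {
public:
    MeshHandle Load(const std::string& path) {
        auto it = mByPath.find(path);
        if (it != mByPath.end()) return it->second;   // already loaded once
        mMeshes.push_back(MeshData{path});            // real code would parse here
        MeshHandle h = mMeshes.size() - 1;
        mByPath.emplace(path, h);
        return h;
    }
    const MeshData& Get(MeshHandle h) const { return mMeshes[h]; }
    std::size_t Count() const { return mMeshes.size(); }
private:
    std::vector<MeshData> mMeshes;
    std::unordered_map<std::string, MeshHandle> mByPath;
};
```

A Render component then holds a `MeshHandle` and looks the data up at draw time, so many components share one loaded mesh.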






