# Various engine design questions


## Recommended Posts

I'm writing my own 3D engine (or actually, learning to do that) and was wondering about a couple of things. I use plugins (.so under Unix and .dll in Win32) and don't want to rewrite the entire OpenGL renderer plugin when I start working on the AI or something and realize that I have worked myself into a corner (as a beginner it is very easy to do just that). For example, when I started to work on my 3D engine I only knew about glVertex3f(...) and such, which means that you have to send the rendering data to the gfx card every frame. Then I learned about GL display lists and vertex buffer objects, which keep the data on the gfx card so that you don't have to send it every frame. So I had to throw away and rewrite a lot of code to support VBOs. I really want to avoid such problems, or at least minimize them. Right now I'm at a crossroads in the design of my renderer. Should I do this:
class Renderer
{
public:
Resource poly_vect_lock; // mutex locking and stuff
vector<Polygon> poly_vect;
}; // end class Renderer


and let the AI, physics, etc. plugins directly mess with the stuff the renderer cares about, or should I go through an abstraction:
class Renderer
{
public:
VFRV request_polygon_write(Polygon *poly, ID poly_id, THREAD_ID thread_id, UINT mutex_timeout) { } // we want to mess with a poly
VFRV done_with_poly(ID poly_id, THREAD_ID thread_id, UINT mutex_timeout) { } // we are done with the poly so the renderer can put it back in the VBA or whatever
}; // end class Renderer

class Renderer_stl : public Renderer
{
public:
...
protected:
Resource poly_vect_lock; // mutex locking and stuff
vector<Polygon> poly_vect;
}; // end class Renderer_stl : public Renderer


I think the last solution is the best, but what about performance? Unfortunately I'm not a low-level zealot, so I don't know that much about assembler and the inner workings of C++. Furthermore, I try to support threading (as you can see); is that a good idea? Should I just ignore it for now and paste it in later, or should I at least try to make room for it so that I can implement it later? For example, the main rendering is linear, but perhaps I want to software-optimize a texture over a number of frames, so I start a new thread, then call
texture_vect_lock->lock_mutex(timeout, thread_id);

and then call
optimize_texture(texture, what_ever);

from the new thread. With VBOs and the ability to lock single textures in my texture vector, this becomes a real headache. And as you can see, I use pointers instead of handles, since if I use handles, functions like
FRV optimize_texture(ID texture_id, ...)

have to mess with the texture containers directly, which means that if I switch from STL to something else I have to rewrite some stuff in them; that's why I use pointers. If you use pointers, the functions don't care if the texture is stored in an STL vector, a simple array, VBOs or whatever. Again, this is easier to mess around with, but what about performance? Anything else I should know? And one last question: what is this ASSERT(...) macro stuff? I have seen it in my book "Game Programming Gems 4" but have no idea what it does (the assert macro, that is :) ). Grateful for any help // BBB

##### Share on other sites
Writing a 3D engine is a very big project for someone just starting to program. I think a good idea would be to start small and add features to it as you go. As you implement new features in the engine, write applications that use those features.

Don't worry about making it perfect the first time. Plan on completely rewriting your engine many many times.

##### Share on other sites
Make a fully functional 2D game first, just to give you an idea of how to program games.

##### Share on other sites
It's important that you design a good interface and stick with that.

If you look at the Bridge pattern you will see that it's possible to separate your interface completely from your implementation. This would be perfect for your renderer. Your other classes only see the interface; they call function X to do something, and if you change the implementation of function X they won't notice and your code still works.

Create a good interface and don't change it if other code depends on it. It is really important to make sure that other code does not depend on the inner workings of a class. Clients should never expect that the polygon list is a std::vector, or that an algorithm will first do X and then Y. If you don't follow that rule, you will have to rewrite a lot of code whenever you change the implementation.
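A minimal sketch of that separation (all names here are hypothetical, not from the original post): client code depends only on an abstract interface, so the concrete backend can be swapped without touching any client.

```cpp
#include <cassert>
#include <string>

// Abstract interface: the only thing the rest of the engine ever sees.
struct IRenderer {
    virtual ~IRenderer() = default;
    virtual std::string backendName() const = 0;
    virtual void drawTriangles(int count) = 0;
};

// One concrete implementation; clients never name this type directly,
// so it can be replaced (e.g. by a DX9 version) without client changes.
class GLRenderer : public IRenderer {
public:
    std::string backendName() const override { return "OpenGL"; }
    void drawTriangles(int count) override { trianglesDrawn_ += count; }
    int trianglesDrawn() const { return trianglesDrawn_; }
private:
    int trianglesDrawn_ = 0;
};

// Client code written purely against the interface.
void renderScene(IRenderer& r) { r.drawTriangles(2); }
```

The point is exactly the one made above: `renderScene` keeps working unchanged no matter which implementation is behind the interface.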

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vccore98/html/_core_the_assert_macro.asp

You will use the ASSERT macro when you are developing your game. You can use it to check things that should always be true if the program is running correctly.

If you compile a release version of your project, ASSERT is compiled out and does nothing in the final exe.
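The standard-library version works the same way: `assert` from `<cassert>` checks a condition in debug builds and compiles to nothing when `NDEBUG` is defined (which release builds typically do). A small sketch (the function is invented for illustration):

```cpp
#include <cassert>
#include <cstddef>

// Returns the largest element. An empty array would be a caller bug,
// so that precondition is documented and checked with assert: in a
// debug build a violation aborts with file/line info; in a release
// build (NDEBUG defined) the check disappears entirely.
int maxElement(const int* data, std::size_t n) {
    assert(data != nullptr && n > 0 && "maxElement requires a non-empty array");
    int best = data[0];
    for (std::size_t i = 1; i < n; ++i)
        if (data[i] > best) best = data[i];
    return best;
}
```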

##### Share on other sites
Go for the 2nd method. By separating your graphics, AI etc. you are decoupling the various sub-systems. This has many advantages:

- implementation changes only impact the changed subsystem (unless you change the interface semantics, in which case the change will knock on to clients). For example, if you wanted to support 2 different gfx libraries you could do this easily behind a standard generic gfx API and swap them in and out without changing any of your other (non-gfx) code.

- you can re-use subsystems independently. For example, you may want to re-use your AI code in a command-line tool; you don't want to have to include loads of 3D gfx classes too.

- testing/bugfixing/maintenance are all far easier as you can treat the subsystems separately.

The performance issue is an interesting one: if you're smart you should only have the overhead of vtable lookups (which you'll have with the 1st option too). You'll obviously have a lot more classes (wrapper classes for your APIs etc.). An important point in engine design, or any software design, is that premature optimisation is bad.

##### Share on other sites
The renderer shouldn't need to maintain geometry itself; it should probably iterate through the main data structure of the game, take a renderable object and draw it. Look up the visitor pattern. Basically what I mean is this:

class tri_mesh;

struct renderer {
    void draw(const tri_mesh& t) const;
};

that is really simplified but you get the idea.

No, I wouldn't support threading unless you design for it right at the start, and maybe not even then.

##### Share on other sites
There isn't really a need for all that stuff you're talking about. The renderer should ONLY draw stuff. Nothing more, nothing less.
All the data you want to visualize with your renderer should be contained separately.
For example, when I want to draw a mesh, I instantiate a mesh object and tell my renderer to draw it.
CStaticMesh gMesh;
gRenderingDevice->DrawStaticMesh(&gMesh);

Since my rendering-class is declared a friend to CStaticMesh, it can reach the vertex/index-data in the mesh and draw it.
Don't stick all sorts of data into the rendering class itself; keep it separate, keep it simple, and keep it modular.
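The friend arrangement described above can be sketched like this (class and member names are hypothetical stand-ins, not the poster's actual code): the mesh keeps its vertex data private, and only the rendering device is granted access to it.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

class RenderingDevice; // forward declaration needed for the friend grant

class StaticMesh {
public:
    explicit StaticMesh(std::vector<float> verts)
        : vertices_(std::move(verts)) {}
private:
    friend class RenderingDevice; // only the renderer may touch raw vertex data
    std::vector<float> vertices_; // x,y,z triples
};

class RenderingDevice {
public:
    // Can read the mesh's private vertex data because it is a friend.
    // Here it just "draws" by counting vertices (3 floats each).
    std::size_t drawStaticMesh(const StaticMesh& m) {
        return m.vertices_.size() / 3;
    }
};
```

This keeps the data out of the renderer while still letting the renderer reach it directly, which is exactly the modularity being argued for.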

Edit: snk_kid and I were posting the same thing at the same time ;)

##### Share on other sites
I'd just like to bring this thread back up.

I use a base class with several virtual functions in it, including Render() and Update(). My renderer simply takes a reference to an instance of a CMesh class and calls Update followed by Render. In Update the CMesh class assigns pointers and prepares everything ready for rendering.

I'm using a scene graph, so basically all my objects are linked together logically. I keep my mesh data, skeletal data, keyframes etc. all separate and use a function interface to interact with the data, although all I really need to render a simple object is a pointer to the start of the vertex array; I just pass this on to my API (I prefer OpenGL) and that's it.

I have no need to store multiple copies of my objects; each object is stored in contiguous (whole) data and I don't have overhead in copying data (simply because there's no need to move data, only pointers to data). There's an acceptable lag in calculating frames from keyframes, but I do a lot of pre-calculating at the start on splines (curves) for movement etc.

I don't use threading, but I'm using OOP so I could (if I wanted) add it in later without rewriting anything.

Simply put, if you keep your design and interfaces clean, unambiguous and definitely modular, you will be able to produce efficient, flexible code that can be built upon without fuss and will keep options open without backing you into a closed corner.

[Edited by - dmatter on August 18, 2004 2:57:24 PM]

##### Share on other sites
For a simple render interface, take a look at Striving for Graphics API Independence. It's all about providing a clean interface to the rendering subsystem. Bear in mind you may not want to get this low down (dealing with vertices and triangles); instead you may wish to take a higher-level approach and ask your renderer to render a mesh/model. Of course, this method takes away a little flexibility, but it can be useful if the way you push triangles to the renderer doesn't change much between projects.

===

[Edited by - Nurgle on August 19, 2004 1:05:58 AM]

##### Share on other sites
Quote:
Original post by BBB
Actually, I do think that my idea is better since I'm going to put the mesh data into the VBA. But of course, if the gfx doesn't support hardware acceleration and I have to use software rendering, then my way would be a waste of memory since I would have two copies of the mesh, one inside the renderer object and the other somewhere else. But I have a really cool idea that wouldn't work if I used your way.

Well, you can keep the mesh data in a VBA; nothing's stopping you from that. Personally I keep it in both system memory and video memory so I can easily access it for collision and such, while still rendering it from vidmem.
So what wouldn't work if you did it my way?

##### Share on other sites
I might add that programming yourself into corners, while naturally avoided, is one of the best ways to learn (albeit frustrating at times!)

##### Share on other sites
Quote:
Original post by tok_junior
...Well, you can keep the mesh data in a VBA, nothing's stopping you from that. Personally I keep it in both system memory and video memory so I can easily access it for collision and such, while still rendering it from vidmem. So what wouldn't work if you did it my way?

Because the VBA is something directly related to the GFX, it seems illogical to put its functionality anywhere other than the renderer. If I have understood your approach right, I would need to make the mesh container, which is outside the renderer, handle the VBA, which means I would have to create an OGL specialization of the Mesh container class etc.

Furthermore, I think I'm starting to get a good grasp on

But I have a performance question: which is faster?

method 1:

class Someclass
{
public:
int variable1;
int variable2;
}; // end class Someclass

Someclass *someclass = new Someclass;

what_ever(someclass->variable1);

or method 2:

class Someclass1
{
public:
int variable1;
}; // end class Someclass1

class Someclass2 : public Someclass1
{
public:
int variable2;
}; // end class Someclass2

Someclass1 *someclass = new Someclass2;

what_ever(someclass->variable1);

Is there a performance penalty with inheritance?

##### Share on other sites
You can keep the VBA in the renderer. Just copy the vertices from the mesh to it on load or first render. Keeping the renderer as a friend to all objects it uses makes this very easy.
But use vertex buffers, nothing else ;)
Inheritance in itself doesn't incur any overhead, but virtual methods do. The cost is so small though (an indirect call instead of an immediate one) that it's not worth caring about. You still shouldn't declare more methods virtual than necessary, of course ;)

##### Share on other sites
I use DLLs for DX9 and OpenGL; both of them have *almost* full functionality of either API (I don't support anything slow since it's a bit pointless, so definitely NO glVertex3f).

I have an IRenderer interface that gets returned from the DLL, and that in turn can create a number of interface types. I use vertex buffer objects under OpenGL and vertex buffers under DirectX. The interfaces are as follows:

//--------------------------------------------------------------------------
/// \brief  A list of all supported primitive types.
///
enum PrimitiveType {
    PRIM_POINT_LIST,
    PRIM_LINE_LIST,
    PRIM_LINE_STRIP,
    PRIM_TRIANGLE_LIST,
    PRIM_TRIANGLE_STRIP,
    PRIM_TRIANGLE_FAN
};

//--------------------------------------------------------------------------
/// \brief  The base type for all resources allocated within the DLL. If you
///         like, it's a really crap form of IUnknown. The only reason was to
///         provide a single Release mechanism within the renderer.
///
struct IBase {
    /// dtor
    virtual ~IBase() {}
};

//--------------------------------------------------------------------------
/// \brief  This class provides a generic interface for a DX9 vertex buffer
///         or a Vertex Buffer Object within OpenGL. The method of using this
///         class may at first seem alien, however it does make sense (honest).
///         First of all, set the size and format for the data you will need
///         by calling the SetSize function.
///         You must then LOCK the buffer. This prevents other areas of the
///         rendering pipeline from playing with the data so that you can
///         upload the relevant data to the graphics card.
///         Having locked the buffer, call the SetVertices, SetNormals
///         functions etc to your heart's content to set all the data you need.
///         Having updated all of the data you need, you MUST unlock the
///         buffer before calling render on it.
///         Yes, this is an arse, but it's the way DirectX works and it
///         actually does make some sense when you see how it all works
///         under the surface ;)
///
struct IVertexBuffer : public IBase {
    /// CALL THIS BEFORE CALLING ANY FUNCTIONS TO SET THE DATA ELEMENTS!!!
    virtual void Lock() = 0;

    /// CALL THIS AFTER SETTING THE DATA YOU NEED!!!
    virtual void UnLock() = 0;

    //----------------------------------------------------------------------
    /// \brief  This function sets the number of elements to be held in the
    ///         vertex buffer.
    /// \param  numElements -   the number of elements needed in the buffer
    /// \param  Desc        -   a description of the vertex buffer data
    /// \note   YOU MUST CALL THIS FUNCTION BEFORE LOCKING, SETTING DATA
    ///         OR RENDERING!!!
    ///
    virtual void SetSize(unsigned int numElements, VertexDescription Desc) = 0;

    //----------------------------------------------------------------------
    //  THESE FUNCTIONS ARE USED TO SET THE DATA ON THE GRAPHICS CARD. YOU
    //  MUST SET THE DATA BETWEEN CALLS TO LOCK() AND UNLOCK(). FAILING TO
    //  DO SO WILL BLOW YOUR APP UP!!!
    //----------------------------------------------------------------------
    virtual void SetVertices(float*, unsigned int Stride = 3) = 0;
    virtual void SetNormals(float*, unsigned int Stride = 3) = 0;
    virtual void SetBiNormals(float*, unsigned int Stride = 3) = 0;
    virtual void SetTangents(float*, unsigned int Stride = 3) = 0;
    virtual void SetColours(float*, unsigned int Stride = 3) = 0;
    virtual void SetUV0(float*, unsigned int Stride = 2) = 0;
    virtual void SetUV1(float*, unsigned int Stride = 2) = 0;
    virtual void SetUV2(float*, unsigned int Stride = 2) = 0;
    virtual void SetUV3(float*, unsigned int Stride = 2) = 0;

    //----------------------------------------------------------------------
    /// \brief  Call this function to render the data.
    /// \param  PrimitiveType   -   one of the primitive types supported.
    ///
    virtual void Render(PrimitiveType) = 0;
};

//--------------------------------------------------------------------------
/// \brief  This class is used so that you can create an indexed vertex
///         buffer. Most of the setup is the same as the normal VertexBuffer,
///         however there are additional mechanisms to handle the setting of
///         an index list. You may store the indices as either 2-byte or
///         4-byte indices, but the specification of that data is always
///         the same.
///
struct IIndexBuffer : public IVertexBuffer {
    //----------------------------------------------------------------------
    /// \brief  Sets the index list for the buffer.
    /// \param  numElements -   the number of indices in the buffer
    /// \param  data        -   the index data
    ///
    virtual void SetIndices(unsigned int numElements, unsigned short* data) = 0;
    virtual void SetIndices(unsigned int numElements, unsigned int*   data) = 0;
};

//--------------------------------------------------------------------------
/// \brief  A dynamic version of the indexed vertex buffer.
///
struct IDynamicIndexBuffer : public IIndexBuffer {
    //----------------------------------------------------------------------
    /// \brief  This method is a slightly optimised way for me to update
    ///         the dynamic vertex buffers - i.e. verts, norms, bi-normals
    ///         and tangents. It will dynamically re-calculate the normals,
    ///         tangents and bi-normals as defined by the vertex format.
    ///         For example, if the vertex format only specifies normals,
    ///         then only they will be re-calculated. The same applies to
    ///         tangents and bi-normals.
    ///
    virtual void Update(float* v) = 0;
};

Other interfaces returned from the renderer include ILight, IMaterial, ITexture, ICgShader & ICubeMap, all of which inherit from the common base IBase since I want a single method within IRenderer to release them.

In case you are wondering about the inheritance hierarchy I use (it does seem a bit impossible looking at the interfaces above), the DLL makes use of templates to specialise the buffers based upon their base class, i.e.:

template<typename T>
struct IGL_VertexBuffer : public T {
};

template<typename T>
struct IGL_IndexBuffer : public IGL_VertexBuffer<T> {
};

template<typename T>
struct IGL_DynamicBuffer : public IGL_IndexBuffer<T> {
};

// then, the concrete classes returned by the OpenGL DLL are....
IGL_VertexBuffer<IVertexBuffer>
IGL_IndexBuffer<IIndexBuffer>
IGL_DynamicBuffer<IDynamicIndexBuffer>

It might not make full sense looking at this; that's just because I'm plain lazy... ;)

Setting other rendering states is done via the IRenderer interface, in a similar way to how IDirect3DDevice9::SetRenderState works (check the DX9 docs). All mesh and scene management is then done within the engine itself, since this represents the least amount of work.

All of this works under both Windows and Linux via DLLs and shared objects. I wouldn't start putting much more into the DLLs since you will end up duplicating code. In fact, multiple DLLs exist under OpenGL in order to specialise support for the following:

Geforce3/4
Geforce FX +

This is because, frankly, OpenGL is a pain in the arse when it comes to writing generic code that works on different vendors' cards. You will basically end up hacking stuff for specific cards, which can be a bit nasty. I would follow the DirectX method for abstracting this stuff, because getting OpenGL to act like DirectX is fairly easy; doing it the other way round is a nightmare.

Probably the only thing that is duplicated is the calculation of the tangents and bi-normals, since you may want to specialise that for DirectX or OpenGL rather than using what would be a slower generic method.

The main caveats are getting the co-ordinate systems the same. It is easier in my opinion to do this under DirectX than in OpenGL, mainly because they tell you how to do it in the docs. (Look for the funcs ending in RH, ie D3DXMatrixPerspectiveFovRH)

The transforms will take a bit of fiddling to get right; I created a push/pop matrix stack within DirectX. And beware of the matrix transform order. You might not want glRotate/glScale/glTranslate equivalents, because it's actually easier passing the DLL whole matrices (a la glMultMatrix, glLoadMatrix).

Hope that helps

rob

I should also mention that when creating a buffer, I specify its format at creation time by filling in this structure and passing it as a creation parameter to IRenderer:

//--------------------------------------------------------------------------
/// \brief  This class is used to wrap the possible vertex formats that you
///         may decide are useful within your game engine/renderer. It kinda
///         loosely wraps the Flexible Vertex Formats (FVF) found within
///         DirectX.
///         The idea is that you can generically specify a vertex format you
///         wish to use, and then pass that format to the IVertexBuffer
///         interface which *should* be able to handle it. In total, this
///         structure takes a grand total of 4 bytes. The flags therefore
///         are designed to provide either boolean on/off states for certain
///         entities (ie, normals will always have 3 values), or they may
///         provide a way of specifying the number of components required
///         for a specific data entity (ie, tex coords may have 2, 3 or 4
///         components).
///         Set the format you need by setting the flags, then pass it
///         to the SetSize function within one of the vertex buffer classes.
///
struct VertexDescription {
    struct {
        /// 0=no verts, 1=3D verts, 2=4D verts
        unsigned int vflag:2;
        /// bool for normal usage in buffer
        unsigned int nflag:1;
        /// bool for bi-normals
        unsigned int bnflag:1;
        /// bool for tangents
        unsigned int tanflag:1;
        /// bool for fog coordinates
        unsigned int fogflag:1;
        /// 0=no colours, 1=RGB colours (floats), 2=RGBA colours (floats)
        unsigned int cflag:2;
        /// 0=no UVs, 1=2D uvs, 2=3D uvs, 3=4D uvs
        unsigned int tflag0:2;
        unsigned int tflag1:2;
        unsigned int tflag2:2;
        unsigned int tflag3:2;
        /// reserved for later usage
        unsigned int padding:16;
    };
    VertexDescription() {
        vflag = 0;
        nflag = 0;
        bnflag = 0;
        tanflag = 0;
        fogflag = 0;
        cflag = 0;
        tflag0 = 0;
        tflag1 = 0;
        tflag2 = 0;
        tflag3 = 0;
        padding = 0;
    }
};
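The 4-byte packing claim is easy to check with a trimmed-down version of the same idea (this is a simplified stand-in, not the struct above: fewer flags, and named rather than anonymous so it stays strictly standard C++). The bit widths total 32, and while bit-field layout is implementation-defined, mainstream compilers pack this into one 32-bit word.

```cpp
#include <cassert>
#include <cstdint>

// All vertex-format flags packed into (nominally) one 32-bit word
// using bit-fields, in the spirit of VertexDescription.
struct VertexFormat {
    std::uint32_t vflag   : 2;  // 0 = no verts, 1 = 3D verts, 2 = 4D verts
    std::uint32_t nflag   : 1;  // normals present?
    std::uint32_t cflag   : 2;  // 0 = none, 1 = RGB, 2 = RGBA
    std::uint32_t tflag0  : 2;  // UV set 0: 0 = none, 1 = 2D, 2 = 3D, 3 = 4D
    std::uint32_t padding : 25; // reserved; brings the total to 32 bits
};
```

Reading a flag back after setting it, and checking `sizeof`, confirms both the behaviour and the packing on a typical compiler.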

[Edited by - RobTheBloke on August 22, 2004 10:32:56 PM]

##### Share on other sites
Ignore threading. It is not trivial to get working correctly. Your design on paper may make sense, but it will probably lead to a much slower app (trust me, I've been doing this for years). Adding a mutex to each poly is definitely a bad idea; stop it now ;)

Seriously, you will have bad enough race conditions to handle with a single-threaded app, let alone a multi-threaded one. You are underestimating the complexity needed for even a basic game (not trying to sound patronising, it's just experience talking).

I know your intention may be to abstract OpenGL into an interface, but the exercise is pointless. I know DX9 is Win32-only; however, if you take tips from its layout, you will be able to create a far better abstraction layer for your renderer than by following the OpenGL API.

Consider that OpenGL has many vendor extensions to deal with; most of, say, the NV extensions basically work in the same way (albeit with more flexibility) as the DirectX 9 spec. There is no glVertex3f equivalent under DX9, nor would you ever want to use it if you want real-time performance.

Seriously, look at DX9 spec, use that to determine which OpenGL extensions you should support (since GL is more flexible, it is easier to make it work like DX, than the other way round)

The DX9 spec also gives you a very good idea as to what is fast, and so it really is worth looking at! First try abstracting the simpler classes like lights, materials etc. You can't go too far wrong with these; you can, however, go very wrong when dealing with geometry.

##### Share on other sites
I'll say it once more: do not thread the app. It WILL be slower to execute than a single-threaded app. You obviously know the theory, but you are trying to jam a square peg into a round hole. A game does not need threading.

If you have a dual-processor machine then you *may* see some benefit from, say, making an AI thread and a renderer thread. There will be NO benefit in trying to multi-thread a renderer; it really is a BAD idea.

##### Share on other sites
Heh, you misunderstand me. I know that addpoly() type functions are not GL-specific, nor DX-specific; however they *do* follow the abstraction route of someone who's looked at OpenGL rather than DirectX. Ideally, you want SetNumPolys(), then SetPoly() if you like.

Batching this stuff like that is what you will eventually end up with since it is the fastest way of doing things.

Heavy batch processing such as BSP generation really should be done offline in a separate data pre-processing stage. Threading this makes no sense since it will still take a few minutes to a few hours to complete. Ideally your final engine should be fairly simple (read: as fast as possible, with any heavy calculation done offline or during a setup phase).

You might possibly want threading for, say, progress/loading screens, but that can be it. You can use multi-threading for networking, but that's probably the only place you might want it, to be honest. It would still work without the multi-threading, however. Worrying about threading is a small issue when dealing with an engine; there are many other problems that will rear up and bite you (which are often unexpected and a fun challenge to solve).

##### Share on other sites
Quote:
Original post by RobTheBloke
Heh, you misunderstand me. I know that addpoly() type functions are not GL-specific, nor DX-specific, however they *do* follow the abstraction route of someone who's looked at OpenGL rather than DirectX. Ideally, you want SetNumPolys(), then SetPoly() if you like.

Ahh, i see :) .

Quote:
 Batching this stuff like that is what you will eventually end up with since it is the fastest way of doing things.

I was thinking more in terms of first calling

with all of my polygons, then

renderer->done_adding_polys(); // start putting stuff into the VBO etc.

Quote:
 Heavy batch processing such as BSP generation really should be done offline in a seperate data pre-processing stage. Threading this makes no sense since it will still take a few minutes to a few hours to complete. Ideally your final engine should be fairly simple (read: as fast as possible, with any heavy calculation done offline or during a setup phase).

Actually, I'm going to make an in-game GUI (I have a primitive version) and I want to, for example, build a BSP or something similar without having to restart the engine. While the BSP-generation thread is working, the user can for example write a game script in a built-in script editor. As you've probably figured out by now, I'm not a fan of stand-alone map editors for FPSes like Unreal's. Why have two copies of the engine wasting both memory and CPU time when you can have one engine and then run the editor and the game as GUI applications? (Of course you have to be able to let the game app bypass the GUI for performance when playing the game, but that's beside the point.)

And since I'm an old Unreal map designer, a C++ programmer and a 100% Automatic Data Processing zealot, if there is something that won't be pure crap in my engine, it's going to be the level editor! I have tried a lot of level editors, and when it comes to productivity UED wipes the floor with all the competitors, but there is still a LOT of room left for improvements.

And the same goes for IDEs; MSVC++ and Dev-C++ are okay, but I have a million cool and (hopefully) easily implemented ideas.

Quote:
 You might possibly want threading for say progress loading screens, but that can be it. You can use multi-threading for networking, but thats probably the only place you might want it to be honest. It would still work without the multi-threading however, but worrying about threading is a small issue when dealing with an engine - there are many other problems that will rear up and bite you (which are often un-expected and a fun challenge to solve)

Yes, threading is hard, but you are never going to get to the moon unless you aim at it; all programmers should know that! I'd rather risk biting off a bigger piece than I can chew than make a Pac-Man clone :) .

Anyway, thanks for the help!
I'll make the interior of the renderer single-threaded, put stuff like BSP processing in a GUI app, and take a look at DX9.

##### Share on other sites
very_fast_poly_pusher? Well, why would I ever use sucky_pusher_that_crashes? Why not just call very_fast_poly_pusher all the time?

The most common uses for threading in games are disc access and networking. Reading from the disc is -SLOW-, and most of the time the CPU just sits there. You can be processing the last thing you loaded while you load the next one. Have you noticed how much faster Doom 3's load times are than some of the later games using the Quake 3 engine? Doom 3 is loading more data, faster. Guess how they did that!
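That load/process overlap is straightforward to sketch with `std::thread` from modern C++ (which did not exist in 2004; back then you would have used pthreads or Win32 threads — the function here is an invented illustration, not anyone's engine code):

```cpp
#include <thread>
#include <vector>

// Simulate "loading" a chunk on a background thread while the main
// thread is free to process the previously loaded chunk.
std::vector<int> loadAndProcess() {
    std::vector<int> loaded;
    std::thread loader([&loaded] {
        // Stand-in for slow disc I/O filling a buffer.
        for (int i = 0; i < 100; ++i) loaded.push_back(i);
    });
    // ...the main thread could be decompressing or uploading the
    // previous chunk here, instead of idling on the disc read...
    loader.join(); // synchronise before touching the shared vector
    return loaded;
}
```

The key discipline is the `join()` (or some other synchronisation) before anyone touches the shared buffer; without it this becomes exactly the kind of race discussed earlier in the thread.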

##### Share on other sites
Quote:
Original post by BBB
So, how should I solve this thread dilemma?

You know when you need to synchronize, so do it at that tier instead of in each class.
If you have other synch objects than mutexes available under Linux, I'd recommend using them. Mutexes are supposed to be global no matter which OS you use, so if for some reason you've got 2 instances of the engine running, the mutexes of one instance will block the other.

Quote:
Original post by BBB
This is turning into a nice engine design discussion :) .

I don't really think it is. It's more of a "point the guy who's gotten into too-deep water in the general direction of the shallower parts" discussion.
I don't think you're ready for something as big as what you're trying to do here; you really should go for something simpler at first. Take the threading: you obviously haven't made a threaded app before, and doing something as big as a game engine as a first try is a VERY BAD IDEA.

##### Share on other sites
There's a pretty major leap from using structures to writing a game engine...
There's a pretty major leap from anything below the level of a game engine to an actual game engine.
Writing something as complex as an engine is probably the most cross-disciplinary piece of work you'll ever do, and to gain an understanding of all these different disciplines, and how to get them to cooperate, you'll need years of experience. It's as simple as that.
Before you do this, make a 2D game. And a COMPLETE game, not just the beginning of one followed by thinking "Ah, I know how to do the rest, so why bother when I'm not learning anything?". You WILL learn.
After that, make 50 more simple games (you can start using simple 3D for them as well), and then you can start thinking about doing a very simple GRAPHICS engine. Not a game engine, a graphics engine.
By then you should have learned that most of the stuff you're trying to do in these posts is the wrong way to go about everything.
