# Enginuity, Part II

By Richard Fine | Published Jun 10 2003 09:09 PM in Game Programming


Memory Management, Error Logging, and Utility Classes;
or, How To Forget To Explode Your Underwear

Hello! Welcome to part 2 of my silly-named series. In this article we're going to cover the layout of the engine as a whole, and then move on to two of the most vital parts of an engine: Memory Management and Error Logging.

If you haven't yet sat down and looked at any of the pre-packaged libraries I mentioned last time (SDL, SDL_Net, FMOD, and OpenGL), don't worry. We won't be going near them this time, with the exception of a cursory glance when I show you the design of the monster we're going to create.

Enginuity: Overview

Yup, that's it. Take a step back and reflect.

The first thing that should jump out at you is the grey bar down the right-hand side. That represents the Application itself - while the rest of the engine will stay the same between projects, the Application part changes - that's how each game is made to be different. The program starts at the App Entry Point (you may know it as main() or WinMain()), and passes control out to the Kernel (almost immediately). Later on, various calls are made back into the application itself, to request the specific bits of data that the engine works with.

Hopefully, you'll also see how it breaks down into some fairly obvious chunks. At the bottom is the double-outlined 'KERNEL' layer - that's our 'foundation layer,' and provides services to the rest of the engine. Next up is the 'task pool,' which contains the tasks to render the screen, update input devices, etc - and also, the 'Appstate' task. The AppState system (or 'Gestalt,' as I like to call it) allows you to switch the 'mode' that your program is in - for example, changing from being in-game to being in a menu would most likely be a change in application state. The AppState system calls back to the AppState factory in the application, allowing you to provide the states for the engine to use.

Next up is the CLIENT/SERVER system. Now, just to get something straight - Client/Server doesn't have to mean *network* Client/Server. Anything providing a 'service' is a Server, and anything using that 'service' is a Client - in truth, the relationship between the Kernel and the rest of the engine is a client/server relationship, in that the Kernel provides 'services' to the engine. In this particular situation, the Server in the C/S system provides a 'common gamestate' - so all clients using that server will effectively be 'in the same game.' Even for a single-player game, this works - it just means that the common gamestate is only being used by one client. The Server will be the point from which AI code is called, and game rules are checked up on - it doesn't make conceptual sense to have the client do this.

Having said all that, the C/S system *does* provide the network support in the engine. If the Client and Server in a game are on the same machine, then the network system picks it up and uses the LocalComm boxes (which move messages from A to B directly in system memory, rather than sending them out to the network drivers, round the loopback, and in again). If they aren't - that is, the player is joining a network game - then it uses the RemoteComm boxes to handle connections to other computers. Both RemoteComm and LocalComm can be used simultaneously (i.e. when hosting *and* playing on one machine).

The C/S boxes also handle common networking tasks - checking that a client is running the same version of the app as the server, enumerating games on the LAN, and so on. They do this through the use of 'messages,' which are simply a numerical code attached to a blob of data. As such, certain message codes are handled by the boxes themselves, but for everything else, there are 'handlers.' And you'll like this: you can pick a number for your message type (assuming it's not already in use), and register a 'handler' for it. That handler gets called whenever a message with that code is received - so you can register IDN_CHAT_MESSAGE to call the handler ReceiveChatMessage() (which, clientside, would display the message or something, while serverside it might filter it for profanity or server commands). Because the message handlers are in the app itself, it's more or less totally extensible.
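As a sketch of how such handler registration might look (all the names here - Message, RegisterHandler, Dispatch, OnChatMessage - are invented for illustration, not the engine's actual API):

```cpp
#include <map>

//A message is a numeric code attached to a blob of data (illustrative layout)
struct Message
{
    unsigned long code;
    const void *data;
    unsigned long size;
};

typedef void (*HandlerFunc)(const Message &msg);

//Registry mapping message codes to their handlers
std::map<unsigned long, HandlerFunc> handlers;

void RegisterHandler(unsigned long code, HandlerFunc fn)
{
    handlers[code] = fn;
}

//Called when a message arrives: looks up and invokes the handler, if any
void Dispatch(const Message &msg)
{
    std::map<unsigned long, HandlerFunc>::iterator it = handlers.find(msg.code);
    if(it != handlers.end())
        it->second(msg);
}

//Example handler: clientside, this might display a chat line
int g_chatCount = 0;
void OnChatMessage(const Message &msg)
{
    (void)msg;        //a real handler would decode msg.data here
    ++g_chatCount;
}
```

Message codes with no registered handler simply fall through, which is what lets the app extend the protocol without touching the C/S boxes themselves.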

Finally, there's the gamestate itself. The gamestate (as you probably already know) is the blob of data that describes everything you could need to know about the game - not just things like player scores or elapsed time, but player positions or world collision data. It comes in two flavours - both of which are usually pretty similar - one for the client, and one for the server. The information that each part of the game needs to work with is often different (albeit, not by much). An example: in a multiplayer game, the Server will need information on all players, while the Client may only need information about its own player. A more important example would be bots and AI - all a bot's AI variables should be stored as part of the game state, but there's no point sending that info to the client (as they'll only need the bot's position, say). In any case, you're responsible for creating each gamestate (through the Gamestate Factory), so it's more or less up to you.

Setting up the Build Environment

You're probably going to need some kind of coherent 'project' to keep all your files together (unless you're some kind of hardcore kernel hacker / masochist). I'm familiar with MSVC6 so that's what I'll use, but most of this is applicable to you if you use something else (such as Borland C++ Builder or dev-c++). And although this is all meant to be cross-platform, I'm going to assume we're building under Win32.

Firstly, creation of the project itself. The project type should be 'Win32 Application,' and should be an 'Empty Project.' MSVC generates the project files in the place you picked, and then we dive into the project settings: SDL demands that we use the multithreaded version of the runtime library (Project -> Settings -> C/C++ -> Category: Code Generation -> Use run-time library: Multithreaded DLL).

You also need to set up the linker to link the engine with the required libraries. Either using #pragma comment(lib, ...) directives, or going through the 'Link' tab in project settings, add sdl.lib, sdlmain.lib, opengl32.lib, glu32.lib, fmodvc.lib, and sdl_net.lib to the list. I assume you've already set up the locations of the libraries to be included in the search path (Tools->Options->Directories), along with the include files?

The last thing I'd recommend is to set up the debugging environment a little; specifically, the working directory. Given that you're going to be working with both Debug and Release builds over time, and each build is going to share the same resources, you want to put those resources in a common place. It's also quite useful to keep assets separate from your code. I create a 'runtime' folder as a subdirectory of the project folder, and build up my 'install' of the project's assets in there; the debugger gets set up to use the 'runtime' folder as the working directory (Project->Settings->Debug->Working Directory).

Now that we've got that out of the way, let's move on to the first of our topics du jour - Memory Management.

Memory Management

Memory is one of your top resources. It's your workspace; it's the floor of your room, where you can put toys while you play with them. And if you don't put the toys away once in a while, you'll run out of space and won't be able to play with any more toys. Until your mother comes up with a black binbag and starts putting everything into it shouting that you.. sorry, childhood flashback. *Ahem*. Moving on.

One of the most disrespectful things you can do to a player's system is leak memory. Your program's leaving mess on the pavement, and you're not scooping up after it. Gradually, the player's system gets slower and slower as the OS pages more and more memory to disk; until eventually they have to stop playing and reboot.

So here's the first thing our memory manager needs to do: track memory usage. It needs to keep an eye on all the blocks of memory we carve out, to ensure that said blocks get released again in the proper fashion.

Now, I've been a little, uh, 'economical with the truth.' We don't need to track *all* our memory. There are two types of memory involved: stack memory, and heap memory. Heap memory we worry about; stack memory we don't. There are also a few things that are fairly impractical to manage - things like Singletons, or other 'large objects' that are so obvious that failing to release them would cause other noticeable bad behaviour (things like the Kernel or Application objects). But most of the objects our engine handles could slip away into obscurity at any time, never to be seen again... We need to keep a list of pointers to our objects.

Here's how we approach it. We create a new class, IMMObject:

class IMMObject
{
private:
static std::list<IMMObject *> liveObjects;
protected:
IMMObject();
virtual ~IMMObject();
public:
};

//a 'static initialiser' is needed in one of the source files
//to give the std::list a definitive presence
std::list<IMMObject *> IMMObject::liveObjects;

IMMObject::IMMObject()
{
liveObjects.push_back(this);
}

IMMObject::~IMMObject()
{
// We add an empty virtual destructor to make sure
// that the destructors in derived classes work properly.
}
Righty ho. To clean up all objects floating around at the end of the program, we loop through the liveObjects list, deleting each pointer - and voila, no memory leaks. So that's ok - except, we can only do that at the end of the program. What if we're in the middle of the game and find ourselves running low on memory? We can't just delete all our objects and start again, but there'll probably be some objects we *could* delete, if we knew about them. So here we get to the next requirement of our memory manager: Garbage Collection. We should be able to remove from memory all the 'orphaned' objects that aren't needed any more.

But wait: how do we tell if an object isn't needed any more? We could have some kind of a flag that the object's user sets when it's done with it, but that's potentially disastrous if objects are being shared around (and they most certainly will be). So, coupled to Garbage Collection is a third requirement: Reference Counting. A system for tracking how many things are using an object, and for marking it as 'collectable' when they all say they're done. So, we try the IMMObject class again:

class IMMObject
{
private:
static std::list<IMMObject *> liveObjects;
long refCount;
protected:
IMMObject();
virtual ~IMMObject();
public:
void AddRef();
void Release();
static void CollectGarbage();
};

IMMObject::IMMObject()
{
liveObjects.push_back(this);

//update the constructor to initialise refCount to zero
refCount=0;
}

void IMMObject::AddRef()
{
++refCount;
}

void IMMObject::Release()
{
--refCount;
}

void IMMObject::CollectGarbage()
{
for(std::list< IMMObject *>::iterator it=liveObjects.begin();
it!=liveObjects.end(); )
{
IMMObject *ptr=(*it);
++it;
if(ptr->refCount<=0)
{
liveObjects.remove(ptr);
delete ptr;
}
}
}
There are two problems with this approach. Firstly, there's the rather icky construction in the CollectGarbage() function - the iterator has to be incremented at a weird time to make sure it doesn't get stepped on by the call to remove(). Also, this method is bad when the ratio of live objects to dead objects is high: when you've got 5000 objects being managed, but only 10 of them need removing, that's still 5000 objects being checked; not good. A better solution is to give the liveObjects list a partner - deadObjects:

class IMMObject
{
private:
static std::list<IMMObject *> liveObjects;
static std::list<IMMObject *> deadObjects;
long refCount;
protected:
IMMObject();
virtual ~IMMObject();
public:
void AddRef();
void Release();
static void CollectGarbage();
};

//the dead list needs a static initialiser too
std::list<IMMObject *> IMMObject::deadObjects;

void IMMObject::Release()
{
--refCount;
if(refCount<=0)
{
liveObjects.remove(this);
deadObjects.push_back(this);
}
}

void IMMObject::CollectGarbage()
{
for(std::list<IMMObject *>::iterator it=deadObjects.begin();
it!=deadObjects.end(); ++it)
{
delete (*it);
}
deadObjects.clear();
}
Much neater, don't you think? (There's still an optimisation there - liveObjects.remove still searches the list for the object (which can take even longer than the initial method, in fact), so the object should store some kind of iterator allowing the list to remove it directly. But I leave it up to you.)
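As a hedged sketch of that left-as-an-exercise optimisation (all names here - ITrackedObject, where, Unlink - are invented for illustration), each object can store an iterator to its own node in the list, making removal O(1):

```cpp
#include <list>

//Sketch of the suggested optimisation: each object remembers an
//iterator to its own node in the live list, so removing it needs
//no search at all.
class ITrackedObject
{
private:
    static std::list<ITrackedObject*> liveObjects;
    std::list<ITrackedObject*>::iterator where;  //our node in the list
protected:
    ITrackedObject()
    {
        liveObjects.push_back(this);
        where = --liveObjects.end();  //iterator to the node just added
    }
public:
    virtual ~ITrackedObject() {}
    void Unlink()
    {
        liveObjects.erase(where);     //O(1): erase by iterator, no search
    }
    static unsigned long LiveCount()
    {
        return (unsigned long)liveObjects.size();
    }
};
std::list<ITrackedObject*> ITrackedObject::liveObjects;
```

std::list iterators stay valid until their own node is erased, which is what makes storing them safe here; the same trick would not work with a std::vector.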

There's two more things we should add to the IMMObject class before we move on. Firstly, a fail-safe function, to be called at the end of the program, that will purge the liveObjects list (and log anything unreleased, because if there's anything still around at that time then something screwy's going on). Secondly, if we're going to track all the objects that are around, we might as well lay a base for tracking memory usage, too; so we add a pure virtual function, for derived classes to implement, that returns the size of the object.

class IMMObject
{
private:
static std::list<IMMObject *> liveObjects;
static std::list<IMMObject *> deadObjects;
long refCount;
protected:
IMMObject();
virtual ~IMMObject();
public:
void AddRef();
void Release();
static void CollectGarbage();
static void CollectRemainingObjects(bool bEmitWarnings=false);
virtual unsigned long size()=0;
};

//define a quick macro to make things easier on derived classes
#define AUTO_SIZE unsigned long size(){return sizeof(*this);}

void IMMObject::CollectRemainingObjects(bool bEmitWarnings)
{
CollectGarbage();
for(std::list<IMMObject*>::iterator it=liveObjects.begin();
it!=liveObjects.end(); it++)
{
IMMObject *o=(*it);
if(bEmitWarnings)
{
//log some kind of error message here
}
delete o;
}
liveObjects.clear();
}
And there we go: a nice little base class to automatically memory-manage our objects. You might want to polish it up a bit - inline the AddRef()/Release() functions, for example - but again, I leave it up to you.

Smart Pointers

Those memory-managed objects are nice, but they're a bit of a pain to use on their own. Having to call AddRef()/Release() every time you deal with one isn't just tedious - it's asking for trouble. What would be good would be if AddRef and Release would just sort of.. call themselves, and that's where Smart Pointers come in.

Smart Pointers are objects that behave (and, indeed, can be treated) just like pointers - except that they do more than plain variable pointers. In our case, we can set up a Smart Pointer class to call AddRef() on an object when it's assigned to it, and Release() when it lets go of it. Then we can just do 'ptr = obj' in code, and the smart pointer takes care of the reference counting for us!

The faint-hearted amongst you: be warned that this next section uses most of C++'s 'advanced' features. If you're not comfortable with the *whole* language - including operator overloading and templates - leave now, and don't come back till you've bought several heavy books on the subject. Whether you beat yourself to death with them or actually read them is up to you. The rest of us: onward!

Now. The first, most obvious thing to say, is that smart pointers will have a pointer to the object they're pointing at. (I said obvious, not easy). That is, a smart pointer object set to point at object 'cheese' will need to have a pointer member variable that actually points to 'cheese' - without it, we wouldn't get very far. The smart pointer class itself acts something like a wrapper for that pointer. But I ask you: what type should the pointer be? Veteran C programmers amongst you might suggest void*, but we can do better than that. The more astute of you may well say that IMMObject* would be suitable - it's better than void*, but they both suffer from the same problem, which is that I can mix my object types. I can take a pointer to an object of 'CMonkey,' and assign it to a pointer that something else expects to point at an object of type 'CTable.' (In short, they lack type safety). The best solution is to use templates, and have each smart pointer custom-built to store a particular type of object pointer. So here's the initial code:

template<class T>
class CMMPointer
{
protected:
T* obj;
public:
//Constructors - basic
CMMPointer()
{
obj=0;
}
//Constructing with a pointer
CMMPointer(T *o)
{
obj=0;
*this=o;
}
//Constructing with another smart pointer (copy constructor)
CMMPointer(const CMMPointer<T> &p)
{
obj=0;
*this=p;
}

//Destructor
~CMMPointer()
{
if(obj)obj->Release();
}

//Assignment operators - assigning a plain pointer
inline CMMPointer<T> &operator =(T *o)
{
//AddRef the incoming object first, so self-assignment is safe
if(o)o->AddRef();
if(obj)obj->Release();
obj=o;
return *this;
}
//Assigning another smart pointer
inline CMMPointer<T> &operator =(const CMMPointer<T> &p)
{
if(p.obj)p.obj->AddRef();
if(obj)obj->Release();
obj=p.obj;
return *this;
}
};
OK. That will now let us create a smart pointer object, and assign an IMMObject* to it (the thing you assign has to derive from something with AddRef()/Release() methods, at least, otherwise it won't compile). Still, it's pretty useless without some other basic pointer operations - like accessing the pointer. D'oh! Never mind. We can also take the opportunity to catch null-pointer accesses - our accessor functions can simply assert that the pointer isn't NULL before returning it. Watch and learn:

template<class T>
class CMMPointer
{
protected:
T* obj;
public:
//Constructors, destructor, and assignments are same as last time

//Access as a reference
inline T& operator *() const
{
assert(obj!=0 && "Tried to * on a NULL smart pointer");
return *obj;
}
//Access as a pointer
inline T* operator ->() const
{
assert(obj!=0 && "Tried to -> on a NULL smart pointer");
return obj;
}
};
Almost there now. We're just missing a few more things - like, for example, a simple way to convert back to normal pointers, or a way to check whether the pointer is NULL without causing an assert() in the process :-)

template<class T>
class CMMPointer
{
protected:
T* obj;
public:
//Constructors, destructor, assignments and accessors same as before

//Conversion - allow the smart pointer to be automatically
//converted to type T*
inline operator T*() const
{
return obj;
}

inline bool isValid() const
{
return (obj!=0);
}
inline bool operator !()
{
return !(obj);
}
inline bool operator ==(const CMMPointer<T> &p) const
{
return (obj==p.obj);
}
inline bool operator ==(const T* o) const
{
return (obj==o);
}
};
That should about do it. I've not included other operators - such as pointer math ops (+/-) - because they don't really make sense with smart pointers. You're meant to be pointing to objects, not arbitrary locations in memory. What we've got there, though, should be enough 95% of the time - it should replace your average normal pointer absolutely transparently, with no need to change things. The only places where it's more complex are when converting one pointer type to another, and the aforementioned pointer math; there are ways of doing both.

When should smart pointers be used? The simple answer is: any time you need to 'retain' a pointer - keep it for any length of time that might include a garbage collection sweep. You don't need to use smart pointers if you're just using the pointer in a single function and then dropping it from the stack. That's particularly worth bearing in mind when deciding on parameters for functions: if SomeFunction(CMMPointer<SomeObject> &p) doesn't keep the pointer anywhere once it's returned, it'd probably be simpler as SomeFunction(SomeObject *p). Accessing the object through a smart pointer incurs a small speed cost each time, and that builds up; bear it in mind in speed-critical parts of the engine.
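To make the 'retained vs. transient' rule concrete, here's a self-contained usage sketch. The IMMObject/CMMPointer definitions are condensed versions of the classes built above (copy constructor and comparison operators omitted for brevity), and CEnemy and Damage() are invented purely for illustration:

```cpp
#include <cassert>
#include <list>

//Condensed IMMObject, matching the class built in this article
class IMMObject
{
private:
    static std::list<IMMObject*> liveObjects;
    static std::list<IMMObject*> deadObjects;
    long refCount;
protected:
    IMMObject() : refCount(0) { liveObjects.push_back(this); }
    virtual ~IMMObject() {}
public:
    void AddRef() { ++refCount; }
    void Release()
    {
        if(--refCount<=0)
        {
            liveObjects.remove(this);
            deadObjects.push_back(this);
        }
    }
    static void CollectGarbage()
    {
        for(std::list<IMMObject*>::iterator it=deadObjects.begin();
            it!=deadObjects.end(); ++it)
            delete (*it);
        deadObjects.clear();
    }
    static unsigned long LiveCount()
    {
        return (unsigned long)liveObjects.size();
    }
    virtual unsigned long size()=0;
};
std::list<IMMObject*> IMMObject::liveObjects;
std::list<IMMObject*> IMMObject::deadObjects;

//Condensed CMMPointer (copy ctor and extra operators omitted)
template<class T>
class CMMPointer
{
protected:
    T *obj;
public:
    CMMPointer() : obj(0) {}
    ~CMMPointer() { if(obj)obj->Release(); }
    CMMPointer<T> &operator =(T *o)
    {
        if(o)o->AddRef();          //AddRef first: safe for self-assignment
        if(obj)obj->Release();
        obj=o;
        return *this;
    }
    T *operator ->() const { assert(obj!=0); return obj; }
    operator T*() const { return obj; }
};

//A hypothetical game object, retained across frames by a smart pointer
class CEnemy : public IMMObject
{
public:
    int health;
    CEnemy() : health(100) {}
    unsigned long size() { return sizeof(*this); }
};

//Transient use within a single call: a plain pointer is fine here
void Damage(CEnemy *e) { e->health -= 10; }
```

When the smart pointer goes out of scope its destructor calls Release(), the object moves onto the dead list, and the next CollectGarbage() sweep deletes it - no manual bookkeeping required.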

Now that we have a smart pointer, it's time to create our very first memory-managed object - another part of the memory-manager system!

Aside from actual game objects, the second most common dynamically allocated objects in our engine will be buffers. Buffers for decompressing resources, for serialising network messages.. you name it, there's a buffer for it. But you can't derive int[1000] from IMMObject - looks like we need another wrapper. Two, in fact - one for fixed-size buffers, and one for dynamic-sized (runtime-sized) buffers. The fixed-size one isn't really necessary, but it's *very* easy to do. These 'buffer wrappers' are the objects I affectionately term 'blobs,' and look like this:

template<class T, int i>
class  CMMBlob : public IMMObject
{
protected:
T buffer[i];
public:
inline T& operator [](int index)
{
assert(index>=0 && index<i && "Bad index on CMMBlob::[]");
return buffer[index];
}
inline operator T*()
{
return buffer;
}
AUTO_SIZE;
};
template<class T>
class CMMDynamicBlob : public IMMObject
{
protected:
unsigned long dataSize;
T *buffer;
public:
inline T& operator [](int index)
{
assert(index>=0 && (unsigned long)index<dataSize && "Bad index on CMMDynamicBlob::[]");
return buffer[index];
}
inline operator T*()
{
return buffer;
}
CMMDynamicBlob(unsigned long size)
{
dataSize=size;
buffer=new T[size];
assert(buffer!=0 &&
"DynamicBlob buffer could not be created - out of memory?");
}
~CMMDynamicBlob()
{
if(buffer)delete[] buffer;
}
unsigned long size()
{
//the buffer itself, plus the wrapper object around it
return dataSize*sizeof(T)+sizeof(*this);
}
inline unsigned long blobSize()
{
return dataSize;
}
};
You see now why I said the fixed-size blob would be easy? That's just how easy simple objects are to handle - you can just group together a few variables in a class and have them memory-managed as a discrete object. The fixed-size blob takes the buffer type and size as template parameters; the dynamic blob takes the buffer type as the template parameter, and the buffer size as the constructor argument (so you can work it out at runtime). Note that the fixed-size blob uses the AUTO_SIZE macro, defined above, while the DynamicBlob reports the *actual* size of the object - including both the allocated buffer and the wrapper. If you just want the size of the buffer itself, you should use the separate blobSize() function - because requiring you to remember to subtract the 8 bytes or so that the wrapper uses is, as usual, asking for problems.

Each class provides access control, for two reasons. Firstly, it's far too easy to severely mess things up by reallocating the buffer pointer or deleting it yourself (not that you'd ever want to do that, but accidents happen), so by forcing access to go through the [] and T* operators, we completely protect buffer itself from the outside world. It's still possible to do something like "delete (sometype*)(*obj);" but it's less likely because the syntax is more unwieldy. The second reason is those asserts - we get the opportunity to check that we're not trying to access memory outside the buffer.
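To show that access-control idea on its own, here's a stand-alone, cut-down version of the fixed-size blob (CBlobDemo is invented for the example; the IMMObject base is omitted so it compiles in isolation):

```cpp
#include <cassert>

//Cut-down fixed-size blob: all reads and writes go through
//operator[], which asserts the index is in range
template<class T, int i>
class CBlobDemo
{
protected:
    T buffer[i];
public:
    inline T &operator [](int index)
    {
        assert(index>=0 && index<i && "Bad index on CBlobDemo::[]");
        return buffer[index];
    }
    inline operator T*() { return buffer; }
};
```

The implicit T* conversion means the blob can still be handed to any function expecting a plain array, so the wrapper costs almost nothing at the call site.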

Woo! That, ladies and gentlemen, is the end of Memory Management. We now have a (relatively) robust system for tracking objects within our engine, and trust me, we'll be using it. It's totally independent of any other library or class (with the exception of the assert() calls), making it ideal for reuse. It doesn't depend on any platform-specific behaviour, such as byte order. Looks like we're meeting the spec, then. On to...

Error Logging

There comes a time in every young engine's life when... things don't quite work as they should. Sockets don't connect, resources can't be found... if we're ever going to have a hope of finding and fixing the problems, we are of course going to need to know about them. So we have a system for recording errors as and when they occur - 'error logging.' In truth, logfiles can and should be used for recording all kinds of events, not just errors - if something's going wrong and not recording what, then recording the things that *are* working will help you find the problem by process of elimination.

We could just create an ofstream object and store it in a global variable - such a method works (up to a point) but is pretty basic. We can do better than that! Our logging system will support multiple logfiles, predefined localisable messages, and parameter replacement.

Multiple logfiles are just useful. In extreme cases this could mean one logfile per subsystem - one each for video errors, sound errors, etcetera; I'm not going to go that far, and will just have three logfiles (CLIENT, SERVER, and APP). CLIENT and SERVER will record - you guessed it - log messages relevant to the Client or Server portions of the program (Client includes all the Video, Sound, and other 'player-end' tasks - Server will tend to be less used, but will record connections being opened and closed, games being started and stopped, and AI/physics messages), while APP will record Kernel-level messages, along with those messages that don't seem to 'fit' into CLIENT and SERVER. You can, of course, record a message to more than one logfile at a time.

"Predefined localisable messages," more often known as a "string table", allow you to store some common strings somewhere, load them in, and then reference them by ID number - rather than hard-coding the message into your app. This saves a large amount of space (because strings aren't being duplicated), and also makes it very easy to translate all the messages into another language.

"Parameter replacement" is the technique demonstrated by the old C string functions like printf() - special 'field codes' in the string get replaced with actual values that get passed in separately. So, strings can be more generalised - rather than needing one string for each error number, you could just use a single string with a field code ("Error code %i") and pass the error number in alongside it. We'll be using the printf() syntax for field codes - mainly because we'll be using a special form of the printf() function to build our messages from the strings and arguments. Furthermore, you can store messages with field codes in the strings table - and that's where things start getting really interesting.

For the time being, though, let's get started. There's just one last thing I need to mention - the method of storing the strings file. It's generally a good idea if the strings can be stored where curious users can't tamper with them - adding or removing field codes where they're not expected could cause some serious problems. Under Linux and MacOS, we've not really got anywhere tamper-proof to keep the strings - we'll have to make do with a read-only file. On Windows, however, we can store the string table as a resource, built into the executable. That's why you'll see two versions of the LoadStrings() function; one reads the strings from the resources area, and the other reads from a file on disk. Conditional compilation is used to compile the right one on the right platform.

Enough waffle. Here's the class:

//first, a few predefined constants
const int LOG_APP=1;
const int LOG_CLIENT=2;
const int LOG_SERVER=4;
//LOG_USER is used to display the log message to the
//user - i.e. in a dialog box
const int LOG_USER=8;

#define MAX_LOG_STRINGS 256

class CLog
{
protected:
CLog();

bool LoadStrings();

std::ofstream appLog;
std::ofstream clientLog;
std::ofstream serverLog;

std::string logStrings[MAX_LOG_STRINGS];

public:
static CLog &Get();

bool Init();

void Write(int target, const char *msg, ...);
void Write(int target, unsigned long msgID, ...);
};
Pretty straightforward. The log is a Singleton, which is why you see a protected constructor and a static Get() function - trying to have more than one CLog object at any time simply wouldn't work, because they'd both be trying to open the same files, and would cause a sharing violation. The files themselves are accessed through std::ofstream objects. The strings table, once loaded by the LoadStrings() function, is stored in the logStrings array - you could use a vector, but I didn't (it doesn't *really* need to be dynamically expandable... when are you going to need more than 256 slots for log messages?). Then, moving down to the public section of the class, there's the aforementioned Get() function; then an Init() function, which is responsible for actually opening the logfiles and calling LoadStrings(). That isn't done by the constructor because, if it failed there, there'd be no way of knowing; with an explicit Init() function, we're reminded to check the return code. It also gives us total control over when the log is started up.

Finally, the Write() functions. Each takes an int that says where the message should be logged to (by ORing together the LOG_* codes defined above); then, one takes a string pointer (for hard-coded strings), and one takes an unsigned long (for a string-table ID number). Then, each takes some kind of nebulous '...'. What's that about, you ask?

'...'s are known as 'ellipses,' and are how we achieve 'variable argument lists.' Given that the message we're using may contain 5 field codes or 50, the number of values we pass with it will vary - and that includes the situation where there are no arguments. So, we use the '...' to denote that any number of arguments can follow the fixed ones (and there must be at least one fixed one, even if it's just a dummy one). We don't actually need to process the list ourselves in the Write() functions - a good thing, because variables being passed in lose all type information and become a pain to work with - we just need to retrieve a pointer to the list, and pass it on to the special printf() function, vsprintf().

Here's the functions themselves:

CLog::CLog()
{
//the constructor doesn't do anything, but we need
//it for our singleton to work correctly
}

CLog &CLog::Get()
{
static CLog log;
return log;
}

bool CLog::Init()
{
appLog.open("applog.txt");
clientLog.open("clntlog.txt");
serverLog.open("srvrlog.txt");
//user errors get logged to the client log, so no fourth file

//load the string table; fail Init() if it can't be loaded
if(!LoadStrings())return false;

return true;
}

void CLog::Write(int target, const char *msg, ...)
{
va_list args; va_start(args,msg);
char szBuf[1024];
//vsnprintf is the bounded version of vsprintf - it can't overrun szBuf
vsnprintf(szBuf,1024,msg,args);
va_end(args);

if(target&LOG_APP)
{
appLog<<szBuf<<"\n";
#ifdef DEBUG
appLog.flush();
#endif
}
if(target&LOG_CLIENT)
{
clientLog<<szBuf<<"\n";
#ifdef DEBUG
clientLog.flush();
#endif
}
if(target&LOG_SERVER)
{
serverLog<<szBuf<<"\n";
#ifdef DEBUG
serverLog.flush();
#endif
}
if(target&LOG_USER)
{
#ifdef WIN32
MessageBox(NULL,szBuf,"Message",MB_OK);
#else
#error User-level logging is not yet implemented for this platform.
#endif
}
}

void CLog::Write(int target, unsigned long msgID, ...)
{
va_list args; va_start(args, msgID);
char szBuf[1024];
vsnprintf(szBuf,1024,logStrings[msgID].c_str(),args);
va_end(args);
Write(target,szBuf);
}

#ifdef WIN32
//under Win32, the strings get read in from a string table resource
bool CLog::LoadStrings()
{
for(unsigned long i=0;i<MAX_LOG_STRINGS;i++)
{
char szBuf[1024];
if(LoadString(GetModuleHandle(NULL),i,szBuf,1024)==0)
break; //returning 0 means no more strings
logStrings[i]=szBuf;
}
return true;
}

#else
//other platforms load the strings in from strings.txt
bool CLog::LoadStrings()
{
std::ifstream in("strings.txt");
if(!in.is_open())return false;

unsigned long index=0;

while(!in.eof() && index<MAX_LOG_STRINGS)
{
char szBuf[1024];
in.getline(szBuf,1024);
logStrings[index++]=szBuf;
}

return true;
}

#endif

Firstly, that va_list business. That's how we work with variable argument lists; we create a pointer of type va_list (it maps to char*, in the headers), and use the va_start() macro to get it pointing to the right place, by passing va_start the argument *immediately before* the list - va_start gets a pointer to that, adds on its size, and stores that in va_list (or something along those lines, at least). We can then pass va_list to vsprintf(), which happily processes it for us.

Next, the flush() calls. In theory, this should make sure that the message you've just written actually gets saved to disk, rather than being stored in a cache somewhere; given that your app is still unstable in debug builds, a crash would cause you to lose log messages in that cache (and those log messages would probably tell you how and why you crashed). I say 'in theory,' because it didn't actually *work* for me; I left it in because it's *meant* to. Tracing the flush() call through the documentation gets to basic_streambuf::sync(), which "endeavours to synchronize the controlled streams with any associated external streams," that is, it tries to get the in-memory object into the same state as the file on disk (by changing the file on disk). I would guess that it's failing there; if you can tell me why, kudos.

I didn't mention the LOG_USER option before. You should have figured out what it does by now, but if you haven't - it displays the message to the user (i.e. in a pop-up message box). You can use this for the really important messages, like 'The game failed to start because it's got a hangover.' However, the implementation of this is platform-dependent, which is why you see the #ifdef WIN32 lines in there. There's also a #error statement - if you try to build this on a platform other than Windows at the moment, it won't let you, because that LOG_USER functionality isn't implemented. All non-Windows users need to do is add an #elif defined MY_PLATFORM_FLAG before the #else line, and they'll be free to implement the message box for their own platform; I've not done any platform other than Windows because I'm not confident I'll get it right.

It's probably worth noting that the string-table-based version of the Write() function uses the plain version to do the actual logging. It's nice like that. It's also an example of passing no arguments in the variable-argument list; the plain Write() function will handle that fine, as you will see.

Miscellaneous Utilities
Hmm, that could probably be a good name for a geek band

There are a few base classes that will be used from time to time across the engine; there are a few more which, while I won't cover here, use the same or similar techniques. Many of these base classes are provided by common libraries such as boost (or, in fact, the STL itself) - but I'm here to educate, and a couple of implemented design patterns never hurt anyone. It may not be totally obvious how some of these will be useful at this stage, but I will be using them all; as such, it will help if you can read, understand, and have them ready to hand in later articles.

Functors

A functor (or, to be more precise, a 'bound functor') is a way of wrapping up a function somewhere in an object. Have you ever tried to work with a pointer to a member function? You can't, for example, dereference a function pointer to CLog::Write() without a pointer to the object it needs to be called on (otherwise, what does the 'this' pointer equal?). With functors, you can wrap up the pointer to the object *and* the pointer to the member function within that object, and use the functor to call it in an easy way. So, we have our functor base class:

class Functor : public IMMObject
{
public:
virtual void operator ()()=0;
};
Firstly, it's memory-managed, meaning that we can throw as many functors as we want around the place and the engine will clean up after us. Secondly, though, it's very obviously an abstract base class for something else. Why? Because the class we're about to derive from it is a templated class:

template<class T>
class ObjFunctor : public Functor
{
protected:
T *obj;
typedef void (T::*funcType)();
funcType func;
public:
AUTO_SIZE;

ObjFunctor(T *o, funcType f)
{ obj=o; func=f; }

void operator ()()
{ (obj->*func)(); }
};
That's more like it. The pointer-to-member-function type is typedef'd for easy use; the AUTO_SIZE macro makes its appearance to satisfy IMMObject::size(). But what's with this base->derived business? Why bother with the base class at all, and not just derive ObjFunctor from IMMObject?

It's like this: when you create an ObjFunctor, you give it the type of the object that it works with - going back to our earlier example, ObjFunctor<CLog> will let you store pointers to member functions of any CLog object. Now, let's say you want to keep a generalised list of ObjFunctor objects - say, a list of functions to call in the event that something happens - you'll find you can't. Your std::list< ObjFunctor* > tells you that ObjFunctor requires a template parameter; but if you give it one, you fix the list as being a list of CLog member function pointers, or whatever type you specify. That's not much good - you want to be able to point to any function, on any class. That's why we use the base class: you create your list as std::list< Functor* >, and can then store any ObjFunctor<T> in it - and the fact that the () operator is virtual means that calls get passed down in the correct way to the ObjFunctor class.

Lastly, as you may have guessed from the existence of the () operator, the syntax for calling a Functor object (not a pointer, mind) is exactly the same as calling a normal function - fMyFunctor() will call whatever function fMyFunctor is bound to. If fMyFunctor is actually a pointer, rather than an object (as will often be the case), (*fMyFunctor)() will do the trick.

There's one more special case. The ObjFunctor doesn't take into account the reference-counting system; the object that it points to could be freed without its knowledge. Thus, we derive a second class from Functor:

template<class T>
class MMObjFunctor : public Functor
{
protected:
CMMPointer<T> obj;
typedef void (T::*funcType)(); //void, to match the base class's operator ()
funcType func;
public:
AUTO_SIZE;

MMObjFunctor(T *o, funcType f)
{ obj=o; func=f; }

void operator ()()
{ (obj->*func)(); }
};
Near-identical, except that the obj pointer is now a CMMPointer. You won't necessarily want to use this all the time, as the other version is slightly faster.

One useful feature I tried to implement (and couldn't, because MSVC doesn't support partial specialization) was the ability to set the functor's return type. If you're using a compiler which supports it, here's a hint: all the Functor classes need to be specialized for the void return type, because return (obj->*func)() doesn't compile if func returns void. So, you'd have Functor<R>, and then ObjFunctor<T, R> (which is where MSVC breaks down, because I need to specialize R as void while leaving T unspecialized), and so on.

So, the Functor allows us to wrap up a function inside an object. It could be useful for, say, callback handlers when a button is pressed, in a UI system.

Singleton

I give credit for this to Scott Bilas, who presented the technique in Game Programming Gems (an excellent series of books, if I may say so).

You should already know what a singleton is (and if you don't, I apologise - my previous mentions of the term may have confused you a little). However, it's a bit tedious (to say the least) to have to implement the same singleton code each time you want a new class to be a singleton. Ideally, there should be a Singleton base class - and with the magic of templates, there is.

template<typename T>
class Singleton
{
static T* ms_singleton;
public:
Singleton()
{
assert(!ms_singleton);
//use a cunning trick to get the singleton pointing to the start of
//the whole, rather than the start of the Singleton part of the object
int offset = (int)(T*)1 - (int)(Singleton <T>*)(T*)1;
ms_singleton = (T*)((int)this + offset);
}
~Singleton()
{
assert(ms_singleton);
ms_singleton=0;
}
static T& GetSingleton()
{
assert(ms_singleton);
return *ms_singleton;
}
static T* GetSingletonPtr()
{
assert(ms_singleton);
return ms_singleton;
}
};

template <typename T> T* Singleton <T>::ms_singleton = 0;
To use the singleton class, we derive a class SomeClass from Singleton<SomeClass>. One thing to note about this type of singleton is that we - not the class itself - are responsible for creating the singleton, and for destroying it again when we're done. We create it simply by calling new SomeClass() somewhere in code - the constructor takes care of the rest, so we don't even need to store the pointer that new returns. To destroy it, we call delete SomeClass::GetSingletonPtr(); that also sets the singleton pointer back to zero, so we can recreate the singleton later if we want.

The Singleton will come in useful for many key engine systems, such as the kernel or settings manager, of which we only ever want one.

Ring buffer
This one I came up with purely on my own.

A ring buffer is, as the name suggests, a 'ring-shaped buffer' - a circular buffer, which has no specific start or end. You create it to store a maximum number of a specific type of object, and then read and write to it like reading or writing to a stream. Obviously, the buffer has to have its block of storage space as a plain, linear block of memory internally; but it stores read/write pointers, which it 'wraps' to the beginning of the block whenever they pass the end. Provided that the read pointer doesn't catch up to the write pointer (which means the buffer has run empty), and the write pointer doesn't catch up to the read pointer (which means the buffer is full), the buffer seems infinitely long. There are no time-consuming memcpy() operations involved; the only limitation is that the size must be determined at compile-time, rather than at runtime (although even that could be fixed, if you needed to).

template<class T, unsigned long bufSize>
class ringbuffer
{
protected:
T buf[bufSize];
long read, write;
public:
ringbuffer()
{
read=0; write=1;
}
bool operator << (T &obj)
{
buf[write]=obj;
++write;
while(write>=(long)bufSize)write-=bufSize;
if(write==read)
{
--write; //make sure this happens next time
while(write<0)write+=bufSize;
return false; //the buffer was full
}
return true;
}
bool operator >> (T &res)
{
++read;
while(read>=(long)bufSize)read-=bufSize;
if(read==write)
{
++write; //make sure this happens next time
//we call and the buf is still empty
while(write>=(long)bufSize)write-=bufSize;
return false; //the buffer was empty
}
res=buf[read];
return true;
}
unsigned long dataSize()
{
unsigned long wc=write;
while(wc<(unsigned long)read)wc+=bufSize;
return wc-read-1;
}
void flood(const T &value)
{
//loop through all indices, flooding them
//this is basically a reset
for(write=0;write<(long)bufSize;++write)
{
buf[write]=value;
}
read=0;
write=1;
}
};
So, reading and writing to the buffer is done through the >> and << operators. There's also a dataSize() function, which will tell you how many elements are available for reading, and a flood() function, which is useful for wiping the buffer (initialising all slots to a particular value).

The ring buffer will prove useful in the client/server systems, eventually. It's like a FIFO (First In, First Out) queue, but it doesn't need to allocate any memory after construction, making it quite a bit faster.

Coda: Gangsta Rappa Game Developer

(Geddit? *sigh*)

That's all for this time. Next time we'll finish the kernel layer, I think, if you're up for it - the Settings system and Task Manager / Kernel Core systems await. But now I'm going to go check my email...