So, I've been working on the server for my hobby game. So far I've done the simple thing of running it all in a single thread, doing the "simplest thing that will work". Sadly, it no longer works: one thread is not enough to handle all the sending and receiving of data plus the simulation at once. My initial thought was to split sending and receiving into their own threads, run the simulation in a third thread, and communicate between the three using message queues. This works decently well, but there will still come a point where one thread for the simulation is not enough.
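The three-thread split above can be sketched with blocking queues; this is a minimal illustration (in Java rather than the server's C#, with a hypothetical Message type), not the actual server code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineSketch {
    // Hypothetical message type; the real server's decoded packets would go here.
    record Message(int playerId, String payload) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Message> inbound = new ArrayBlockingQueue<>(1024);
        BlockingQueue<Message> outbound = new ArrayBlockingQueue<>(1024);

        // Receive thread: pushes decoded packets onto the inbound queue.
        Thread receiver = new Thread(() -> {
            try {
                inbound.put(new Message(1, "move north"));
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Simulation thread: drains inbound, steps the world, queues replies.
        Thread simulation = new Thread(() -> {
            try {
                Message m = inbound.take();
                outbound.put(new Message(m.playerId(), "ack: " + m.payload()));
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        receiver.start();
        simulation.start();
        receiver.join();
        simulation.join();

        // A send thread would drain 'outbound'; here we just inspect it.
        System.out.println(outbound.take().payload());
    }
}
```

The queues decouple the threads: the simulation never touches a socket, and the network threads never touch world state.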
The whole task of truly multi-threading any application is not something to undertake lightly. I have written several big systems running on N-core machines, where N is pretty big (32+). But none of those problem domains were as complex as a real-time simulation, and all of them were easy to partition over several threads because the parts were highly isolated.
My first thought, and the initial implementation I built, was to split the server into N threads, where each thread gets a certain subset of the players and actors on the server; that thread is responsible for receiving data, sending data, and running the simulation for the players and actors assigned to it. This actually works pretty well, and I've been able to run a huge number of actors on the server. The problem comes when an actor in thread T0 needs to access an actor in thread T1.
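One simple way to pin each actor to a worker, assuming actors have stable numeric ids (a sketch, not the post's actual assignment scheme), is a hash-modulo mapping:

```java
public class ShardSketch {
    static final int THREAD_COUNT = 4; // hypothetical worker-thread count

    // Each actor is owned by exactly one worker thread, chosen by a stable
    // hash of its id, so a worker normally only touches actors it owns.
    static int ownerThread(long actorId) {
        return (Long.hashCode(actorId) & 0x7fffffff) % THREAD_COUNT;
    }

    public static void main(String[] args) {
        long[] actors = {1L, 2L, 42L, 1337L};
        for (long id : actors) {
            System.out.println("actor " + id + " -> thread T" + ownerThread(id));
        }
    }
}
```

The mapping is deterministic, so any thread can compute which worker owns a given actor without shared state; the hard part, as described next, is what happens when the owner is a different thread.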
I once again tried to go with the "simplest thing that works" and do a simple locking scheme: whenever you do something to actor X that is not owned by your current thread, you lock and release it with a normal Monitor.Enter / Monitor.Exit pair. This obviously does not scale, and the lock contention becomes insane. With a lot of interactions the server is barely faster than a single-threaded one, which is to be expected: locking on a large scale essentially makes your application sequential, much as a single thread does (not quite true, but the gains are nowhere near those of a truly concurrent implementation). On top of that, add all the context switching that excessive locking causes, and you will not gain much performance.
So, I decided to revamp the locking and implemented a thread-safe reference counter for the objects shared between threads, so that one thread can say "I need this object right now" or even "I need this object for some time". This does not grant exclusive access to the object; it just makes sure the object does not vanish out from under the thread by being deleted from the world (for example, an NPC that gets despawned by its owning thread while another thread is trying to buy an item from it).
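A despawn-safe reference count along these lines can be sketched with an atomic counter (a minimal Java illustration with hypothetical names, not the post's implementation): pinning keeps the actor alive but grants no exclusivity, and a pin must fail once the count has hit zero so a dead actor is never resurrected.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PinSketch {
    static class Actor {
        final AtomicInteger pins = new AtomicInteger(1); // 1 = held by the world
        volatile boolean destroyed = false;

        // "I need this object right now" -- fails if it is already gone.
        boolean tryPin() {
            int c;
            do {
                c = pins.get();
                if (c == 0) return false; // already released; do not resurrect
            } while (!pins.compareAndSet(c, c + 1));
            return true;
        }

        // Releasing the last reference is what actually frees the actor.
        void unpin() {
            if (pins.decrementAndGet() == 0) {
                destroyed = true; // last holder gone: safe to delete from the world
            }
        }
    }

    public static void main(String[] args) {
        Actor npc = new Actor();
        boolean got = npc.tryPin();  // another thread pins the NPC
        npc.unpin();                 // owning thread despawns it...
        npc.unpin();                 // ...but it is only freed when the pin drops
        System.out.println(got + " " + npc.destroyed + " " + npc.tryPin());
    }
}
```

The compare-and-set loop is what makes the "check for zero, then increment" step atomic; a plain `pins.incrementAndGet()` would race with a concurrent final release.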
This still leaves a problem, though: two threads can modify the object at the same time (currently two different threads can each say "I need this object right now", and both will be allowed to access it), which obviously does not work. That is where I am currently. I have a couple of ideas on how to solve this, and I'm looking for feedback on them, general comments, or explanations of how problems like this have been solved in commercial games, or any literature I could read.
IDEA 1: Create an even more fine-grained locking scheme that allows both read and write locks to be held on shared objects.
Basically, implement a fine-grained locking scheme that lets threads lock an actor for either reading or writing, obviously allowing several readers but only one writer. There are several ways this could be accomplished: using the wait handles built into Windows, or lighter-weight methods using Monitor.Enter/Exit and a pair of sync objects.
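For illustration, the reader/writer pattern looks like this using the JDK's built-in lock (a Java stand-in for the Windows/Monitor approaches mentioned above; the field names are hypothetical):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockSketch {
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    static int health = 100; // some piece of shared actor state

    // Many threads may read concurrently...
    static int readHealth() {
        lock.readLock().lock();
        try { return health; } finally { lock.readLock().unlock(); }
    }

    // ...but a writer holds the lock exclusively.
    static void damage(int amount) {
        lock.writeLock().lock();
        try { health -= amount; } finally { lock.writeLock().unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread attacker = new Thread(() -> damage(30));
        attacker.start();
        attacker.join();
        System.out.println(readHealth());
    }
}
```

Read/write locks help when reads vastly outnumber writes; if cross-thread writes are common, contention comes back, which is what motivates idea 2.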
IDEA 2: Make each thread fully self-contained, exposing no shared objects to other threads, and instead provide read-only or some type of auto-updating proxy objects that the other threads can use.
The basic idea is to share no writable data between the threads, and instead expose read-only objects to the other threads, which either act as proxies for the main objects or contain the data needed by the other thread in a read-only format. This has the added benefit of removing the difference between "a different thread", "a different process", and "a different machine": in theory you could create an abstraction layer that hides the different communication mechanisms for threads, processes, and machines. How updates to objects would be handled I'm not 100% sure of; probably an incoming message queue that other threads push wanted changes onto, which is then dequeued in sequential order on the owning thread.
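The owner-plus-mailbox scheme described above might look like this (a compressed Java sketch with invented names; the real design would publish snapshots per tick and route mailboxes across threads, processes, or machines):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class OwnerSketch {
    record Snapshot(int gold) {}               // immutable, read-only view
    interface Change { void apply(OwnerSketch a); }

    private int gold = 100;                    // writable only by the owning thread
    private volatile Snapshot snapshot = new Snapshot(100);
    private final ConcurrentLinkedQueue<Change> mailbox = new ConcurrentLinkedQueue<>();

    // Any thread may call these two:
    Snapshot read() { return snapshot; }       // cheap, lock-free read
    void request(Change c) { mailbox.add(c); } // push a wanted change to the owner

    // Only the owning thread calls this, once per simulation tick:
    void drainMailbox() {
        Change c;
        while ((c = mailbox.poll()) != null) c.apply(this); // sequential, in order
        snapshot = new Snapshot(gold);         // publish a fresh read-only view
    }

    public static void main(String[] args) {
        OwnerSketch npc = new OwnerSketch();
        npc.request(a -> a.gold -= 25);        // "buy an item" from another thread
        npc.drainMailbox();                    // owner applies it on its own tick
        System.out.println(npc.read().gold());
    }
}
```

Because all mutations funnel through one thread, no write lock is ever needed; the trade-off is that cross-thread effects are applied a tick later than they were requested.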
As I said, I'm interested in any feedback on this type of problem. I posted it in Multiplayer / Networking because it's specifically for a multiplayer server; I hope I picked the right forum!
This is my final code, which seems to be working very well:
```csharp
public static class NetworkTime
{
    public static float offset = 0.0f;
    public static float gameTime = 0.0f;
    public static float localTime = 0.0f;

    public static void UpdateTime()
    {
        localTime = Time.time;
        gameTime = localTime + offset;
    }

    public static void UpdateOffset(float remoteTime, float rtt)
    {
        var newOffset = (remoteTime - Time.time) + (rtt * 0.5f);

        if (offset == 0.0f)
            offset = newOffset;  // first sample: take it as-is
        else
            offset = (offset * 0.95f) + (newOffset * 0.05f); // exponential smoothing

        UpdateTime(); // refresh gameTime with the latest offset
    }
}
```
Time.time is the current local time at which this simulation step started (supplied by Unity as a built-in property).
NetworkTime.UpdateTime is called on both the server and the client to set the local time (Time.time isn't used directly anywhere in my code; I always go through NetworkTime.gameTime). The offset is always 0 on the server, so localTime == gameTime there. UpdateTime is called at the start of every physics step and frame render to keep the time up to date.
NetworkTime.UpdateOffset is called on the client only. Every packet received from the server carries the server's latest gameTime as its first four bytes, which is passed into UpdateOffset through the remoteTime parameter. The average round-trip time I get from the lidgren library automatically, and it is passed in through the rtt parameter. I then use hplus0603's formula (I think; maybe I'll be corrected) to calculate the offset. UpdateOffset also calls UpdateTime as the last thing it does, to update the time with the latest offset.
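For intuition, the 0.95/0.05 blend in UpdateOffset is an exponential moving average; a standalone sketch (in Java, with made-up sample values) shows how a jittery offset estimate settles near the true value:

```java
public class OffsetSmoothing {
    public static void main(String[] args) {
        // Hypothetical numbers: the true clock offset is 100 ms,
        // and the per-packet estimates jitter around it by +/-20 ms.
        float offset = 0.0f;
        float[] samples = {100f, 120f, 80f, 110f, 90f, 105f, 95f};

        for (float newOffset : samples) {
            if (offset == 0.0f)
                offset = newOffset;                              // first sample as-is
            else
                offset = (offset * 0.95f) + (newOffset * 0.05f); // same blend as above
        }
        // The smoothed offset stays within a few ms of 100 despite the jitter.
        System.out.println(offset > 95f && offset < 105f);
    }
}
```

Each new sample moves the estimate only 5% of the way toward itself, so one late or early packet barely disturbs the clock.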
This seems to be working very reliably for me, albeit only over simulated latency so far, but the client seems to stay in sync with the server to within about 5-15 ms, which feels good enough for me at least.
Slight update: I also calculate the current timestep with a simple calculation like this: timeStep = (int)(gameTime * stepsPerSecond);