Thread-safe architecture


Hi, 

I have an online multiplayer RPG maze game which I built in NodeJS. It is quite simple and uses socket.io. It is a single-threaded application, so I don't have to worry about race conditions such as two users attempting to take the same object at the same time. The client sends in messages, which are processed by the game server; handling a message can alter the player/world state, query or update the MySQL database, and broadcast messages to all the other players.

Over the last month I have been learning Python, and I would like to port my game to it as I think it will be better suited. Doing some research, I have found that many people suggest using Twisted, so this is the route I am going to take.

I wondered if anyone could give me some suggestions for a basic framework for handling race conditions when moving to an environment where things operate in parallel, especially with regard to querying and updating the state in MySQL.

If at all possible, I also wondered whether there are any open-source implementations of an MMO architecture using Twisted that deal with these issues and that I could look through.

Thank you for taking the time to read this.

    


The general-purpose answer:

Draw a diagram of the single-threaded version as a graph of input-process-output nodes. You should be able to massage the main loop of the game into a directed acyclic graph. If this graph moves down the page over time (all input->process and process->output arrows point downwards), then the horizontal axis shows opportunities for safe parallel execution of process nodes, and the vertical axis is a sequencing constraint that lets you form a parallel schedule for multiple threads to execute the graph concurrently.

Now build the parallel schedule from that graph and there are no deadlocks, race conditions, etc. by definition, so you can use multiple threads to execute the process nodes as long as they follow the agreed schedule.

FWIW, this also means no need for mutexes/locks, as they're just a poor man's way to dynamically generate an ad-hoc schedule without prior analysis. 
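A minimal sketch in Python of what executing such a pre-analysed schedule might look like. The node names and the `STAGES` layout are invented for illustration; a real game would derive them from its own dependency graph:

```python
# Execute a pre-analysed schedule with a thread pool.
# Nodes in the same row have no data dependencies, so they may run in parallel;
# rows run strictly one after another, which is the vertical sequencing constraint.
from concurrent.futures import ThreadPoolExecutor

def apply_player_commands(world): print("apply player commands")
def run_npc_ai(world):            print("run NPC AI")
def resolve_collisions(world):    print("resolve collisions")
def persist_to_db(world):         print("persist dirty state")
def broadcast_updates(world):     print("broadcast updates")

# The schedule derived from the DAG: each inner list is one "horizontal" row.
STAGES = [
    [apply_player_commands, run_npc_ai],   # independent of each other
    [resolve_collisions],                  # needs both results above
    [persist_to_db, broadcast_updates],    # independent outputs
]

def run_tick(world, pool):
    for stage in STAGES:
        # Submit every node in the row, then wait for all of them before
        # moving on -- the wait is what enforces the vertical ordering.
        futures = [pool.submit(node, world) for node in stage]
        for f in futures:
            f.result()   # re-raises any exception from the worker

if __name__ == "__main__":
    world = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        run_tick(world, pool)
```

The `f.result()` waits at the end of each row are the whole synchronisation story: no locks are needed because the schedule itself guarantees that nodes touching the same data never run at the same time.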

As far as I understand it, Twisted is going to make things a whole lot more complicated.  I have never used Twisted, but I have used a similar library in Ruby called EventMachine, and you end up with a lot of autonomous pieces all over the place.  While this seems good at first, for something highly synchronous like a game server I think you'll find that it's counterproductive.

What I would do for a game server is have a single thread that performs all the logic at a constant tick rate.  Each socket reads in messages and adds them to a queue for that single thread to process.  This can be as simple as a select (the system call, not the case/switch/select language construct) in another thread, or something like goroutines in Go, which would be similar to how Node does things asynchronously.  The point is that all your async code does is gather messages and add them to a queue for the main thread, which wakes up at regular intervals to process them, updates the game state, and then hands off state updates to be sent to the clients asynchronously.

Race conditions won't happen here because you have a single point of synchronization: access to the main thread's queue.  It is an extremely simple architecture that will scale well for smaller-scale games (if that makes any sense).  If and when a simple architecture like this no longer meets the needs of your game, then you should start looking at more complex architectures.
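A rough sketch of that layout in Python, since that's where the game is headed. The tick rate, the `handle_message` function, and the reader threads are all illustrative stand-ins; real readers would block on their sockets and push decoded messages onto the queue:

```python
# One logic thread at a fixed tick rate; readers only feed a queue.
import queue
import threading
import time

TICK_SECONDS = 0.1          # 10 ticks per second
inbox = queue.Queue()       # the single synchronisation point

def reader_thread(player_id):
    # Stand-in for a socket reader: a real one would block on recv()/select()
    # and push each decoded message onto the queue.
    for i in range(3):
        inbox.put((player_id, f"move {i}"))
        time.sleep(0.05)

def handle_message(world, player_id, msg):
    world.setdefault(player_id, []).append(msg)

def game_loop(world, run_for_ticks=10):
    for _ in range(run_for_ticks):
        start = time.monotonic()
        # Drain everything that arrived since the last tick.
        while True:
            try:
                player_id, msg = inbox.get_nowait()
            except queue.Empty:
                break
            handle_message(world, player_id, msg)
        # ... update game state, queue outgoing broadcasts here ...
        time.sleep(max(0.0, TICK_SECONDS - (time.monotonic() - start)))

if __name__ == "__main__":
    world = {}
    for pid in ("alice", "bob"):
        threading.Thread(target=reader_thread, args=(pid,), daemon=True).start()
    game_loop(world)
    print(world)
```

The queue is the only structure shared between threads, which is exactly the single synchronization point described above; the game state itself is only ever touched by the logic thread.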

From my limited experience with Twisted, I know there are no threads in the usual sense of the word. Twisted is quite close to event-driven programming, except there is no central dispatch point in the program.

Since it's one thread for everything, you should never block. Instead, you make a 'future result': an object that represents the result of the blocking operation before it is actually available. To that object, you attach 'success' and/or 'failure' callbacks that perform further steps in the process. The object is attached to an asynchronous handler (I don't remember how that was done, unfortunately), and then you're finished.

When the handler obtains the actual result, it triggers a success or failure callback. Since there is one thread for everything, ...

And so you make lots of small steps while deferring handling of blocking operations to asynchronous handlers, possibly running in other threads.

 

There are no race conditions in the central program, as there is only one thread. It runs at full speed, since the code never blocks, and you still get multi-tasking, since all the future results are independent of each other.
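A small sketch of that 'future result' style, assuming Twisted is installed. In Twisted the object is called a Deferred; `load_player_row` here is just a stand-in for a blocking MySQL query (a real server might use twisted.enterprise.adbapi instead, whose query methods hand back Deferreds directly):

```python
# Run a blocking call off the reactor thread and react to it via callbacks.
import time
from twisted.internet import reactor, threads

def load_player_row(player_id):
    # Pretend this is the blocking database call.
    time.sleep(0.2)
    return {"id": player_id, "x": 3, "y": 7}

def on_success(row):
    print("player loaded:", row)
    reactor.stop()

def on_failure(failure):
    print("query failed:", failure)
    reactor.stop()

def start():
    # deferToThread runs the blocking call in Twisted's thread pool and hands
    # back a Deferred; the callbacks below fire later, in the reactor thread.
    d = threads.deferToThread(load_player_row, 42)
    d.addCallbacks(on_success, on_failure)

reactor.callWhenRunning(start)
reactor.run()
```

Because the callbacks always fire in the reactor thread, they never run concurrently with the rest of the game logic, which is what keeps the central program race-free.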

