Zouflain

[C++,SDL] Multithreaded server architecture - locking reads only during writes


Recommended Posts

I'm dabbling with the server code for a small-scale (300 average users, 1.5k desired peak) persistent-world game. I'm using an SQLite database for static and long-term values, but there's also a singleton storing volatile data (e.g., a monster's current HP or a map's current instances). I've decided to go with a very rudimentary UDP protocol (resend apparently dropped messages after a reply timeout, and ignore noncritical information that arrives out of order), handled by several threads that each sniff a single packet and process it in parallel.

The trouble comes with sharing the volatile data between threads. A simple mutex/semaphore won't do the job. Nearly every message requires at least one database lookup (SQLite is threadsafe, however) and access to the volatile data (for game logic). While it's perfectly fine for two threads to read the volatile data at the same time, I can't have one thread reading while another writes to that data. If I simply lock every thread with a global semaphore before writing and reading, I might as well run this single-threaded, since it will essentially be non-parallel with trivial gains.

I thought about keeping two instances of the game's volatile data and simply swapping between them after every update. New writes would go into what's essentially the back buffer, and reads would come from the front. Writes to the back buffer would be locking, but they're a heck of a lot less frequent and time-consuming than the reads. The problem is that the volatile data can change drastically, quickly, and in significant ways (new instance, destroyed instance, etc.), so I can't just swap a pointer - I'd have to copy the memory over, and the overhead from that seems like it would be quite large.

So, my question is: how can I allow multiple simultaneous reads from a singleton but prevent a race condition for writing?

Your problem is usually referred to as the "single writer, multiple readers" problem or similar nomenclature. I don't know of any built-in SDL facility that solves it, but boost::thread contains a type called shared_mutex for exactly this kind of situation. In the worst case you can use the shared_mutex code as a reference to build your own readers-writer lock out of SDL threading components.
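In case it helps, here's a minimal sketch of guarding a hypothetical volatile-state class with boost::shared_mutex (the class and its members are made up for illustration, not taken from your code):

```cpp
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <map>

// Hypothetical volatile-state store guarded by a readers-writer lock.
class WorldState {
public:
    int GetMonsterHP(int id) const {
        // Many reader threads may hold a shared_lock at the same time.
        boost::shared_lock<boost::shared_mutex> readLock(mMutex);
        std::map<int, int>::const_iterator it = mMonsterHP.find(id);
        return it != mMonsterHP.end() ? it->second : 0;
    }

    void SetMonsterHP(int id, int hp) {
        // Exclusive: blocks until all readers release, then blocks new readers.
        boost::unique_lock<boost::shared_mutex> writeLock(mMutex);
        mMonsterHP[id] = hp;
    }

private:
    mutable boost::shared_mutex mMutex;
    std::map<int, int> mMonsterHP; // stand-in for the game's volatile data
};
```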

Quote:
Original post by Zouflain

The overhead from this seems like it would be quite large.


C++ still shines for this: with value semantics and pre-allocated arrays, these operations are really cheap. It might require using indexes instead of pointers for even cheaper copies (just memcpy the PODs). This works even in some managed languages.
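For example, a minimal illustration of index-based references keeping the state trivially copyable (the field names are hypothetical):

```cpp
#include <vector>

// Entities refer to each other by index, so the whole state is a flat POD array
// whose copy is just a fast contiguous memory copy, and every reference stays
// valid in the copy.
struct Monster {
    int hp;
    int targetIndex; // index into the same array, meaningful in any copy
};

int main() {
    std::vector<Monster> front(1000);
    // ... game logic fills and mutates 'front' ...
    std::vector<Monster> back = front; // value semantics: cheap, no pointer fix-up
    (void)back;
    return 0;
}
```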

For processing, split all entities to be processed among the threads and let each thread produce its own change log. If two objects need to communicate, the message to be sent is stored as well. This creates n change logs, one per thread.

Then merge the change logs into the read-only state. The change logs, based on the original partition, are already partitioned by originator: if one thread works on objects 1-20, its change log will contain the changes for objects 1-20, and no other change log will.

Next, order the messages between objects by receiver. The result is a set which can again be partitioned for a single writer (all messages to object X are grouped together, perhaps in a hash map, or an array for small states). The ordering can be done via merge sort or similar. This step can also fold in network messages received in the meantime, but that likely requires the network handler to properly synchronize its queues.

The reason for the above organization is that each part can be implemented fully concurrently, likely with no locking whatsoever, since the algorithms know they operate on disjoint data.
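A rough sketch of the partition-and-merge step, assuming boost::thread (the Entity and Change types and every name here are invented for illustration, not from the original post):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>
#include <boost/bind.hpp>
#include <boost/ref.hpp>
#include <boost/thread/thread.hpp>

// Hypothetical entity and change record; real game state would be richer.
struct Entity { int hp; };
struct Change { std::size_t index; int newHp; };

// Each worker reads the shared snapshot and appends only to its own private
// change log, so the update pass needs no locking.
static void UpdatePartition(const std::vector<Entity>& snapshot,
                            std::size_t begin, std::size_t end,
                            std::vector<Change>& log)
{
    for (std::size_t i = begin; i < end; ++i) {
        Change c = { i, snapshot[i].hp - 1 }; // stand-in for real game logic
        log.push_back(c);
    }
}

static void Tick(std::vector<Entity>& state, std::size_t numThreads)
{
    const std::vector<Entity> snapshot = state;          // read-only during this tick
    std::vector< std::vector<Change> > logs(numThreads); // one change log per thread
    boost::thread_group workers;

    const std::size_t chunk = snapshot.size() / numThreads + 1;
    for (std::size_t t = 0; t < numThreads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end   = std::min(snapshot.size(), begin + chunk);
        workers.create_thread(boost::bind(&UpdatePartition,
                                          boost::cref(snapshot), begin, end,
                                          boost::ref(logs[t])));
    }
    workers.join_all(); // every log is complete before the merge starts

    // Merge: each log covers a disjoint range of entities, so a plain apply is safe.
    for (std::size_t t = 0; t < numThreads; ++t)
        for (std::size_t i = 0; i < logs[t].size(); ++i)
            state[logs[t][i].index].hp = logs[t][i].newHp;
}
```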

For fun, it can be implemented as map/reduce.

Whether that is a good way to scale is another question, but with 300-ish users it shouldn't be too much of a problem, as long as the update rate isn't too high and the users don't all bunch up in a way that results in n^2 interactions.

Quote:
SQLite is threadsafe
Just because the API can be used safely from multiple threads does not mean you automatically get transactional semantics. So unless queries are made as transactions, or there is some other guarantee of atomicity, they should probably be issued from a single thread that posts query results back as they become available.
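For example, a sketch of wrapping a read-modify-write in an explicit transaction so it is atomic with respect to other threads sharing the database (the table and column names are invented for illustration):

```cpp
#include <sqlite3.h>

// Deduct gold atomically: BEGIN IMMEDIATE takes the write lock up front,
// and the whole statement either commits or rolls back as a unit.
bool DeductGold(sqlite3* db, int playerId, int amount)
{
    if (sqlite3_exec(db, "BEGIN IMMEDIATE;", 0, 0, 0) != SQLITE_OK)
        return false; // could not acquire the write lock

    sqlite3_stmt* stmt = 0;
    if (sqlite3_prepare_v2(db,
            "UPDATE players SET gold = gold - ?1 WHERE id = ?2 AND gold >= ?1;",
            -1, &stmt, 0) != SQLITE_OK) {
        sqlite3_exec(db, "ROLLBACK;", 0, 0, 0);
        return false;
    }
    sqlite3_bind_int(stmt, 1, amount);
    sqlite3_bind_int(stmt, 2, playerId);

    const bool ok = (sqlite3_step(stmt) == SQLITE_DONE) && sqlite3_changes(db) == 1;
    sqlite3_finalize(stmt);

    sqlite3_exec(db, ok ? "COMMIT;" : "ROLLBACK;", 0, 0, 0);
    return ok;
}
```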

Quote:
So, my question is, how can I allow multiple simultaneous reads from a singleton but prevent a race condition for writing?

My personal preference is to treat mutable singletons in a multi-threaded application as a bug.

Quote:
Original post by Zouflain
Nearly every message will require at least one database lookup (SQLite is threadsafe, however) and access to the volatile data (for game logic). While it's perfectly fine if two threads are reading from the volatile data, I can't have a thread reading and another writing to that data at the same time. If I simply lock every thread with a global semaphore before writing and reading, I might as well run this single threaded since it will essentially be non-parallel with trivial gains.

Sounds like a nightmare waiting to happen. You want arbitrary numbers of threads to be able to change a large lump of shared state. This is approaching the worst case scenario for any concurrent app.

My advice would be to give up on this, serialise access to your shared data, and use your threads in a different way.

I have to second Kylotan. In order to make this multi-threaded architecture work, you will be locking so much that each thread will be operating in lock-step, essentially acting like a single thread.

I would instead rethink your design. Currently you are multi-threading so that you can handle multiple client requests simultaneously. Instead, try moving to a single thread and look at your data flow to decide which tasks are time-consuming. Create threads to handle those tasks and use a job queue. Then, when your main thread encounters a request that it knows will be time-consuming, it passes it off to the job queue and continues to process the simple client requests. Essentially, instead of threading clients, you are using a queue of workers, as sketched below.
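A minimal sketch of such a job queue, assuming boost::thread rather than raw SDL threads (the class and all names are mine, not from this thread):

```cpp
#include <queue>
#include <boost/function.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

// The main thread keeps handling cheap requests and pushes expensive jobs here;
// a small pool of workers drains the queue.
class JobQueue {
public:
    typedef boost::function<void()> Job;

    void Push(const Job& job) {
        boost::mutex::scoped_lock lock(mMutex);
        mJobs.push(job);
        mNotEmpty.notify_one(); // wake one idle worker
    }

    void WorkerLoop() {
        for (;;) {
            Job job;
            {
                boost::mutex::scoped_lock lock(mMutex);
                while (mJobs.empty())
                    mNotEmpty.wait(lock); // sleep until a job arrives
                job = mJobs.front();
                mJobs.pop();
            }
            job(); // run the expensive work outside the lock
        }
    }

private:
    std::queue<Job> mJobs;
    boost::mutex mMutex;
    boost::condition_variable mNotEmpty;
};
```

A worker could then be started with something like boost::thread worker(boost::bind(&JobQueue::WorkerLoop, &queue)); while the main loop simply calls queue.Push(...) for anything slow, such as a blocking database write.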

Also, you mention that you are pulling from the database on every request. Is this data ever changing, or are you using the database like a giant lookup table? The database will perform caching, so you are not hitting the disk every time you request that data, but it is still processing overhead. Look at the data you are retrieving and see whether you can pre-load some of it into memory at server start.
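As an illustration of pre-loading, something along these lines could run once at startup (again, the table and column names are hypothetical):

```cpp
#include <map>
#include <string>
#include <sqlite3.h>

// Load static rows into memory once, so per-request handlers never touch the
// database for lookup-table data. Read-only after startup, so threads can
// share the map without locks.
std::map<int, std::string> LoadItemNames(sqlite3* db)
{
    std::map<int, std::string> names;
    sqlite3_stmt* stmt = 0;
    if (sqlite3_prepare_v2(db, "SELECT id, name FROM items;", -1, &stmt, 0) != SQLITE_OK)
        return names;

    while (sqlite3_step(stmt) == SQLITE_ROW) {
        const int id = sqlite3_column_int(stmt, 0);
        const unsigned char* text = sqlite3_column_text(stmt, 1);
        names[id] = text ? reinterpret_cast<const char*>(text) : "";
    }
    sqlite3_finalize(stmt);
    return names;
}
```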

Having said all that, also remember that your CPU can move significantly more data than your network connection. You mention that your target is 1.5k users; if each user consumes 8 kbps, then you are moving about 12 Mbps in total, while your CPU, memory, and PCI buses move data at Gbps speeds.

--Z
