Dealing with multi-player threads...

14 comments, last by hplus0603 18 years, 2 months ago
@hplus0603:

Perhaps I misread the question; my understanding was that it referred to the server design of the MUD. Performance is a key issue for a scalable MUD server design. Distributing the same state data to each client does not affect how incoming data from multiple sockets is handled. Different data has different priority for the game state, and not all of it is essential to transmit to every client (e.g. player position).

Using a dispatcher frees the system to handle multiple asynchronous requests and scales well on multi-CPU machines.
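
Concretely, the dispatcher boils down to a work queue that a pool of worker threads drains -- one thread per CPU, say. A rough pthreads sketch of that shape (request_t and the handle_request() call are placeholder names for illustration, not code from any particular server):

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical request type: one parsed command from one client. */
typedef struct request {
    struct request *next;
    int client_fd;
    char line[512];
} request_t;

static request_t *queue_head = NULL, *queue_tail = NULL;
static pthread_mutex_t queue_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_ready = PTHREAD_COND_INITIALIZER;

/* Called by the socket-reading code: hand a parsed request to the pool. */
void dispatch(request_t *req)
{
    req->next = NULL;
    pthread_mutex_lock(&queue_lock);
    if (queue_tail) queue_tail->next = req; else queue_head = req;
    queue_tail = req;
    pthread_cond_signal(&queue_ready);
    pthread_mutex_unlock(&queue_lock);
}

/* Body of each worker thread: pop a request, run the game logic for it. */
void *worker(void *unused)
{
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (!queue_head)
            pthread_cond_wait(&queue_ready, &queue_lock);
        request_t *req = queue_head;
        queue_head = req->next;
        if (!queue_head) queue_tail = NULL;
        pthread_mutex_unlock(&queue_lock);
        /* handle_request(req);  -- game-specific, hypothetical */
        free(req);
    }
    return NULL;
}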

Regards
Cubex

The only problem with a thread per user is that Windows maxes out at around 200 or so threads in practice, so a better approach would be something like a thread per 100 users, using select within each thread to check for data on that thread's designated 100 users.
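
For illustration, here is a rough POSIX-style sketch of that layout -- one worker thread owning a group of client sockets and select()ing over them. (On Windows you'd use Winsock and probably raise FD_SETSIZE; MAX_GROUP and the clients array are just illustrative, not tuned numbers.)

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <unistd.h>

#define MAX_GROUP 100   /* sockets handled by one thread; -1 marks an empty slot */

void *group_thread(void *arg)
{
    int *clients = (int *)arg;     /* this thread's designated client sockets */
    char buf[512];

    for (;;) {
        fd_set readable;
        int i, maxfd = -1;

        FD_ZERO(&readable);
        for (i = 0; i < MAX_GROUP; ++i) {
            if (clients[i] < 0) continue;
            FD_SET(clients[i], &readable);
            if (clients[i] > maxfd) maxfd = clients[i];
        }
        if (maxfd < 0) { sleep(1); continue; }   /* no clients assigned yet */

        if (select(maxfd + 1, &readable, NULL, NULL, NULL) <= 0)
            continue;

        for (i = 0; i < MAX_GROUP; ++i) {
            if (clients[i] < 0 || !FD_ISSET(clients[i], &readable)) continue;
            ssize_t n = recv(clients[i], buf, sizeof buf, 0);
            if (n <= 0) { close(clients[i]); clients[i] = -1; }   /* disconnected */
            /* else: hand buf[0..n) to the command parser */
        }
    }
    return NULL;
}
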
When considering the scalability of serving a text-based multiplayer game, here are the major steps I would be concerned about:

1) Reading input from players.
2) Mutating the world state based on that input.
3) Sending out the observed changes to the world to each observing player.

I had a long rant about how the serialization cost of doing 2) in each thread of a multi-threaded server may actually cost more than it gains by reducing the pressure of 1); in the end I added more links and put it up on my web site instead.
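
For concreteness, the single-threaded version of those three steps is just a pulse loop along these lines (player_t and the three helper functions are hypothetical names for this sketch, not anything from the article):

/* Hypothetical per-player record and helpers. */
typedef struct player {
    struct player *next;
    int fd;
    /* input/output buffers, character state, ... */
} player_t;

extern player_t *all_players;

void read_player_input(player_t *p);   /* 1) non-blocking recv() into an input buffer */
void apply_commands(player_t *p);      /* 2) apply any complete command lines to the world */
void send_world_updates(player_t *p);  /* 3) queue output; the kernel's TCP buffers do the sending */

void run_pulse(void)
{
    player_t *p;
    for (p = all_players; p; p = p->next) read_player_input(p);
    for (p = all_players; p; p = p->next) apply_commands(p);   /* one thread: no locks, no serialization cost */
    for (p = all_players; p; p = p->next) send_world_updates(p);
}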

[Edited by - hplus0603 on January 28, 2006 4:22:02 PM]
enum Bool { True, False, FileNotFound };
Quote: Step 3) is never much of a scalability concern, because you can easily fit all the data you need to send to a client in the outgoing TCP buffer, and the sending of that data will be done by the kernel in response to a network adapter interrupt, not in response to any specific user thread. If the buffer fills up, you either drop data, or disconnect the user (those are really the only two options).


This is what I've been thinking about - what to do if a particular client is slow to receive data. For some types of games, there may be a more tolerant approach that allows such clients to stay connected.

Approach 1: Resize the buffering in the game server for that client. This would allow a client to stay connected during a momentary 'hiccup'. The downside is that the client can get his butt kicked by enemies before being able to make the first counter-attack. But that's why we have auto-attack in MMOs.

Approach 2: Slow down the game. Once again going with the MMO example, which is typically fast turn-based (for example, one turn per second): if a small number of clients is slow to receive data from the server, the game can slow itself down to let all clients receive all the pending data. The downside, of course, is that many players suffer for one player's skinny pipe.

Anyway, on a typical server box (or any box for that matter), how much data can an outgoing TCP buffer hold? I'm assuming each client connection has its own outgoing TCP buffer.
You should make thread pools. For instance, make a thread that handles 20-50 users at a time. This keeps the performance hit very low... and if you are on really outdated hardware, maybe limit it to 10 connections/users per thread. This should be done at the socket/connection level in your server, though, so that you can refer to connections easily. You should also keep a central list (vector, if you prefer that term) or array of users for looking up other users' information.
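
As a sketch of the central-list idea (user_t and find_user are made-up placeholder names, not samples from our code): since every pool thread may read the list while another thread adds or removes users, it needs a lock around it, and it's safest to copy the record out while the lock is held.

#include <pthread.h>
#include <string.h>

#define MAX_USERS 1024

typedef struct {
    int  fd;
    int  in_use;
    char name[32];
} user_t;

static user_t users[MAX_USERS];
static pthread_mutex_t users_lock = PTHREAD_MUTEX_INITIALIZER;

/* Copy another user's record by name into *out; returns 1 if found.
   Copying under the lock avoids handing out a pointer that another
   thread might recycle a moment later. */
int find_user(const char *name, user_t *out)
{
    int i, found = 0;
    pthread_mutex_lock(&users_lock);
    for (i = 0; i < MAX_USERS && !found; ++i) {
        if (users[i].in_use && strcmp(users[i].name, name) == 0) {
            *out = users[i];
            found = 1;
        }
    }
    pthread_mutex_unlock(&users_lock);
    return found;
}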

If you would like more information PM me I will give you some code samples.
---John Josef, Technical Director, Glass Hat Software Inc
@wyled: Whether to create thread pools (and thus use IOCP on Windows) is basically what this thread is about. You didn't actually read the article, did you?

@SteveTaylor: You can set the TCP buffering size on your own. Check out the SO_RCVBUF and SO_SNDBUF socket options. The defaults vary based on OS, OS version, and sometimes tuning variables. There may also be a system-wide limit on the amount of buffering, in addition to the per-socket limit; if so, the system might (or might not) have some system-specific way of configuring that limit.
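
For reference, setting the per-socket send buffer looks roughly like this on a POSIX socket (the 256 KB figure in the usage comment is just an example, not a recommendation):

#include <sys/socket.h>
#include <stdio.h>

int set_send_buffer(int sock, int bytes)
{
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof bytes) < 0) {
        perror("setsockopt(SO_SNDBUF)");
        return -1;
    }
    /* Read the value back: systems may round it, and Linux doubles it
       to account for bookkeeping overhead. */
    socklen_t len = sizeof bytes;
    if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, &len) == 0)
        printf("effective send buffer: %d bytes\n", bytes);
    return 0;
}

/* Usage: set_send_buffer(client_fd, 256 * 1024); */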

I don't like the idea of trying to dynamically re-configure the buffer size. What if all clients go slow at once? That could easily happen if there's some partial (or full) service outage at your ISP. If you want to be robust, you have to be prepared to deal with that case, so set the buffers to the maximum size you're prepared to deal with up front.

Also note that if the buffer is filling up, it MAY be a hiccup, or it MAY be that you're actually sending too much data for the client's connection. What do you do then? He just can't keep up. I'd drop him (or her) if the buffer actually fills up -- the buffer should be significantly bigger than what you would need to write during a single pulse, so if you can't drain enough data during a few pulses, then you just can't keep up.
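
With a non-blocking socket, "the buffer actually fills up" shows up as send() failing with EWOULDBLOCK, so the drop policy is easy to sketch (illustrative only; the caller drops the client when this returns -1):

#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>

/* Returns 0 on success, -1 if the client should be dropped. */
int send_or_drop(int sock, const char *data, size_t len)
{
    while (len > 0) {
        ssize_t n = send(sock, data, len, 0);
        if (n > 0) { data += n; len -= (size_t)n; continue; }
        if (n < 0 && errno == EINTR)
            continue;                                  /* interrupted; just retry */
        if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
            return -1;                                 /* kernel send buffer is full: client can't keep up */
        return -1;                                     /* any other error: drop as well */
    }
    return 0;
}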

Now, for a text-based MUD, running out of space in the TCP buffer is quite unlikely to actually happen. Unless you have lots of users who like to paste novels into their chat lines :-)
enum Bool { True, False, FileNotFound };

