
## Recommended Posts

```cpp
void SendThread() { while (!bKillServer) Send(); }
```


(The same can be said for the Listen and Receive threads.) So thread safety is obviously necessary. I came up with the following solution, but I'm not convinced of its efficiency, and I'd like other options. This is obviously oversimplified pseudocode:
```cpp
CRITICAL_SECTION S, R;

void Send() {
    EnterCriticalSection(&S);
    // Iterate Send Queue
    LeaveCriticalSection(&S);
}

void Recv() {
    EnterCriticalSection(&R);
    // Iterate Recv over Client Map
    LeaveCriticalSection(&R);
}

void AddClient() {
    EnterCriticalSection(&S);
    EnterCriticalSection(&R);
    // Add Client to Map
    LeaveCriticalSection(&R);
    LeaveCriticalSection(&S);
}

void RemoveClient() {
    EnterCriticalSection(&S);
    EnterCriticalSection(&R);
    // Remove Client from Map
    LeaveCriticalSection(&R);
    LeaveCriticalSection(&S);
}
```


So, comments and suggestions are very welcome. I'd sure like to find a better way to do this.

*edit* I don't like this method. If I wanted to add another parallelizable Send/Recv-style function (e.g. `ProcessMessage(Client*)`), that's another critical section I have to create (`CRITICAL_SECTION P;`), and my Add/Remove becomes: `EnterCS(S); EnterCS(R); EnterCS(P); ... LeaveCS(P); LeaveCS(R); LeaveCS(S);`. I'd really like a better solution than this naive approach.

[Edited by - Luth on July 9, 2007 2:56:40 PM]

##### Share on other sites

Consider that if you want to drop a client in one thread while reading for that client in another thread, you have a data dependency problem (a race condition) in your code, because it makes no logical sense to do both at once.

In general, network processing is a serial function: there is usually only a single network card, so you'll only ever receive one packet at a time. Your processing loop may take a while, though, so a number of packets may have batched up by the time you poll, but shuffling data into and out of the networking interface is a very low-load job that doesn't benefit much from threading.

Similarly, if you're using select(), then you want to use a single call to select() to get all three kinds of status: readable, writable, and error status. Then you want to dispatch based on the status, for each socket. Adding a client happens after the accepting socket is readable (for TCP), or after you've read a packet from a previously un-recognized IP/port address (for UDP). This means that there is a serial dependency on reading for creating clients (and there will be one for removing clients, as well), which means threading can't really give you any benefit here.

I would suggest structuring your system using a single thread for networking, as that will scale quite well enough for most needs, and will avoid any unnecessary locking. If, after profiling, you find that networking really is a bottleneck, and that threading really will help, at that point you may want to go with the specific threading solutions available on your OS of choice, or you can use boost::asio as a generic asynchronous/threaded I/O wrapper.

##### Share on other sites
I have a multi-core CPU and multiple NICs on the server, and it seemed silly not to utilize them (especially since, in my last two interviews, I was specifically asked what I know about multi-threaded network design). It was my goal to utilize the hardware to its fullest extent. Having a single-threaded server in my app wasn't earning me any brownie points. Feeding the beast, as it were.

Since processing a large batch of messages (in a single-threaded app) can potentially cause the Send/Receive to stall, it made sense to put it in its own thread. It can process handfuls of messages in the time it has, and queue up any return messages. The Send would simply iterate over the send queue, without worrying whether all the received messages were processed or not. Any that haven't been will be caught in future iterations.

But as you and I both pointed out:

> Consider that, if you want to drop a client in one thread, but read for that client in another thread, you have a data dependency problem (race condition) in your code

there are times when this model conflicts; specifically, when processing a message (such as "Logout") causes a change in the data that Send (or someone else) may be working on. Hence the need for critical sections, mutexes, or some other thread-safety mechanism. If the function that removes a client from the list (writes data) has exclusive access to the data, then Send/Recv/whatever can't operate on it, which prevents acting on bad data. However, a single critical section (or mutex, or whatever) doesn't work, because there's no reason two read-only processes can't access the same data concurrently. I thought I'd stated that clearly enough, but hopefully this helps.

I appreciate your previous attempt at helping, but your suggestion completely subverts my intentions; I'd rather figure out how to do something than chalk it up as something I don't know and shouldn't bother learning. I plainly admit that my solution is ugly, messy, and completely disturbed, but so far, it's the only one that's presented itself. Do you have an alternative?

##### Share on other sites
If I asked a guy coming in for an interview about designing a multi-threaded network system for a multi-player game, and the solution you came up with is the answer I get, that would count against the candidate. I'd rather have an answer along the lines of "I tried doing it this way, but it's so ugly, and doesn't seem like it will run any better, so it may be that single-threading network updates is the way to go." That shows that you can analyze a problem, and choose the best solution, no matter what the preconceived notions might be.

Now, if you're hell-bent on multi-threading what really probably shouldn't be, I would use fine-grained locking. Create one lock primitive per entity. When working on data for that entity, acquire that entity lock. Create an open indexing table for getting at the entities, which makes locking easier (you'll still need a reader/writer lock for getting the entity vs removing it). Potentially use reference counting for each entity (which can be implemented using atomic operations), so that someone operating on an entity won't crash when someone else decides to remove the entity -- in fact, the final disconnect sequence probably should happen as a result of the ref count reaching 0.
