Archived

This topic is now archived and is closed to further replies.

LonelyStar

Network and Threads


Designing a network wrapper class, I was thinking it would be a good idea to do the network send()s and recv()s each in a different thread. The advantage, I think, is that whenever I send data, I just hand a send request to the send thread and do not have to wait for it to finish.

But I want a server-client architecture, with one server connected to a lot of clients. And now the problem: imagine the server send thread has a lot of send requests for a lot of clients. Client A has a much slower connection than client B, but a send request destined for client A is submitted earlier than one destined for client B. So client B has to wait until client A has got its data. I could imagine a lot of lag being produced by this.

A solution I could think of is creating a send thread for every client connected to the server. But wouldn't that mean a lot of overhead?

Any ideas how to solve this problem?

Thanks!
Nathan

The amount of overhead depends on the number of clients connected to the machine. Basically, you don't want to get to the point where the OS is spending a significant amount of time just switching between threads.

I can think of a couple of alternative options. One would be to have a pool of threads and distribute the clients among this preset number of threads. While it wouldn't solve the lag issue completely, it would definitely help.

The other solution depends on how much data you are trying to send and how much memory you are willing to use. In one piece of code I've written, I dealt with the "lag" issue in the following manner. I made a wrapper class for the socket, with my own send function. The wrapper class has an internal buffer for all outgoing data. So, whenever I wish to send data, I stuff the data into the buffer and then try to send what I can from the buffer. If everything is sent, great. If not, whenever the socket notifies my code that it is "ready to send" again, I simply try to flush out the buffer.

This works quite well for my purposes. The only issue would be if you were trying to send large amounts of data at once (like a file): your buffer may become pretty large. In the case where I was trying to send a large file, I would simply monitor my buffer size; if it became too large, I would try to queue data less frequently.

Mike

One solution is non-blocking I/O, such as OVERLAPPED I/O with a completion port. Under such an architecture the process, in theory, would need at least one thread per CPU for a good high-performance server.

Kuphryn

Here is my basic design. Note that I haven't implemented it yet.

The basic idea is similar to yours: Wrap a class around a socket that takes care of the details for you. I'd have a thread for sending/receiving. Additionally, instead of just feeding byte data to the game socket, you pass a message class.

To send a message would look something like (excuse my bad code shorthand):

============================================================
game_socket s;

s.connect(IP, port); // or s.bind(port)

game_message *m = new game_message;

m->setup_message(message_effort_level, message_type, message_body);

s.send(m);

// go do other stuff for a while

// come back around later and see if the message failed
message_status = s.get_message_status(m);

if (message_status == message_failed){
// handle it. Possibly try sending again,
// or alert the user the message failed
}
else if (message_status == message_still_in_progress) {
// do whatever
}
else {// message succeeded
delete m;
}
============================================================

The send thread would maintain a buffer of messages (of the message class). Whenever the low-level socket is ready to send, the send thread would loop through the messages round robin, ask each message for a packet of data, and send it. Whenever a message is completed, it would be pulled from the send buffer and added to a completed list, so the user program can later come around and check which messages have completed and which failed.

For any particular connection, only so many packets would be allowed to be "in transit" at a time (perhaps based on the round trip latency).

Consider the following situation:
"Imagine the server send thread has a lot of send requests for a lot of clients. Client A has a much slower connection than client B, but a send request destined for client A is submitted earlier than one destined for client B. So client B has to wait until client A has got its data."

The method I described would:
1) send the first packet of message A's data to A
2) send the first packet of message B's data to B
and so on until the max number of packets are in flight. The packets in flight would vary based on round trip latencies to the client.

In other words, the two messages would get multiplexed. It's worth noting that this still doesn't have any explicit flow control. If A is too slow to handle the pipe we are sending it, A will simply drop the extra packets and re-request them, if they are of a message_effort_level worth re-requesting.

I haven't decided what to do about the flow control problem yet.

One solution would be to just let packets get dropped, but this is ugly. It would consume even more precious bandwidth with packet rerequests.

Another would be to limit the bandwidth to each client to some minimum level (like what a 56K modem could handle). By keeping track of timestamps of when each packet goes out to a client, and the size of the packets, you could calculate the average data rate over the past N packets to that client. If this data rate passes a certain threshold, the game would skip the messages to this client until enough time has elapsed to send the next packet.

A third would be to throttle bandwidth based on the packet loss that the other client is experiencing.

Some other notes...
I plan on using UDP for this.

The message_effort_level would tell the send thread how hard it should work to get the message through. My send/receive thread will have to handle all the ack/timeout/message sequencing that TCP would normally handle.

For movement messages to clients, it might flag them at an effort level of "just send it, and if the client doesn't get it, too bad." By the time we got around to resending motion data, the data would probably be stale and we would already have a new motion-data message ready to send.

Some things would have a relatively high effort level, like patching client files. We would keep retrying until either the client times out or a packet has been resent too many times.

Realistically, for any usage, my client would send the messages, then pop completed/failed messages off a queue contained in the game_socket. Something like:
while (!s.result_queue_empty()) {
a_result = s.pop_result();
// decide what to do with the result
}

[edited by - red_mage on November 18, 2003 4:54:32 PM]


First of all, I've not done much networking stuff, but this is what I can tell from my experience/thoughts.

quote:

Imagine the server send thread has a lot of send requests for a lot of clients. Client A has a much slower connection than client B, but a send request destined for client A is submitted earlier than one destined for client B. So client B has to wait until client A has got its data.

I could imagine that a lot of lag would be produced by this.


I don't think so, because without threads the same amount of lag would be inflicted on your game loop anyway (if you wait 20 ms for each packet to be sent, with 10 packets overall to be sent, you would get a whopping 5 fps at best).

As kuphryn suggested: by using non-blocking sockets you get away from "waiting until sent" and therefore won't introduce lag this way.
I'm not sure, but even with blocking sockets the call to send() should return immediately if the data to send is small enough, i.e. fits in the OS send queue; but if it's too big, even a non-blocking socket will only send the data partially.

red_mage:
I don't think that "message_status = s.get_message_status(m);" should belong in the sending code. Why not have message_effort_level include types like reliable and unreliable (assuming UDP here, because with TCP you wouldn't need to worry about lost data) and let the networking class handle the details?

So to send you simply do:
CNetwork::send(players_all, effort_reliable | effort_high, whatever, data);
rather than forging a packet like you did, which may be bad because the network class could merge multiple small data packets into one big one to reduce overhead (imagine sending a position update for one player: 12 bytes of data against 20+ bytes of packet overhead; that number is just a guess, but it will be about that size, IIRC).
