Congestion control

Did any of you write congestion control for your reliable UDP? I know it's kind of reinventing the wheel, but in some applications it really is necessary.

Anyway, I was thinking about measuring the RTT and using it to estimate the current bandwidth to the other peer, then using that bandwidth value to decide how much data to send. Does this seem like a good plan?

I've also looked at TCP's way of doing congestion control, but when it detects congestion it halves the window, so to me that does not look like a good plan since it will not use all the resources available. If you have any information on this, please give suggestions.

Thanks
Ponl
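
A minimal sketch of the RTT-based idea above, purely as an illustration; the RttEstimator type, the smoothing constant, and the bandwidth estimate passed in are assumptions for this example, not part of any existing library.

// Illustrative only: smooth the measured RTT and derive a send window
// from an assumed bandwidth estimate (the bandwidth-delay product).
#include <cstddef>
#include <algorithm>

struct RttEstimator {
    double smoothed_rtt = 0.1;              // seconds, initial guess
    static constexpr double alpha = 0.125;  // EWMA weight, similar to TCP's SRTT

    void on_sample(double rtt_sample) {
        // exponentially weighted moving average of the measured RTT
        smoothed_rtt = (1.0 - alpha) * smoothed_rtt + alpha * rtt_sample;
    }

    // bytes allowed in flight: estimated bandwidth * RTT, clamped to something sane
    std::size_t send_window(double est_bandwidth_bytes_per_sec) const {
        double bdp = est_bandwidth_bytes_per_sec * smoothed_rtt;
        return static_cast<std::size_t>(std::clamp(bdp, 1024.0, 256.0 * 1024.0));
    }
};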
I use boost's asio for networking, so the concepts are organized accordingly, but there's no reason they couldn't be part of a trivial network handler loop.
if (priority == IMMEDIATE) {
  // protocol control packets, pings, keepalives, negotiation
  send_async( packet );
  return;
}

// reliable transport
if ((outstanding_bytes < quota) && (outstanding_packets < max_outstanding)) {
  outstanding_bytes += packet_size;
  outstanding_packets++;
  send_async( packet );
} else {
  outstanding_queue.push_back( packet );
}

...

on_timer() {
  // timer is between 50 and 500 ms, depending on requirements
  // check all sockets for un-acked packets
  // re-send them as needed
  // if packets were lost, adjust the outstanding_* values appropriately
}

...

on_ack( Packet p ) {
  // decrease socket's outstanding_packets appropriately
  // decrease socket's outstanding_bytes by the size of packets that were acked
  // try to send more packets
}


Flow control here is purely passive. No RTT, no packet times, no negotiation.

You don't really care why the peer is slow, or why congestion happens. It's based around the observation that the peer will accept a limited amount of data, and we use acks to control that.
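
A hedged sketch of what the on_ack() path above could look like when written out; the Socket and Packet types and the member names mirror the pseudocode but are assumptions for this example, not an existing API.

// Illustrative expansion of the on_ack() pseudocode above.
#include <cstddef>
#include <deque>

struct Packet { std::size_t size; /* payload, sequence number, ... */ };

struct Socket {
    std::size_t outstanding_bytes   = 0;
    std::size_t outstanding_packets = 0;
    std::size_t quota               = 8 * 1024;  // bytes allowed in flight
    std::size_t max_outstanding     = 32;        // packets allowed in flight
    std::deque<Packet> outstanding_queue;        // waiting for quota

    void send_async(const Packet&) { /* hand off to the transport */ }

    void on_ack(const Packet& acked) {
        // acked data no longer counts against the in-flight quota
        outstanding_bytes   -= acked.size;
        outstanding_packets -= 1;

        // freed quota lets queued packets go out immediately
        while (!outstanding_queue.empty() &&
               outstanding_bytes + outstanding_queue.front().size <= quota &&
               outstanding_packets < max_outstanding) {
            Packet next = outstanding_queue.front();
            outstanding_queue.pop_front();
            outstanding_bytes   += next.size;
            outstanding_packets += 1;
            send_async(next);
        }
    }
};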

In addition, if necessary, it's trivial to add a maximum_bandwidth parameter to the above, meaning that we further limit how much we send. The timer above then also manages a secondary quota. Having the ability to limit maximum bandwidth is quite handy when dealing with many clients. For example, you could log into your live game via LAN, and suddenly you'd be consuming 100% of the bandwidth, simply because you can.
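
One possible shape for that secondary quota, sketched as a per-tick byte budget refilled by the timer; the maximum_bandwidth and bandwidth_budget names are assumptions for this example.

// Sketch of a per-tick bandwidth cap refilled by the timer.
#include <cstddef>
#include <algorithm>

struct BandwidthCap {
    std::size_t maximum_bandwidth = 6 * 1024;  // bytes per second, per client
    double      timer_interval    = 0.1;       // seconds between on_timer() calls
    std::size_t bandwidth_budget  = 0;         // bytes we may still send this tick

    // called from on_timer(): refill the budget, capped so bursts stay bounded
    void on_timer() {
        std::size_t per_tick =
            static_cast<std::size_t>(maximum_bandwidth * timer_interval);
        bandwidth_budget = std::min(bandwidth_budget + per_tick, maximum_bandwidth);
    }

    // ask before sending: only send if both the ack-based quota (not shown)
    // and this byte budget allow it
    bool try_consume(std::size_t packet_size) {
        if (packet_size > bandwidth_budget)
            return false;
        bandwidth_budget -= packet_size;
        return true;
    }
};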

If you use non-blocking or blocking sockets, then the on_timer above is simply part of the loop.
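
For completeness, a rough sketch of how the timer logic might fold into a plain non-blocking loop; poll_for_packets(), on_timer(), and the 100 ms interval are placeholders standing in for the pseudocode above, not real APIs.

// Rough shape of the same thing without asio: a non-blocking loop that
// polls the socket and runs the timer logic when enough time has passed.
#include <chrono>

double now_seconds() {
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}

void run_network_loop() {
    const double timer_interval = 0.1;  // 100 ms, within the 50-500 ms range above
    double next_timer = now_seconds() + timer_interval;

    for (;;) {
        // poll_for_packets();           // read whatever is available, dispatch acks
        if (now_seconds() >= next_timer) {
            // on_timer();               // resend un-acked packets, refill quotas
            next_timer += timer_interval;
        }
        // ... game / application work ...
    }
}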

PS: I don't really try to maximize the bandwidth. It's for in-game traffic, so prioritization, bandwidth limits (3-6k/sec per client, hard enforced, allowing for up to 20k/sec), and reliable packet loss recovery are more important than maxing out the bandwidth.

For streaming data, there's TCP. No point in re-inventing the wheel IMHO.
Ok thanks, I'll try your approach.
I'll let you know how well it works for me.

Cheers
Ponl
