Ponl

Congestion control


Did any of you write congestion control for your reliable UDP? I know it's kind of reinventing the wheel, but in some applications it really is necessary. Anyway, I was thinking about measuring the RTT and using it to estimate the current bandwidth to the other peer, then using that bandwidth value to decide how much data to send. Does this seem like a good plan? I've also looked at TCP's way of doing congestion control, but when it detects congestion it halves the window, so to me it does not look like a good plan, since it will not use all the resources available. If you have any information on this, please give suggestions. Thanks, Ponl
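For reference, the TCP behavior mentioned here is the classic AIMD rule (additive increase, multiplicative decrease): grow the window a little each RTT without loss, halve it on loss. A minimal sketch of just the update rule (names and units are illustrative, not from any real TCP stack):

```cpp
#include <algorithm>

// Sketch of TCP-style AIMD congestion-window updates.
// cwnd is measured in segments; real stacks work in bytes and
// have a separate slow-start phase, omitted here.
struct AimdWindow {
    double cwnd = 1.0;

    // One RTT elapsed with no loss detected: additive increase.
    void on_rtt_no_loss() { cwnd += 1.0; }

    // Loss detected: multiplicative decrease, floored at one segment.
    void on_loss() { cwnd = std::max(1.0, cwnd / 2.0); }
};
```

The halving looks wasteful in isolation, but it is what lets many competing flows converge to a fair share instead of all ramping up until the link collapses.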

I use boost's asio for networking, so the concepts are organized in that manner, but there's no reason they couldn't be part of a trivial network handler loop.

if (priority == IMMEDIATE) {
// protocol control packets, pings, keepalives, negotiation
send_async( packet );
return;
}
// reliable transport
if ((outstanding_bytes < quota) && (outstanding_packets < max_outstanding)) {
outstanding_bytes += packet_size;
outstanding_packets++;
send_async( packet );
} else {
outstanding_queue.push_back(packet);
}

...

on_timer() {
// timer is between 50 and 500 ms, depending on requirements
// check all sockets for un-acked packets
// re-send them as needed
// if packets were lost, adjust the outstanding_bytes / outstanding_packets values appropriately
}

...

on_ack( Packet p ) {
// decrease socket's outstanding_packets appropriately
// decrease socket's outstanding_bytes by the size of packets that were acked

// try to send more packets
}
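The bookkeeping above can be made concrete. This is a self-contained sketch of the quota logic under the assumption that the actual network send (send_async) is handled elsewhere; the Packet type, quota, and max_outstanding values are illustrative, not from the original post:

```cpp
#include <cstddef>
#include <deque>

// Sketch of ack-clocked flow control: un-acked bytes and packets are
// capped, and on_ack frees quota and drains the backlog queue.
struct Packet { std::size_t size; };

struct QuotaSender {
    std::size_t outstanding_bytes   = 0;
    std::size_t outstanding_packets = 0;
    std::size_t quota           = 4096;  // byte quota for un-acked data
    std::size_t max_outstanding = 8;     // packet-count limit
    std::deque<Packet> outstanding_queue;

    // Returns true if the packet was sent immediately, false if queued.
    bool try_send(const Packet& p) {
        if (outstanding_bytes < quota && outstanding_packets < max_outstanding) {
            outstanding_bytes += p.size;
            outstanding_packets++;
            // send_async(p) would go here
            return true;
        }
        outstanding_queue.push_back(p);
        return false;
    }

    void on_ack(std::size_t acked_bytes) {
        outstanding_bytes -= acked_bytes;
        outstanding_packets--;
        // Try to send more packets now that quota has been freed.
        while (!outstanding_queue.empty()) {
            if (outstanding_bytes >= quota || outstanding_packets >= max_outstanding)
                break;
            Packet next = outstanding_queue.front();
            outstanding_queue.pop_front();
            outstanding_bytes += next.size;
            outstanding_packets++;
            // send_async(next) would go here
        }
    }
};
```

The nice property is that sends are clocked by acks: a slow or congested peer acks slowly, so quota frees slowly, so we send slowly, with no explicit rate measurement anywhere.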


Flow control here is purely passive. No RTT, no packet times, no negotiation.

You don't really care why the peer is slow, or why congestion happens. It's based around the observation that the peer will accept a limited amount of data, and we use acks to control that.

In addition, if necessary, it's trivial to add a maximum_bandwidth parameter to the above, meaning that we further limit how much we send. The timer above then also manages a secondary quota. Having the ability to limit maximum bandwidth is quite handy when dealing with many clients. For example, you could log into your live game via LAN, and suddenly you'd be consuming 100% of bandwidth, simply because you can.
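That secondary quota can be as simple as a per-tick byte budget that on_timer refills. A minimal sketch, assuming the tick-driven refill described above (the max_bandwidth value and names are illustrative):

```cpp
#include <cstddef>

// Per-tick bandwidth cap: each timer tick resets the byte budget.
// A packet that doesn't fit the remaining budget is refused and
// would stay in the outstanding queue until the next tick.
struct BandwidthCap {
    std::size_t tokens;
    std::size_t max_bandwidth;  // bytes allowed per timer tick

    explicit BandwidthCap(std::size_t per_tick)
        : tokens(per_tick), max_bandwidth(per_tick) {}

    void on_timer() { tokens = max_bandwidth; }  // refill each tick

    bool allow(std::size_t bytes) {
        if (bytes > tokens) return false;  // over budget this tick
        tokens -= bytes;
        return true;
    }
};
```

With a 100 ms timer and, say, a 3 k/sec cap, that's a 300-byte budget per tick; anything beyond it simply waits in the queue for the next tick.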

If you use non-blocking or blocking sockets, then the on_timer above is simply part of the loop.

PS: I don't really try to maximize the bandwidth. It's for in-game traffic, so prioritization, bandwidth limits (3-6k/sec per client, hard enforced, allowing for up to 20k/sec), and reliable packet loss recovery are more important than maxing out the bandwidth.

For streaming data, there's TCP. No point in re-inventing the wheel IMHO.

