Regulating network usage.

Started by
6 comments, last by Holy Fuzz 18 years, 8 months ago
Hey, I'm wondering how to make my program automatically regulate its network usage. What I mean is that I want the program to limit itself to sending a certain number of bytes or packets per second, based on the available network bandwidth. Presumably, my app will start with some default or maximum setting, and then scale down as packet loss and/or lag increases. So my questions are:

1) Should I limit the number of packets sent per second (with a fixed maximum size per packet) or the total number of bytes sent (including packet headers) per second?

2) How does my program choose its default/maximum transmission rate? Should it ask the user to specify what kind of connection they are using and pick an appropriate value, or is there some way to determine an appropriate value automatically? (Perhaps there should be a single default value which is then scaled down/up depending on packet loss and/or lag.)

3) *How* should I scale the transmission rate? Based on packet loss, lag, or both? Do I assume that if I'm losing a certain percentage of packets then I am sending too much? How does lag come into play? Will lag go up as network load increases? By how much should I scale the transmission rate?

Thanks for all help!

- Fuzz
You could read the modern TCP congestion avoidance papers to understand how TCP solves this problem.

To answer some of your questions:

I'm assuming you're using UDP, as TCP will transparently manage bandwidth for you. If you're using TCP, the best you can do is detect lag as a possible indication that you're exceeding the channel capacity, let the queues drain, and then scale back.

1) Scale the amount of data sent. Be sure to count IP and UDP header size! (28 bytes per packet)
2) Scale up the send window size linearly when receiving acks for packets; scale back the send window size exponentially when noticing lost packets. (A rough sketch of this follows below.)
3) When starting up, you may wish to scale up exponentially until you hit lost packets, to avoid a very slow start-up process on high-bandwidth links.
4) Letting the user tune the parameters through some advanced interface would usually let the users screw themselves. However, if this is a server application, you may be able to trust the users slightly more than that.
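In code, that add-on-ack / halve-on-loss policy looks roughly like the sketch below. The names and constants are purely illustrative assumptions, not from any particular library, and the numbers are just starting points to tune.

// Sketch of additive-increase / multiplicative-decrease window control.
// All names and constants are illustrative, not a fixed API.
struct SendWindow
{
    double bytesPerSecond;   // current allowed send rate
    double minRate;          // floor, e.g. enough for one small packet per second
    double maxRate;          // ceiling from config or a user setting
    bool   slowStart;        // exponential ramp-up until the first loss

    void onAck()
    {
        if (slowStart)
            bytesPerSecond *= 2.0;       // ramp up fast until the first loss
        else
            bytesPerSecond += 100.0;     // linear increase (~1000 bps)
        if (bytesPerSecond > maxRate) bytesPerSecond = maxRate;
    }

    void onLoss()
    {
        slowStart = false;
        bytesPerSecond /= 2.0;           // exponential back-off on loss
        if (bytesPerSecond < minRate) bytesPerSecond = minRate;
    }
};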
enum Bool { True, False, FileNotFound };
Thanks, much appreciated. Yes, I'm using UDP, and it's for a game, not a server.

A few more questions:

1) In practice, what are good values by which to scale (both linearly and exponentially) the window size?

2) Should I vary the window size globally, per-destination, or some combination of the two?

3) Am I correct in thinking that it doesn't make much sense to scale up the window size whenever an ack is received if the system isn't sending anywhere near a window's worth of data? (If the window size is 1000 bps but the system is only sending 100 bps, it doesn't make sense to increase the 1000 bps window size, does it?)

Also, eventually this is going to end up being used as part of a priority system. Here's my thinking: when the application wants to send a "message" to another computer, it gives it to my network engine along with a priority, which is an integer. At regular intervals, the engine collects a certain number of outgoing messages into a packet (up to the current window size) and sends it. It picks messages based on their priority; highest-priority messages go first. The priority of those messages that were not sent is incremented by one, and the process repeats itself after some interval. Will this system work okay? Alternatively, it could NOT increment the priority, in which case low-priority messages might NEVER be sent in a highly congested situation.
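To make the idea concrete, here's roughly what I have in mind. All the names are made up and nothing here is tested:

// Rough sketch of the priority/aging send queue described above.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Message
{
    int priority;                          // higher sends first
    std::vector<unsigned char> payload;
};

class SendQueue
{
public:
    void enqueue(Message m) { pending.push_back(std::move(m)); }

    // Called at a regular interval: pack as many high-priority messages as
    // fit in 'windowBytes', then age everything left behind by one point.
    std::vector<Message> collectPacket(std::size_t windowBytes)
    {
        std::sort(pending.begin(), pending.end(),
                  [](const Message& a, const Message& b)
                  { return a.priority > b.priority; });

        std::vector<Message> packet;
        std::vector<Message> leftover;
        std::size_t used = 0;
        for (Message& m : pending)
        {
            if (used + m.payload.size() <= windowBytes)
            {
                used += m.payload.size();
                packet.push_back(std::move(m));
            }
            else
            {
                ++m.priority;              // aging: bump what didn't make it
                leftover.push_back(std::move(m));
            }
        }
        pending.swap(leftover);
        return packet;
    }

private:
    std::vector<Message> pending;
};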

Thanks!

- Fuzz
It's been a while since I've dug into the guts of TCP, but as I recall, TCP will happily use 100% of the bandwidth you can throw at it, assuming you can source that much data. TCP is worried about congestion, not throttling.

I believe that most OS's have various QoS (Quality of Service) features that will do throttling semi-automagically but I don't really know anything about them. It gives you something to google for though.
-Mike
I would suggest keeping both global and local measurements, where the actions on a local (per-connection) point would work towards an average used for the global measurements.

I would scale back by a factor of 2 after a lost packet.

I would use units of no less than 1000 bps in my scaling if it's for modem connections -- i.e., if adding more capacity, add 1000 bps (which is about 100 bytes per second), although this value is quite dependent on what you expect a typical connection to "want" (broadband or modem?).

You're right that you don't *need* to scale up on ack if you're not close to saturating the current window. I would allow the window to go at least 4x above the current send rate before stopping up-scaling, though, in case more bandwidth is suddenly needed.

For the global rate, I'd scale it back by a factor of 2^(1/N) on a dropped packet, where N is the number of active connections, and add one increment per received ack from a client that's within the ack-raise window. Then, for each client, use the minimum of (global / N) and (local) for rate limiting.

That'll give you a good start, although the specifics of traffic regulation are interesting and varied, and depend a lot on what the specific traffic pattern is.
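Spelled out, that global/per-connection combination might look something like this. The structure and names here are only one way to read the scheme above, not a prescription:

// Loose sketch of combining a global rate with per-connection rates.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Limiter
{
    double globalRate;                 // bytes per second across all connections
    std::vector<double> localRate;     // bytes per second per connection

    // A connection dropped a packet: halve its own rate, and shave the global
    // rate by 2^(1/N) so that N such events roughly halve the global rate.
    void onLoss(std::size_t conn)
    {
        const double n = static_cast<double>(localRate.size());
        localRate[conn] /= 2.0;
        globalRate /= std::pow(2.0, 1.0 / n);
    }

    // A connection acked inside the raise window: add linearly to both rates.
    void onAck(std::size_t conn, double step = 100.0)   // ~1000 bps
    {
        localRate[conn] += step;
        globalRate += step;
    }

    // Effective limit for a connection: min(global / N, local).
    double allowedRate(std::size_t conn) const
    {
        const double n = static_cast<double>(localRate.size());
        return std::min(globalRate / n, localRate[conn]);
    }
};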
enum Bool { True, False, FileNotFound };
Awesome, that makes sense. Thanks again!
Hmm, my thoughts about it. You said it was for a game? Let's say your game sends 1 packet of 100 bytes every second, and the window is 1000 bytes. If I've understood correctly, you'll collect packets until you've reached the window size. So you won't send the first packet for 10 seconds?

What about the lag?
The window size is the MAXIMUM number of bytes that can be sent per second. The app sends a packet once the packet's size reaches the window size OR after some time interval. So it will only wait a little while for data to collect.
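Concretely, the flush decision I mean is something like this (illustrative only; sendPending is a stand-in for the engine's real send call):

// Flush the pending packet either when it has grown to the window size or
// when the flush interval has elapsed, whichever comes first.
#include <cstddef>
#include <functional>

struct PacketCoalescer
{
    std::size_t windowBytes;            // max bytes per packet/interval
    double flushInterval;               // seconds to wait before sending anyway
    double lastFlush = 0.0;
    std::size_t pendingBytes = 0;
    std::function<void()> sendPending;  // hook into the engine's real send

    void update(double now)
    {
        if (pendingBytes == 0) return;
        if (pendingBytes >= windowBytes || now - lastFlush >= flushInterval)
        {
            sendPending();              // ship whatever has accumulated
            pendingBytes = 0;
            lastFlush = now;
        }
    }
};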

This topic is closed to new replies.