Limiting upstream throughput

Started by
2 comments, last by Antheus 16 years, 2 months ago
When sending large files using UDP, a lot of packets will be dropped if one does not somehow limit the rate at which the data is sent. But what values should one use for this limitation? For instance, if I call send() and push around 30 x 1500-byte packets through my socket at the same time, only 5-6 of those will reach their destination. Setting SO_RCVBUF to a large value on the remote host makes this work better, but that isn't a good solution, since the protocol should not have any restriction on how large the files it sends can be. What values should I use to calculate how fast I should send the data? 8096 bytes, which is the standard value for SO_RCVBUF and SO_SNDBUF in Winsock, seems like a nice value. But 8096 bytes over what timeframe? ^.- Thanks
Shields up! Rrrrred alert!
What kind of sockets are you using? Blocking or non-blocking?

With blocking sockets, send() will block until more room is available if you exceed the buffer size. With non-blocking sockets, you get a WSAEWOULDBLOCK error.

But buffer sizes do not affect packet loss. Since apparently you need all packets to arrive, you'll need some acknowledgment scheme: when the peer receives a packet, it sends a response indicating which packet it received.

With multi-client servers the above problems tend not to be an issue, since the server will be serving many peers and will not run the risk of overflowing a client's buffer.

If, however, you want to take control of traffic, simply keep track of how many packets are in transit to a given peer. Once you receive an acknowledgment, either send more data or re-send lost packets.

The problem you're dealing with is more likely to occur on a LAN. Since packets are delivered almost instantly, it's easy to flood the client's buffer so that the application doesn't have time to clear it fast enough. Over a WAN, varying latency makes that particular situation somewhat less likely.

But in general, if you want to ensure things arrive, you'll need to either implement an acknowledgment scheme or simply use TCP.
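The acknowledgment scheme described above can be sketched roughly as follows. This is only an illustration of the bookkeeping, not a definitive implementation: the class name, the 0.5-second timeout, and the `now` parameter (which makes the logic testable without real clocks) are my own, and the actual socket I/O is left out.

```python
import time

class ReliableSender:
    """Minimal ACK/retransmit bookkeeping; wire I/O is omitted."""

    def __init__(self, timeout=0.5):
        self.timeout = timeout
        self.next_seq = 0
        self.unacked = {}  # seq -> (payload, time_sent)

    def send(self, payload, now=None):
        """Assign a sequence number and remember the packet until it's acked."""
        now = time.monotonic() if now is None else now
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = (payload, now)
        return seq  # the caller puts (seq, payload) on the wire

    def on_ack(self, seq):
        """Peer confirmed this packet; stop tracking it."""
        self.unacked.pop(seq, None)

    def due_for_resend(self, now=None):
        """Sequence numbers whose ack has not arrived within the timeout."""
        now = time.monotonic() if now is None else now
        return [seq for seq, (_, t) in self.unacked.items()
                if now - t >= self.timeout]
```

The sender would call due_for_resend() on each pass of its loop and put those packets back on the wire with a fresh timestamp.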
Quote:Original post by Antheus
What kind of sockets are you using? Blocking or non-blocking?



Thanks for your answer!

I am using non-blocking sockets, and I have a protocol for numbering packets so that they arrive in sequence, and for sending ACKs so that packets are resent within a certain time if they don't get an ACK.

So it doesn't matter that I am flooding the send and receive buffers before I can empty them, because the ack-resend logic will resend any packets that were lost due to flooding.

But it seems like an awful waste to rely on packets being resent instead of just limiting the rate at which I'm sending packets.

I'm thinking there should be some way to calculate roughly how much data you can send per second if you have the round-trip time, etc. Of course this is also limited by bandwidth, which there is no way to know, I think, unless you try to measure it.
Shields up! Rrrrred alert!
Quote:I am using non-blocking sockets, and I have a protocol for numbering packets so that they arrive in sequence, and for sending ACKs so that packets are resent within a certain time if they don't get an ACK


Are you checking for the WOULDBLOCK error? That tells you when the send buffer is full.

Quote:So it doesn't matter that I am flooding the send and receive buffers before I can empty them, because the ack-resend logic will resend any packets that were lost due to flooding.


These are network-stack buffers local to your machine. Flooding happens independently of them - there's a whole network in between the sender and receiver.

Quote:I'm thinking there should be some way to calculate roughly how much data you can send per second if you have the round-trip time, etc. Of course this is also limited by bandwidth, which there is no way to know, I think, unless you try to measure it.


Sliding window, same as what TCP uses.

Send n packets or m bytes, then wait for acks. As each ack arrives, send more, up to your limits.
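A reasonable starting point for m is the bandwidth-delay product - the link rate multiplied by the round-trip time, which is how much unacknowledged data it takes to keep the pipe full. A quick sketch (the function names, the 1500-byte packet size, and the rates in the examples are just illustrative assumptions):

```python
def bytes_in_flight(rate_bps, rtt_seconds):
    """Bandwidth-delay product: unacked bytes needed to keep the pipe full."""
    return int(rate_bps / 8 * rtt_seconds)

def packets_in_flight(rate_bps, rtt_seconds, packet_size=1500):
    """The same limit expressed in whole packets (always at least one)."""
    return max(1, bytes_in_flight(rate_bps, rtt_seconds) // packet_size)
```

For example, on a 1 Mbit/s link with a 100 ms round-trip time this gives 12500 bytes, or about 8 full 1500-byte packets in flight at once.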

If you notice all acks are arriving, increase n or m; if you notice packet loss, decrease them. You can additionally implement hard limits for those values; otherwise you get fully dynamic bandwidth throttling.
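That grow-on-success, shrink-on-loss rule is essentially what TCP congestion control calls additive increase/multiplicative decrease. A minimal sketch of the window sizing, with made-up constants for the initial size and the hard limits:

```python
class Window:
    """AIMD window sizing: grow by one packet per clean round of acks,
    halve on loss - the shape of rule TCP congestion control uses."""

    def __init__(self, initial=4, minimum=1, maximum=64):
        self.size = initial      # packets allowed in flight
        self.minimum = minimum   # hard floor
        self.maximum = maximum   # hard cap (the optional throttle)

    def on_round_complete(self, lost_any):
        if lost_any:
            self.size = max(self.minimum, self.size // 2)  # back off sharply
        else:
            self.size = min(self.maximum, self.size + 1)   # probe upward gently
```

The sender would call on_round_complete() once per batch of acks, then allow at most `size` unacknowledged packets on the next round.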

With non-blocking sockets you can use a timed loop to send data (50 ms of reading, then write as much as possible), so you have an implicit timer from which you can calculate how much to send. The assumption here is that sending is instant, which isn't a problem with even a 64k buffer. Worst case, you'll undershoot your target bandwidth by a few percent.
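That implicit timer gives you a per-tick byte budget from whatever target rate you pick. A small sketch of the arithmetic (the 50 ms tick, 1500-byte packet size, and the rates in the examples are only assumptions):

```python
def bytes_per_tick(target_bps, tick_seconds=0.05):
    """Byte budget for one pass of a fixed-interval send loop."""
    return int(target_bps / 8 * tick_seconds)

def packets_per_tick(target_bps, tick_seconds=0.05, packet_size=1500):
    """How many full packets fit in one tick's budget."""
    return bytes_per_tick(target_bps, tick_seconds) // packet_size
```

At a 256 kbit/s target with a 50 ms tick, the budget is 1600 bytes - one full packet per pass; at 8 Mbit/s it is 50000 bytes, or 33 packets per pass.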

This effectively limits the amount of data on the wire. There are, of course, gotchas: high-latency connections will achieve lower throughput due to the synchronous nature of acknowledgments.

This topic is closed to new replies.
