Packet Size and Latency

14 comments, last by John Schultz 18 years, 7 months ago
I've heard that there is no latency difference between sending 10-byte packets and 150-byte (or bigger) packets, up to a certain limit. Is this true, and why? I assume people aren't using dial-up modems for games anymore, so I'm aiming for cable modems. Are there any good resources on this?

The reason I ask is that in my game I send small updates as soon as something changes, which happens 0 to 30 times/sec (not evenly spread out, though). I was afraid to add too much to the packet header because it might cause heavy delay on slow connections. I've been trying to get more information on this, but all I can find is old, 56k-modem-related material.

The technique I use works perfectly on a LAN; I'm trying to make it cope with higher latencies and more players. The total bandwidth I use is very low, so it should not be a problem even on a modem, but the packet frequency probably will be. On a LAN the latency is stable up to 500 packets/sec with packet sizes below 500 bytes. So what does a bunch of extra bytes on each packet do, if we assume we have unlimited bandwidth and medium latency?
Latency is all about the speed of the network, and is not proportional to what is sent along it. Latency will only increase relative to data flow when the total flow exceeds the bandwidth along the route. With a good hardware modem you can get very low latency from a 56K connection to your ISP. On the other hand, a cable modem connection might be downloading something at full speed from the other side of the world, but the latency could be 2000ms because each packet takes 2s to arrive.

The internet is like a conveyor belt: bandwidth is like the width of the conveyor belt and dictates how much data can be sent along it at once. Latency is proportional to the length of the belt and states how long you have to wait for anything to arrive after you request it.
Yeah, what he said. Latency is basically constant as long as the data isn't piling up waiting to be sent. If you have, say, a 56k connection, then you can send 56k of data every second. Hence the name. [wink]
So as long as you're sending under 56k per second, all is good, it is sent off immediately, and it arrives as fast as the line can take it.
If you try to send 57k per second, then data will pile up in the buffer, causing extra latency. When you send something, you have to wait for all the existing junk in the buffer to finish sending, which means extra latency.

But packets don't go any faster just because you have more bandwidth. Bandwidth is just how much you can send per second (like the width of the conveyor belt Kylotan mentioned: the wider it is, the more you can put on it every second, but that doesn't mean stuff you put on it now will arrive faster at the other end).
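To make the conveyor-belt picture concrete, here is a small Python sketch (with illustrative values, not measurements) of the "time on the wire" for a fixed-size packet at two bandwidths: more bandwidth shortens the time to push the bits out, but does nothing for the travel time itself.

```python
def transmission_delay_ms(packet_bytes, bandwidth_bps):
    """Time to push all the packet's bits onto the link, in milliseconds."""
    return packet_bytes * 8 / bandwidth_bps * 1000

# A 1500-byte packet on a 56 kbit/s modem vs. a 100 Mbit/s link:
print(round(transmission_delay_ms(1500, 56_000), 1))       # ~214.3 ms
print(round(transmission_delay_ms(1500, 100_000_000), 2))  # ~0.12 ms
```

Either way, the packet still spends the same propagation time in transit; only the "loading the belt" part shrinks.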
Well there is something strange here, something I don't get.

I feel the modem actually needs to take its time to transmit, and so does the NIC.

This is how I think it is, is the following correct?

Let's say I send a 55k packet on a 56k modem, assuming we actually get the full 56k of bandwidth. Then the packet, if I've got it right, will be sent over the next second, probably divided up somehow by the IP protocol or some other protocol? The receiver needs to wait for the full packet before it can be read. This would create a minimum latency of 1000ms @ 56k/sec + the actual latency, let's say 150ms. The 150ms latency is acceptable, but 1150ms is way out of order for action games.

Assuming we have cable modems with 250kb/s as max bandwidth, and I want to have "the best ping", then what packet size should I use? 250kbytes? But then I'll have a minimum latency of 1000ms?! Please explain; I know I'm wrong somehow.

So the ultimate packet size (Just to make this clear) should be:

Bandwidth: 30kb/s Optimal PacketSize: Anything from 1 to 30,000 bytes?
Bandwidth: 56kb/s Optimal PacketSize: Anything from 1 to 56,000 bytes?
Bandwidth: 250kb/s Optimal PacketSize: Anything from 1 to 250,000 bytes?
Bandwidth: 1Gb/s Optimal PacketSize: Anything from 1 to 1Gigabyte?

And the latency in those cases should stay at 150ms, which was the time of the 3 byte ping packet we sent before?
Latency is directly proportional to packet size: larger packets have higher latencies. If you can transmit at N bits per second, it should be clear that the interval spacing between packet times increases with the amount of data sent.

As for optimal packet size: I suggest implementing bandwidth adaption, so that the amount and frequency of data sent is adjusted depending on the available bandwidth. For more info and links see my posts in this thread.
Quote:Original post by John Schultz
Latency is directly proportional to packet size: larger packets have higher latencies. If you can transmit at N bits per second, it should be clear that the interval spacing between packet times increases with the amount of data sent.

That's a bit misleading. "Latency" as it is usually described is the sum of three quantities: transmission delay, propagation delay, and queueing delay.

Transmission delay: This relates to the bandwidth of the link, or equivalently, how many bits you can push out per second. A T1 has a lower transmission delay than a 56k modem.

Propagation delay: This measures, once a bit gets put onto a link, how long it will be before it comes out the other end. It is completely independent of transmission delay. Example: A truck full of DVDs has a very low transmission delay (can put hundreds of terabytes of data into the "link" in a matter of minutes) but very high propagation delay (it'll take the truck a few days to get to New York).

Queueing delay: This relates to how long a packet sits around on a host or router waiting for its turn. It's the only part of the equation that's affected by network congestion (load-balancing notwithstanding).

For sending a single packet across most links, propagation delay > queueing delay > transmission delay. That's not to say that transmission delay is unimportant, just that it doesn't change things much if your data rate is low.
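The three components can be sketched as a single sum (all numbers here are made-up illustrations, not measured values):

```python
def one_way_latency_ms(packet_bytes, bandwidth_bps,
                       propagation_ms, queued_bytes=0):
    """Transmission + propagation + queueing, per the breakdown above."""
    transmission = packet_bytes * 8 / bandwidth_bps * 1000
    queueing = queued_bytes * 8 / bandwidth_bps * 1000
    return transmission + propagation_ms + queueing

# Small packet on an idle 56k link with 150 ms propagation delay:
print(round(one_way_latency_ms(100, 56_000, 150), 1))          # ~164.3 ms
# Same packet stuck behind 10 KB already queued: queueing dominates.
print(round(one_way_latency_ms(100, 56_000, 150, 10_000), 1))  # ~1592.9 ms
```

Note how the small packet's own transmission delay (~14 ms) barely matters next to propagation, exactly as the ordering above suggests, until a backlog appears.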
Quote:Original post by Wave
I feel the modem actually needs to take its time to transmit, and so does the NIC.

That's true, but that's related to the bandwidth. On 56k, you can send 56k *per second*, which means that one bit can be sent in 1/56000 of a second (roughly; actually I think a 56k modem can only send at 33k but receive at 56k, but that's not the point). So the bigger the packets you send, the longer it takes the modem/NIC to send (or receive) them.

Quote:
Let's say I send a 55k packet on a 56k modem, assuming we actually get up to 56k in bandwidth. Then the packet, if I got it right, will be sent during the next second

A better way to put it would be to say that sending it takes almost a second. Of course, it starts sending immediately, and takes almost a second to get the last bits of the packet pushed out the door.
So if you send a packet like this every second, all is fine. If you send two of these per second, you're queuing up more data than the modem can send.

So what matters isn't so much the packet size, but the total bandwidth used.
Whether you send 10 packets of 5k each, or one packet of 50k is basically irrelevant. (Of course, each individual 5k packet will take less time to send, so the first of them will arrive a bit faster than the entire 50k packet would)
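A small sketch of that last point, using made-up numbers in the thread's own units (a "56k" link, 150 ms propagation delay): the receiver can only read a packet once all of it has arrived, so the first 5k chunk becomes readable much sooner than a single 50k packet would.

```python
LINK = 56_000    # link speed in "units per second", kept abstract as in the thread
PROP_MS = 150.0  # assumed propagation delay

one_big = 50_000 / LINK * 1000 + PROP_MS     # the whole 50k must arrive first
first_small = 5_000 / LINK * 1000 + PROP_MS  # only the first 5k chunk must arrive
print(round(one_big, 1), round(first_small, 1))  # ~1042.9 vs ~239.3
```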

Quote:
The receiver needs to wait for the full packet before it can be read. This would create a minimum latency of 1000ms @ 56k/sec + the actual latency, let's say 150. The 150ms latency is acceptable but the 1150ms latency is way out of order for action games.

Yeah, which is why games don't usually send 56k packets. :)
This is the transmission delay Sneftel mentioned. It simply takes ages for a slow link to push out 56k bits.
If you send, say, 1k packets, then it will only take 1/56th of a second to send each packet, which added to the 150ms propagation delay gives... well, somewhere around 170ms. That's a lot more acceptable. Of course, since most people have faster connections these days, it will probably be a lot less noticeable, but your math is basically correct.

Of course, sending fewer, bigger packets allows you to use the bandwidth more efficiently (less overhead due to packet headers and such), but bigger packets take longer to send, as your math above showed. :)
So it all depends on what you need.
Quote:Original post by Sneftel
Quote:Original post by John Schultz
Latency is directly proportional to packet size: larger packets have higher latencies. If you can transmit at N bits per second, it should be clear that the interval spacing between packet times increases with the amount of data sent.

That's a bit misleading. "Latency" as it is usually described is the sum of three quantities: transmission delay, propagation delay, and queueing delay.


That's true: adding propagation and queueing delay will further increase latency, making the net latency not directly proportional (a constant scale factor) to packet size alone, but a combination of factors, some constant (speed of light) and some variable (network conditions).

Thus, in the best case (near) ideal network (such as with a switched LAN), latency is directly proportional to packet size. This can be verified using ping -l <PACKET_SIZE> <target>, or using a custom protocol and adjusting packet size. On the internet, the additional delays will further increase latency: they add to the base latency related to packet size and available bandwidth.

See also Measuring Latency.

For a game where fast response and low latency are desired, sending smaller packets at a higher frequency will give the best performance. As the packet size is reduced, packet overhead relative to payload will start to show diminishing returns. For an application (or game state) where maximum data transmission is desired, sending larger packets less frequently is more efficient (for example, high-speed networks where Jumbo Frame size (~9000 bytes) packets are required for efficient bandwidth utilization). Extremely large packets will likely fragment (and in some markets, much smaller packets may also fragment: it depends on internet router settings).
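As a rough illustration of the header-overhead point, assuming plain UDP over IPv4 (a 20-byte IP header plus an 8-byte UDP header, ignoring link-layer framing):

```python
HEADER_BYTES = 28  # 20-byte IPv4 header + 8-byte UDP header

def overhead_fraction(payload_bytes):
    """Fraction of each datagram that is header rather than game data."""
    return HEADER_BYTES / (payload_bytes + HEADER_BYTES)

for payload in (10, 50, 150, 500, 1400):
    print(f"{payload:5d}-byte payload: {overhead_fraction(payload):.0%} overhead")
```

A 10-byte payload is almost three-quarters header, while a 1400-byte payload wastes only about 2%, which is the diminishing-returns trade-off described above.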


Edit: Testing on the internet:

Quote:
Pinging [ip address] with 10000 bytes of data:

Reply from [ip address]: bytes=10000 time=185ms TTL=117
Reply from [ip address]: bytes=10000 time=191ms TTL=117
Reply from [ip address]: bytes=10000 time=198ms TTL=117
Reply from [ip address]: bytes=10000 time=193ms TTL=117

Ping statistics for [ip address]:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 185ms, Maximum = 198ms, Average = 191ms

Pinging [ip address] with 20000 bytes of data:

Reply from [ip address]: bytes=20000 time=506ms TTL=117
Reply from [ip address]: bytes=20000 time=421ms TTL=117
Reply from [ip address]: bytes=20000 time=420ms TTL=117
Reply from [ip address]: bytes=20000 time=422ms TTL=117

Ping statistics for [ip address]:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 420ms, Maximum = 506ms, Average = 442ms

Much smaller packets:

Pinging [ip address] with 750 bytes of data:

Reply from [ip address]: bytes=750 time=18ms TTL=117
Reply from [ip address]: bytes=750 time=14ms TTL=117
Reply from [ip address]: bytes=750 time=18ms TTL=117
Reply from [ip address]: bytes=750 time=15ms TTL=117

Ping statistics for [ip address]:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 14ms, Maximum = 18ms, Average = 16ms
Pinging [ip address] with 1500 bytes of data:

Reply from [ip address]: bytes=1500 time=33ms TTL=117
Reply from [ip address]: bytes=1500 time=25ms TTL=117
Reply from [ip address]: bytes=1500 time=37ms TTL=117
Reply from [ip address]: bytes=1500 time=34ms TTL=117

Ping statistics for [ip address]:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 25ms, Maximum = 37ms, Average = 32ms

To be fair:

Pinging [ip address] with 1472 bytes of data:

Reply from [ip address]: bytes=1472 time=22ms TTL=117
Reply from [ip address]: bytes=1472 time=19ms TTL=117
Reply from [ip address]: bytes=1472 time=20ms TTL=117
Reply from [ip address]: bytes=1472 time=18ms TTL=117

Ping statistics for [ip address]:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 18ms, Maximum = 22ms, Average = 19ms

Pinging [ip address] with 1473 bytes of data:

Reply from [ip address]: bytes=1473 time=36ms TTL=117
Reply from [ip address]: bytes=1473 time=31ms TTL=117
Reply from [ip address]: bytes=1473 time=27ms TTL=117
Reply from [ip address]: bytes=1473 time=25ms TTL=117

Ping statistics for [ip address]:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 25ms, Maximum = 36ms, Average = 29ms

A jump in latency with only a one-byte difference... See MTU.
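The threshold sits exactly where the ICMP payload plus headers stops fitting in a typical 1500-byte Ethernet MTU:

```python
MTU = 1500          # typical Ethernet MTU, in bytes
IPV4_HEADER = 20    # bytes
ICMP_HEADER = 8     # bytes (echo request)

max_ping_payload = MTU - IPV4_HEADER - ICMP_HEADER
print(max_ping_payload)  # 1472; one byte more forces IP fragmentation
```

That is why 1472-byte pings above fit in one frame while 1473-byte pings fragment into two, roughly doubling the round-trip time.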



Doubling the packet size resulted in a near-proportional increase in latency (a reasonably fast internet connection). See also Packet size vs. latency where a 1500 byte packet is estimated at 77.72ms and a 576 byte packet at 29.24ms over T1, 10 hops.

I highly recommend a tool like DummyNet, which, when combined with an old PC (you just need a floppy drive) and two Intel Pro100 NICs, allows one to experiment with delay, restricted bandwidth, and packet loss in order to test and tune a custom protocol.

[Edited by - John Schultz on August 26, 2005 7:20:01 PM]
Just a brief addition: the reason packet size matters for latency is that the sender has to have all the data of the packet before sending it (which is when the first byte goes out) -- but the receiver has to receive all the data (i.e., receive the last byte) before it can even start looking at the first byte.

If the network was a true "stream" and the data you were sending was "streaming" (like audio, say), things would be different -- but true "streams" don't have "packet" sizes anyway ;-)
enum Bool { True, False, FileNotFound };
Thanks for all the feedback!

My summary (check if it's correct)
-----------------------------------------------------------------------
- Latency is the sum of the following:
1. Transmission - Factor: size of the packet - Time: depends on the connection/modem/NIC.
2. Propagation - Time: the time it takes for the data to travel - Factor: depends on distance, routers and the network.
3. Queueing - Time: if we exceed the bandwidth, the packet might need to wait in a queue - Factor: bandwidth usage.

So to make a good estimation of the latency we need to:
1. Know the speed of our connection.
Calculation: 1000 * ( PacketSize / ConnectionSpeed ) = SendLatency in milliseconds
Ex: at 56kb/s we can send 1kb in ~18ms.
2. Know the time the packet travels.
Calculation: send a ping, get the ping reply, measure the time (halving it, since a ping measures the round trip) and subtract the transmission time from point 1.
3. Know the current bandwidth usage. Easy if our game is the only network application running.
Ex: if the usage is 40k on a 56k connection and we send a 35k packet, the delay will be the time to transmit ( 40k + 35k ) = 75k using point 1, which would mean about 1340ms!

-----------------------------------------------------------------------
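The estimation steps above can be sketched as follows (units as in the post: sizes and ConnectionSpeed in the same "k" units, so they cancel):

```python
def estimated_latency_ms(packet_size, connection_speed,
                         propagation_ms, current_usage=0):
    """Point 1's formula applied to (usage + packet), plus travel time."""
    send_ms = 1000 * (current_usage + packet_size) / connection_speed
    return send_ms + propagation_ms

# Point 3's example: 40k of current usage plus a 35k packet on a 56k connection:
print(round(estimated_latency_ms(35_000, 56_000, 0, 40_000)))  # 1339 ms (~1340)
```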


So now I'll go through the rest and all links provided and reply if I have more questions. Also correct me if I'm wrong above!

@John Schultz - What's this?
http://www.brightland.com/sourcePages/network_technology.htm
It speaks of a smart network library, but it's just a plain text page? Is this a project that is not finished? What's the actual page?

And you also said you've made your own custom netlib - the above? How did you implement the reliable UDP part? Did you send an ack for every reliable packet received, or did you piggyback them on other packets? Or did you do this depending on the current traffic?

And.. I will post more questions in a new topic =)

This topic is closed to new replies.
