UDP FPS Server: One object update per packet?


Is it better to pack as much information into a single state update packet as possible, or should I send a separate packet for each update?

Currently, I'm sending a single packet per object every 64 milliseconds, and am implementing a method similar to the one in this article: https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking.

Each packet is 88 bytes. Is it more efficient to pack as many updates into a single packet as possible? And if so, how large should the packet be? I would like this to scale to a large number of players and physics objects, but looking at my task manager, my server is already using about a quarter of a megabit per second for just 10 objects and a single client.
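As a back-of-the-envelope sanity check on those numbers (a sketch, assuming one 88-byte packet per object every 64 ms and the roughly 66 bytes of per-packet protocol overhead explained in the reply below):

```cpp
// Rough bandwidth check: one 88-byte update per object every 64 ms,
// ~66 bytes of Ethernet/IPv4/UDP overhead per packet (see below).
#include <cstdio>

int main() {
    const double payload  = 88.0;           // bytes of game data per packet
    const double overhead = 66.0;           // Ethernet + IPv4 + UDP, per packet
    const double rate     = 1000.0 / 64.0;  // packets per second per object (~15.6)
    const int    objects  = 10;

    const double bytes_per_sec = (payload + overhead) * rate * objects;
    std::printf("%.1f KB/s, %.2f Mbit/s\n",
                bytes_per_sec / 1000.0,
                bytes_per_sec * 8.0 / 1e6);  // ~24 KB/s, ~0.19 Mbit/s
    return 0;
}
```

That lands right around the quarter-megabit figure from the task manager, and most of it is overhead rather than game data.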



With one update per packet you get somewhat lower latency, but I still advise against such tiny packets. The latency advantage probably isn't all that great, or even perceivable.

Sending packets of 88 bytes over IPv4, protocol overhead adds roughly 75% to your bandwidth (38 bytes of Ethernet framing counting preamble, header, FCS and inter-frame gap, plus a 20-byte IPv4 header and an 8-byte UDP header: 66 bytes of protocol for 88 bytes of data). Now consider that the user may be on IPv6 (doubling the IP header size) or behind an ATM link, as many DSL subscribers are, and the ratio gets worse. Bandwidth costs money and is not infinite.
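Spelled out (a sketch; these are the standard on-the-wire header sizes, not figures from the original post):

```cpp
// Per-packet protocol overhead for a UDP datagram over IPv4 Ethernet,
// counted as it appears on the wire.
constexpr int eth_preamble = 8;   // preamble + start-of-frame delimiter
constexpr int eth_header   = 14;  // dst MAC, src MAC, EtherType
constexpr int eth_fcs      = 4;   // frame check sequence
constexpr int eth_gap      = 12;  // inter-frame gap
constexpr int ipv4_header  = 20;
constexpr int udp_header   = 8;

constexpr int overhead = eth_preamble + eth_header + eth_fcs +
                         eth_gap + ipv4_header + udp_header;
static_assert(overhead == 66, "66 bytes of overhead per 88-byte payload");
```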

This doesn't matter much when you send only about 15 packets per second per object to one client, but it starts to matter when you connect a few thousand clients. The OS (or the network card) must compute IP and UDP checksums for every packet, and the number of packets that fit on a wire is limited.

A typical 100 Mbit Ethernet has a theoretical maximum of approximately 148,000 minimum-size packets per second. This number does not include ACKs, data received from the other end, or any other protocol traffic on the wire, and it assumes ideal conditions.
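For reference, that ceiling comes from minimum-size frames: the smallest Ethernet frame occupies 84 bytes on the wire (a 64-byte frame plus 8 bytes of preamble and a 12-byte inter-frame gap), so 100,000,000 bit/s ÷ (84 bytes × 8 bits/byte) ≈ 148,800 frames per second.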

If you send 15k packets per second, you use just over 10% of your cable's capacity, which does not sound like a lot. However, the way Ethernet works, you cannot look at it this way. Before a frame is sent, the network card checks whether the cable is busy, and if it is, it backs off for some random time. So you can look at 10% saturation as a 10% chance of randomly adding unpredictable jitter (which somewhat annihilates the low-latency advantage of small packets). Add to this packets coming in from other machines, and you may easily find your line 20% saturated or more.

Eventually, sending many thousands of packets will cause packet loss: more packets, more packet loss. Routers can only handle so many packets per second, and they can only forward what fits on the attached cable (using the same check-whether-busy strategy as your network card). They do have packet queues, but queue lengths are short, even though memory is cheap nowadays. The reason is that longer queues tend to cause the opposite of what one would expect: instead of lowering network load, they increase it, as packets arrive late and the sender, seeing no ACKs, assumes them lost and resends.

Therefore, routers drop packets rather quickly when they can't immediately forward them, and thus the more packets you send, the more packets you will lose. Fewer, larger packets will induce less loss.

Nowadays, data-center networks and much of the internet backbone support IPv6, even if you only see IPv4 at home. IPv6 mandates an MTU of no less than 1280 bytes. IPv4 only guarantees 576 bytes, but I have never encountered such a low value in practice.

You can more or less safely send packets of 1280 bytes. The worst that can happen is IP fragmentation on a few paths, and even that shouldn't make much of a difference.
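As a rough sketch of what "fewer, larger packets" might look like in practice (assuming POSIX sockets and a fixed 88-byte serialized update; ObjectState and send_snapshot are hypothetical stand-ins, not anything from the original post): pack updates into one buffer until the next one would overflow a safe payload size, then flush.

```cpp
// Sketch: batch per-object state updates into one UDP datagram,
// flushing whenever the next update would exceed a safe payload size.
#include <sys/socket.h>
#include <cstdint>
#include <cstring>
#include <vector>

constexpr size_t kUpdateSize = 88;             // bytes per object update
constexpr size_t kMaxPayload = 1280 - 40 - 8;  // IPv6 min MTU - IPv6 hdr - UDP hdr = 1232

struct ObjectState { uint8_t data[kUpdateSize]; };  // placeholder for real serialization

void send_snapshot(int sock, const sockaddr* to, socklen_t tolen,
                   const std::vector<ObjectState>& objects)
{
    uint8_t buf[kMaxPayload];
    size_t  used = 0;

    for (const ObjectState& obj : objects) {
        if (used + kUpdateSize > kMaxPayload) {  // buffer full: flush it
            sendto(sock, buf, used, 0, to, tolen);
            used = 0;
        }
        std::memcpy(buf + used, obj.data, kUpdateSize);  // append one update
        used += kUpdateSize;
    }
    if (used > 0)                                // flush the remainder
        sendto(sock, buf, used, 0, to, tolen);
}
```

With 88-byte updates this fits 14 objects per datagram, so the fixed per-packet overhead is paid once per 14 updates instead of once per update.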

IP fragmentation has two (mostly theoretical) issues. Under IPv6, routers never fragment: an oversized packet triggers an ICMPv6 "Packet Too Big" message, and the sender must resend the data as two smaller packets (extra delay and extra header overhead). But since you stay at or below the guaranteed 1280-byte minimum MTU, fragmentation won't happen there (no problem!).

Under IPv4, routers fragment transparently and the receiving host reassembles. The problem here is that if one fragment is lost, the whole datagram is invalid and is discarded. Luckily, packet loss is the exception, not the rule, so this isn't much of a problem (also, packet loss usually happens in bursts, so if you lost one fragment you likely lost the others anyway).

Of course, if a user connects to the internet with a modem, having to receive 1280 bytes before assembling a packet may add some measurable latency. It depends on how slow the modem is, and on how critical low latency is for your game. Though of course, a slow modem will always suck, no matter what. This is something you may (or may not) want to consider, too.
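To put a rough number on that: on a 56 kbit/s modem, a 1280-byte datagram takes about 1280 × 8 ÷ 56,000 ≈ 183 ms of line time, versus roughly 13 ms for a single 88-byte update (ignoring per-packet overhead), so the batching trade-off is real on very slow links.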

Thanks for the detailed response. It was exactly what I wanted to know.

