TCP question

Hi all

If I send 60,000 data packets of 20 bytes each using send() over TCP, how much header overhead am I going to get? I mean, I hope not every send() warrants its own TCP packet with a full header. Does TCP aggregate send()s and wait for data to pile up before forming a packet? How long does it wait before sending a packet if no more data is sent?

Thanks
Have a look into the "Nagle algorithm" and how it is commonly implemented by operating systems; that should be illuminating. :)
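For a rough sense of scale (assuming plain IPv4 with no TCP options): every TCP segment carries at least 40 bytes of headers, 20 for IP plus 20 for TCP, before any link-layer framing. If all 60,000 of those 20-byte sends went out as individual segments, that would be ~1.2 MB of payload against ~2.4 MB of headers, i.e. two thirds of the bytes on the wire would be overhead. Coalescing those tiny writes is exactly what Nagle's algorithm is for.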

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Thanks, you saved me tons of trouble by pointing me in this direction.

Now I see that it *is* necessary to send each data packet in its own TCP packet, otherwise timing gets thrown off.

So for a real-time game with movement orders and object positions being sent in a flux of 20-byte packets, you'd have to set the TCP_NODELAY option to disable Nagle's algorithm. Is it also necessary to set SO_SNDBUF to 0 and disable Winsock buffering? It seems so; otherwise you still get disrupted timing from buffering.
Sounds pretty much on track. :)

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Keep in mind that even if you disable Nagle, there's no guarantee that every send() call will be a single packet.
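For reference, a minimal sketch of turning Nagle off (assuming sock is an already-connected TCP socket; the call is the same on Winsock and BSD sockets apart from headers and error reporting):

    // Disable Nagle's algorithm so small writes are not held back for
    // coalescing. Needs <netinet/tcp.h> on POSIX, or <winsock2.h> and
    // <ws2tcpip.h> on Windows.
    int flag = 1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                   reinterpret_cast<const char*>(&flag), sizeof(flag)) != 0) {
        // handle the error (errno on POSIX, WSAGetLastError() on Winsock)
    }

TCP_NODELAY alone removes the Nagle delay; zeroing SO_SNDBUF is a separate and riskier tweak that changes how send() completes, and you generally shouldn't need it for this.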

Relying on network packets for timing is a bad idea. Basically, it won't work.

Any traffic on the network can throw off the timings horribly. Any network congestion, network rerouting, dropped packets, electrical storms with lightning in the air, solar flares, or just about anything else can throw it off. These are entirely outside your control.

For example, you may be running at almost exactly 30ms latency for the first thousand packets, but then the next one has 47ms latency, another has 3124ms latency, then 54ms, and then it drifts to a new stable latency of 37ms.

This is true regardless of your choice of using TCP, UDP, or other protocols.


The time it takes to travel the network is entirely outside the scope of socket programming. All you know is that it will probably get there eventually. It may travel by fiber, satellite around the world multiple times, copper cable around a room, or it may travel by messages strapped to pigeons, wireless Pringles-can antenna, or even bongo drums. That takes place at a network layer far below those you control.


Do not make assumptions about how long it takes, because it is highly variable and completely outside your control.
@frob: LOL maybe I'll try the bongo drums method to improve latency

But seriously now, I'm having big problems with latency. When I run it on loopback it's perfect, but when I try it on a 170ms-latency broadband WAN, I get tons of trouble. Most obviously, animation becomes jumpy and not at all smooth. Why is that? Is it because I'm using TCP? I would have thought high latency only causes a delayed response for the first packet, and then it's supposed to be smooth because there's a stream of packets keeping it going. But it seems as if the client waits the 170ms for every packet separately.

You probably aren't properly decoupling your simulation, your networking, and your rendering.


Generally, when you receive a network update, you should use it as information about what you want the simulation to be in the future -- how to go from where it's at to some guess at what it will be. Future network packets keep updating that predicted future state, so you keep steering toward it. This is typically based on step numbers, where every step has the same fixed length.

Your rendering typically interpolates between the previous physics frame and the next physics frame based on the current wall-clock time and an estimate of when the next physics step will be taken in wall-clock time. Or it just renders what the last physics step outcome was; that's simpler, but less smooth in some cases.
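As a sketch of that decoupling (all names here -- GetWallClockSeconds, Simulate, Render, Interpolate, the state variables -- are hypothetical): a fixed 30 Hz physics step, with the renderer blending between the last two simulated states based on how far into the current step the wall clock is.

    // Fixed-timestep simulation with interpolated rendering.
    const double kStepSeconds = 1.0 / 30.0;   // fixed physics step length
    double accumulator = 0.0;
    double lastTime = GetWallClockSeconds();  // hypothetical timer

    for (;;) {
        double now = GetWallClockSeconds();
        accumulator += now - lastTime;
        lastTime = now;

        // Advance the simulation in fixed steps, however many are due.
        while (accumulator >= kStepSeconds) {
            previousState = currentState;
            currentState = Simulate(currentState, kStepSeconds);
            accumulator -= kStepSeconds;
        }

        // Blend between the last two physics states for smooth rendering.
        double alpha = accumulator / kStepSeconds;  // 0..1 within the step
        Render(Interpolate(previousState, currentState, alpha));
    }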
enum Bool { True, False, FileNotFound };
My networking and rendering are tightly knit via the data object that holds object position: it gets updated by the network and redrawn much more frequently than it gets network updates. What paces the whole thing is the rate at which the network updates the object, and somehow latency slows that down to about 5-6 updates per second, instead of 30 or so.
The object shouldn't "jump" when it gets a new network state. Instead, it should use the new state as another data point in an algorithm that estimates where the object should be at each point in time. Some people do this by interpolating between the object's last position and the position received; others forward-extrapolate from the received position/velocity; some interpolate between two forward-extrapolated positions from the last and current packets; there are many ways to do it.
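Here's a sketch of the first variant, snapshot interpolation, rendering ~100ms in the past so there is usually a newer snapshot to interpolate toward. The Snapshot and Vec3 types (with the usual +, -, * operators) and PositionAt are hypothetical; std::clamp needs <algorithm> and C++17.

    struct Snapshot { double time; Vec3 position; };
    Snapshot older, newer;            // the two most recent network updates
    const double kRenderDelay = 0.1;  // seconds to lag behind the newest data

    Vec3 PositionAt(double wallClock) {
        double t = wallClock - kRenderDelay;
        if (t >= newer.time)          // out of data: hold (or extrapolate)
            return newer.position;
        double span = newer.time - older.time;
        double alpha = (span > 0.0) ? (t - older.time) / span : 1.0;
        alpha = std::clamp(alpha, 0.0, 1.0);
        return older.position + (newer.position - older.position) * alpha;
    }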

Halo runs at 10 or 15 updates per second, but renders at 60 frames per second, and manages pretty smooth game play, for example.

If you get 5-6 packets per second, have you looked at the data in a network sniffer (like Wireshark) to figure out what's actually going on the wire? Are you seeing "clumping", with many of your messages arriving in a single packet? If so, you're probably seeing the effects of Nagle's algorithm, which you can turn off using TCP_NODELAY.
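Relatedly, because TCP delivers a byte stream rather than messages, one recv() may return half a message or several messages glued together, so the receiver has to re-split the bytes itself. A common approach is a length prefix; a sketch, assuming a 16-bit little-endian length on the wire (HandleMessage is a hypothetical handler; needs <vector>, <cstdint>, <cstddef>):

    std::vector<char> pending;  // bytes received but not yet parsed

    void OnBytesReceived(const char* data, std::size_t n) {
        pending.insert(pending.end(), data, data + n);
        while (pending.size() >= 2) {
            // assumed wire format: 16-bit little-endian payload length
            std::uint16_t len =
                static_cast<unsigned char>(pending[0]) |
                (static_cast<unsigned char>(pending[1]) << 8);
            if (pending.size() < 2u + len) break;    // wait for the rest
            HandleMessage(pending.data() + 2, len);  // hypothetical handler
            pending.erase(pending.begin(), pending.begin() + 2 + len);
        }
    }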
enum Bool { True, False, FileNotFound };
