uri8700

TCP question

Hi all

If I send 60,000 data packets of 20 bytes each using send() over TCP, how much header overhead am I going to get? I mean, I hope not every send() warrants its own TCP packet with a full header. Does TCP aggregate send()s and wait for data to pile up before forming a packet? How long does it wait before sending a packet if no more data is passed to send()?
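For a rough sense of scale, a back-of-the-envelope calculation (assuming a 20-byte TCP header plus a 20-byte IPv4 header per segment, with no options) shows why the worst case, one segment per send(), would hurt:

```python
packets = 60_000
payload_bytes = 20          # application data per send()
header_bytes = 20 + 20      # TCP header + IPv4 header, assuming no options

payload_total = packets * payload_bytes   # 1,200,000 bytes of data
overhead_total = packets * header_bytes   # 2,400,000 bytes of headers

# If every send() became its own segment, headers would outweigh
# the payload two to one.
print(payload_total, overhead_total)
```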

Thanks

Have a look at the "Nagle algorithm" and how it is commonly implemented by operating systems; that should be illuminating.

Thanks, you saved me tons of trouble by pointing me in this direction.

Now I see that it *is* necessary to send each data packet in its own TCP packet, otherwise timing gets thrown off.

So for a real-time game with movement orders and object positions being sent in a flux of 20-byte packets, you'd have to set the TCP_NODELAY option to disable Nagle's algorithm. Is it also necessary to set SO_SNDBUF to 0 and disable Winsock buffering? It seems so; otherwise you still get disrupted timing from buffering.
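For reference, disabling Nagle is a one-line socket option. A minimal sketch in Python (the flag is the same TCP_NODELAY that Winsock exposes):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes are sent immediately instead of
# being coalesced while a previous segment is still unacknowledged.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

Setting SO_SNDBUF to 0 is a separate, Winsock-specific tweak that reportedly bypasses the socket layer's own send buffering; it is a much bigger behavioral change than TCP_NODELAY and worth benchmarking before committing to.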

Keep in mind that even if you disable Nagle, there's no guarantee that every send() call will be a single packet.


> Now I see that it *is* necessary to send each data packet in its own TCP packet, otherwise timing gets thrown off. [...] Is it also necessary to set SO_SNDBUF to 0 and disable Winsock buffering?


Relying on network packets for timing is a bad idea. Basically, it won't work.

Any traffic on the network can throw off the timings horribly. Any network congestion, network rerouting, dropped packets, electrical storms with lightning in the air, solar flares, or just about anything else can throw it off. These are entirely outside your control.

For example, you may be running with almost exactly 30ms latency for the first thousand packets, but then the next one has 47ms latency, another has 3124ms latency, then 54, then drift to a new stable timing of 37ms latency.

This is true regardless of your choice of using TCP, UDP, or other protocols.


The time it takes to travel the network is entirely outside the scope of socket programming. All you know is that it will probably get there eventually. It may travel by fiber, satellite around the world multiple times, copper cable around a room, or it may travel by messages strapped to pigeons, wireless Pringles-can antenna, or even bongo drums. That takes place at a network layer far below those you control.


Do not make assumptions about how long it takes, because it is highly variable and completely outside your control.

@frob: LOL maybe I'll try the bongo drums method to improve latency

But seriously now, I'm having big problems with latency. When I run it on loopback it's perfect, but when I try it over a broadband WAN with 170 ms latency, I get tons of trouble. Most obviously, animation becomes jumpy and totally not smooth. Why is that? Is it because I'm using TCP? I would have thought high latency only causes a delayed response for the first packet, and then it's supposed to be smooth because there's a stream of packets keeping it going. But it seems as if the client waits the 170 ms for every packet separately.


> [Most] obviously, animation becomes jumpy and totally not smooth. Why is that? Is it because I'm using TCP? [...] It seems as if the client waits the 170 ms for every packet separately.


You probably aren't properly decoupling your simulation, your networking, and your rendering.


Generally, when you receive a network update, you should treat it as information about what you want the simulation to be in the future: how to get from where it is now to some guess at what it will be. Future network packets keep updating that predicted future state, so you keep steering toward it. This is typically based on step numbers, where each step has the same fixed length.

Your rendering typically interpolates between the previous physics frame and the next physics frame based on the current wall-clock time and an estimate of when the next physics step will be taken in wall-clock time. Or it just renders what the last physics step outcome was; that's simpler, but less smooth in some cases.
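A sketch of that interpolation, assuming a fixed 30 Hz physics step (the names and rate are illustrative, not from any particular engine):

```python
PHYSICS_STEP = 1.0 / 30.0  # fixed simulation step length, in seconds

def render_position(prev_pos, curr_pos, time_since_step):
    """Blend between the last two physics states using wall-clock time."""
    alpha = min(time_since_step / PHYSICS_STEP, 1.0)  # fraction of a step elapsed
    return prev_pos + (curr_pos - prev_pos) * alpha
```

Halfway through a physics step the renderer draws the object halfway between its last two simulated positions, which is what smooths out the lower simulation rate.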

My networking and rendering are tightly coupled through a data object that holds each object's position: the network updates it, and it gets redrawn much more often than it is network-updated. What paces the whole thing is the rate at which the network updates the object, and somehow latency slows that down to about 5-6 updates per second instead of 30 or so.

The object shouldn't "jump" when it gets a new network state. Instead, it should just use the new state as another data point in an algorithm that estimates where each object should be at each point in time. Some do this by interpolating between the position the object was last shown in and the position received; others forward-extrapolate from the received position/velocity; some interpolate between two forward-extrapolated positions from the last and current packets; there are many ways to do it.
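Two of those options, sketched as hypothetical helpers (not any specific engine's API): extrapolate() dead-reckons from the newest snapshot, and smooth_toward() eases the displayed position toward that target so a fresh packet nudges the object rather than teleporting it.

```python
def extrapolate(last_pos, last_vel, time_since_packet):
    """Predict the current position from the most recent network snapshot."""
    return last_pos + last_vel * time_since_packet

def smooth_toward(shown_pos, target_pos, blend):
    """Move the displayed position a fraction of the way to the target each
    frame, so corrections are gradual instead of sudden jumps."""
    return shown_pos + (target_pos - shown_pos) * blend
```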

Halo runs at 10 or 15 updates per second, but renders at 60 frames per second, and manages pretty smooth game play, for example.

If you get 5-6 packets per second, have you looked at the data in a network sniffer (like Wireshark) to figure out what's actually going over the wire? Are you seeing "clumping", with many updates arriving in a single packet? If so, you're probably seeing the effects of Nagle's algorithm, which you can turn off with the TCP_NODELAY socket option.
