Tree Penguin

data recv-ed in bundles


Hi, I'm having trouble sending/receiving small amounts of data at a time using Winsock TCP/IP in C++. My server sends 300 to 1500 bytes of data every frame (in one send() call), 60 frames per second. Everything is received by the client (a separate thread calling recv(), receiving data as soon as possible), but it arrives in bundles of about 5 frames at a time. I tried the following things to fix it, but noticed no real improvement:

- calling recv() with read sizes of 32, 100 or 300 bytes at a time (32 performed slightly better than 100 or 300)
- enabling TCP_NODELAY (server and client)
- sending more data (in case TCP_NODELAY wasn't working for some reason)
- decreasing the read/send buffer sizes
- sending the data in smaller parts (50 bytes)
- changing the priority class of the reading thread

What else can I try? What could be the problem?

Are you complaining about lag receiving data, or about the bundling? You've done the usual stuff about TCP lag, so I don't really have anything else to suggest.

As for the bundling issue: you do realize that TCP gives no guarantee that just because you sent 300 bytes in a single send() you will get 300 bytes in a single recv() on the other end? As far as TCP is concerned, it's perfectly fine for it to give you your data one byte at a time, or 15000 bytes at a time (i.e. bundling), or anything in between. There is no such thing as a packet at the application layer when using TCP.
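Since TCP is a byte stream, the usual fix is to impose your own framing on top of it. A minimal sketch (hypothetical names, not from this thread) of length-prefixed framing in C++, where the receiver accumulates whatever recv() happens to deliver and extracts complete messages:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Accumulates raw bytes (as delivered by recv(), in whatever sized chunks)
// and extracts complete messages framed with a 2-byte big-endian length prefix.
class FrameBuffer {
public:
    // Feed whatever recv() returned, regardless of how it was split or merged.
    void feed(const char* data, std::size_t len) {
        buf_.insert(buf_.end(), data, data + len);
    }

    // Returns true and fills 'msg' if a complete frame is buffered.
    bool next(std::string& msg) {
        if (buf_.size() < 2) return false;          // need the length prefix
        std::uint16_t len = static_cast<std::uint16_t>(
            (static_cast<unsigned char>(buf_[0]) << 8) |
             static_cast<unsigned char>(buf_[1]));
        if (buf_.size() < 2u + len) return false;   // frame not complete yet
        msg.assign(buf_.begin() + 2, buf_.begin() + 2 + len);
        buf_.erase(buf_.begin(), buf_.begin() + 2 + len);
        return true;
    }

private:
    std::vector<char> buf_;
};

// Helper to build a framed message on the sending side.
std::string frame(const std::string& payload) {
    std::string out;
    out.push_back(static_cast<char>(payload.size() >> 8));
    out.push_back(static_cast<char>(payload.size() & 0xFF));
    out += payload;
    return out;
}
```

Whether two sends arrive merged into one recv() or one send arrives split across several, the parser recovers the original message boundaries.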

Look at the forum FAQ:

Quote:

Q14) I'm trying to send two packets over TCP, but it seems like they arrive "merged" or one packet disappears on the receiving end. What's going on?


This is normal and expected behaviour.

Enabling TCP_NODELAY on the sender will usually stop it joining messages together (unless its send queue is already nonempty, which usually indicates some congestion, or you send messages *really* fast), but it won't stop the receiver from doing so.

Sending 60 times per second is too often for most purposes; you will find coalescence happens. Sending 300 bytes 60 times per second means 18 kbytes per second, or 144 kbit/second, which is faster than many people's connections. This is too fast.
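The arithmetic quoted above can be sanity-checked directly (a throwaway snippet of mine, not anything from the thread):

```cpp
// 300 bytes per message, 60 messages per second.
constexpr int bytes_per_second(int bytes_per_msg, int msgs_per_sec) {
    return bytes_per_msg * msgs_per_sec;
}

// Convert a byte rate to kilobits per second (1 kbit = 1000 bits).
constexpr int kbit_per_second(int bytes_per_sec) {
    return bytes_per_sec * 8 / 1000;
}

static_assert(bytes_per_second(300, 60) == 18000, "18 kbytes/s");
static_assert(kbit_per_second(18000) == 144, "144 kbit/s");
```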

In any case, even if message coalescence does not happen under normal circumstances, your application should be able to handle it anyway; as the FAQ indicates, TCP does NOT preserve message boundaries.

Mark

I'm complaining about the bundling: it causes multiple frames to arrive at the same time, which effectively gives a framerate of at most 10 fps (even though 60 frames are sent every second, the older ones just get overwritten by the newer ones received in the same bundle).

When TCP_NODELAY is on (so Nagle's algorithm is turned off), data should be sent immediately rather than waiting for more data to make one nice big packet; there is no reason for the network adapter (or the localhost loopback, which shows the same problem at the moment) to hold on to it longer than that.

So either Nagle isn't actually turned off (even though getsockopt() says it is), or for some reason recv() only receives data when some insanely large buffer is completely full, or after a timeout of something like 200 ms, which sounds like Nagle to me.
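One way to remove any doubt about whether the option actually took is to read it back immediately after setting it. A sketch in BSD-sockets style (with Winsock the option value is passed as a char* and you close with closesocket(), but the calls are otherwise the same; these helper names are mine):

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>   // close(); use closesocket() with Winsock

// Set TCP_NODELAY (disable Nagle) on a TCP socket; returns 0 on success.
int set_nodelay(int fd, bool on) {
    int flag = on ? 1 : 0;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                      reinterpret_cast<char*>(&flag), sizeof(flag));
}

// Read back the effective TCP_NODELAY state: 1 if set, 0 if not, -1 on error.
int get_nodelay(int fd) {
    int flag = 0;
    socklen_t len = sizeof(flag);
    if (getsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                   reinterpret_cast<char*>(&flag), &len) != 0)
        return -1;
    return flag != 0;
}
```

Note that even with Nagle confirmed off, the receiver is still free to coalesce, so this only rules out one of the two suspects.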

Any ideas?

EDIT:

Quote:
Original post by markr
Enabling TCP_NODELAY on the sender will usually stop it joining messages together (unless its send queue is already nonempty, which usually indicates some congestion, or you send messages *really* fast), but it won't stop the receiver from doing so. ...


Yeah, but for some reason it doesn't right now; even when I use localhost it still gets bundled (which rules out congestion, I'd say).

Is there any other way to prevent this? I doubt that all TCP traffic has a maximum of 10 blocks of data per second, so why doesn't it send (or receive) them right away?

Is the "bundling" you've observed actually resulting in larger packets? Do a packet capture (with Ethereal or something) and see what's actually in the TCP packets (which is different from what recv() tells you). It's possible that the buffering is the fault of the receiving thread not receiving often enough. If nothing else, packet tracing will let you localize the problem to the sender or the receiver.

Okay, I tried Ethereal, and it turns out the packets are being sent and received as individual packets, but the timing is all screwed up.

They are sent in bursts of a few packets at a time, with timing like this:

50.1650..
50.1668..
50.1686..
50.2290..
50.2319..
50.2337..
50.2920..
50.2938..
50.2954..

I guess that's because there's a separate thread in the Winsock drivers that actually writes the data to the network adapter. Because of Windows' time sharing, it gets the time to do so roughly 15 times a second.

That would mean the only way to sort of fix my problem is to smear out the received data and make it look a little smoother. Or is there a way to make Winsock truly send the data immediately when I call send()?

Or have I got it all wrong? :P

None of your threads are busy-waiting are they? If so, nothing will work as expected as you'll get client/server contention when you don't want it.

Have you tried running the client and server on separate hosts?

Mark

I really think you're coming at this whole problem wrong. TCP is just not designed to do what you seem to want it to do. You can tweak things until you're blue in the face, and you may even get it to work most of the time in extremely controlled scenarios, but as soon as you put it on a different machine, or a different network, or somebody external to your machine starts sending you data, the timing will change and it won't work anymore.

You need to code your app to handle this sort of thing, not tweak things to try to get it not to happen.

Quote:
Original post by markr
None of your threads are busy-waiting are they? If so, nothing will work as expected as you'll get client/server contention when you don't want it.

Have you tried running the client and server on separate hosts?


Yes, that gives me the same problem. And the server thread that sends the data is the same thread that does the physics, which does appear smooth when I view it on the server, so it's somewhere between the send() and recv() calls that it goes wrong.

I've added a timecode to the data packets and smoothed the playback out over several frames, and it now works fine (just with a small lag).
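For anyone landing here later, the timecode-plus-smoothing idea can be sketched roughly like this (my own illustration, not the poster's code): buffer frames by their send timestamp and release them a fixed delay behind the sender's clock, so a burst that arrived all at once still plays back spread out:

```cpp
#include <deque>
#include <utility>

// Plays back timestamped frames at a fixed delay behind the sender's
// clock, so frames that arrived in a burst are released one per tick.
template <typename Frame>
class JitterBuffer {
public:
    explicit JitterBuffer(double delay_sec) : delay_(delay_sec) {}

    // Frames may arrive in bursts; 'sent_at' is the sender's timecode.
    void push(double sent_at, Frame f) {
        frames_.emplace_back(sent_at, std::move(f));
    }

    // Returns the newest frame whose scheduled playback time has passed,
    // dropping anything older; returns false if nothing is due yet.
    bool pop_due(double now, Frame& out) {
        bool found = false;
        while (!frames_.empty() && frames_.front().first + delay_ <= now) {
            out = std::move(frames_.front().second);
            frames_.pop_front();
            found = true;   // keep only the most recent due frame
        }
        return found;
    }

private:
    double delay_;
    std::deque<std::pair<double, Frame>> frames_;
};
```

The fixed delay trades a little latency for smooth playback, which matches the "small lag" described above.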

I guess I should have gone with UDP for this sort of thing.
