data recv-ed in bundles

13 comments, last by Scuppy 17 years, 8 months ago
Quote:Original post by Tree Penguin
Quote:Original post by markr
None of your threads are busy-waiting are they? If so, nothing will work as expected as you'll get client/server contention when you don't want it.

Have you tried running the client and server on separate hosts?


Yes, that gives me the same problem. And the server thread that sends the data is the same thread that does the physics, which does appear smooth when I let the server view it, so it's somewhere between the send() and recv() calls that it goes wrong.

I've added a timecode to the data packets and smooth the motion out over several frames, and it now works fine (just with a small lag).

I guess I should have gone with UDP for this sort of thing.
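The "timecode" approach described above can be sketched roughly as follows. This is a hypothetical illustration, not the poster's actual code: each packet carries a timestamp, the receiver buffers recent snapshots, and rendering happens slightly in the past so it can interpolate between the two snapshots around the render time (which is where the "small lag" comes from).

```python
class SnapshotBuffer:
    """Buffer timestamped state snapshots and sample slightly in the past."""

    def __init__(self, delay=0.1):
        self.delay = delay       # render this far behind the newest data
        self.snapshots = []      # sorted list of (timestamp, value)

    def add(self, timestamp, value):
        self.snapshots.append((timestamp, value))
        self.snapshots.sort()

    def sample(self, now):
        # Find the two snapshots straddling (now - delay) and
        # linearly interpolate between them.
        target = now - self.delay
        older = None
        for t, v in self.snapshots:
            if t <= target:
                older = (t, v)
            else:
                if older is None:
                    return v                      # only newer data: snap to it
                t0, v0 = older
                alpha = (target - t0) / (t - t0)
                return v0 + (v - v0) * alpha      # linear interpolation
        return older[1] if older else None        # past the newest snapshot

buf = SnapshotBuffer(delay=0.0)
buf.add(0.0, 0.0)
buf.add(1.0, 10.0)
mid = buf.sample(0.5)   # halfway between the two snapshots
```

The `delay` trades latency for smoothness: a larger value tolerates more jitter in packet arrival before the receiver runs out of snapshots to interpolate between.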


When data arrives on a TCP connection, the receiving side buffers it up until its buffer is full. If the receiving process doesn't read the data out of the buffer before the next packet arrives, the system simply appends the new data to the end of the buffer. If you want to get 60 frames per second by sending 60 packets per second, you have to make sure that both the sending and the receiving side get a timeslice at least 60 times per second. On systems with a timeslice larger than 1/60 of a second this is impossible. On systems with a timeslice of exactly 1/60 of a second, it only works when the sending or receiving process is the only process ready for execution. On systems with a timeslice smaller than 1/60, it works as long as the total number of timeslices per second divided by 60 is larger than the number of processes ready for execution. Priority levels have to be checked too, since they modify the scheduling.

Generally, you either lower the network framerate and interpolate/extrapolate, or get an OS that can provide realtime scheduling. The first seems easier.
Quote:Original post by Anonymous Poster... If you want to get 60 frames per second by sending 60 packets per second, you have to make sure that both the sending and the receiving side gets a timeslice at least 60 times per second. On systems that have a timeslice larger than 1/60 seconds it's impossible to do it.

They get their time, at least the sending side does. I looked at the exact times I called send() (which were fine) and the exact times the data actually got sent (which were screwed up), so it's the Winsock driver itself that doesn't get or take the time.

Quote:It's probably not too late to just switch to UDP.

Wouldn't switching to UDP mean I need to change the whole logic of my server-client code? I've got one evening (4 hours or so) to switch; is that enough for an average programmer?
4 hours is not enough to do anything when it comes to distributed programming, unfortunately.

You could build a simple "reliable" layer on top of UDP, and replace your TCP connect/send/recv calls with calls to that layer. However, building and testing that layer probably takes more than 4 hours. Even integrating an existing layer, like SDL_Net, HawkNL or ENet, might take that long.
enum Bool { True, False, FileNotFound };
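The core idea behind such a "reliable" layer can be shown in a toy sketch: number each outgoing packet, remember it until it is acknowledged, and retransmit anything unacknowledged. This is only an illustration of the concept; real libraries like ENet also handle ordering, connection state, timeouts, and congestion control, which is why building one takes well over 4 hours.

```python
class ReliableSender:
    """Toy reliability layer: sequence numbers plus retransmission."""

    def __init__(self):
        self.next_seq = 0
        self.unacked = {}  # seq -> payload, kept until acknowledged

    def send(self, payload):
        packet = (self.next_seq, payload)
        self.unacked[self.next_seq] = payload
        self.next_seq += 1
        return packet  # would be handed to sendto() in real code

    def on_ack(self, seq):
        self.unacked.pop(seq, None)  # duplicate acks are harmless

    def pending(self):
        # Packets to retransmit (in practice, only after a timeout).
        return sorted(self.unacked.items())

class ReliableReceiver:
    def __init__(self):
        self.received = {}

    def on_packet(self, packet):
        seq, payload = packet
        self.received[seq] = payload  # duplicates just overwrite
        return seq                    # ack to send back

# Simulate three sends where the middle packet is lost in transit:
sender, receiver = ReliableSender(), ReliableReceiver()
p0, p1, p2 = sender.send(b"a"), sender.send(b"b"), sender.send(b"c")
sender.on_ack(receiver.on_packet(p0))
sender.on_ack(receiver.on_packet(p2))   # p1 never arrived
```

After this exchange, only the lost packet remains in `pending()`, which is exactly what the sender would retransmit on the next timeout.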
Quote:Original post by hplus0603
4 hours is not enough to do anything when it comes to distributed programming, unfortunately.

You could build a simple "reliable" layer on top of UDP, and replace your TCP connect/send/recv calls with calls to that layer. However, building and testing that layer probably takes more than 4 hours. Even integrating an existing layer, like SDL_Net, HawkNL or ENet, might take that long.


Too bad, thanks anyway.
If the delay is the TCP window buffering, couldn't he just turn off window scaling and force a small window (e.g. MTU = 500, window size = 500)? If the delay is the QoS bucket (I'm not sure what MS calls it, or even whether there is one by default), then maybe he can fiddle with that too.
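One portable way to nudge things in that direction is to shrink the socket buffers, which influences the advertised window. A minimal sketch (the OS may round or clamp the requested sizes, so this is best-effort); disabling Nagle's algorithm with TCP_NODELAY is also worth trying when small send() calls appear delayed and coalesced on the wire:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request small send/receive buffers to discourage a large window.
# The OS is free to round these up or down.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 500)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 500)

# Disable Nagle's algorithm so small writes go out immediately
# instead of being coalesced while waiting for an ack.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

Buffer sizes must be set before connect() to affect the window negotiated during the handshake, and shrinking them trades throughput for latency.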

