Quote: Original post by Tree Penguin
Quote: Original post by markr
None of your threads are busy-waiting are they? If so, nothing will work as expected as you'll get client/server contention when you don't want it.
Have you tried running the client and server on separate hosts?
Yes, that gives me the same problem. The server thread that sends the data is the same thread that runs the physics, and the motion does look smooth when I view it on the server itself, so something goes wrong between the send() and recv() calls.
I've added a timecode to the data packets and smooth the motion out over several frames, and it now works fine (just with a small lag).
I guess I should have gone with UDP for this sort of thing.
When data arrives on a TCP connection, the receiving side buffers it until its buffer is full. If the receiving process hasn't read the data out of the buffer before the next packet arrives, the system simply appends the new data to the end of the buffer; TCP is a byte stream, so the "packet" boundaries are not preserved.

If you want 60 frames per second by sending 60 packets per second, you have to make sure that both the sending and the receiving side get a timeslice at least 60 times per second. On systems whose timeslice is longer than 1/60 s this is impossible. On systems with a timeslice of exactly 1/60 s, it only works when the sending or receiving process is the only one ready for execution. On systems with a timeslice shorter than 1/60 s, it works as long as the total number of timeslices per second divided by 60 is larger than the number of processes ready for execution: with 10 ms timeslices, for example, there are 100 slices per second, and 100/60 ≈ 1.7. Priority levels have to be checked too, since they modify the scheduling.
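The stream behaviour described above is easy to see locally. Here is a small Python sketch; it uses socketpair() as a stand-in for a real TCP connection, and the 16-byte "frames" are made up for illustration:

```python
# Stream sockets do not preserve send() boundaries: three frames written
# before the reader wakes up come back out as one contiguous run of bytes.
import socket

sender, receiver = socket.socketpair()  # connected stream-socket pair

# The "server" sends three fixed-size 16-byte frames in a row, as it would
# if the "client" missed a few timeslices and never called recv() in between.
for i in range(3):
    frame = ("frame%02d" % i).encode().ljust(16, b".")
    sender.sendall(frame)
sender.close()

# The receiver now finds all three frames appended in its buffer.
data = b""
while True:
    chunk = receiver.recv(4096)  # typically returns everything at once
    if not chunk:
        break
    data += chunk

print(len(data))             # 48 bytes: three frames back to back
newest = data[-16:]          # a client that wants "now" should use the last one
print(newest[:7].decode())   # frame02
```

A client that reads exactly one frame per render loop will fall further and further behind, which is why draining the buffer and keeping only the newest (or timestamped) state helps.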
Generally, you either lower the network framerate and interpolate/extrapolate on the client, or get an OS that can provide realtime scheduling. The first seems easier.
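A minimal sketch of the interpolation option, assuming the client keeps timestamped position snapshots (as in the timecode fix quoted above) and renders slightly in the past, blending between the two snapshots that bracket the render time. All names and numbers here are illustrative:

```python
# Interpolate a 1-D position between timestamped network snapshots.
from bisect import bisect_right

def interpolate(snapshots, render_time):
    """snapshots: time-sorted list of (timestamp, position).
    Returns the position linearly interpolated at render_time,
    clamped/held at the ends of the known range."""
    times = [t for t, _ in snapshots]
    i = bisect_right(times, render_time)
    if i == 0:
        return snapshots[0][1]       # before the first snapshot
    if i == len(snapshots):
        return snapshots[-1][1]      # no newer data yet: hold (or extrapolate)
    (t0, p0), (t1, p1) = snapshots[i - 1], snapshots[i]
    a = (render_time - t0) / (t1 - t0)  # blend factor in [0, 1]
    return p0 + a * (p1 - p0)

# 10 Hz network updates sampled at an arbitrary 60 Hz frame time:
snaps = [(0.0, 0.0), (0.1, 2.0), (0.2, 3.0)]
print(interpolate(snaps, 0.15))  # ~2.5, halfway between the two snapshots
```

Rendering a fixed delay behind the newest snapshot (say one network interval) keeps the blend factor inside the known range most of the time, so extrapolation is only needed when a packet is late.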