Dealing with bursts of delayed packets from the client to the server



#1 fholm   Members   -  Reputation: 262


Posted 03 September 2011 - 03:45 AM

So, here comes another question about sending updates from the client to the server. We all know that internet connections are not 100% reliable and that delivery speed can differ between two packets sent right after each other. This specific problem concerns movement (or, more generally, the stream of requests from the client), assuming this:

  • Client and server both run at 66.66 Hz internally (15 ms steps)
  • The client sends commands to the server at a rate of 33.33 Hz (30 ms), so on average two commands are sent to the server per packet

Now, the server receives these commands from the client, puts them in a per-client command queue, and executes them one by one every 15 ms. The queue is refilled every 30 ms by new commands from the client, and one command is picked off the queue every 15 ms by the server. Those of you experienced with this kind of thing probably already know what is "going wrong", so let me give you an example:

  • 30 ms: the client sends two commands to the server
  • 60 ms: the client sends two more commands to the server
  • Due to lag, the server receives both batches of commands at the same time
  • The client then continues to send two commands every 30 ms
  • The server dequeues one command every 15 ms, but it never catches up to the "latest" command: new commands keep arriving, and the initial burst of 2x2 commands stays ahead of them in the queue

How do I deal with the fact that packets don't always arrive with equal spacing, especially when they sometimes miss the currently running fixed-step update and get delayed another 15 ms? This causes a queue of commands to build up on the server, which creates even more lag for the client, since the server has to chew through the first X queued commands before it reaches the most recently received one.
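
To make the failure mode concrete, here is a minimal sketch (names and types are hypothetical, just for illustration) of the loop described above. One command is consumed per 15 ms tick, so any backlog left behind by a burst never drains while the client keeps sending at full rate:

#include <deque>

struct Command { /* one 15 ms step worth of client input */ };
void Execute(const Command& cmd); // advances the simulation one step

std::deque<Command> commandQueue; // refilled every ~30 ms from the network

// Called once per 15 ms server tick.
void ServerTick()
{
    if (!commandQueue.empty())
    {
        Execute(commandQueue.front());
        commandQueue.pop_front();
    }
    // If a lag burst ever deposits 4 commands while only 2 are consumed
    // per 30 ms, the extra 2 sit in the queue forever: a permanent extra
    // 30 ms between the client's newest input and its execution.
}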

I've come up with a couple of options, but none seems really satisfying:

  • Limit the command queue to two commands and drop everything else (see the sketch after this list). This will cause the occasional dropped packet, but that's not a big deal since it can be smoothed out
  • Execute commands as fast as they come in, to keep up with the client. This feels like it would open the server up to cheating, though, by sending commands at blazing speed to move your character faster, for example. Maybe there is some clever way of deducing when commands are valid, even when they come in bursts, that would rule out cheating
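
A sketch of the first option, reusing the hypothetical Command type and queue from the sketch above: cap the backlog and drop the oldest commands, trading an occasional lost input for bounded latency:

#include <cstddef>
#include <deque>

const std::size_t kMaxQueuedCommands = 2; // illustrative cap

void EnqueueCommand(std::deque<Command>& queue, const Command& cmd)
{
    queue.push_back(cmd);
    // Keep only the newest commands; anything older is stale input
    // anyway, and client-side smoothing can hide the occasional drop.
    while (queue.size() > kMaxQueuedCommands)
        queue.pop_front();
}
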
Again, thanks to everyone who has helped me so far!


#2 kunos   Crossbones+   -  Reputation: 2203


Posted 03 September 2011 - 03:48 AM

Why don't you just dequeue the entire command buffer every server iteration?
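
In terms of the hypothetical queue from the first post, that would be (just a sketch of the suggestion, not actual code from this thread):

void ServerTick()
{
    // Drain everything that has arrived, instead of one command per tick:
    // a burst of 4 commands is executed immediately and the queue empties,
    // so no permanent backlog can build up.
    while (!commandQueue.empty())
    {
        Execute(commandQueue.front());
        commandQueue.pop_front();
    }
}
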
Stefano Casillo
Lead Programmer
TWITTER: @KunosStefano
AssettoCorsa - netKar PRO - Kunos Simulazioni

#3 fholm   Members   -  Reputation: 262


Posted 03 September 2011 - 03:56 AM

Why don't you just dequeue the entire command buffer every server iteration?


That would leave the server open to cheating, would it not? Say, by sending 4-6 packets with 2 commands each on every client step.

#4 kunos   Crossbones+   -  Reputation: 2203


Posted 03 September 2011 - 04:03 AM

I don't know what kind of server architecture you have in place... but if that's what you're worried about, it's pretty easy to keep "long term" statistics about how many packets you are receiving from a client... say, in the last 10 seconds. If that's way more than you expect, you have a cheater.
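
One way such bookkeeping might look (a sketch; the window length and threshold are made-up numbers):

#include <cstdint>

// Counts packets from one client over a sliding 10-second window.
struct PacketRateMonitor
{
    static const int kWindowSeconds  = 10;
    static const int kExpectedPerSec = 34; // ~33.33 Hz send rate
    static const int kSlackPercent   = 50; // tolerance for jitter/bursts

    uint32_t counts[kWindowSeconds] = {};
    int currentSlot = 0;

    void OnPacket() { ++counts[currentSlot]; }

    // Call once per second; true means the client sent far more than expected.
    bool TickAndCheck()
    {
        uint32_t total = 0;
        for (uint32_t c : counts) total += c;
        currentSlot = (currentSlot + 1) % kWindowSeconds;
        counts[currentSlot] = 0;
        const uint32_t limit =
            kExpectedPerSec * kWindowSeconds * (100 + kSlackPercent) / 100;
        return total > limit;
    }
};
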
Stefano Casillo
Lead Programmer
TWITTER: @KunosStefano
AssettoCorsa - netKar PRO - Kunos Simulazioni

#5 Zipster   Crossbones+   -  Reputation: 579


Posted 03 September 2011 - 04:06 AM


Why don't you just dequeue the entire command buffer every server iteration?


That would leave the server open to cheating, would it not? Say, by sending 4-6 packets with 2 commands each on every client step.

The server needs to be able to handle "bogus" data from clients regardless, as well as act authoritatively to prevent cheating, so the number of packets or commands per update shouldn't be relevant.
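
In other words, the check belongs on each command rather than on packet counts. A sketch of such an authoritative per-command check, assuming a movement command carries a target position (all names and numbers here are hypothetical):

#include <cmath>

struct Vec3 { float x, y, z; };

float Dist(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

const float kMaxSpeed    = 5.0f;   // game rule: metres per second
const float kStepSeconds = 0.015f; // one 15 ms simulation step

// Each executed command may move the player at most one step's worth of
// distance, no matter how many commands arrived in the same packet.
bool IsMoveValid(const Vec3& from, const Vec3& to)
{
    return Dist(from, to) <= kMaxSpeed * kStepSeconds * 1.01f; // small epsilon
}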

#6 fholm   Members   -  Reputation: 262


Posted 03 September 2011 - 04:09 AM

I don't know what kind of server architecture you have in place... but if that's what you're worried about, it's pretty easy to keep "long term" statistics about how many packets you are receiving from a client... say, in the last 10 seconds. If that's way more than you expect, you have a cheater.


Yeah, I suppose that would work (my server architecture would allow it). There are a lot of "but ifs" here, but for example: what if the client isn't "doing anything" at the moment (standing still, not rotating, not shooting, etc.), so no packets are sent? That would give the client a "buffer of unspent packets" that could be used to send a lot of commands really, really fast at the end of those 10 seconds. I suppose there could be some heuristic to solve this as well (say, only accept a maximum of 6 packets per read cycle, or start dropping packets if more than X arrive at the same time, etc.).
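
That "buffer of unspent packets" issue is exactly what a token bucket addresses: the client earns send credit at the legitimate rate, but the credit it can bank is capped, so idling for 10 seconds cannot fund a burst afterwards. A sketch (parameters are illustrative):

struct TokenBucket
{
    double ratePerSec; // legitimate packet rate, e.g. 33.33
    double burstCap;   // max credit a client can bank, e.g. 4 packets
    double tokens;
    double lastTime;

    TokenBucket(double rate, double cap, double now)
        : ratePerSec(rate), burstCap(cap), tokens(cap), lastTime(now) {}

    // Call on every packet arrival; false means drop (or flag) the packet.
    bool TryConsume(double now)
    {
        tokens += (now - lastTime) * ratePerSec;
        if (tokens > burstCap) tokens = burstCap; // idling banks no extra credit
        lastTime = now;
        if (tokens < 1.0) return false;
        tokens -= 1.0;
        return true;
    }
};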

What are the common ways of dealing with this inherent "burstiness" of network connections?

#7 fholm   Members   -  Reputation: 262


Posted 03 September 2011 - 04:11 AM



Why don't you just dequeue the entire command buffer every server iteration?


That would leave the server open to cheating, would it not? Say, by sending 4-6 packets with 2 commands each on every client step.

The server needs to be able to handle "bogus" data from clients regardless, as well as act authoritatively to prevent cheating, so the number of packets or commands per update shouldn't be relevant.


A very good point; I sort of assumed this would be the case/answer.

#8 Antheus   Members   -  Reputation: 2397


Posted 03 September 2011 - 08:59 AM

Unless you have a dedicated network with hardware-assisted timing, you simply cannot, under any circumstances, rely on something like "every X ms".

Even high-precision timing systems do not work this way, except perhaps for direct device control. They collect data in batches, associate it with timestamps, and then send that. It turns out to be a surprisingly difficult problem to solve, since even for modest precisions (microseconds), the length of the cabling affects delivery times.

by sending commands at blazing speed to move your character faster, for example.


It's the other way round: if you rely on packet sequences, then you open yourself to cheating.

A character can move at X m/s or similar, and the server and other clients can enforce that. If I receive delayed movement data covering the last 3 seconds, it doesn't matter how it was generated.

especially when they sometimes miss the currently running fixed-step update and get delayed another 15 ms

If you attempt to recover the state, then the server moves the simulation that far back in time and simulates everything again up to the present. The results of such a recovery may be seen as "snapping" by other players. This approach requires keeping a history of all actions.

The alternative is to try to bring the lagging player up to the current time, likely causing them to warp all over the place, at least as far as others can see.
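
Keeping that history usually means a ring buffer of past world states keyed by tick. A sketch of the rewind-and-replay approach (all types and helpers are hypothetical):

#include <cstdint>

struct WorldState { /* positions, velocities, ... */ };
struct Command    { /* one client input */ };

void ApplyCommand(WorldState& s, const Command& cmd); // game-specific
void Resimulate(WorldState& s, uint32_t tick);        // re-runs stored inputs

const uint32_t kHistoryTicks = 128; // ~2 seconds of 15 ms steps
WorldState history[kHistoryTicks];
uint32_t   currentTick = 0;

// A late command stamped with `tick` arrives: rewind and replay.
void RecoverFrom(uint32_t tick, const Command& lateCmd)
{
    if (currentTick - tick >= kHistoryTicks)
        return; // too old to recover, drop it
    WorldState s = history[tick % kHistoryTicks];
    ApplyCommand(s, lateCmd);
    for (uint32_t t = tick + 1; t <= currentTick; ++t)
    {
        Resimulate(s, t);               // replay recorded inputs for tick t
        history[t % kHistoryTicks] = s; // overwrite the old timeline
    }
    // Other players may perceive the corrected result as "snapping".
}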

#9 wodinoneeye   Members   -  Reputation: 705


Posted 15 September 2011 - 02:42 AM

One UDP server socket.

Have a session-connect protocol (handshake/double handshake) and proper timeouts for inactivity on both ends.

Let the 15 ms server cycle process everything that comes in on the socket -- all sessions (put some session-related data in every packet header to act as a simple cheat/garbage filter -- discard packets that don't contain the secret number for an existing session).

If your clients are not supposed to send more frequently than the metered 30/60 ms, then you know how many packets to expect over any time interval.
Keep a count over a short window (like 5 x 15 ms cycles) and report any excess (cheating).
Have the connections send empty placeholder packets when there is no real data traffic -- it need not be every cycle (this also helps detect disconnects).


If this is UDP, duplicated and out-of-order packets can happen, and packets are lost or delayed more frequently (on the internet, or even on a lousy OS that runs intrusive high-level background processes, like Vista did). Use sequence numbers on your packets so the other side can detect missing packets and duplicates and handle them (and probably report to the user if too many are lost).
Some lost/delayed data can be ignored as out of date (e.g. the current position, which is sent frequently), but if you have events that the client needs to receive to display properly, or modal input commands sent only once, you will need a much more complicated 'reliable' delivery layer (resend window queues/acks/self-throttling to wait for resends/timeouts/etc.).
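
A sketch of the receiver-side sequence-number bookkeeping (a minimal version with 16-bit wrap-around; it assumes the first packet carries sequence number 1):

#include <cstdint>

void OnPacketsLost(unsigned count); // report/record missing packets

// True if sequence number a is newer than b, allowing 16-bit wrap-around.
bool SeqNewer(uint16_t a, uint16_t b)
{
    return a != b && (uint16_t)(a - b) < 0x8000;
}

uint16_t highestSeen = 0;

// Returns false for duplicates and stale (out-of-date) packets.
bool AcceptPacket(uint16_t seq)
{
    if (!SeqNewer(seq, highestSeen))
        return false; // duplicate, or older than something already handled
    const uint16_t gap = (uint16_t)(seq - highestSeen);
    if (gap > 1)
        OnPacketsLost(gap - 1u);
    highestSeen = seq;
    return true;
}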

TCP is simplest, but you can actually do both: TCP for data that needs to be reliable, and UDP for the frequent updates that can be ignored if out of date.

I've written reliable protocols using UDP, but it's a lot of trouble unless you really need certain customizations.
Ratings are Opinion, not Fact



