Delta compression and dropped client->server acks.

11 comments, last by Guns Akimbo 7 years, 5 months ago

From what I understand, using delta compression requires me to keep track of the player's last acknowledged state on the server ...Without delta compression, I'm assuming I can just blast state at the player and they can fix the ordering using the tick/frame number?

With delta compression, I could successfully send an update to the player (say from frame 100 to 101), but the ack gets dropped somewhere along the way (red part of the diagram below). The player is now in state 101 but is being sent a delta of 100->102.

What's the fix for this:

  1. Send the delta of 100->102 and have the player's client apply the delta to a past frame's state to calculate the new state.
  2. If the player's ack is missing on the next Update(), send the delta from the last acked frame the server knows about plus all the deltas between that frame and the current frame, and let the player work out which ones it needs to apply (so in the above example: 100->101 + 101->102)... and hope it's not too big.
  3. The server acks the player's ack, and the player re-sends if its ack times out?
  4. Stop the world if we don't get the player's ack, resend the data and wait (this seems like it drags all other connections down due to one player's bad connection).
  5. Don't use delta compression and just blast compressed state at the player, letting the player compensate for any missing state.
  6. Something else?


 simulation               net                                      player
 +-------+             +-------+                                  +-------+
     |            state    |                                          |
     |Update();+----------->for players:                              |
     |                     |  send {last_frame, frame, delta_state}   |
     |                     +------------------------------------------>----
     |                     |                                          |   |update local state
     |                     |                               Ack(Frame);|   |
     |                     <------------------------------------------+<---
     |                     |                                          |
     |                     +------+                                   |
     |                     |      |store last acked frame             |
     |                     |      |for player                         |
     |                     <------+                                   |
     |                     |                                          |
     +                     +                                          +
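For concreteness, the server-side bookkeeping the diagram implies might look roughly like this (a minimal C++ sketch; the types and the DiffStates helper are made-up placeholders): the server remembers the last frame each player acked and always encodes the next update against that baseline, so a 100->102 packet explicitly tells the client which baseline to diff from.

#include <cstdint>
#include <unordered_map>
#include <vector>

// Placeholder types for illustration only.
struct WorldState { std::vector<uint8_t> blob; };
struct UpdatePacket { uint32_t last_frame; uint32_t frame; std::vector<uint8_t> delta_state; };

// Assumed to exist elsewhere: produces the byte-level delta between two states.
std::vector<uint8_t> DiffStates(const WorldState& from, const WorldState& to);

struct PlayerConn {
    uint32_t last_acked_frame = 0;   // updated whenever Ack(frame) arrives
};

// Called from Update(): build one packet per player, encoded against the
// newest frame that player has acknowledged (frame 100 in the example,
// even though the server has since produced 101 and 102).  The sketch
// assumes the baseline frame is still kept in `history`; if it has been
// discarded, the server would send a full snapshot instead.
UpdatePacket BuildUpdate(const PlayerConn& player,
                         const std::unordered_map<uint32_t, WorldState>& history,
                         uint32_t current_frame)
{
    const WorldState& baseline = history.at(player.last_acked_frame);
    const WorldState& current  = history.at(current_frame);
    return UpdatePacket{ player.last_acked_frame, current_frame,
                         DiffStates(baseline, current) };
}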


How about client stores in memory what frame 100 looked like even though it has moved onto frame 101 so it can still apply the 100->102 delta?

How about client stores in memory what frame 100 looked like even though it has moved onto frame 101 so it can still apply the 100->102 delta?

Assuming I've not committed some sin in my other assumptions that makes that a bad choice, it's certainly looking like the easiest to implement.

The player stores a buffer of past states it can apply deltas to; that buffer is large enough that if it ever got too stale, a disconnection or full state update would be forced anyway... I now await the "No, no, no, you're doing it all wrong" response that crushes my hopes ;)
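A minimal sketch of that client-side buffer (hypothetical C++, a ring buffer keyed by frame number): if the baseline a delta names has already been overwritten, the client knows it's too stale and asks for a full state update instead.

#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct WorldState { std::vector<uint8_t> blob; };

// Keeps the last kFrames states so a delta encoded against any of them
// can still be applied, even if some acks went missing in between.
class StateHistory {
public:
    static constexpr std::size_t kFrames = 64;

    void Store(uint32_t frame, WorldState state) {
        Slot& slot = slots_[frame % kFrames];
        slot.frame = frame;
        slot.state = std::move(state);
        slot.valid = true;
    }

    // Returns the stored state for `frame`, or nullopt if it has been
    // overwritten (too stale) -- in that case request a full state update.
    std::optional<WorldState> Get(uint32_t frame) const {
        const Slot& slot = slots_[frame % kFrames];
        if (slot.valid && slot.frame == frame) return slot.state;
        return std::nullopt;
    }

private:
    struct Slot { uint32_t frame = 0; WorldState state; bool valid = false; };
    std::array<Slot, kFrames> slots_;
};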

Your option 2 is fairly common in games using UDP connections:

2. If the player's ack is missing on the next Update(), send the delta from the last acked frame the server knows about plus all the deltas between that frame and the current frame, and let the player work out which ones it needs to apply (so in the above example: 100->101 + 101->102)... and hope it's not too big.

Additionally, networking should not be so tightly coupled to the world's update. Often it is events that are transmitted, not world updates. Consider sending events as soon as they are generated, and processing received data as soon as it is available if possible.

In any event, the pattern you describe looks like this:

send { A }
send { A B }
recv { Ack A }
send { B C }
send { B C D }
recv { Ack D }
send { E }
...

It is called a "sliding window" protocol, where you keep a time window's worth of data, and discard it when you know it was processed on the other end. The time window slides like a queue as new data is added on one side and removed on the other.

Be sure you provide a stamp or sequence number on every transmission so they can be acknowledged, and make sure you handle long-lived cases. It is okay to use a small value, even a single byte, as your stamp if you write code to properly roll it over when the limit is reached.
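For example, a wrap-aware comparison for a 16-bit stamp might look like this (a sketch; the half-range trick is a common way to handle rollover):

#include <cstdint>

// Treats `a` as newer than `b` even across the 65535 -> 0 rollover,
// provided the two stamps are within half the range of each other.
bool SequenceNewer(uint16_t a, uint16_t b) {
    return a != b && static_cast<uint16_t>(a - b) < 0x8000u;
}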

In practice there is generally only a single update in flight before the acknowledgement, perhaps two, and rarely three. If your data packets are small, the retransmit cost shouldn't be too bad, nor overwhelm your connection.
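A sliding window of unacknowledged deltas can be as small as this (a hypothetical C++ sketch using cumulative acks, matching the { A }, { A B }, { B C } pattern above):

#include <cstdint>
#include <deque>
#include <vector>

// One per-tick delta, stamped with the frame it brings the receiver up to.
struct Delta { uint32_t frame; std::vector<uint8_t> payload; };

class SlidingWindow {
public:
    void Push(Delta d) { pending_.push_back(std::move(d)); }

    // Everything still unacknowledged goes into this tick's packet,
    // e.g. { B C D } in the example above.
    const std::deque<Delta>& Unacked() const { return pending_; }

    // A cumulative ack of frame N means N and everything before it arrived,
    // so the window slides forward past them.
    void OnAck(uint32_t frame) {
        while (!pending_.empty() && pending_.front().frame <= frame)
            pending_.pop_front();
    }

private:
    std::deque<Delta> pending_;
};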
Yes; clients and servers store the state for the last X frames, and include which frame they assume the data is delta-encoded against in the packet.
This storage may also double as state storage used for time-travel hit-verification, if that's your kind of server authority.

Thanks for the replies folks :)

send { A }
send { A B }
recv { Ack A }
send { B C }
send { B C D }
recv { Ack D }
send { E }
...

On the second-to-last line there, we receive Ack D and then go on to only send E. That's going to work for state, right? Because if Alice was at (x,y,z) 32,15,10 in A and at 40,20,30 in D, then if we got D we know all we need to know about her current location. But if C was an 'event' (e.g. fired a projectile at Bob), then if we only got D she would have moved, but we would miss that she fired at Bob?

Or am I oversimplifying that? Would her projectile become a networked entity whose position we also receive in D, and we just "miss" her avatar doing the firing animation locally if we miss C?

Is that OK? Is this the kind of thing that disappears in the chaos of frantic online multiplayer? Or should you strive to resend all events? (Which would make that last line look more like the following.)


send { B C E }

Or am I maybe conflating two concepts here? Should I treat state and events separately? Could/should I use option 1 for state but option 2 for events?

Send events.

Changing state is an event. Usually there is an event of some type that triggered changing states.
Are you synchronizing state, or re-playing events?
Both approaches have been used and work.
And most games do a little bit of both, anyway, choosing the variant that works best for each of their kinds of gameplay elements.
There is no right or wrong answer there.

Thanks again,

I've gone with a blend of both as suggested.

State replication via a delta that the player client is expected to merge into state data it has previously acked (even if there is a gap in the server's reckoning, e.g. the 100->102 example).

Plus events which are sent "reliably" in that they are resent until acked.
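The event half of that blend can be a simple "resend until acked" channel (a sketch with made-up names; events ride along in every outgoing packet until the receiver echoes their ids back):

#include <cstdint>
#include <unordered_map>
#include <vector>

struct Event { uint32_t id; std::vector<uint8_t> payload; };

class ReliableEvents {
public:
    // Queue an event for reliable delivery; returns the id the receiver must ack.
    uint32_t Queue(std::vector<uint8_t> payload) {
        uint32_t id = next_id_++;
        unacked_.emplace(id, Event{ id, std::move(payload) });
        return id;
    }

    // Attach every still-unacked event to the next outgoing packet.
    std::vector<Event> ToSend() const {
        std::vector<Event> out;
        out.reserve(unacked_.size());
        for (const auto& entry : unacked_) out.push_back(entry.second);
        return out;
    }

    // The receiver echoes back the ids it has processed.
    void OnAck(uint32_t id) { unacked_.erase(id); }

private:
    uint32_t next_id_ = 0;
    std::unordered_map<uint32_t, Event> unacked_;
};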

If you have sequence (index) numbers for the transmits (in both directions), you can use the 'sliding window' method and reply with a bit array indicating multiple ACKs in one message: you send back the highest completely ACKed message index (the base) and then a bit array from that point, say 32 bits wide, marking '1' for every received message. Note that the first bit is always a '0' (a miss), or the base index would advance to gobble up that '1'. The originator then knows it can advance its sliding window up to that base index. This is for single state sends (I used something like this in a generic low-level reliable UDP protocol).
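A sketch of that ack header (assuming 32-bit sequence numbers for simplicity): the base is the highest index up to which everything arrived, and each bit i of the array covers message base + 1 + i, which is why the first bit is always a miss.

#include <cstdint>
#include <set>

struct AckHeader {
    uint32_t base;   // everything up to and including this index was received
    uint32_t bits;   // bit i set => message (base + 1 + i) was received
};

// Receiver side: build the ack from the set of indices received beyond the base.
AckHeader BuildAck(uint32_t base, const std::set<uint32_t>& received_after_base) {
    AckHeader ack{ base, 0 };
    for (uint32_t seq : received_after_base) {
        uint32_t offset = seq - base - 1;      // offsets 0..31 fit in the bit array
        if (offset < 32) ack.bits |= (1u << offset);
    }
    return ack;
}

// Sender side: decide whether a given outstanding message is covered by the ack,
// i.e. whether its slot in the sliding window can be released.
bool WasReceived(const AckHeader& ack, uint32_t seq) {
    if (seq <= ack.base) return true;
    uint32_t offset = seq - ack.base - 1;
    return offset < 32 && (ack.bits & (1u << offset)) != 0;
}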

The problem with piggybacking (the multiple state resend) is what to do when the compounded message grows too large (too many unACKed) and another strategy has to kick in (too much is being lost).

One method I've heard of (that was used long ago) was to always send the current state data N and include the N-1 data in every transmission, which generally gets rid of a lot of losses automatically (any loss beyond that you still need a way to handle).
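On the receiving end that redundancy is cheap to exploit (a sketch; apply_delta stands in for whatever merges a delta into local state): a single lost packet is repaired from the repeated copy, while a larger gap still needs a resync.

#include <cstdint>
#include <functional>
#include <vector>

struct RedundantPacket {
    uint32_t frame;                 // N
    std::vector<uint8_t> current;   // delta N-1 -> N
    std::vector<uint8_t> previous;  // repeat of delta N-2 -> N-1
};

void OnReceive(const RedundantPacket& pkt, uint32_t& last_applied_frame,
               const std::function<void(const std::vector<uint8_t>&)>& apply_delta)
{
    if (last_applied_frame + 2 == pkt.frame) {
        apply_delta(pkt.previous);          // frame N-1 was lost: recover it from the copy
        ++last_applied_frame;
    }
    if (last_applied_frame + 1 == pkt.frame) {
        apply_delta(pkt.current);
        ++last_applied_frame;
    }
    // A gap of more than one frame means both copies were lost; fall back to
    // whatever resync / full-snapshot strategy the protocol uses.
}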

Of course, the point of the delta compression scheme is to cut back on the data size transmitted, and these multi-sends run counter to that.

-

You then need a stepped escalation strategy to handle communication degradation, which requires lots of tracking on both sides and detection of the degradation, so the protocol can shift down and later back up when it detects improvement.

At what point do these lost deltas become irrelevant? The network sender needs information telling it which data is discardable (like position minutiae) and which isn't (like critical mode or activation events). So the high-level protocol needs 'don't care' status transmits to inform the other side about discards and 'quality shifts' in the protocol. The sender also needs alternate substitution data available, like position sync points (sent repeatedly instead of deltas), for when 'jerky' is preferred over rubber-banding or resend packet storms.
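One way to carry that 'don't care' information is a per-message class that the sender consults when the link degrades (hypothetical names, just to illustrate the idea):

#include <cstdint>
#include <vector>

// Tells the transport layer what it may silently drop under congestion
// and what it must keep retrying.
enum class SendClass : uint8_t {
    Discardable,   // e.g. position minutiae; newer data simply supersedes it
    Critical,      // e.g. activation events, mode changes; must eventually arrive
    SyncPoint      // absolute substitution data (full position) sent instead of deltas
};

struct OutgoingMessage {
    SendClass cls;
    std::vector<uint8_t> payload;
};

// Under degradation, skip stale discardables and prefer sync points over
// piling up a backlog of deltas that would only make things worse.
bool ShouldTransmit(const OutgoingMessage& msg, bool link_degraded) {
    return !link_degraded || msg.cls != SendClass::Discardable;
}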

This is the kind of thing that requires the low-level network protocol to have more complex interactions with app-level operations.

That could also include quality monitoring to do the degradation/capacity detection, which is then signalled up to the game app so it can make appropriate compensating adjustments at THAT level.
