Understanding Lag Compensation & Timestamps


Hello,

I have a basic "greedy" server authority architecture with client side prediction and reconciliation.

  1. The client sends inputs to the server with a sequence number (or command number). It stores each input locally and waits for the server to acknowledge it. In the meantime, it applies the input as if the server already had.
  2. The server processes the input on the first update when it becomes available, hence I'm calling it "greedy". It just consumes inputs as soon as they show up, and then sends the real position back to the player (with the sequence number) and any other players so they can render this player's new position.
  3. The client who sent the input in step 1 receives the acknowledgement, looks up the sequence number in its stored input history, then replays any unacknowledged inputs that have accumulated since that input was sent (a rough sketch of this loop follows below).
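
For reference, here is a rough sketch of that loop as I have it today (all type and function names are just illustrative, not from any particular engine):

```cpp
#include <cstdint>
#include <deque>

// Hypothetical types for illustration only.
struct Vec2 { float x = 0, y = 0; };
struct Input { uint32_t sequence; float moveX, moveY; };
struct ServerAck { uint32_t lastProcessedSequence; Vec2 authoritativePosition; };

struct PredictedPlayer {
    Vec2 position;
    std::deque<Input> pending;          // inputs sent but not yet acknowledged

    void applyInput(const Input& in) {  // same simulation code the server runs
        position.x += in.moveX;
        position.y += in.moveY;
    }

    // Step 1: send the input, remember it, and predict locally.
    void onLocalInput(const Input& in) {
        pending.push_back(in);
        applyInput(in);
        // sendToServer(in);  // transport omitted in this sketch
    }

    // Step 3: snap to the server's authoritative state, then replay
    // every input the server has not seen yet.
    void onServerAck(const ServerAck& ack) {
        position = ack.authoritativePosition;
        while (!pending.empty() && pending.front().sequence <= ack.lastProcessedSequence)
            pending.pop_front();
        for (const Input& in : pending)
            applyInput(in);
    }
};
```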

This is fine and dandy for a really simple game, maybe two players. But, once you start having more players from all over, with different connection speeds, things will fall apart because there's no lag compensation or sense of "time" on the server's simulation.

So, this leads me to my search. I've found a bunch of articles - gafferongames, gabrielgambetta, Valve's wiki, the Overwatch & Halo Reach GDC presentations, Qing Wei Lim's writeup.

The terms used in each are kind of loose; I'd argue some of them don't line up with each other. Some are talking about different implementations, or about parts of one another. Example - gabrielgambetta talks about reconciliation, but it's really *client-side reconciliation with the server*, like I have now. It is *not* lag compensation, or "server-side reconciliation with the client", depending on which phrasing you wish to use.

None of them really discuss time in depth, or what it means to timestamp everything so that the server can perform lag compensation specifically. I'd say gafferongames comes closest in his "Fix Your Timestep" article.

Here's my current understanding of Lag Compensation, and I hope someone can correct me and fill in any gaps:

  1. Client sends command, like walk or shoot, at CLIENT TIME X
  2. Server receives this command at SERVER TIME Y
  3. X is older than Y - i.e., the server is always ahead, time-wise.
  4. Server sees the command took HALF ROUND TRIP TIME Z to reach it.
  5. Server looks into its position history at time Y - Z, applies the command in that historical context
  6. If needed, the server will then replay all accumulated inputs from players again from that point in history. There may be situations where this isn't necessary - for example, player 1 shot at player 2 and missed: just disregard it. Or player 1 moved forward and nothing went wrong (didn't collide with anything). (See the rough sketch after this list.)
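
To make that concrete, here's a rough sketch of how I imagine steps 5 and 6 working for a hitscan shot (all names, the history layout, and the hit test are made up for illustration; the hit test assumes a normalized shot direction):

```cpp
#include <cmath>
#include <deque>

// Illustrative sketch of steps 5-6: rewind to an estimated time and test a shot there.
struct Vec2 { float x = 0, y = 0; };
struct Snapshot { double time; Vec2 position; };          // one entry per server tick
struct PlayerHistory { std::deque<Snapshot> snapshots; };  // e.g. the last ~1 second

// Find the stored state closest to the rewound time (Y - Z in the list above).
const Snapshot* stateAt(const PlayerHistory& h, double rewoundTime) {
    const Snapshot* best = nullptr;
    for (const Snapshot& s : h.snapshots)
        if (!best || std::fabs(s.time - rewoundTime) < std::fabs(best->time - rewoundTime))
            best = &s;
    return best;                                            // nullptr if history is empty
}

bool hitscanHits(const Vec2& shotOrigin, const Vec2& shotDir,
                 const PlayerHistory& target, double serverTime, double halfRtt) {
    const Snapshot* past = stateAt(target, serverTime - halfRtt);
    if (!past) return false;          // older than the history buffer: reject (question 4)
    // Trivial "did the ray pass near the target" test standing in for real collision code.
    Vec2 toTarget{ past->position.x - shotOrigin.x, past->position.y - shotOrigin.y };
    float along  = toTarget.x * shotDir.x + toTarget.y * shotDir.y;
    float distSq = toTarget.x * toTarget.x + toTarget.y * toTarget.y - along * along;
    return along > 0 && distSq < 0.5f * 0.5f;               // 0.5 = made-up hit radius
}
```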

My questions:

  1. What are times X and Y based on? Is it real-world time?
  2. How is Z found? Is it Y - X? Or is it assumed from the last known ping/pong?
  3. If it is based on real world time, and Z is found from Y-X, what happens when the player's timestamp is not a real or accurate timestamp? How do you deal with time differences, just track and stamp all commands in UTC ticks?
  4. What happens if Z is beyond the positional history buffer? For example, Counterstrike says it saves up to a maximum of 1 second of positional history. What if the command is for an action that occurred beyond that time buffer in the past? Ignore it?

At some point, time advances on the computer, and that "real time" is used to drive game time forward.

However, game time is, at a minimum, an offset from "computer clock" to "game time." You might initially capture this offset when the game first starts, and then update this delta further based on information from the server (the server can tell you if it received data stamped in the future, and you can see whether the server's timestamps are way different from yours).

Typically, though, you don't directly use real-world time, but instead count game-ticks, typically ticks-since-game-start. This assumes that all game clients and servers simulate the game at the same frame rate, which is generally a good idea (physics is more stable, simulation more deterministic, etc.) You don't need to simulate at the same rate you display pictures. In fact, often a 240 Hz simulation rate might give you better responsiveness than a 60 Hz simulation rate, and when display rate varies, simulate a variable number of ticks based on how long the last frame took. (There are advanced display-prediction methods, too, which end up being important for example in VR games.)
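
As a sketch, the usual fixed-timestep accumulator looks something like this (the 60 Hz rate and function names are placeholders, not a recommendation):

```cpp
#include <chrono>
#include <cstdint>

// Placeholder fixed simulation rate; 60 Hz chosen arbitrarily for the example.
constexpr double kTickRate = 60.0;
constexpr double kTickSeconds = 1.0 / kTickRate;

void simulateOneTick(uint64_t tick) { /* run physics, consume inputs for this tick */ }

int main() {
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    double accumulator = 0.0;
    uint64_t tick = 0;               // ticks since game start

    while (true) {                   // render loop runs at whatever rate frames take
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Run as many fixed ticks as the elapsed real time calls for.
        while (accumulator >= kTickSeconds) {
            simulateOneTick(tick++);
            accumulator -= kTickSeconds;
        }
        // renderFrame(accumulator / kTickSeconds);  // display-only interpolation
    }
}
```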

OK, so you send commands "I did this at tick Q" and you receive commands "this happened at tick Q." The server needs to decide on a maximum difference it is prepared to accept, and if the client is beyond that, ignore the command, and/or kick the client.
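
A minimal sketch of that acceptance check might look like this (the window sizes are made-up numbers, not tuned values):

```cpp
#include <cstdint>

// Made-up tolerance: how far from "now" a client-stamped command may be and still be used.
constexpr int64_t kMaxTickDifference = 16;

enum class CommandVerdict { Accept, Ignore, Kick };

CommandVerdict checkCommandTick(uint64_t serverTick, uint64_t commandTick) {
    int64_t diff = static_cast<int64_t>(serverTick) - static_cast<int64_t>(commandTick);
    int64_t absDiff = diff < 0 ? -diff : diff;
    if (absDiff <= kMaxTickDifference) return CommandVerdict::Accept;
    // A command wildly outside the window suggests a misbehaving client rather than lag.
    return absDiff > 10 * kMaxTickDifference ? CommandVerdict::Kick : CommandVerdict::Ignore;
}
```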

Note that allowing clients to move in the past may lead to time paradoxes; generally you will only rewind time for actions that can't be adjusted physically, which generally means ray-test "hitscan" weapon fire. Ballistic fire and movement are physical enough that you can apply physics-based adjustments to them.

enum Bool { True, False, FileNotFound };

First of all, thank you for the response!

So for example:

  1. Client stamps that at 40 ticks, he moved forward.
  2. Server sees 40 ticks in the message, but it's at 120: so 120-40 = 80.
  3. It looks up position history from 80 ticks ago and says, OK, you moved forward at that point in history - but maybe something changed and you couldn't.
  4. Server will send client a different position than where client predicted.

I think that makes sense; I'm not sure why I hadn't considered simply accumulating ticks over time. Does the client ever need to know how far ahead the server's tick is? E.g., does the client need to know it's 80 ticks behind?

Lastly, I want to pose a separate question - I was going to start a new thread, but you touched upon it here.
What happens if the client is running at a faster update rate than the server? Say the client runs at 60 ticks per second but the server at 30 (for the sake of example). Since the server is the authority, it will always give the clients the real positioning. Even if the client runs at double the tick rate, since we map time, wouldn't the results be exactly the same - it's just that the client would "pop" positions more often?

First: Yes, the client will have to estimate the server tick position, and if it finds it's too far ahead or behind, bump its offset a bit. Modern clocks are fairly precise, so keeping a separate rate delta (e.g., multiplying the passage of time by 1.0001 or something) isn't generally needed.

Second: Consider the model where the server receives updates for time T-X for each client, when the server is at time T. It will have to send the corrections for all of the clients, and also send the state of each client at those corrected times, to all clients. But this means that there is literally no benefit to the server of running ahead of time -- it will just have to rewind and re-simulate every player, every time! Meanwhile, when the player is at time T+X, it receives updates for itself, and for each other player, for time T-X.

If you think through this model, it actually looks a lot like the model where the current player is ahead of time and each remote player is behind time, on the local machine. So, on your machine, I am behind time, but you are ahead of time. Note that you don't have my inputs for any time after T-X (because they're still in the aether, not having reached you yet), so you will have to guess (extrapolate) what I'm going to do, anyway.

Thus, the canonical implementation of this model is that the server keeps time T, and each client tries to make sure that the server has the commands FOR TIME T right before time T happens. This means that each client needs to run itself transmission-time-plus-epsilon ahead of T. The clients can get good information from the server on whether they sent their data too late (missed the window), at a reasonable time (arrived within, say, 0..3 ticks of when it should have), or too early (arrived, say, 4+ ticks early). This will let the client adjust its offset compared to server time.
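
Sketched in code, the client-side adjustment might look something like this (the names and the 0..3 / 4+ thresholds just mirror the example above):

```cpp
#include <cstdint>

// Hypothetical per-command feedback from the server, in ticks:
// negative = arrived too late (missed the window), 0..3 = fine, 4+ = too early.
struct ArrivalReport { int32_t ticksEarly; };

struct ClientClock {
    int64_t offsetFromServer = 0;   // how many ticks ahead of the server this client runs

    // Nudge the offset so future commands land just before the tick they are for.
    void onArrivalReport(const ArrivalReport& r) {
        if (r.ticksEarly < 0)      offsetFromServer += 1;  // too late: run further ahead
        else if (r.ticksEarly > 3) offsetFromServer -= 1;  // too early: back off a bit
        // 0..3 ticks early: leave the offset alone.
    }

    int64_t localTick(int64_t estimatedServerTick) const {
        return estimatedServerTick + offsetFromServer;
    }
};
```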

The server then sends updates of all players' state to all players, for time T, and the players receive this one transmission time after it is sent. Again, this means that a player gets a correction for its own state for time "now minus one round trip time", whereas it gets baseline information about the other players for the first time for that same time.

enum Bool { True, False, FileNotFound };

So I guess from an implementation standpoint I'm kind of stumped.

 

What is the client sending to the server? So he's at some time before the server's time. He sends the server his input and a timestamp of his current time. The server just assumes that his timestamp is in the past, subtracts it from its own, and then attempts to do the raycast, for shooting for example (you said not to rewind for movement).

However, now you're mentioning that the client should know about the server's timestamp too. I guess I don't fully understand why and what he does with that information.

 

In the GDC talk for Overwatch, the speaker talks about contracting the player's time if the difference between times grows too large due to dropped packets. I guess that's what is throwing me off at the moment. What does that mean? That the client increases his own tick rate beyond the server's?

If the client is too far behind, you can do one of two things:

1) make the client run faster -- multiply the clock advancement rate by some number > 1 -- to "contract" time to catch up

2) snap the client -- just add a fixed offset to the client to catch up, which will end up in a "jump" ahead in time

In practice, client time drift from the server is reasonably rare, so option 2 is usually good enough, and is much easier to implement without artifacts than a clock that speeds up and slows down.
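
Both options, as a tiny sketch (the names and the 1.05 scale factor are arbitrary):

```cpp
struct GameClock {
    double gameTime  = 0.0;    // seconds of game time
    double timeScale = 1.0;    // option 1: run slightly fast (>1) or slow (<1) to converge

    // Called once per rendered frame with the real time elapsed since the last frame.
    void advance(double realDeltaSeconds) {
        gameTime += realDeltaSeconds * timeScale;
    }

    // Option 2: snap straight to where the server says we should be.
    void snapTo(double serverEstimatedTime) {
        gameTime = serverEstimatedTime;
        timeScale = 1.0;
    }

    // Option 1: temporarily speed up to "contract" time and catch up smoothly.
    void beginCatchUp() { timeScale = 1.05; }   // 5% faster, an arbitrary choice
    void endCatchUp()   { timeScale = 1.0; }
};
```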

enum Bool { True, False, FileNotFound };

Does that apply to everything the client sends to the server (like movement) or only things like shoot commands?

So, I have a client who says he's moving forward. He timestamps that and sends it to the server.

The server sees the timestamp is behind, but because it's movement, do we just process it as-is, on a first-come, first-served basis? Or do you rewind time based on the timestamp difference?

Now the server sends its timestamp and the new world state back to the client. Say the client is suffering from packet loss: it sees that its timestamp is really far behind, so it starts to speed up its simulation to catch up - which includes movement, right?

 

By the way, thanks again for the useful information. I've found it difficult to find a place of discussion to chat about this stuff until now! Most knowledgeable people on the subject keep this all to themselves haha

In general, you will want every input from the client to be timestamped, and you want them all to be timestamped in the same frame of reference. Typically, there will be one "here are my inputs" block of data, it uses one timestamp, and it may include all kinds of commands. And, typically, the actual timestamp will just be a field in the outermost framing packet header. (There's some adjustment needed if you send fewer network packets than you run input step simulations, as you then send more than one tick's input in one network packet.)
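
For illustration, such a framing might look like this (field names are invented, not from any particular protocol):

```cpp
#include <cstdint>
#include <vector>

// Illustrative packet layout: one tick stamp in the framing header covers every
// input in the packet; each input is implicitly for firstInputTick + its index.
struct InputCommand {
    float moveX = 0, moveY = 0;
    bool  fire  = false;
};

struct InputPacket {
    uint32_t sequence;                 // for acks / reconciliation
    uint64_t firstInputTick;           // the one timestamp, in the framing header
    std::vector<InputCommand> inputs;  // one entry per simulated tick since the last packet
};

// Example: simulating at 120 Hz but sending 30 packets per second means each
// InputPacket carries 4 InputCommand entries, all dated off firstInputTick.
```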

enum Bool { True, False, FileNotFound };

 

When the client first syncs its clock with the server, say upon first connecting, what should it use as an offset?

I know that over time you "dial in" that number using statistics, or just by having the server tell you (I've read other posts like this one where you helped a lot), but I don't know what my initial value should be for the client. The client needs some kind of delta - what should I base it on? If my server runs at 60 ticks a second, should I put my client 30 ticks ahead when I first connect? Is that reasonable?


Ping and measure the RTT before you start/join the game.
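
For example, a rough sketch of turning that ping into an initial tick estimate (the 60 Hz rate and all names are made up):

```cpp
#include <chrono>
#include <cstdint>

constexpr double kTickRate = 60.0;  // assumed server tick rate, purely for illustration

// The server answers the pre-game ping with the tick it was at when it replied.
struct PongMessage { uint64_t serverTickAtReply; };

// Estimate the server's current tick, then start the client far enough ahead that
// its commands arrive roughly half an RTT plus a small margin before they are needed.
uint64_t initialClientTick(const PongMessage& pong,
                           std::chrono::steady_clock::duration roundTrip) {
    double rttSeconds   = std::chrono::duration<double>(roundTrip).count();
    double halfRttTicks = 0.5 * rttSeconds * kTickRate;
    double marginTicks  = 2.0;                       // arbitrary safety margin
    double serverNow    = pong.serverTickAtReply + halfRttTicks;  // reply is ~half an RTT old
    return static_cast<uint64_t>(serverNow + halfRttTicks + marginTicks);
}
```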

On 7/28/2019 at 4:07 AM, NetworkDev19 said:

Here's my current understanding of Lag Compensation, and I hope someone can correct me and fill in any gaps

Note that lag compensation is a concept, not a specific implementation. What you've described is one form of compensation, but there are other ways to compensate for lag, e.g. simply increasing the radius of a fireball (there are pros and cons to every approach and it depends on the type of game).
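
As a trivial sketch of that alternative (the formula and names are just illustrative):

```cpp
// Alternative compensation sketch: instead of rewinding history, grow the fireball's
// effective radius by how far a target could have moved during the shooter's latency.
float compensatedRadius(float baseRadius, float maxTargetSpeed, float shooterLatencySeconds) {
    return baseRadius + maxTargetSpeed * shooterLatencySeconds;
}
```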
