Server client ticks, lag compensation, game state, etc

25 comments, last by bgilb 7 years, 5 months ago

I'm mostly talking about the one-off delays like you outlined; I'll call these latency spikes. They could be either FPS drops on the client or actual latency on the network.

I think I'll implement the clock syncing because it will probably make things easier. But I don't really understand the role of the buffer anymore when using clock syncing. Do I get rid of it?

With clock syncing, though, what happens during latency spikes? Doesn't the server go to grab a user command during a tick and find it isn't there? Won't this also fill up the buffer?

Maybe pseudo code would help me understand?

Here is how it works currently for me (a rough code sketch follows the lists):

Tick loop (at 60 Hz) on client:

Generate a usercommand by polling input. The usercommand is tagged with the current client tick.

Feed the usercommand into client-side prediction.

Send the usercommand over the network (which waits until 2 are available and sends them together).

Tick loop (at 60 Hz) on server:

Once per render frame (which runs more often than ticks), check for any incoming usercommand messages. If there are any, add them to the player's usercommand buffer.

Per tick, take the buffered command with the lowest client tick (if there are any) and apply it to that player.

Both loops can play catch-up.
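
Very roughly, in code, that setup looks like this (the names here are just illustrative, not copied from any engine):

#include <cstdint>
#include <map>

// Illustrative sketch of the loops described above; hooks are assumed to exist elsewhere.
struct UserCommand {
    uint32_t clientTick = 0;   // tick the command was generated on
    float forwardMove = 0, sideMove = 0;
    uint32_t buttons = 0;
};

UserCommand PollInput();
void PredictLocally(const UserCommand& cmd);
void QueueForSend(const UserCommand& cmd);          // network layer batches 2 commands per packet
bool TryReadCommandFromNetwork(UserCommand* out);
void ApplyCommandToPlayer(const UserCommand& cmd);

// Client tick loop body, run at 60 Hz.
void ClientTick(uint32_t clientTick) {
    UserCommand cmd = PollInput();      // generate usercommand by polling input
    cmd.clientTick = clientTick;        // tag with current client tick
    PredictLocally(cmd);                // client-side prediction
    QueueForSend(cmd);                  // send once 2 commands are queued
}

// Server-side buffer for one player, keyed by client tick.
std::map<uint32_t, UserCommand> commandBuffer;

// Once per render frame: drain incoming messages into the buffer.
void ServerReceive() {
    UserCommand cmd;
    while (TryReadCommandFromNetwork(&cmd))
        commandBuffer[cmd.clientTick] = cmd;
}

// Server tick body, run at 60 Hz: apply the oldest buffered command, if any.
void ServerTick() {
    if (!commandBuffer.empty()) {
        auto it = commandBuffer.begin();   // lowest client tick
        ApplyCommandToPlayer(it->second);
        commandBuffer.erase(it);
    }
    // else: nothing arrived in time for this tick (the latency-spike case)
}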

The buffer is still important. It is a priority queue mapping "tick number" to "command for this player," and you flush any commands that arrive for ticks that have already executed, or that arrive for the far future.
With clock syncing, you can throw away events that arrive too late, but keep the ones that still have time to execute.
If you get lots of events arriving too late, the server will tell the client to adjust its clock and send packets sooner, which builds up more buffering, which will help when the next spike happens.
You have to figure out how tolerant you want to be of latency spikes, versus how much overall latency you are prepared to accept from buffering.
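A minimal sketch of such a buffer, with made-up names and an arbitrary "far future" window:

#include <cstdint>
#include <iterator>
#include <map>

struct UserCommand { uint32_t tick = 0; /* inputs... */ };

// Buffer for one player; the map keeps commands ordered by tick number.
class CommandBuffer {
    std::map<uint32_t, UserCommand> byTick;
    static const uint32_t kMaxFutureTicks = 8;   // example window, tune to taste

public:
    // Returns false if the command is flushed (already executed, or too far ahead).
    bool Insert(const UserCommand& cmd, uint32_t currentServerTick) {
        if (cmd.tick <= currentServerTick) return false;                   // too late
        if (cmd.tick > currentServerTick + kMaxFutureTicks) return false;  // far future
        byTick[cmd.tick] = cmd;
        return true;
    }

    // Called once per server tick; false means the command arrived too late (or not
    // at all), which is the signal to tell the client to send sooner.
    bool PopFor(uint32_t serverTick, UserCommand* out) {
        auto it = byTick.find(serverTick);
        if (it == byTick.end()) return false;
        *out = it->second;
        byTick.erase(byTick.begin(), std::next(it));   // drop this tick and anything older
        return true;
    }
};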
enum Bool { True, False, FileNotFound };

After doing some research, it doesn't appear that the Source engine uses a buffer, at least not exactly.

https://github.com/ValveSoftware/source-sdk-2013/blob/master/mp/src/game/server/player.cpp#L3192

http://forums.steampowered.com/forums/showthread.php?t=3119525

It lets the client run up to the maximum allowed number of user commands per server tick. This was only added to the Source engine within the last 3 years; I'm not sure how it worked before.

The GitHub line is where the Source engine actually processes user commands on the server. It doesn't seem to care at all about the current server tick?

It would make lag compensation easier because I wouldn't have to worry about the buffer size when the command was received. Although some form of speed hacking seems possible?
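
As far as I can tell, the idea is roughly this (not the actual Source code; the names and the cap are illustrative):

#include <cstddef>
#include <vector>

struct UserCommand { /* tick number, inputs... */ };

void RunCommand(const UserCommand& cmd);   // full movement simulation for one command

// Per server tick: run whatever the client sent, oldest first, but cap it so a
// client can't simulate arbitrarily far ahead of real time in one server tick.
void ProcessUsercmds(std::vector<UserCommand>& pending, std::size_t maxCmdsPerTick) {
    std::size_t toRun = pending.size();
    if (toRun > maxCmdsPerTick)
        toRun = maxCmdsPerTick;            // the excess is the speed-hack guard

    for (std::size_t i = 0; i < toRun; ++i)
        RunCommand(pending[i]);

    pending.erase(pending.begin(), pending.begin() + toRun);
}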

It does keep a clock, but it's implicit. This line in DetermineSimulationTicks() seems to be the way that happens:

simulation_ticks += ctx->numcmds + ctx->dropped_packets
Note "ctx->dropped_packets" -- the invariant is that "numcmds" plus "dropped_packets" equals the total number of time steps lapsed.
enum Bool { True, False, FileNotFound };

It seems the server keeps track of missed user commands, and that becomes the number of ticks the client is allowed to execute. The CCommandContext, from what I've read, is just a set of user commands that were received from the network.

What I'm getting at is that it doesn't appear to care that the client sends his user commands "just in time" for the server tick. They're allowed to play catch-up or fall behind. Wouldn't this be more robust? Save for players sometimes skipping around.

it doesn't appear to care that the client sends his user commands "just in time" for the server tick. They're allowed to play catch-up or fall behind. Wouldn't this be more robust? Save for players sometimes skipping around.


This is more robust in the sense that a single client's path forward through time will be tracked on the server.
It is less robust in the sense that the server may resolve different clients at different points in time, and thus cannot have a cohesive "these players collided" view that is agreed by everybody.
Which gets even harder when you try to determine "this client fired in this direction at this time; what did they hit?"
enum Bool { True, False, FileNotFound };

But the clients are always at different points in time because of latency. I don't think this system rewinds server game state or changes history or anything. So as long as an unlagged player shot the lagged-out version, the server goes back to previous ticks, the player is still in the lagged position for that snapshot, and it lets him land the hit/kill.
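
Something like this for the "go back to previous ticks" part (illustrative only; a real implementation stores whole hitbox sets, interpolates between snapshots, and restores state after the hit test):

#include <cstddef>
#include <cstdint>
#include <deque>

struct Vec3 { float x, y, z; };

struct PositionSnapshot {
    uint32_t serverTick;
    Vec3     position;      // real engines store whole hitbox sets per snapshot
};

class LagRecord {
    std::deque<PositionSnapshot> history;        // newest at the back
    static const std::size_t kMaxSnapshots = 64; // ~1 second of history at 60 Hz

public:
    // Record the player's position every server tick.
    void Record(uint32_t serverTick, const Vec3& pos) {
        history.push_back({serverTick, pos});
        if (history.size() > kMaxSnapshots)
            history.pop_front();
    }

    // Find where this player was at the tick the shooter actually saw.
    bool PositionAt(uint32_t rewindTick, Vec3* out) const {
        for (const PositionSnapshot& s : history)
            if (s.serverTick == rewindTick) { *out = s.position; return true; }
        return false;   // too far in the past: no compensation
    }
};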

This topic is closed to new replies.
