
Server client ticks, lag compensation, game state, etc


I'm trying to implement FPS-style game networking, and I have a lot of it understood and implemented. But there are parts that all the material I can read has kind of glossed over. The only meaningful things I can find are actually forum posts on here (mostly by hplus0603 and fholm).

 

So I guess to start I will explain how my system currently works:

  • Simulation is 60Hz on both server and client
  • Client sends input commands every other frame

Once per tick, the server goes through all pending commands and applies them to the game world. This means the server can apply more than one user command per server tick. From reading Valve's articles it seems they do it this way, but I'm not 100% sure.

 

After applying the user commands, the server reads the physics state of the world, stores it in a game state object that includes all the entities, and sends this to the users along with the server tick.
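In rough code, my current server loop is something like this (the names are just illustrative, not from any particular engine):

    #include <cstdint>
    #include <deque>
    #include <vector>

    struct UserCommand { uint32_t clientTick; float moveX, moveY; };
    struct PlayerState { float x = 0, y = 0; };

    struct Player {
        std::deque<UserCommand> pending;    // everything received since the last tick
        PlayerState state;
    };

    // One 60Hz server tick: drain *all* pending commands, then snapshot and send.
    void ServerTick(uint32_t serverTick, std::vector<Player>& players) {
        for (Player& p : players) {
            while (!p.pending.empty()) {     // may apply 0..N commands this tick
                const UserCommand& cmd = p.pending.front();
                p.state.x += cmd.moveX;      // stand-in for real movement/physics
                p.state.y += cmd.moveY;
                p.pending.pop_front();
            }
        }
        (void)serverTick;                    // would be stamped on the outgoing snapshot
        // BuildSnapshot(...) / Broadcast(...) would go here
    }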

 

I imagine you can already see the problems in this system:

  • When doing client-server reconciliation, the ticks and positions don't really match up 100%. Maybe this is okay though?
  • Lag compensation definitely won't work. This is because the user's position inside the game state will actually jitter around. Also, the client tick and server tick don't match up.
  • Interpolation probably won't work either.

 

So the first way I thought to fix it is to have the server execute only one user command per tick. But I can't just tick one user command at a time, since user commands would end up being missed, especially since I send two at a time. Also, how do I match the user command's tick to the server tick it was actually applied on? One issue I had with this was that the buffer was inexplicably filling up with something like 20 user commands and never draining, so the player would end up very far behind: 20 * 16ms = 320ms. Also, receiving the first game state is confusing to me.

 

Client: 

  • Client connects and receives first game state
  • Sets its own local tick to the server tick it received in the game state. Let's say tick 1300.
  • Client sends its first user command with its local tick (which will be the server tick it just received; we'll also just assume 1 user command per packet). The result of that user command was a position of (1,0).

Server:

  • Client connects, add player to game state
  • Send game state to everyone with server tick (1300)
  • At this point a couple ticks will have passed (let's just say 5) before we've even received the first user command. That client's buffer is empty. Let's assume we don't execute anything when the user command buffer is empty.
  • We receive the previously connected user's first command stamped with tick 1300. But the server is already on tick 1305. So it applies the user command on tick 1306 and sends it to the client stamped 1306.

Client

  • Client has sent a few user commands by now without a game state.
  • Receives the second game state, which was tick 1301 on the server.
  • Checks his own history: his result for tick 1301 was (2,0). But the server says he was at (0,0), since it hadn't applied any user commands yet.

 

 

 

 

 


the first way I thought to fix it is to always have the server tick only execute 1 user command per tick


That's generally how it's done!

When you send commands, you will typically timestamp them "for simulation tick X," "for simulation tick X+1," etc.
You then put them in a queue when you receive them, and run them at the appropriate tick.
If you receive a command for a tick that's already executed on the server, tell the client that it's too far behind and should increase its estimate of transmission latency (add more to its clock to get the estimated server clock on arrival.)
If you receive more commands when there's already more than one queued command (for some value of "one") then you can tell the client that it's running ahead, and should adjust its estimate backwards a bit.

As long as you run ticks at the same rate on server and client, this will work fine! When you get out of sync, there will be a slight discrepancy, which the server needs to correct the client for.
Typically, those will be so small that they're not noticed, and they won't happen that often after initial sync-up.
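A sketch of what that can look like on the server (my own names, and the "too far ahead" threshold is just illustrative):

    #include <cstdint>
    #include <map>

    struct UserCommand { uint32_t tick; float moveX, moveY; };

    struct ClientEndpoint {
        std::map<uint32_t, UserCommand> queue;   // keyed by intended simulation tick
        int clockHint = 0;                       // +1 = "run further ahead", -1 = "pull back"
    };

    // Called whenever a command arrives from this client.
    void OnCommandReceived(ClientEndpoint& c, const UserCommand& cmd,
                           uint32_t serverTick, uint32_t maxAhead = 8) {
        if (cmd.tick <= serverTick) {            // that tick already ran: too late
            c.clockHint = +1;
            return;
        }
        if (cmd.tick > serverTick + maxAhead)    // ludicrously early: slow the client down
            c.clockHint = -1;
        c.queue[cmd.tick] = cmd;                 // keep it for the tick it is stamped with
    }

    // Called once per server tick; runs at most one command per client per tick.
    void RunTick(ClientEndpoint& c, uint32_t serverTick) {
        auto it = c.queue.find(serverTick);
        if (it != c.queue.end()) {
            // ApplyCommand(it->second);         // simulate this player's command
            c.queue.erase(it);
        }
        // clockHint is piggybacked on the next state packet sent to the client
    }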


Thanks for answer! I have a couple follow up questions..

 

1) When the client receives its initial server tick, it actually needs to figure out what the current server tick is, right? So should the server send the first game state as the current server tick plus the client's latency?

2) Doesn't this mean, though, that the server will be executing ticks for user commands it doesn't even have yet (for almost all the clients)? Unless the server works way behind, like 30 ticks, which would be half a second of extra latency.

 

The server tick vs client tick is what's throwing me off I think.

 

Also, as I'm understanding it, the server only has one global tick timeline, and each client has his own local one. Maybe this is causing the confusion? Although having a tick timeline per client on the server confuses me too.



In practice the server will always be sending something to the client, unless you are the only player on it and the game world itself never changes state, which is very unrealistic. You also get messages during authentication and so on, so don't worry about getting hold of the server tick.

 

You might capture it from one of those messages and then synchronize it with the client's internal clock (not the OS clock).


 

If you receive a command for a tick that's already executed on the server, tell the client that it's too far behind and should increase its estimate of transmission latency.
If you receive more commands when there's already more than one queued command (for some value of "one") then you can tell the client that it's running ahead, and should adjust its estimate backwards a bit.

 

 

What does adjusting the estimate mean in this situation? In my project, I'm doing exactly what is described above: I have a fixed tick rate and I send client input commands to the server at the client's tick rate, which is the same as the server's. I can't see a situation where the client sends the same input that was already processed, because I never send redundant inputs (I send them as UDP, but reliably, so I'm not shooting inputs blindly with some redundancy like sending the last 3 inputs in each packet; maybe I should?). So if the client sent the input for tick #123 it will keep sending the ticks that follow, not the same one, even if it doesn't get confirmation that the input was accepted by the server.

 

Ah, and I perform client-side simulation too, and confirm ticks from the server, then resimulate the ones still queued on the client. I can measure how many ticks haven't been confirmed, for example, and if this piles up I could do something.

But you mention estimates of latency, which is something I don't do at all. Does it affect the rate at which the client should send inputs? As per the above, I'm sending them at the rate of the local simulation, which is the same rate as the server sim (right now 30Hz). How should I understand adjusting the estimate in my case? What should be done?


Doesn't this mean though the server will be executing ticks for user commands it doesn't even have?


Typically, the player will run "ahead of time" from the server, so that commands arrive "just in time" (plus buffering.)
The other entities on the player's machine will run "behind time" from the server (based on server data forwarding.)
One of the basic choices in game networking is whether you display remote entities "behind time" in correct positions, or "forward extrapolated" in estimated future positions.
The other basic choice is whether you let the player take the action immediately, or whether you actually play player actions behind time, leading to input latency. The benefit of doing this is that you can be in lockstep sync with the server.
(It might seem like there are four possible combinations here, but the combination "other entities forward extrapolated" and "local player is simulated with latency" is never used in practice :-)

Estimated latency doesn't affect how often you send packets, it mainly affects the clock offset between server and client.
It also typically affects how far into the future you forward extrapolate displayed entities, if you choose that option.
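To put a number on that clock offset, one common way (my sketch, not anyone's canonical formula) is to derive the tick the local player should be simulating from the last server tick you heard about, the measured round trip, and a small safety margin, and to display remote entities a couple of ticks behind:

    #include <cstdint>
    #include <cmath>

    // Which tick should the local player simulate right now, so that a command stamped
    // with that tick arrives at the server just before the server needs it?
    uint32_t TargetClientTick(uint32_t lastServerTick,    // from the latest state packet
                              double rttSeconds,          // measured round trip time
                              double tickInterval = 1.0 / 60.0,
                              uint32_t safetyTicks = 2)   // small de-jitter margin
    {
        uint32_t rttTicks =
            static_cast<uint32_t>(std::ceil(rttSeconds / tickInterval));
        return lastServerTick + rttTicks + safetyTicks;   // the snapshot is already half an RTT old
    }

    // Remote entities are typically displayed a little behind the newest snapshot instead.
    uint32_t InterpolationTick(uint32_t lastServerTick, uint32_t interpTicks = 2) {
        return lastServerTick - interpTicks;
    }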


How does the client know how far to run ahead? Also, I'm planning on "behind time" other entities that are interpolated. Actions are taken immediately (well, most of them), a.k.a. client-side prediction.

 

 

Here is a "chart" I made that will maybe clarify my confusion.

 

http://i.imgur.com/2KNLyOK.png

 

You will see the ticks never really line up. Does the server need to keep track of another tick that is the user command tick? So on tick 7 the server actually sends back "Here is the result on tick 7, which was for your user command 0".

 

Then the client can look up in his user command result history what happened after user command 0?

 

For lag compensation, the server would have to be able to rewind to the tick the client actually shot from, in his view. The server would have to know the client's interp and his latency, right? So right when it receives the shooting user command, it rewinds 2 ticks (32ms) for latency and another 2 ticks (32ms) for interp, then checks the shot against those positions? It seems even harder to code if the server buffers user commands by some amount.
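Something like this is what I have in mind, for reference (just a sketch with made-up names; the rewind amounts are the ones from my example):

    #include <cstdint>
    #include <deque>

    struct EntityPose { uint32_t tick; float x, y; };

    // Historical poses kept per entity, one entry per server tick (say ~1 second worth).
    struct PoseHistory {
        std::deque<EntityPose> poses;                 // oldest at the front

        EntityPose AtTick(uint32_t tick) const {      // rewind to the pose for that tick
            for (const EntityPose& p : poses)
                if (p.tick == tick) return p;
            return poses.empty() ? EntityPose{tick, 0, 0}
                                 : poses.back();      // fall back to the newest entry
        }
    };

    // When the shooting command executes on serverTick, test the shot against the world
    // as the shooter saw it: his latency plus his interpolation delay behind "now".
    void LagCompensatedShot(uint32_t serverTick, uint32_t latencyTicks,
                            uint32_t interpTicks, const PoseHistory& target) {
        uint32_t rewindTick = serverTick - latencyTicks - interpTicks;
        EntityPose pose = target.AtTick(rewindTick);
        // TraceShotAgainst(pose);                    // hit-test against the rewound pose
        (void)pose;
    }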

The server buffer just adds latency to the calculation. All you really need is to know the tick numbers that the commands are intended for, and, if you do "client view rewind," how much latency the client sees.
There will be one or two round-trips when first establishing the connection where the server lets the client know its current tick, then the client tells the server what the server tick received was, and then the server tells the client what the effective round-trip latency seems to be based on that, after which point the client can start assuming it knows the server tick.
When the server sees a command that is way too early or too late, it can tell the client to adjust in the appropriate direction.

The ticks don't need to "line up" in "global time" (because there is no such thing, according to Einstein!) but they do need to agree on the sequence of events -- tick 1 happens before tick 2 and so forth.
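A sketch of that initial exchange (message and field names are made up for illustration):

    #include <cstdint>

    // 1. server -> client: first state packet carries the server's current tick T
    // 2. client -> server: echoes "I saw tick T" as soon as it arrives
    // 3. server -> client: "that echo took R ticks round trip; stamp your commands
    //    with at least (last received server tick) + R + a small margin"
    struct TickEcho   { uint32_t serverTickSeen; };
    struct TickAdvice { uint32_t roundTripTicks; uint32_t suggestedLead; };

    TickAdvice OnTickEcho(uint32_t serverTickNow, const TickEcho& echo,
                          uint32_t margin = 2) {
        uint32_t rtt = serverTickNow - echo.serverTickSeen;   // ticks elapsed since T was sent
        return TickAdvice{ rtt, rtt + margin };               // lead relative to the echoed tick
    }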


1) So is the client trying to execute the same tick at the same actual time as the server? Or is it trying to compensate for latency by being ahead in ticks, so that when its user command is sent it arrives on the correct tick?

2) How would that work if there was a buffer of user commands? Let's say user commands aren't emptied until there are 3.

 

When you said "intended for": in my drawing the user command was intended for tick 3, but that has long passed on the server by now. And I don't think the server is going to go back into old game states to update them. Is that correct?

 

Do I need to keep track of server ticks AND user command #s ? This is the only way I can think to do it.

 

So once the user command buffer reaches >= 3, the server pops a user command off and applies it on that server tick. Then it sends that one to the client saying "Here is tick 534, and I applied user command 0". The buffer would prevent the command queue from running empty and the player missing that server tick.

 

Then the client knows that user command 0 was, for him, tick 0 (but maybe that doesn't matter?), so at that point he just does server reconciliation starting at user command 0 and re-applies all the newer ones.

 

What do I do if the buffer is some large number like 10? The player is clearly very behind then. And what do I do if it hits 0?


1) So is the client trying to execute the same tick at the same actual time as the server?


No, the client runs ahead of the server, so that the outgoing packets arrive to the server when they are needed.

How would that work if there was a buffer of user commands?


That buffer just looks like more network latency to the timing system.

in my drawing the user command was intended for tick 3. But this has long passed now on the server.


Right. So the client has to move its clock up enough such that it executes and sends commands in time for them to make it to the server, all buffering included.

Do I need to keep track of server ticks AND user command #s ?


Aren't they the same thing? Each client command should be tagged with what tick number it's intended for (which is the same as which local-ahead-tick-number it was received on, when the client clock is not out of sync.)

once the user command buffer reaches >= 3, it pops a user command off on the server and applies it to that server tick


That's one way of implementing it. Another is to simply call the "send buffered commands" function on your command buffer every X iterations through the main input/simulation loop.

What do I do if the buffer is some large number like 10?


Which buffer? Client, or server?

The client should buffer (tick, command) tuples and flush all of those into a single packet every so often.
The server should receive those commands into an incoming buffer.
If the server receives such a command with a (tick, _) value that is in the past and must be discarded, then the server should tell the client to add more compensation (shift clock forward.)
If the server receives such a command with a (tick, _) that is ludicrously far into the future (more than you're willing to de-jitter-buffer for this connection) then the server should tell the client to reduce compensation (shift clock backward.)
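The client half, for completeness (assumed names; the send interval would be whatever your cl_cmdrate equivalent is):

    #include <cstdint>
    #include <vector>

    struct UserCommand { uint32_t tick; float moveX, moveY; };

    struct CommandOutbox {
        std::vector<UserCommand> pending;        // (tick, command) tuples since the last send

        void Add(const UserCommand& cmd) { pending.push_back(cmd); }

        // Call once per client tick; every sendEveryNTicks ticks (> 0), flush everything
        // accumulated so far into a single packet.
        void Flush(uint32_t clientTick, uint32_t sendEveryNTicks) {
            if (sendEveryNTicks == 0 || clientTick % sendEveryNTicks != 0 || pending.empty())
                return;
            // SendPacket(pending);              // all buffered commands go in one datagram
            pending.clear();
        }
    };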


Ahh I think I'm getting it!

 

Can you elaborate on this part "Another is to simply call the "send buffered commands" function on your command buffer every X iterations through the main input/simulation loop."

 

For shifting the clock backwards, how do I then handle the client user commands having duplicate tick #s ?


For shifting the clock backwards, how do I then handle the client user commands having duplicate tick #s ?


Either don't handle that at all and just throw away duplicate commands on the server, or discard buffered commands on the client when the clock is shifted.
Getting out of sync will cause a small inconsistency between client/server, although 9 times out of 10, it will not actually be noticed by the player.


Is there something different I could use instead of a user command buffer? I only ask because it doesn't quite work 100%.

 

When the client first spawns he has a drop in FPS for maybe 100ms. Unfortunately, during this time he doesn't send any user commands, but when the FPS recovers he ends up sending 4 messages at once to the server, with 2 commands per message. So the client is immediately 128ms behind. If any other FPS drops happen, he just ends up farther and farther behind.

 

My solution right now is to put a max on the command buffer and clear it if it's above the max. But it seems janky.

 

Edit: Maybe I make it so the client never runs more than one tick if any are missed? But let the server have 100% accuracy.



he ends up sending 4 messages at once to the server. With 2 commands per message.


Why do you force exactly two command ticks into each message, instead of sending whatever you have when it's time to send?

Anyway, it's quite common that games need a little bit of sync when first starting up, loading levels, etc.
The role of your networking, for non-lockstep architectures, is to be robust in the face of this. It's really no different from the network being jittery. For example, my Comcast Internet will drop about 1% of packets, quite reliably (not in bursts or anything.) That's just a fact of life.
If you're doing lock-step networking (like Age of Empires, Starcraft, etc) then you instead make sure everything is loaded and ready to go, and when all clients have sent "everything ready" to the server, the server sends a "start" message and the game actually starts simulating.

Separately, it's common to have an upper limit to how many physics steps you will take when your loop is running slowly.
However, even if you do that, you will end up detecting that you are, in fact, behind the server.
This is one of the reasons why a bit of de-jitter buffer is useful; it adds a little latency, but allows for the non-smooth realities of the internet to not affect simulation as much.


I'm not exactly forcing 2 user commands per message. It's just that, based on Valve's articles, they usually package a user command message to hold 2 user commands, so I wait until 2 user commands have been created before sending the message. I imagine this would be client-adjustable.

 

Yes, that's about the conclusion I'm coming to. This is non-lockstep, a.k.a. Valve style. The only thing is, this all seems so fragile D:

If the client has performance problems, it basically equates to packet loss. Then you have to deal with latency variability. In an ideal world the packets would arrive right when they're needed.

 

I'm not 100% sure how to tweak the user command buffer to be optimal on the server. As you said, when the client's loop runs slowly he most likely ends up behind the server (especially if one tick took something like 200ms). I actually removed the tick catch-up on the client since I didn't see the point anymore. Is this okay? I kept it on the server, where there is no upper bound on the number of ticks it can execute at once. The problem on the client is, if the performance hit is bad enough, user commands won't be sent out and he will be perpetually behind. The server loop will eventually run out of user commands for that client, but later it will receive something like 4 at once. Since the server only executes 1 per tick, that client will never catch up. It seems weird that FPS problems make my system fall apart though.

 

Edit: Also, all my testing up till now has been on the same computer, so any problems are just performance issues in the application and not anything to do with latency or latency spikes. That makes me worry that once there is any latency, or latency spiking, it will be even worse.



based on Valve's articles they said they usually package a user command message to have 2 user commands. So I wait for 2 user commands to have been created before sending the message


My question is: If there are four commands in the queue, and it's time to send a network packet, why would you only send 2 commands, and not the entire queue, in that network packet?

If the client has performance problems, it basically equates to packet loss


Yes! Hence, why games have minimum recommended system specs.
However, the good news is that MOST games will be more CPU bound on rendering than on simulation, so you can simulate 2, 3, ... N simulation steps between each rendered frame, and at that point, the performance problem looks more like latency than like packet loss.
And you should be able to automatically compensate for latency pretty easily, if you do the simple "let the server tell you how much ahead/behind you are" thing, and don't try to be too fancy.
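That "simple thing" can be as small as this (a sketch, assuming the server piggybacks a +1/0/-1 hint on its state packets as described earlier in the thread):

    #include <cstdint>

    struct ClientClock {
        uint32_t tick = 0;            // the tick the client is currently simulating

        void Advance() { ++tick; }    // called once per fixed 60Hz step

        // Hint from the server: +1 means commands are arriving too late (run further
        // ahead), -1 means they arrive absurdly early (pull back), 0 means leave it.
        void ApplyServerHint(int hint) {
            if (hint > 0) tick += 1;                  // jump one tick forward
            else if (hint < 0 && tick > 0) tick -= 1; // re-use a tick number (a duplicate
                                                      // the server will simply discard)
        }
    };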

I actually removed the tick catch-up on the client since I didn't see the point anymore.


If you adjust the client clock based on whether the server sees incoming packets as ahead or too late, then the client will automatically adjust anyway.

The server loop will eventually run out of user commands for that client, but later it will receive like 4 at once


This is the same thing as "latency increased on the transmission network" and thus your clock adjustment should deal with it. The client will simply simulate further ahead for the controlling player, and assume the latency is higher, and things will still work.


 

 

My question is: If there are four commands in the queue, and it's time to send a network packet, why would you only send 2 commands, and not the entire queue, in that network packet?

Sorry, I may be confusing you. The server and client both buffer commands. The client fills his input buffer up until it hits a max, then sends everything (effectively the same thing as cl_cmdrate in CS games). So I'm always sending what's available; there aren't any pending ones that aren't sent. Input polling is built into the client tick loop; maybe that's a problem?

 

 

 

However, the good news is that MOST games will be more CPU bound on rendering than on simulation, so you can simulate 2, 3, ... N simulation steps between each rendered frame, and at that point, the performance problem looks more like latency than like packet loss.

 

The problem on the client is that more simulation steps at once means more user commands sent at once, which will add latency on the server. The server doesn't let the client catch up or anything. The server buffer would end up with extra latency that won't go down unless the client has packet loss later. So I figure I might as well skip those update loops, because they won't end up being executed on the server anyway. The server will let people catch up if the reason for the lag is the server taking too long doing something.

 

Right now, instead of syncing their ticks, I'm just going to have the server manage the size of the user command de-jitter buffer, which I'm thinking should be similar, right? So when the server buffer gets too full because of extra latency, I will discard a certain number of user commands so the client can skip ahead. Ideally the buffer would be kept as small as possible without missing any server ticks and without removing excess user commands. Can you see any problems with this?



The problem on the client is that more simulation steps at once means more user commands sent at once which will add latency on the server.


Yes, slower frame rate and larger batches of simulation steps leads to more buffered simulation steps.
What I'm trying to say is that this should look, to your system, almost exactly as if the client simply had a slower and more jittery network connection.
If you adjust clocks correctly, you don't need to do any other work (as long as the server does, indeed, buffer commands and apply timesteps correctly).

So when the server buffer is too full because extra latency I will discard a certain amount of user commands so they can skip ahead.


Are you not time stamping each command for the intended server simulation tick? You should be.
You only need to throw away user commands when you receive two separate commands time stamped with the same tick, which should only happen if the clock is adjusted backwards in a snap, or if the network/UDP duplicates a packet.

And, yes if the client runs at 5 fps, then you need to buffer 200 ms worth of commands on the server. There is no way around that.
If you throw away any of that data, then you will just be introducing more (artificial) packet loss to the client, and the experience will be even worse.
You know that, because the client won't send another batch of commands until 200 ms later, you actually need that amount of buffering.

Separately, if you really want to support clients with terrible frame rates, you may want to put rendering in one thread, and input reading, simulation, and networking, in another thread.
When time comes to render to the screen, take a snapshot of the simulation state, and render that. That way, the user's control input won't be bunched up during the frame render, so at least the controls will be able to pay attention to 60 Hz input rates, even if they can't render at that rate.
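A minimal sketch of that split, assuming one simulation/input/network thread and a render thread that only ever reads a snapshot (names are mine):

    #include <atomic>
    #include <chrono>
    #include <mutex>
    #include <thread>

    struct WorldState { float x = 0, y = 0; };        // stand-in for the full game state

    std::mutex        snapshotMutex;
    WorldState        latestSnapshot;                 // last completed simulation result
    std::atomic<bool> running{true};

    void SimulationThread() {                         // input + simulation + networking, 60Hz
        WorldState world;
        while (running) {
            // PollInput(); StepSimulation(world); SendAndReceivePackets();
            world.x += 1.0f;                          // placeholder for a real tick
            {
                std::lock_guard<std::mutex> lock(snapshotMutex);
                latestSnapshot = world;               // publish the finished tick
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(16));
        }
    }

    void RenderFrame() {                              // called from the render thread, any rate
        WorldState copy;
        {
            std::lock_guard<std::mutex> lock(snapshotMutex);
            copy = latestSnapshot;                    // take a snapshot, then draw it
        }
        // Draw(copy);
        (void)copy;
    }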


These things are hard to explain, I'm sure you know! Thanks for all the help.

 

At the moment I'm not doing clock/tick syncing, or letting the client say what server tick it's intended for. I couldn't wrap my head around how that is different from the user command buffer on the server.

 

Right now I have it semi working, without client side prediction coded. It's on LAN so it feels fairly responsive.

 

On the client I send batches of user commands marked with their local client tick, which starts at 0 and isn't the same as the server tick.

 

The server receives them and buffers them. The server has its own loop and all that, and removes one user command off the stack every server tick (same rate as the client, 60Hz) and applies it to that player. If there was nothing in the stack it doesn't do anything for that player, and the player doesn't move or process anything.

 

The server then sends the game state to the client; it contains the server tick, plus the client tick (or none) to identify which user command was used.

 

The problem with the 5fps scenario is that initially the server receives no user commands for 200ms or whatever, so the client completely misses those server loops and doesn't move. Then when the client catches up, he sends a bunch of user commands at once; let's say 20. The buffer on the server will stay at roughly 20 user commands indefinitely, because the server only executes one user command per tick. I don't see a way to fix that besides removing most of the user commands from the buffer. Isn't that the same as the clock adjusting and ending up with duplicate tick #s that are deleted/ignored?

 

My other method before was to execute all ticks as they were received, but this made it impossible to generate any sort of game state that could be shared with other clients, since the players would be skipping around. Also, it would be pretty easy to speed hack by just sending more user commands. Basically there isn't even a server loop.

At the moment I'm not doing clock/tick syncing, or letting the client say what server tick it's intended for

I see. That's why you run into these problems :-)

The buffer on the server will stay at roughly 20 user commands indefinitely. This is because the server only executes 1 user command per tick.

You are assuming that, after a single burst of 200 ms latency, the client will then be able to fill up 2 ticks every 20 milliseconds.
But, if the client runs at 5 Hz, it means it sends a clump of 20 commands every 200 milliseconds. The server-side buffer will go down to 0, then fill up again to 20, and then repeat.

If you have a single delay, and then the client runs fine (this could be a network delay, rather than client frame rate, even) and do not time stamp commands, then yes, the client will keep filling up the buffer.
If you do not time stamp the commands or sync the clocks (which you're going to want to do once you get into server-side lag compensation for shooting) then the best bet is to remove all buffered commands when you get a new packet of commands.
Keep all the commands when they come in, because there is some chance that the player will in fact send 20 commands every 200 milliseconds (or whatever the send rate is.)
But, if you get more commands, and there is more than one command in the buffer, then delete all but the last command in the buffer before appending the newly arrived commands.
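In code, that trimming rule might look like this (a sketch for the un-timestamped case only):

    #include <deque>

    struct UserCommand { float moveX, moveY; };

    // Un-timestamped fallback: when a new batch arrives and more than one stale command
    // is still buffered, keep only the newest one, then append the fresh batch.
    void AppendBatch(std::deque<UserCommand>& buffer,
                     const std::deque<UserCommand>& arrived) {
        if (buffer.size() > 1)
            buffer.erase(buffer.begin(), buffer.end() - 1);   // keep just the last command
        buffer.insert(buffer.end(), arrived.begin(), arrived.end());
    }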


I'm mostly talking about the single delays like you outlined. I'll call these latency spikes. These could be either FPS drops on the client or actual latency on the network.

 

I think I'll implement the clock syncing because it will probably make things easier. But I don't really understand the role of the buffer anymore when using clock syncing. Do I get rid of it?

 

With clock syncing though, what happens during latency spikes? Doesn't the server go to grab a user command during a tick and find it's not there? Won't this also fill up the buffer?

 

Maybe pseudo code would help me understand?

 

Here is how it works currently for me:

Tick loop (at 60Hz) on client:

Generate a user command by polling input. The user command is tagged with the current client tick.

Feed the user command into client-side prediction.

Send the user command over the network (which waits until 2 are available and then sends them).

Tick loop (at 60Hz) on server:

Once per render frame (more often than ticks), check for any incoming user command messages. If there are any, add them to the player's user command buffer.

Per tick, take the command with the lowest client tick (if any) and apply it to that player.

Both loops can play catch-up.
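In rough code (my own reconstruction of the above; names are just illustrative):

    #include <cstdint>
    #include <map>
    #include <vector>

    struct UserCommand { uint32_t clientTick; float moveX, moveY; };

    // ---- client, 60Hz tick loop ----
    std::vector<UserCommand> outbox;

    void ClientTick(uint32_t clientTick) {
        UserCommand cmd{ clientTick, 0.f, 0.f };      // poll input here
        // PredictLocally(cmd);                       // client-side prediction
        outbox.push_back(cmd);
        if (outbox.size() >= 2) {                     // wait until two commands are ready
            // SendPacket(outbox);
            outbox.clear();
        }
    }

    // ---- server, 60Hz tick loop ----
    struct ServerPlayer { std::map<uint32_t, UserCommand> buffer; };  // keyed by client tick

    void ServerTick(ServerPlayer& p) {
        // incoming packets are drained into p.buffer once per frame, elsewhere
        if (!p.buffer.empty()) {
            auto lowest = p.buffer.begin();           // lowest client tick first
            // ApplyCommand(lowest->second);
            p.buffer.erase(lowest);
        }                                             // empty buffer: the player doesn't move
        // SendGameState(serverTick, appliedClientTick or none);
    }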

The buffer is still important. The buffer is a priority queue of "tick number" mapping to "command for this player," and you flush any commands that arrive for ticks that are already executed, or that arrive for the far future.
With clock syncing, you can throw away events that arrive too late, but keep the ones that still have time to execute.
If you get lots of events arriving too late, then the server will tell the client to adjust the clock to send packets sooner, which will build up more buffering, which will help when the spike happens.
You have to figure out how much you want to be tolerant against latency spikes, versus how much overall latency you are prepared to accept in buffering.


After doing some research, it doesn't appear the source engine uses a buffer, at least not exactly. 

 

https://github.com/ValveSoftware/source-sdk-2013/blob/master/mp/src/game/server/player.cpp#L3192

http://forums.steampowered.com/forums/showthread.php?t=3119525

 

 

It lets the user run up to the max allowed user commands per server tick. This was actually only added to the Source engine in the last 3 years; I'm not sure how it worked before.

 

The GitHub line is where the Source engine actually processes user commands on the server. It doesn't seem to care at all about the current server tick?

 

It would make lag compensation easier, because I wouldn't have to worry about the buffer size at the time the command was received. Although some form of speed hacking seems possible?

It does keep a clock, but it's implicit. This line in DetermineSimulationTicks() seems to be the way that happens:

simulation_ticks += ctx->numcmds + ctx->dropped_packets
Note "ctx->dropped_packets" -- the invariant is that "numcmds" plus "dropped_packets" equals the total number of time steps lapsed. Edited by hplus0603


It seems the server keeps track of missed user commands, and that becomes the number of ticks the client is allowed to execute. The "CCommandContext", from what I've read, is just a set of user commands that were received off the network.

 

What I'm getting at is that it doesn't appear to care whether the client sends its user commands "just in time" for the server tick. Clients are allowed to play catch-up or fall behind. Wouldn't this be more robust, save for players sometimes skipping around?


