bgilb

Server client ticks, lag compensation, game state, etc


I'm trying to implement FPS-style game networking, and I have a lot of it understood and implemented. But there are parts that all the material I can read has glossed over. The only meaningful things I can find are actually forum posts on here (mostly by hplus0603 and fholm).

 

So I guess to start I will explain how my system currently works:

  • Simulation is 60Hz on both server and client
  • Client sends input commands every other frame

Once per tick, the server goes through all pending commands and applies them to the game world. This means more than one user command can be applied per server tick. From reading Valve's articles it seems they do it this way, but I'm not 100% sure.
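The "apply everything pending" server loop described above can be sketched as follows. This is only an illustration of the scheme in the post; all names (`PlayerState`, `server_tick`, the scalar move command) are hypothetical, not from any engine:

```python
# Sketch: server applies ALL queued commands for a player each tick,
# so a player may advance by 0, 1, or several inputs in one tick.
from collections import deque

TICK_RATE = 60
DT = 1.0 / TICK_RATE

class PlayerState:
    def __init__(self):
        self.x = 0.0
        self.pending = deque()  # commands received since the last tick

def server_tick(tick, players):
    """Drain every pending command for every player, then snapshot."""
    for player in players:
        while player.pending:            # 0, 1, or many commands this tick
            cmd = player.pending.popleft()
            player.x += cmd * DT         # cmd is a simple move input here
    # the snapshot sent to clients is stamped with `tick`
    return {"tick": tick, "positions": [p.x for p in players]}
```

Because a player can have zero or multiple commands applied in a single tick, their simulated position advances unevenly relative to their own prediction, which is exactly the jitter described below.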

 

After applying the user commands, the server reads the physics state of the world, stores it in a game-state object that includes all the entities, and sends this to the users along with the server tick.

 

I imagine you can already see the problems in this system:

  • When doing client-server reconciliation, the ticks and positions don't really match up 100%. Maybe this is okay though?
  • Lag compensation definitely won't work. This is because the user's position inside the game state will jitter around, and the client tick and server tick don't match up.
  • Interpolation probably won't work either.

 

So the first fix I thought of is to have the server execute only one user command per tick. But I can't just tick one user command at a time, since user commands would end up missing, especially since I send two at a time. Also, how do I match up the user-command tick to the server tick it was actually applied on? One issue I had with this was that the buffer inexplicably filled up with around 20 user commands and never drained them, so the player would end up very far behind: 20 * 16 ms = 320 ms. Receiving the first game state is also confusing to me.

 

Client: 

  • Client connects and receives first game state
  • Sets its own local tick to the server tick it received in the game state. Let's say tick 1300.
  • Client sends its first user command with its local tick (which will be the server tick it just received; we'll also assume one user command per packet). The result of that user command was a position of (1, 0).

Server:

  • Client connects, add player to game state
  • Send game state to everyone with server tick (1300)
  • At this point a couple of ticks will have passed (let's say 5) before we've even received the first user command. That client's buffer is empty. Let's assume we execute nothing when the user-command buffer is empty.
  • We receive the previously connected user's first command, stamped with tick 1300. But the server is already on tick 1305, so it applies the user command on tick 1306 and sends it to the client stamped 1306.

Client

  • Client has sent a few user commands by now without a game state.
  • Receives the second game state, stamped with server tick 1301.
  • Checks his own history: his predicted result at 1301 was (2, 0). But the server says he was at (0, 0), since it hadn't applied any user commands yet.

 

the first way I thought to fix it is to always have the server tick only execute 1 user command per tick


That's generally how it's done!

When you send commands, you will typically timestamp them "for simulation tick X," "for simulation tick X+1," etc.
You then put them in a queue when you receive them, and run them at the appropriate tick.
If you receive a command for a tick that's already executed on the server, tell the client that it's too far behind and should increase its estimate of transmission latency (add more to the clock to get estimated arrived server clock.)
If you receive more commands when there's already more than one queued command (for some value of "one") then you can tell the client that it's running ahead, and should adjust its estimate backwards a bit.

As long as you run ticks at the same rate on server and client, this will work fine! When you get out of sync, there will be a slight discrepancy, which the server needs to correct the client for.
Typically, those will be so small that they're not noticed, and they won't happen that often after initial sync-up.
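The scheme described here (commands stamped with the tick they're intended for, queued, run at exactly that tick, with feedback when they arrive too late or absurdly early) might look roughly like this. A sketch with hypothetical names; the de-jitter window size is an assumption:

```python
# Sketch: one command per simulation tick, keyed by its intended tick.
MAX_BUFFER_AHEAD = 8  # de-jitter window in ticks (a tunable assumption)

class CommandQueue:
    def __init__(self):
        self.commands = {}  # intended_tick -> command

    def receive(self, server_tick, intended_tick, cmd):
        """Returns feedback for the client: None, 'speed_up', or 'slow_down'."""
        if intended_tick <= server_tick:
            return "speed_up"       # too late: client must run further ahead
        if intended_tick > server_tick + MAX_BUFFER_AHEAD:
            return "slow_down"      # far too early: client is too far ahead
        self.commands[intended_tick] = cmd
        return None

    def pop_for_tick(self, tick):
        # None means the input for this tick was missed (lost or late)
        return self.commands.pop(tick, None)
```

The two feedback values are the "increase its estimate" / "adjust its estimate backwards" signals from the paragraph above.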


Thanks for the answer! I have a couple of follow-up questions.

 

1) When the client receives its initial server tick, it actually needs to figure out what the current server tick is, right? So should the server send the first game state as the current server tick plus the client's latency?

2) Doesn't this mean the server will be executing ticks for user commands it doesn't even have yet (for almost all the clients)? Unless the server runs way behind, like 30 ticks, which would be half a second of extra latency.

 

The server tick vs client tick is what's throwing me off I think.

 

Also, as I understand it, the server has only one global tick timeline, and each client has his own locally. Maybe this is causing the confusion? Although having a tick timeline per client on the server confuses me too.



In practice the server will always be sending something to the client, unless you are the only player on it and the game world itself never changes state, which is very unrealistic. You also get messages during authentication and so on, so don't worry about obtaining the server tick.

 

You might capture it and then synchronize against the client's internal clock (not the OS clock).


 

(quoting hplus0603's earlier reply)

"If you receive a command for a tick that's already executed on the server, tell the client that it's too far behind and should increase its estimate of transmission latency. [...] If you receive more commands when there's already more than one queued command, then you can tell the client that it's running ahead, and should adjust its estimate backwards a bit."

 

 

What does adjusting the estimate mean in this situation? In my project I'm doing exactly what is described above: I have a fixed tick rate, and I send client input commands to the server at the client's tick rate, which is the same as the server's. I can't see a situation where the client sends the same input that was already processed, because I never send redundant inputs. (I send them as UDP, but reliably, so I'm not firing inputs blindly with redundancy such as sending the last 3 inputs in each packet. Maybe I should?) So if the client sent input for tick #123, it will keep sending the ticks that follow, not the same one, even if it never confirms that the input was accepted by the server.

 

Ah, and I perform client-side simulation too, confirm ticks from the server, and then resimulate the ones still queued on the client. I can measure how many ticks haven't been confirmed, for example, and if this piles up I could do something.

But you mention estimates of latency, which is something I don't do at all. Does it affect the rate at which the client should send inputs? As above, I'm sending them at the rate of the local simulation, which is the same rate as the server sim (currently 30 Hz). How should I understand adjusting the estimate in my case? What should be done?
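The resimulation step mentioned above (snap to the server's authoritative state, then re-apply the inputs the server hasn't confirmed yet) is the classic prediction/reconciliation loop. A minimal sketch with hypothetical names, using a one-dimensional position and additive move inputs for brevity:

```python
# Sketch of client-side prediction with server reconciliation.
class Predictor:
    def __init__(self):
        self.x = 0.0
        self.history = {}  # tick -> input applied on that tick

    def predict(self, tick, move):
        """Apply an input locally and remember it for later replay."""
        self.history[tick] = move
        self.x += move

    def reconcile(self, server_tick, server_x):
        """Adopt the authoritative state, then replay unconfirmed inputs."""
        self.x = server_x
        for tick in sorted(t for t in self.history if t > server_tick):
            self.x += self.history[tick]
        # everything at or before server_tick is confirmed; drop it
        self.history = {t: m for t, m in self.history.items()
                        if t > server_tick}
```

If the server's state for the confirmed tick matches what the client predicted, the replay lands in the same place and no visible correction occurs.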


Doesn't this mean though the server will be executing ticks for user commands it doesn't even have?


Typically, the player will run "ahead of time" from the server, so that commands arrive "just in time" (plus buffering.)
The other entities on the player's machine will run "behind time" from the server (based on server data forwarding.)
One of the basic choices in game networking is whether you display remote entities "behind time" in correct positions, or "forward extrapolated" in estimated future positions.
The other basic choice is whether you let the player take actions immediately, or whether you actually play player actions behind time, leading to input latency. The benefit of the latter is that you can be in lockstep sync with the server.
(It might seem like there are four possible combinations here, but the combination "other entities forward extrapolated" and "local player is simulated with latency" is never used in practice :-)

Estimated latency doesn't affect how often you send packets, it mainly affects the clock offset between server and client.
It also typically affects how far into the future you forward extrapolate displayed entities, if you choose that option.
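The "clock offset" idea can be made concrete: the client picks its local tick as the last known server tick plus the estimated one-way latency (in ticks) plus a small safety buffer. A sketch under the assumption of a symmetric path (one-way latency is rtt/2); the buffer size and names are illustrative:

```python
# Sketch: how far ahead of the server the client should run.
import math

TICK_RATE = 30  # matches the 30 Hz example in this thread; illustrative

def client_target_tick(last_server_tick, rtt_seconds, buffer_ticks=2):
    """Tick the client should simulate so its commands arrive just in time.

    One-way latency is approximated as rtt/2 (a symmetric-path assumption);
    buffer_ticks absorbs jitter on top of that.
    """
    one_way_ticks = math.ceil((rtt_seconds / 2.0) * TICK_RATE)
    return last_server_tick + one_way_ticks + buffer_ticks
```

With a 100 ms round trip at 30 Hz, the client runs 2 ticks of latency plus 2 ticks of buffer ahead of the last snapshot it received.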


How does the client know how far to run ahead? Also, I'm planning on "behind time" remote entities that are interpolated. Actions are taken immediately (well, most), a.k.a. client-side prediction.

 

 

Here is a "chart" I made that will maybe clarify my confusion.

 

http://i.imgur.com/2KNLyOK.png

 

You will see the ticks never really line up. Does the server need to keep track of another tick, the user-command tick? So on tick 7 the server actually sends back: "Here is the result of tick 7, which was for your user command 0."

 

Then the client can look up in his user-command result history what happened after user command 0?

 

For lag compensation, the server would have to be able to rewind to the tick from which the client actually shot, from his point of view. The server would have to know the client's interp and his latency, right? So right when it receives the shooting user command, it rewinds 2 ticks (32 ms) for latency and another 2 ticks (32 ms) for interp, then checks the shot against those positions? It seems even worse to code if the server buffers user commands by some amount.

The server buffer just adds latency to the calculation. All you really need is to know the tick numbers that the commands are intended for, and, if you do "client view rewind," how much latency the client sees.
There will be one or two round-trips when first establishing the connection where the server lets the client know its current tick, then the client tells the server what the server tick received was, and then the server tells the client what the effective round-trip latency seems to be based on that, after which point the client can start assuming it knows the server tick.
When the server sees a command that is way too early or too late, it can tell the client to adjust in the appropriate direction.

The ticks don't need to "line up" in "global time" (because there is no such thing, according to Einstein!) but they do need to agree on the sequence of events -- tick 1 happens before tick 2, and so forth.
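The round-trip handshake described above can be reduced to a small calculation on the server: it sends its current tick, the client echoes it back, and the server derives the effective round trip from how many ticks elapsed in between. A sketch with made-up names, assuming a symmetric path:

```python
# Sketch: deriving round-trip latency in ticks from an echoed tick stamp.
def estimate_client_offset(server_tick_now, echoed_server_tick):
    """The server sent `echoed_server_tick` earlier; the client echoed it,
    and the echo arrived when the server was at `server_tick_now`.

    Returns (round_trip_ticks, one_way_ticks).
    """
    rtt_ticks = server_tick_now - echoed_server_tick
    one_way = rtt_ticks // 2  # symmetric-path assumption
    return rtt_ticks, one_way
```

After this exchange the server tells the client the result, and the client can start assuming it knows the server tick, nudging its clock later only when commands start arriving too early or too late.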


1) So is the client trying to execute the same tick at the same wall-clock time as the server? Or is it compensating for latency by running ahead in ticks, so that when its user command arrives it lands on the correct tick?

2) How would that work if there was a buffer of user commands? Let's say user commands aren't emptied until there are 3.

 

When you said "intended for": in my drawing the user command was intended for tick 3, but that has long since passed on the server. And I don't think the server is going to go back into old game states to update them. Is that correct?

 

Do I need to keep track of server ticks AND user command #s ? This is the only way I can think to do it.

 

So once the user-command buffer reaches >= 3, the server pops a user command off and applies it to that server tick. Then it sends that one to the client, saying "Here is tick 534, and I applied user command 0." The buffer would prevent the user commands from running empty and the player missing that server tick.

 

Then the client knows that user command 0 was his tick 0 (but maybe that doesn't matter?), so at that point just do server reconciliation starting at user command 0 and re-applying all the newer ones.

 

What do I do if the buffer grows to some large number like 10? The player is clearly very far behind then. And what do I do if it hits 0?


1) So is the client trying to execute the same tick at the same actual time as the server?


No, the client runs ahead of the server, so that the outgoing packets arrive to the server when they are needed.

How would that work if there was a buffer of user commands?


That buffer just looks like more network latency to the timing system.

in my drawing the user command was intended for tick 3. But this has long passed now on the server.


Right. So the client has to move its clock up enough such that it executes and sends commands in time for them to make it to the server, all buffering included.

Do I need to keep track of server ticks AND user command #s ?


Aren't they the same thing? Each client command should be tagged with what tick number it's intended for (which is the same as which local-ahead-tick-number it was received on, when the client clock is not out of sync.)

once the user command buffer reaches >= 3, it pops a user command off on the server and applies it to that server tick


That's one way of implementing it. Another is to simply call the "send buffered commands" function on your command buffer every X iterations through the main input/simulation loop.

What do I do if the buffer is some large number like 10?


Which buffer? Client, or server?

The client should buffer (tick, command) tuples and flush all of those into a single packet every so often.
The server should receive those commands into an incoming buffer.
If the server receives such a command with a (tick, _) value that is in the past and must be discarded, then the server should tell the client to add more compensation (shift clock forward.)
If the server receives such a command with a (tick, _) that is ludicrously far into the future (more than you're willing to de-jitter-buffer for this connection) then the server should tell the client to reduce compensation (shift clock backward.)
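The client-side half of this, buffering (tick, command) tuples and flushing them into a single packet every so often, can be sketched like this; the flush threshold and names are illustrative:

```python
# Sketch: client batches (tick, command) tuples into one packet.
class ClientCommandBuffer:
    def __init__(self, flush_every=2):
        self.flush_every = flush_every  # e.g. send every other frame
        self.pending = []               # list of (tick, command) tuples

    def add(self, tick, command):
        self.pending.append((tick, command))

    def maybe_flush(self):
        """Return one packet's worth of tuples, or None if not time yet."""
        if len(self.pending) < self.flush_every:
            return None
        packet, self.pending = self.pending, []
        return packet
```

On receipt, the server feeds each tuple into its incoming buffer and applies the too-late / too-early checks described above to decide whether to send a clock adjustment back.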
