jevito

Need help understanding Tick sync, tick offset

Recommended Posts

Hello, I think this is my first post; sorry for my English. I have some questions about tick sync. I've been reading a lot about the topic here, but I'm still kind of lost. Thanks for helping me.

Let's say:

  • there are two or more clients; they run ahead of the server, sending packets to it at 60 fps.
  • each client has a different latency.
  • the server receives their packets at different times because of that latency.

Questions:

  • I understand that when a client connects to the server, we need to sync the client tick with the server tick based on the client's latency, but what are the steps to accomplish that?

For example, "Client A" is at tick 20 with a ping of 85 ms; it sends a packet, and the server receives it at tick 10 + latency. "Client B" is also at tick 20 but has a ping of 120 ms. Even though both clients sent their packets at the same real time, the server receives them at different tick numbers because their latencies differ. How do I calculate a per-client offset so that every client runs on the same tick as the server?

 


Generally, the client needs to run one-half round-trip-time ahead of the server, so that the commands it sends arrive "just in time" for the server.

One way of doing that is for the client to send "this is the command for step Sc" in each packet (as part of the main packet header/framing, generally.)

Then, the server sends back "I received command for step Sc when my step was Ss," again as part of the main packet header/framing.

The client can then compute "Sc - Ss" and re-derive its offset so that the difference is about 1, which should give it enough margin for a small amount of jitter. (The client can of course use some heuristic to figure out how large a number to aim for in average server buffering.)
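That feedback loop can be sketched in a few lines (hypothetical names; an unsmoothed version that applies each correction immediately):

```c
#include <assert.h>

/* Sketch of the client-side re-derivation: the server reports that the
 * command stamped with client step sc arrived at server step ss.  We nudge
 * our tick offset so future commands arrive about TARGET_LEAD ticks early. */
static int tick_offset = 0;       /* added to the raw local clock tick */
static const int TARGET_LEAD = 1; /* desired value of Sc - Ss          */

void on_server_ack(int sc, int ss) {
    int lead = sc - ss;                 /* positive: arrived early */
    tick_offset += TARGET_LEAD - lead;  /* unsmoothed correction   */
}
```

A real client would smooth this, or gate it, so the several acks in flight at once don't over-correct.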


Thanks, it's much clearer now. I was following www.gabrielgambetta.com/entity-interpolation and I made a 100 ms buffer, which is 6 ticks at 60 fps.

When you say compute "Sc - Ss" and re-derive the offset so that the difference is about 1, does that mean the client needs to be 1 tick ahead after re-deriving its offset?

 


I've found that, for good network connections, it makes sense for the client to aim to be 1 tick ahead of the derived time, e.g. sending the data such that it arrives 1 tick before it's needed on the server. If your network has more jitter, then it makes sense to use a value greater than 1. 100 ms (6 ticks) seems fairly conservative to me; you can probably get away with less in most network situations these days.

Once you have everything working well, you may want to look into adaptively setting this buffer size -- for example, if you find that the client is generally 3 ticks ahead, based on "Sc - Ss," but 1 time out of 100, it's 2 ticks behind, you may decide that 1% packet loss is too much, and bump the target up to 5 ticks. Or you may decide that 1% packet loss is fine, and not make any adjustment -- it's really up to your particular game's needs to decide what the right specific numbers are here.
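A sketch of one such adaptive heuristic (the window size, thresholds, and names here are assumptions for illustration, not a recommendation):

```c
#include <assert.h>

/* Hypothetical adaptive buffering: watch the lead (Sc - Ss) of each ack,
 * and every 100 acks grow the target if too many commands arrived late. */
static int late_count = 0;
static int ack_count = 0;
static int target_lead = 3;     /* ticks ahead we currently aim for */

void on_ack_lead(int lead) {
    if (lead < 0)               /* command arrived after its tick   */
        ++late_count;
    if (++ack_count == 100) {   /* evaluate once per 100 acks       */
        if (late_count > 1)     /* more than 1% late: buffer more   */
            target_lead += 2;
        late_count = 0;
        ack_count = 0;
    }
}
```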


Thanks for the help; now I understand how to sync client and server when the game starts. Based on your explanation, what I did is:

tickOffset = Math.floor((latency/2 * 0.06) + 5);

where:

latency/2 = one-half round-trip time, in milliseconds.

0.06 = ticks per millisecond (at 60 fps, one tick is ~16.67 ms).

+5 = 1 tick of lead + buffer + server processing time (how long an input takes to process depends on the server).

example:

60 / 2 * 0.06 + 5 = 6.8 ticks, and Math.floor rounds it down to 6 ticks. It's working pretty well.

 


The easiest way to adjust the clock is for the server to tell the client how early/late it is.

Each time the server decodes a packet, it finds the earliest tick command in the packet (may be only one tick if you don't batch them.) This is easy to do when you're figuring out which tick to queue the incoming commands for in the receive loop.

It then compares the current server tick to the command tick, and compares to the desired input window. For example, let's say that you want messages to arrive in the window between 1..4 ticks ahead of time. If it arrives more than 4 ticks ahead of time, then the server calculates the offset as "ahead by (client_tick - server_tick - 3)" to aim for the "3 ahead" point. If it arrives less than 1 tick ahead of time (just in time, or too late) then it calculates the offset as "behind by (server_tick + 2 - client_tick)" to aim for the "2 ahead" point. If the timestamp is within the desired 1..4 ticks ahead window, then the server simply sets the offset to 0.

Then, the server sends back the timing offset to the client. When the client receives a timing offset value, it simply adjusts its clock offset by that value for any future packets.

Note that there will be multiple packets with a time adjustment in the pipeline/in flight, so you may also want to have an "adjustment generation" value inside the packet, set by the client, and returned by the server, and the client only actually adjusts its clock offset if the generation in the return packet is the same as its current generation.

 

Here's some example code illustrating the client logic:

struct client_state {
  int tickOffset;          /* added to the local clock to get the send tick */
  uint8_t tickGeneration;  /* which round of adjustment we're on            */
} * clientState;

int clientClockTick() {
  return clockMicroseconds() / FRAME_LENGTH_US + clientState->tickOffset;
}

void send_to_server() {
  ...
  packet->targetTick = clientClockTick();
  packet->tickGeneration = clientState->tickGeneration;
  ...
}

void receive_from_server() {
  ...
  /* Only apply a correction from the current generation, so the several
     in-flight packets carrying the same correction are applied only once. */
  if (packet->timeOffset != 0 && packet->tickGeneration == clientState->tickGeneration) {
    clientState->tickOffset -= packet->timeOffset;
    clientState->tickGeneration++;
  }
  ...
}

 

Here's some example code illustrating the server logic:

void receive_on_server() {
  ...
  int st = serverClockTick();
  int clientOffset = 0;
  if (packet->targetTick < st + 1) {
    /* arrived late (or just in time): negative offset, aim for 2 ahead */
    clientOffset = packet->targetTick - (st + 2);
  } else if (packet->targetTick > st + 4) {
    /* arrived too early: positive offset, aim for 3 ahead */
    clientOffset = packet->targetTick - (st + 3);
  }
  returnPacket->timeOffset = clientOffset;
  returnPacket->tickGeneration = packet->tickGeneration;
  ...
}
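Factored into a pure function, the window check is easy to verify. Note the sign convention assumed here: positive means the client is ahead and negative means it is behind, so a single `tickOffset -= timeOffset` on the client corrects in both directions. Names are hypothetical:

```c
#include <assert.h>

/* Signed clock correction for a command stamped target_tick that arrives
 * at server tick st, with a desired arrival window of 1..4 ticks ahead.
 * Positive: client is ahead; negative: client is behind; 0: in window. */
int clock_correction(int target_tick, int st) {
    if (target_tick < st + 1)            /* late: aim for 2 ahead      */
        return target_tick - (st + 2);
    if (target_tick > st + 4)            /* too early: aim for 3 ahead */
        return target_tick - (st + 3);
    return 0;
}
```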


 

 

On 11/8/2019 at 5:13 PM, hplus0603 said:

I like it; I'm almost done implementing it. I have a question, though.

When the client is too far ahead, outside the 1..4 range you mentioned, the server tells it to adjust its tick. But what do I do with the one command that arrived too far ahead, should I ignore it? Because if the client is at tick 15 and, after adjusting, ends up at tick 13, the first command it sent would be applied after tick 13, so there would be a desync. I'm not too sure, though; I haven't tested it.


The server has a queue of incoming commands. It should still keep the "too far ahead" command in the queue and run it.

The client should never "go back" in time and re-send input for some tick. Instead, when it finds it's ahead, it should freeze or slow down for a short while as it slides back to the appropriate offset.

E.g., your client should know which tick it last processed, and only process the next tick when the clock says it's time to do so.
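A sketch of that gate (hypothetical names): the simulation only advances when the adjusted clock reaches the next tick, so shrinking the offset naturally makes the client wait.

```c
#include <assert.h>

static long clock_ticks = 0;    /* stand-in for clockMicroseconds()/FRAME_LENGTH_US */
static int  tick_offset = 0;    /* adjusted by server feedback                      */
static int  last_tick_run = -1; /* last tick we simulated                           */

/* Called once per frame.  Catches up if behind; if the offset just shrank,
 * target drops below last_tick_run and the client simply freezes/waits.   */
void maybe_step_simulation(void) {
    int target = (int)clock_ticks + tick_offset;
    while (last_tick_run < target) {
        ++last_tick_run;        /* run_one_tick(last_tick_run) would go here */
    }
}
```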


Ohh, I understand now: I can slow down or freeze until the client gets to the appropriate offset. Perfect, thank you.

I'm trying to make a simple top-down tank game, and I plan to implement GGPO-style rollback as in the Rocket League talk (min 37:20) to sync Client A to Client B and so on. I was thinking of doing extrapolation, but that would make collision detection difficult; my physics is fully deterministic, though.

What is your recommendation for syncing between clients?

 


You cannot do fully deterministic physics with lock-step simulation without delaying the action of client commands.

You either extrapolate all other clients, and detect / correct when the extrapolation went wrong, or you delay the action of player commands until you've received all other client inputs for the same tick number.

What GGPO does is replay the entire physics for many frames for each input frame, which lets it "jump ahead" to the later results of an action a previous client sent. This works for fighting games because:

1) Most discrepancies are hidden by wind-up animations, so the end result is that the other side sees less wind-up.

2) The simulation cost of a single frame is minuscule, so simulating, say, 10 frames, every frame you receive, is no big deal.

If your game has the same characteristics, then that approach should work fine.
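In outline, that rollback/replay loop looks like this (a sketch under stated assumptions, not GGPO's actual API: a deterministic `step` function plus per-frame input and state storage):

```c
#include <assert.h>

#define MAX_FRAMES 128

typedef struct { int x; } GameState;  /* stand-in for the real game state */
typedef struct { int dx; } Input;     /* stand-in for one player's input  */

static GameState history[MAX_FRAMES]; /* saved state at the start of each frame */
static Input     inputs[MAX_FRAMES];  /* best-known input for each frame        */
static int       current = 0;         /* next frame to simulate                 */

/* Deterministic step: same state + same input always gives the same result. */
static GameState step(GameState s, Input in) { s.x += in.dx; return s; }

/* Advance one frame with a locally known/predicted input. */
void simulate_frame(Input in) {
    inputs[current] = in;
    history[current + 1] = step(history[current], in);
    ++current;
}

/* A remote input for an already-simulated frame arrived: rewind to that
 * frame and replay every frame up to the present with corrected inputs. */
void on_late_input(int frame, Input in) {
    inputs[frame] = in;
    for (int f = frame; f < current; ++f)
        history[f + 1] = step(history[f], inputs[f]);
}
```

The cheapness of `step` is exactly point 2 above: resimulating a handful of frames per received input only works when one frame costs almost nothing.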

