
# Network Tick rates



### #1 Angus Hollands  Members   -  Reputation: 850


Posted 18 January 2014 - 03:00 PM

I have a few questions regarding the principle of operating a network tick rate.
Firstly, I do fix the tick rate on clients. However, if the animation code or other system code takes too long, I cannot guarantee that the client will actually run at that frame rate.
To calculate the network tick, I would simply use `(self.elapsed - self.clock_correction) * self.simulation_speed`, assuming the correction is the server time plus the downstream latency.

However, if the client runs at 42 fps and the simulation speed is 60 fps (or the server runs at 42 and the simulation at 60), I will eventually calculate the same frame several times in a row if I round the result of that equation. (This was an assumption; it seems unlikely in practice, but it will still occur when I correct the clock.) How should one handle this?
Furthermore, should the simulation speed be the same as the server fixed tick rate, for simplicity?

One last question;

If I send an RPC call (or simply the input state, as it would be) to the server every client tick (which I shall guarantee to run at a lower or equal tick rate to the server's), then I believe I should simply maintain an internal "current tick" on the server entity. Every time the server's game tick equals (last_run_tick + (latest_packet_tick - last_packet_tick)), I pop the latest packet and apply the inputs. This way, if the client runs at 30 fps and the server at 60, the server would apply the inputs every 2nd frame.

However, if the client's packet arrives late for whatever reason, what is the best approach? Should I introduce an artificial buffer on the server? Or should I perform a rewind (which is undesirable for me as I am using Bullet Physics, so I would have to "step physics, rewind other entities", and hence any collisions between rigid bodies and client entities would be cleared)? If I do not handle this and use the aforementioned model, I will eventually build up an accumulation of states, and the server will drift behind the client.
Regards, Angus

Edited by Angus Hollands, 18 January 2014 - 04:36 PM.

### #2 hplus0603  Moderators   -  Reputation: 10183


Posted 18 January 2014 - 05:03 PM

I am assuming a fixed time step here. Let's call it 60 simulation steps per second.

If the client cannot simulate 60 times per second, the client cannot correctly partake in the simulation/game. It will have to detect this, tell the user it's falling behind and can't keep up, and drop the player from the game. This is one of the main reasons games have "minimum system requirements."

Now, simulation is typically not the heaviest part of the game (at least on the client,) so it's typical that simulation on a slow machine might take 20% of the machine resources, and simulation on a fast machine might take 5% of machine resources. It is typical that the rest of available resources are used for rendering. Let's say that, on the slow machine, a single frame will take 10% of machine resources to render. With 20% already going to simulation, 80% are left, so the slow machine will display at 8 fps. Let's say that, on the fast machine, a frame takes 1% of machine resources to render. With 95% of machine resources available, the fast machine will render at 95 fps.

Some games do not allow rendering faster than simulation. These games will instead sleep a little bit when they have both simulated a step and rendered a step and there's still time left before the next simulation step. That means the machine won't be using all resources, and thus running cooler, or longer battery life, or being able to do other things at the same time. Also, it is totally possible to overlap simulation with the actual GPU rendering of a complex scene, so the real-world analysis will be slightly more complex.

Note that, on the slow machine, simulation will be "bursty" -- by the time a frame is rendered, it will be several simulation steps behind, and will simulate all of those steps, one at a time, to catch up, before rendering the next frame. This is approximately equivalent to having a network connection that has one frame more of lag before input arrives from the server.

When a client sends data to the server such that the server doesn't get it in time, the server will typically discard the data, and tell the client that it's sending late. The client will then presumably adjust its clock and/or lag estimation such that it sends messages intended for timestep T somewhat earlier in the future. This way, the system will automatically adjust to network jitter, frame rate delays, and other such problems. The server and client may desire to have a maximum allowed delay (depending on gameplay,) and if the client ends up shuffled out to that amount of delay, the client again does not meet the minimum requirements for the game, and has to be excluded from the game session.

enum Bool { True, False, FileNotFound };

### #3 Angus Hollands  Members   -  Reputation: 850


Posted 24 January 2014 - 02:53 PM

Have you any ideas for a robust tick synchronisation algorithm? I have found mine tends to be unstable.

### #4 hplus0603  Moderators   -  Reputation: 10183


Posted 24 January 2014 - 04:05 PM

I have described a robust tick synchronization mechanism more than once before in the last two months on this forum.


### #5 Angus Hollands  Members   -  Reputation: 850


Posted 24 January 2014 - 06:23 PM

> I have described a robust tick synchronization mechanism more than once before in the last two months on this forum.

I have seen your posts, and they have been helpful. I have also found that I seem to get an unreliable jitter no matter what step I use to modify the client clock. I use an RPC call that "locks" a server variable until the client replies with acknowledgement of the correction, so I avoid bouncing around a median value because of latency.

### #6 hplus0603  Moderators   -  Reputation: 10183


Posted 24 January 2014 - 09:48 PM

> I use an RPC call that "locks" a server variable until the client replies with acknowledgement of the correction, so I avoid bouncing around a median value because of latency.

I would not expect that to be robust.

First, I would want the server to be master, and never adjust its clock.
Second, because of network jitter, you will need some amount of de-jittering. You can then use your clock synchronization to automatically calibrate the amount of de-jitter needed!

When a client receives a packet that contains data for some time step that has already been taken, bump up the estimate of the difference.
When a client receives a packet that contains data for "too far" into the future, bump down the estimate of the difference.
The definition of "too far" should be big enough to provide some hysteresis -- probably at least one network tick's worth (and more, if you run at high tick rates.)

Finally, the server should let the client know how early/late arriving packets are on the server side. If the server receives a packet that's late, it should tell the client, so the client can bump its estimate of transmission delay (which may be different from the clock sync delta.) Similarly, if the server receives a packet that is "too far" into the future, let the client know to bump the other way.

As long as the hysteresis in this system is sufficient -- the amount you bump by, and the measurement of "too far," -- then this system will not jitter around a median, but instead quickly settle on a good value and stay there, with perhaps an occasional adjustment if the network conditions change significantly (rare but possible during a session,) or there is significant clock rate skew (very rare.)
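Those bump rules can be sketched in Python (a minimal sketch; `adjust_tick_offset`, `bump`, and `too_far` are illustrative names, with `too_far` providing the hysteresis band described above):

```python
def adjust_tick_offset(offset, packet_tick, current_tick, bump=1, too_far=6):
    """Adjust the estimated tick difference using hysteresis.

    offset:       current estimate of the tick difference
    packet_tick:  the simulation tick the incoming packet is stamped for
    current_tick: the local tick when the packet arrived
    """
    if packet_tick <= current_tick:
        # Data for a step that has already been taken: bump the
        # estimate of the difference up.
        return offset + bump
    if packet_tick > current_tick + too_far:
        # Data for "too far" into the future: bump the estimate down.
        return offset - bump
    # Inside the hysteresis band: leave the estimate alone, so the
    # system settles on a value instead of jittering around a median.
    return offset
```

With a band at least one network tick wide, the estimate settles quickly and only moves again if network conditions change.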

### #7 fholm  Members   -  Reputation: 262


Posted 25 January 2014 - 04:27 AM

Since I have gotten a ton of help from a lot of people on this forum over the years, and especially hplus0603, I figured I could give some back and provide the code for the system for tick synchronization and simulation that I have come up with. This is by no means anything revolutionary or different from what everyone else seems to be doing, but maybe it will provide a good example for people learning. For reference, I am using this for an FPS game.

First some basic numbers:

My simulation runs at 60 ticks per second on both the server and client. While it would be possible to run the server at some even ratio of the client's rate (say 30/60, 20/60, 30/120, etc.), I have chosen to keep both simulations at the same rate for simplicity's sake.

Data transmission rates are also counted in ticks and happen directly after a simulation tick, the clients send data to the server 30 times per second (or every 2nd tick), the server sends data to the clients 20 times per second (or every 3rd tick).

Here are first some constants we will use throughout the pseudo code:

#define SERVER_SEND_RATE 3
#define CLIENT_SEND_RATE 2
#define STEP_SIZE (1.0f / 60.0f)

I use the canonical fixed simulation/variable rendering game loop, which looks like this (C-style pseudo code):

float time_now = get_time();
float time_old = time_now;
float time_delta = 0;
float time_acc = 0;

int tick_counter = 0;
int local_send_rate = is_server() ? SERVER_SEND_RATE : CLIENT_SEND_RATE;
int remote_send_rate = is_server() ? CLIENT_SEND_RATE : SERVER_SEND_RATE;

while (true) {
    time_now = get_time();
    time_delta = time_now - time_old;
    time_old = time_now;

    if (time_delta > 0.5f)
        time_delta = 0.5f;

    time_acc += time_delta;

    while (time_acc >= STEP_SIZE) {
        tick_counter += 1;

        recv_packets(tick_counter);
        step_simulation_forward(tick_counter);

        if ((tick_counter % local_send_rate) == 0) {
            send_packet(tick_counter);
        }

        time_acc -= STEP_SIZE;
    }

    render_frame(time_delta);
}

This deals with the local simulation and the local ticks on both the client and the server. Now, how do we deal with the remote ticks? That is, how do we handle the server's ticks on the client, and the client's ticks on the server?

The first key to the puzzle is that the first four bytes of every packet sent over the network contain the local tick of the sender; each packet is received into a struct which looks like this:

struct Packet {
    int remoteTick;
    bool synchronized;
    char* data;
    int dataLength;
    Packet* next;
}

Exactly what's in the "data" pointer is going to be very game-specific, so there is no point in describing it. The key is the "remoteTick" field, which holds the local tick of the sending side at the time the packet was constructed and put on the wire.

Before I show the "receive packet" function I need to show the Connection struct which encapsulates a remote connection, it looks like this:

struct Connection {
    int remoteTick = -1; // not valid C; just showing remoteTick is initialized to -1
    int remoteTickMax = -1;

    Packet* queueHead;
    Packet* queueTail;

    Connection* next;

    // ... a ton more fields for sockets, ping, etc.
}

The Connection struct contains the fields which are important to us (and, as the comment says, a ton more in reality which are not important for this explanation): remoteTick, the estimated remote tick of the remote end of this connection (driven by the first four bytes we get in each packet), and queueHead/queueTail, which form the head/tail pointers of the packets currently in the receive buffer.

So, when we receive a packet on both the client and the server, the following code executes:

void recv_packet(Connection* c, Packet* p) {
    // If this is our first packet (remoteTick is -1), initialize the
    // connection's synchronized remote tick to the packet's remote tick
    // minus the remote's send rate times 2. This keeps us ~2 packets
    // behind the remote on average, which provides a nice de-jitter
    // buffer. The code for adjusting our expected remoteTick is done
    // somewhere else (shown further down).
    if (c->remoteTick == -1) {
        c->remoteTick = p->remoteTick - (remote_send_rate * 2);
    }

    // Deliver data which should be "instant" (out of band with the
    // simulation), such as: reliable RPCs, acks/nacks for packets,
    // ping replies, etc.
    deliver_instant_data(c, p);

    // insert the packet on the queue
}

Now, on each connection we will have a list of packets between queueHead and queueTail, and also a remoteTick initialized to the first packet's remoteTick value minus however large a jitter buffer we want to keep.

Now, inside the step_simulation_forward(int tick) function we move our local simulation forward (the objects we control ourselves), but we also integrate the remote data we get from our connections and their packet queues. First let's look at step_simulation_forward itself for reference (it doesn't contain anything interesting; I just want to show the flow of logic):

void step_simulation_forward(int tick) {
    // synchronize/adjust the remoteTick of all our remote connections
    synchronize_connection_remote_ticks();

    // de-queue incoming data and integrate it
    integrate_remote_simulations();

    // move our local stuff forward
    step_local_objects(tick);
}

The first thing we do is calculate the new synchronized remoteTick of each remote connection. This is a long function, but its goals are simple:

To give us some de-jittering and smooth playback, we want to stay remote_send_rate * 2 ticks behind the last received packet. If we are closer to the last received packet than remote_send_rate, or further away than remote_send_rate * 3, we adjust to get back towards that sweet spot. Depending on how far off we are, we adjust by one or a few ticks up or down, or if we are very far away we simply reset completely.

void synchronize_connection_remote_ticks() {
    // We go through each connection and adjust its remoteTick so we
    // stay as close as possible to remote_send_rate * 2 ticks behind
    // the last received packet.
    //
    // There is a sweet spot, where our diff compared to the last
    // received packet is > remote_send_rate and < (remote_send_rate * 3),
    // in which we make no adjustment to our remoteTick value.

    Connection* c = connections; // head of the connection list

    while (c) {
        // Increment our remote tick by one (we do this every simulation
        // step) so we move forward at the same rate as the remote end.
        c->remoteTick += 1;

        // If we have a received packet whose tick has not been
        // synchronized, compare our expected c->remoteTick with the
        // remoteTick of that packet.
        if (c->queueTail && c->queueTail->synchronized == false) {

            // difference between the remoteTick of the last packet
            // that arrived and our expected remote tick
            int diff = c->queueTail->remoteTick - c->remoteTick;

            // our goal is to stay remote_send_rate * 2 ticks behind

            if (diff >= (remote_send_rate * 3)) {
                // We have drifted three or more packets behind: jump
                // forward so we are remote_send_rate * 2 ticks behind
                // again, clamped to four packets' worth of ticks.
                c->remoteTick += min(diff - (remote_send_rate * 2), (remote_send_rate * 4));

            } else if (diff >= 0 && diff < remote_send_rate) {
                // We have drifted close to getting ahead of the remote
                // simulation's ticks: stall a single tick.
                c->remoteTick -= 1;

            } else if (diff < 0 && abs(diff) <= remote_send_rate * 2) {
                // We are ahead of the remote simulation, but by no more
                // than two packets' worth of ticks: step back one
                // packet's worth.
                c->remoteTick -= remote_send_rate;

            } else if (diff < 0 && abs(diff) > remote_send_rate * 2) {
                // We are way out of sync (more than two packets ahead):
                // just re-initialize the connection's remoteTick, as we
                // did on the first packet.
                c->remoteTick = c->queueTail->remoteTick - (remote_send_rate * 2);
            }

            // only run this code once per packet
            c->queueTail->synchronized = true;

            // remoteTickMax holds the highest tick we have stepped up to
            c->remoteTickMax = max(c->remoteTick, c->remoteTickMax);
        }

        c = c->next;
    }
}

The last piece of the puzzle is the function called integrate_remote_simulations. It looks at the packets available in the queue for each connection, and integrates a packet's data once the connection's current remote tick is >= remote_tick_of_packet - (remote_send_rate - 1).

Why this weird remote_tick comparison? Because if the remote end of the connection sends packets every remote_send_rate ticks, then each packet contains the data for remote_send_rate ticks: the tick number of the packet itself, plus the ticks at T-1 and T-2.

void integrate_remote_simulations() {
    Connection* c = connections; // head of the connection list

    while (c) {
        // Integrate the packet data into the local simulation; how this
        // looks depends entirely on the game itself.

        // remove the packet from the queue

        c = c->next;
    }
}
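As noted above, each packet carries the inputs for every tick since the previous send, which is why the integration can look back (remote_send_rate - 1) ticks. A minimal Python sketch of that redundancy (names are illustrative, not from the code above):

```python
CLIENT_SEND_RATE = 2  # send every 2nd tick, as in the constants above

def build_packet(tick, input_history):
    """Pack the inputs for this tick and the ticks since the last send,
    so a single lost packet does not lose any input."""
    ticks = range(tick - CLIENT_SEND_RATE + 1, tick + 1)
    return {"remoteTick": tick,
            "inputs": {t: input_history[t] for t in ticks}}

def unpack_inputs(packet):
    # The receiver gets inputs for remoteTick, remoteTick - 1, ...
    # sorted so they can be applied in tick order.
    return sorted(packet["inputs"].items())
```

So a packet stamped with tick T also repeats the inputs for T-1 (and T-2, at a send rate of 3), trading a little bandwidth for loss tolerance.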

I hope this helps someone.

Edited by fholm, 25 January 2014 - 04:28 AM.

### #8 Angus Hollands  Members   -  Reputation: 850


Posted 02 February 2014 - 07:06 AM

Thank you for all of your support. I hate to admit, but I'm still finding it hard to organise my thoughts on this matter.

At present, I have the following structure for the network code:

class Network:

    for data in self.in_data:
        ...

class ConnectionInterface:

    if payload.protocol == HANDSHAKE_COMPLETE and not self.has_connection():
        self.create_connection()
        return

    self.update_ack_info()
    self.update_other_stuff()

    if not self.tick_is_valid(tick):
        self.handle_invalid_tick()
    else:
        ...

    def tick_is_valid(self, tick):
        return tick >= WorldInfo.tick

    def update(self):
        data = self.dejitter.find_latest(WorldInfo.tick)
        if data:
            ...

class Connection:

    def update(self):
        self.update_replication_data()

My first concern: should I drop packets which are received at the wrong time (i.e., too late or too early)? If so, I won't ACK them, in order to avoid a reliable packet failing to retransmit.

As well as this, I will need to keep the client informed of how far from the correct time they are. If I send a reply packet, I feel like I should be handling this in the connection class rather than the interface (the interface simply manages the connection state, whilst most data should derive from the connection itself).

> You can then use your clock synchronization to automatically calibrate the amount of de-jitter needed!

What should I infer from the clock difference? Do you mean to say that I should take some indication of the jitter time needed from the magnitude of difference from the average RTT that clock synchronisation determines?

Overall, would I be correct in the following understanding?

1. We de-jitter packets by enough time that we always have (at least) one in the buffer
2. We infer the de-jitter time by the clock sync oscillation about a median/mean value.
3. We then look at the packets we read from the dejitter buffer. If they're early or late we selectively drop them, do not ACK them and inform the client that their clock is wrong (in a certain direction).

I see one other issue: build-up in the de-jitter buffer. If we shorten the de-jitter time, we will have already ACKed the packets in the buffer, but we will not be processing them, in order to shorten the buffer size. So how does one respond to that?

For the record, my de-jitter buffer:

from collections import deque

class JitterBuffer:

    def __init__(self, delay=None):
        self.buffer = deque()
        self.delay = delay
        self.max_offset = 0

    def store(self, tick, item):
        self.buffer.append((tick, item))

    def retrieve(self, tick):
        delay = self.delay

        tick_, value = self.buffer[0]

        if delay is not None:
            projected_tick = tick_ + delay
            tick_difference = tick - projected_tick

            if tick_difference < 0:
                raise IndexError("No items available")

            elif tick_difference > self.max_offset:
                pass

        self.buffer.popleft()
        return value



### #9 hplus0603  Moderators   -  Reputation: 10183


Posted 02 February 2014 - 10:36 AM

> 1. We de-jitter packets by enough time that we always have (at least) one in the buffer
> 2. We infer the de-jitter time by the clock sync oscillation about a median/mean value.
> 3. We then look at the packets we read from the dejitter buffer. If they're early or late we selectively drop them, do not ACK them and inform the client that their clock is wrong (in a certain direction).

That seems reasonable, except the "not ack" part.
I assume that a single network packet contains a single "sent at" clock timestamp.
Further, I assume it may contain zero or more messages that should be "reliably" sent.
Further, I assume it will contain N time-step-stamped input messages from time T to (T+X-1) where X is the number of time stamps per network tick.

Now, reliable messages can't really have a tick associated with them, unless all of your game is based on reliable messaging and it pauses for everyone if anyone loses a message (like Starcraft.)
So, why wouldn't you ack a reliable message that you get?

When it comes to shortening the de-jitter buffer, I would just let the messages that are already in the buffer sit there, and process them in time. It's not like you're going to shorten the buffer from 2 seconds to 0.02 seconds in one go, so the difference is unlikely to matter.

Also, you don't need for there to "always" be one packet in the buffer; you just need for there to "always" be input commands for timestep T when you get to simulate timestep T. In the perfect world, with zero jitter and perfectly predictable latency, the client will have sent that packet so it arrives "just in time." All you need to do to work around the world not being perfect, is to send a signal to the client each time you need input for time step T, but that input hasn't arrived yet. And, to compensate for clock drift, another signal when you receive input intended for time step T+Y where Y is some large number into the future.

Finally, I don't think you should keep the network packets themselves in a de-jitter buffer. I think you should decode them into their constituent messages as soon as you get them, and extract the de-jitter information at that time. The "queuing" of "early" messages will typically happen in the input-to-simulation pipe.
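That per-timestep feedback might be sketched like this (a simplification; `check_input_timing` and `MAX_EARLY` are illustrative names, with `MAX_EARLY` playing the role of the large Y above):

```python
MAX_EARLY = 30  # ticks; inputs stamped further ahead than this suggest clock drift

def check_input_timing(server_tick, input_tick):
    """Classify an arriving input message so the server can tell the
    client how to adjust its send timing. Returns 'late', 'too_early',
    or 'ok'."""
    if input_tick < server_tick:
        # The server already simulated this step: tell the client to
        # send earlier (it missed its deadline).
        return "late"
    if input_tick > server_tick + MAX_EARLY:
        # Input intended for far in the future: tell the client to
        # send later / bump its clock estimate the other way.
        return "too_early"
    # Arrived in time for its step: no signal needed.
    return "ok"
```

The server would send the 'late'/'too_early' signals back to the client, which nudges its clock and lag estimates accordingly.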

### #10 Angus Hollands  Members   -  Reputation: 850


Posted 02 February 2014 - 11:53 AM

> We de-jitter packets by enough time that we always have (at least) one in the buffer
> We infer the de-jitter time by the clock sync oscillation about a median/mean value.
> We then look at the packets we read from the dejitter buffer. If they're early or late we selectively drop them, do not ACK them and inform the client that their clock is wrong (in a certain direction).
>
> That seems reasonable, except the "not ack" part.
> I assume that a single network packet contains a single "sent at" clock timestamp.
> Further, I assume it may contain zero or more messages that should be "reliably" sent.
> Further, I assume it will contain N time-step-stamped input messages from time T to (T+X-1) where X is the number of time stamps per network tick.
>
> Now, reliable messages can't really have a tick associated with them, unless all of your game is based on reliable messaging and it pauses for everyone if anyone loses a message (like Starcraft.)
> So, why wouldn't you ack a reliable message that you get?
>
> When it comes to shortening the de-jitter buffer, I would just let the messages that are already in the buffer sit there, and process them in time. It's not like you're going to shorten the buffer from 2 seconds to 0.02 seconds in one go, so the difference is unlikely to matter.
>
> Also, you don't need for there to "always" be one packet in the buffer; you just need for there to "always" be input commands for timestep T when you get to simulate timestep T. In the perfect world, with zero jitter and perfectly predictable latency, the client will have sent that packet so it arrives "just in time." All you need to do to work around the world not being perfect, is to send a signal to the client each time you need input for time step T, but that input hasn't arrived yet. And, to compensate for clock drift, another signal when you receive input intended for time step T+Y where Y is some large number into the future.
>
> Finally, I don't think you should keep the network packets themselves in a de-jitter buffer. I think you should decode them into their constituent messages as soon as you get them, and extract the de-jitter information at that time. The "queuing" of "early" messages will typically happen in the input-to-simulation pipe.

Firstly, I send inputs as RPC calls every frame, whilst state updates are sent to the client at a network tick rate.

With regard to the question about build up, I think I was thinking incorrectly about my own implementation.

Are you advising that the client's clock correction also factors in the jitter buffer length, rather than simply accounting for the offset server side?

My current code looks as follows:


class PlayerController:

    def on_initialised(self):
        super().on_initialised()

        # Client waiting moves
        self.waiting_moves = {}

        self.maximum_offset = 30
        self.buffer_offset = 0

    def apply_move(self, move):
        inputs, mouse_diff_x, mouse_diff_y = move
        blackboard = self.behaviour.blackboard

        blackboard['inputs'] = inputs
        blackboard['mouse'] = mouse_diff_x, mouse_diff_y

        self.behaviour.update()

    def client_missing_move(self) -> Netmodes.client:
        pass

    def player_update(self, delta_time):
        mouse_delta = self.mouse_delta
        current_tick = WorldInfo.tick
        inputs, mouse_x, mouse_y = self.inputs, mouse_delta[0], mouse_delta[1]

    # The def line of this server RPC was truncated in the original post;
    # the name server_receive_move is assumed
    def server_receive_move(self, move_tick: TypeFlag(int,
                            max_value=WorldInfo._MAXIMUM_TICK),
                            inputs: TypeFlag(inputs.InputManager,
                                             input_fields=MarkAttribute("input_fields")),
                            mouse_diff_x: TypeFlag(float),
                            mouse_diff_y: TypeFlag(float)) -> Netmodes.server:

        tick_offset = WorldInfo.tick - move_tick
        # Increment difference
        if tick_offset < 0:
            self.buffer_offset += 1
        # Decrement difference
        elif tick_offset > self.maximum_offset:
            self.buffer_offset -= 1

    def update(self):
        requested_tick = WorldInfo.tick - self.buffer_offset
        try:
            move = self.waiting_moves[requested_tick]
        except KeyError:
            self.client_missing_move()
            return

        self.apply_move(move)

With client_missing_move, should that add a tick offset to the clock synchronisation client-side? Because the client is already "guessing" the RTT time.

Also, your previous post confuses me as to whether you mean buffer size or clock offset, I assume you mean buffer size.

> When a client receives a packet that contains data for some time step that has already been taken, bump up the estimate of the difference.
> When a client receives a packet that contains data for "too far" into the future, bump down the estimate of the difference.
> The definition of "too far" should be big enough to provide some hysteresis -- probably at least one network tick's worth (and more, if you run at high tick rates.)

Edited by Angus Hollands, 02 February 2014 - 01:04 PM.

### #11 hplus0603  Moderators   -  Reputation: 10183


Posted 02 February 2014 - 08:35 PM

There should be no buffer for network packets on either side.

There could be a buffer for queued commands on either side. Queued commands are just a subset of all possible messages that will arrive in a given packet.

Any message that is not timestamped with "please apply me at time X" should probably be applicable right away.


### #12 Angus Hollands  Members   -  Reputation: 850


Posted 14 February 2014 - 06:26 AM

I've taken a break before reconsidering this.

I still have a remaining question, if you would be so kind as to clarify for me.

Conflict between clock synchronisation and the command delay on the client: should I manage the "forward projection" time that the client adds to the current tick separately from the clock synchronisation system, e.g.

received_tick = synchronised_tick + projection_tick


or should I run the client ahead (so the synchronised tick itself handles the latency upstream)?

I assume that running the client ahead makes more sense.

Following from this, how best should I approach clock synchronisation (with the second suggested method, whereby we run ahead of the server)?

The most robust method for finding the RTT seems to me to be something like this:

fac = 0.05
rtt = (fac * rtt_old) + ((1 - fac) * rtt_sample)
new_time = timestamp + rtt


But then I need to include the command delay, which will likely change when the RTT measurement changes, so it will jitter around a little bit (the rtt estimate may be "off" so the server will tell the client its command was late, and therefore the client will increment the command delay. But the RTT estimate will likely compensate for the changed conditions the next time we update it).

The other option is that we don't separate the two ideas of "command latency" and "upstream latency" and just have a single latency variable. We update this by nudging it from the server.

if not_current_command:
    if command_is_late:
        self.client_nudge_forward()
    else:
        self.client_nudge_backwards()


But this seems a little too coarse-grained, surely? Plus there would be some significant convergence time unless we factored in the difference in timing as a scalar. I'm not sure how large this scalar might need to be, though.

difference = int((server_tick - command_tick) * convergence_factor)

if not_current_command:
    self.client_nudge(difference)



-------------------------------

My findings:

My dejitter buffer is simply a deque structure.

The clock synchronisation relies on the "nudging technique". I have found it to be responsive and stable in local testing with simulated latency (but not yet with jitter). I cannot use a smoothing factor because it would otherwise take too long to converge on the correct value.

To perform clock sync:

1. Send the command, a "received_clock_correction" boolean and the client's tick to the server. The server checks if the tick is behind the current tick; if so, it discards the command and requests a forward nudge (by the tick difference). Otherwise, we store it in the buffer.
2. In the simulation, read the first packet in the buffer. If there isn't one available, don't do anything and return.
3. If the tick is late (I can't imagine why it would be, as we should have caught this, but let's say you don't run the simulation function for a frame), we just remove it from the buffer (consume the move) and then recursively call the update function to see if we just need to catch up with the newer packets in the buffer.
4. Otherwise, if the tick is early, we check how early it is. If it is more than a safe margin (e.g. 10 ticks), we send a backwards nudge to the client (by the tick difference, not the "safe" tick difference) and consume the move. Otherwise, we just skip this frame until the server catches up with the latency in the buffer (which, as aforementioned, is less than or equal to 10 ticks).
5. Otherwise, we consume and run the move.
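A minimal Python sketch of steps 2-5 (illustrative names; `safe_margin` is the 10-tick margin above):

```python
from collections import deque

def consume_moves(buffer, server_tick, safe_margin=10):
    """Steps 2-5 above: drop late moves, nudge back if far too early,
    wait if slightly early, else run the move. Returns the action taken."""
    while buffer:
        tick, move = buffer[0]
        if tick < server_tick:
            # Step 3: stale move, consume it and retry with newer packets.
            buffer.popleft()
            continue
        if tick > server_tick + safe_margin:
            # Step 4: far too early, consume and nudge the client back
            # by the tick difference.
            buffer.popleft()
            return ("nudge_backward", tick - server_tick)
        if tick > server_tick:
            # Step 4: slightly early, skip this frame until we catch up.
            return ("wait", None)
        # Step 5: on time, consume and run.
        buffer.popleft()
        return ("run", move)
    # Step 2: nothing buffered.
    return ("none", None)
```

The actual move application and the backwards-nudge RPC are game-specific and omitted here.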

The purpose of the "received_clock_correction" boolean is to allow lockstep-like correction. This means that we won't send n "correct by x" RPC calls in the time it takes for the client to receive the correction, apply it, and send a new packet. I already have an RPC locking system in place (three functions: server_lock, server_unlock and is_locked (server side)), but they are not delayed by the dejitter buffer, which is what we use for clock correction.

The boolean is included in the command "packet" (RPC call), and so it is read at the same time the command is considered for execution. In my latest implementation, I have new RPCs (server_add_buffered_lock and server_remove_buffered_lock) which work like their unbuffered counterparts, except they are executed in time with the dejittered commands.

Edited by Angus Hollands, 14 February 2014 - 11:43 AM.

### #13hplus0603  Moderators   -  Reputation: 10183

Like
0Likes
Like

Posted 14 February 2014 - 12:08 PM

> Should I manage the "forward projection" time that the client adds to the current tick separately from the clock synchronisation system (e.g)

The clocks cannot be "synchronized" perfectly. Speed of light and all that :-)

Your question is, if I hear it right, "should I keep two variables: estimated clock, and estimated latency, or should I just keep the estimated clock and bake in the latency?"

The answer is "it depends on your simulation model." The only strict requirement is that you must have a way to order all commands from clients, in order, on the server, and to order all updates from the server, to each client, in order.

I typically think of this as two separate values: the estimated clock, and the estimated send latency. I find that works better for me. Your approach *may* be different and *may* work with a single clock value -- it all depends on how you arrange for the updates to be ordered.
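A minimal sketch of keeping the two values separately, assuming a simple struct (`ClockSync` and `commandTimestamp` are illustrative names, not from the thread):

```cpp
// Keep the two estimates as separate values rather than baking latency
// into the clock, so each can be corrected independently.
struct ClockSync {
    double estimatedServerClock;  // best guess of the server's current time
    double estimatedSendLatency;  // one-way client->server latency estimate

    // Stamp outgoing commands so they arrive roughly "on time" at the server.
    double commandTimestamp() const {
        return estimatedServerClock + estimatedSendLatency;
    }
};
```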
enum Bool { True, False, FileNotFound };

### #14Angus Hollands  Members   -  Reputation: 850

Like
0Likes
Like

Posted 14 February 2014 - 12:58 PM

> Should I manage the "forward projection" time that the client adds to the current tick separately from the clock synchronisation system (e.g)
>
> The clocks cannot be "synchronized" perfectly. Speed of light and all that :-)
>
> Your question is, if I hear it right, "should I keep two variables: estimated clock, and estimated latency, or should I just keep the estimated clock and bake in the latency?"
>
> The answer is "it depends on your simulation model." The only strict requirement is that you must have a way to order all commands from clients, in order, on the server, and to order all updates from the server, to each client, in order.
>
> I typically think of this as two separate values: the estimated clock, and the estimated send latency. I find that works better for me. Your approach *may* be different and *may* work with a single clock value -- it all depends on how you arrange for the updates to be ordered.

Thanks hplus!

It seems to work fantastically at the moment; the only concern I have is jitter. The buffer only acts as a jitter buffer when the client's clock is projected forward, which currently requires overcompensating on the clock. I think I will introduce this latency myself so that dejittering occurs by default.

### #15fholm  Members   -  Reputation: 262

Like
0Likes
Like

Posted 15 February 2014 - 03:22 AM

From reading the replies in the whole thread, it feels like you are over-complicating things in your head a lot. I did the same when I initially learned how to deal with synchronizing time between the client and server. A lot of the confusion comes from the fact that most people call it "synchronizing", when in reality that's not what it's about.

hplus said something which is key to understanding this and to realizing how simple it actually is:

> The only strict requirement is that you must have a way to order all commands from clients, in order, on the server, and to order all updates from the server, to each client, in order.

This is the only thing which actually matters; there is no need to try to keep the client's time in line with the server's time, or to forward-estimate the local client time against the remote server time by adding half the latency (RTT/2) to some local offset.

The piece of code that made it all "click" for me was the CL_AdjustTimeDelta function in the Quake 3 source code, specifically lines 822 to 877 in this file: https://github.com/id-Software/Quake-III-Arena/blob/master/code/client/cl_cgame.c. It shows how incredibly simple time adjustment is.

There are four cases which you need to handle:
• Are we off by a huge amount? Reset the clock to the last time/tick received from the server (line 844).
• Are we off by a large amount? Jump the clock towards the last time/tick received from the server (line 851).
• Are we slightly off? Nudge the clock in the correct direction (lines 857-871).
• Are we off by nothing/almost nothing? Do nothing.
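Translated to ticks, the four cases above might look something like this. The thresholds (60, 10, 1) are illustrative choices for the sketch, not the values Quake 3 actually uses:

```cpp
#include <cstdint>

// Tick-based version of the four CL_AdjustTimeDelta cases: compare the
// tick we received in a packet against our running remote-tick estimate.
std::int64_t adjustRemoteTick(std::int64_t remoteTick, std::int64_t packetTick) {
    const std::int64_t delta = packetTick - remoteTick;
    const std::int64_t mag = delta < 0 ? -delta : delta;

    if (mag > 60) return packetTick;                          // huge drift: snap to the packet tick
    if (mag > 10) return remoteTick + delta / 2;              // large drift: jump towards it
    if (mag > 1)  return remoteTick + (delta > 0 ? 1 : -1);   // slight drift: nudge by one tick
    return remoteTick;                                        // off by at most one: do nothing
}
```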
Now, Quake 3 uses "time" as the value the client tries to sync against the server with; I have found it a lot easier to use "ticks" throughout and forgo any concept of "time" in my code completely. The process I am going to describe next is the same process I described (a bit long-windedly) in my earlier post, but I'll try to make it more transparent and easier to grasp:

Each peer has its own local tick, which increments exactly as you would expect: +1 for every simulation frame we run locally. This happens on both the server and all the clients, individually and separately. The local tick is sent as the first four non-header bytes of each packet to every remote peer we are connected to. In practice, the clients just send their own local tick to the server, and the server sends its own local tick to all clients.

Each connection to a remote peer has what is called the remote tick of that connection; this exists on the connection object both client->server and server->client. The remote tick of each connection is what we try to keep in sync with the other end's local tick. This means that the remote tick the client keeps for its server connection tracks the server's local tick, and vice versa.

The remote tick of each connection is also stepped +1 for each local simulation tick. This allows our remote tick to advance at roughly the same pace as the other end advances its local tick (which is what we are trying to stay in sync with). When we receive a packet, we look at the four tick bytes of that packet and compare them against the remote tick we have for the connection, checking the same four conditions as the Q3 source does in CL_AdjustTimeDelta, but with ticks instead.

After we have adjusted our remote tick, we read all the data in the packet and put it in the connection's simulation buffer; everything is indexed by the local tick of the remote end of the connection. We then simply run something like this for each connection to dequeue all the remote simulation data that should be processed:

```cpp
while (connection->simulationBuffer->next &&
       connection->simulationBuffer->next->tick <= connection->remoteTick) {
    connection->simulationBuffer->next->process();
    connection->simulationBuffer->next = connection->simulationBuffer->next->next;
}
```

