
# Timestamping & processing inputs



### #1 Farkon - Members - Reputation: 192

Posted 23 May 2014 - 03:24 AM

Hello,

What I've been doing until now: clients send inputs, and the server calls process(input) as soon as it receives them, which means processing several net frames per server frame. It works well for my simple physics and it's deterministic... but it's obviously vulnerable to speedhacks.

So I was looking at another way; here is what I'm doing now.

The client has its own frame ID and sends its inputs with that frame ID to the server.

The server caches each message by (client) frame ID.

(The server sets the first client frame ID from the first client message it receives.)

From there, the server increments each client frame ID in step with its own (server) steps.

The server fetches client inputs from the buffer; if there is no frame for a step, it just skips it.

Using a dejitter buffer this works well, but I'm not convinced that's how I should do it. Here are my questions:

• Here the latency is set by the first message received; how do I initialize my timestamps?
• What happens if the client suddenly lags and the server ends up skipping all the received frames? Do you adjust for this in real time?
• In my game the server sends the map and a player-creation event all at once. I'm using a fixed timestep, but when the client receives and processes the map it takes around one second, and in the same frame it sends its first inputs. The next frame, the fixed loop does its job and tries to catch up on the lag introduced in the previous loop, so I end up sending X events to compensate for something the server isn't aware of, which introduces a large amount of lag into the game.
• Still related to time: what happens if the client or the server breaks out of the fixed loop, i.e. by being prevented from entering the spiral of death?
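The scheme described above can be sketched roughly like this (a toy illustration, not code from this thread; all names and the delay value are made up): the server caches inputs by client frame ID, takes the first message received as the latency baseline, and then consumes one frame per server step, skipping frames that never arrived.

```python
class InputBuffer:
    """Server-side cache of client inputs, keyed by client frame ID."""

    def __init__(self, delay_frames=3):
        self.inputs = {}                  # frame_id -> input payload
        self.base_frame = None            # first client frame ID seen
        self.server_step = 0
        self.delay_frames = delay_frames  # dejitter delay, in frames

    def receive(self, frame_id, payload):
        # The first message received defines the latency baseline
        if self.base_frame is None:
            self.base_frame = frame_id
        self.inputs[frame_id] = payload

    def step(self):
        """Called once per server frame; returns the input to simulate, or None."""
        if self.base_frame is None:
            return None
        target = self.base_frame + self.server_step - self.delay_frames
        self.server_step += 1
        # If the frame never arrived (or arrived too late), just skip it
        return self.inputs.pop(target, None)
```

With delay_frames greater than zero, the first few server steps return nothing while the dejitter buffer builds up, which is exactly the behaviour the questions below are about.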

### #2 Angus Hollands - Members - Reputation: 765

Posted 23 May 2014 - 06:56 AM

(Quoting post #1 in full.)

I'm not the authority on such things, but I have some remarks:

1. If your client cannot meet the game's requirements, they cannot play. This usually means they are unable to simulate the game at the tick rate required by both the server and the client. As Hplus has mentioned before, this is necessary, and no different from setting minimum graphics hardware requirements.
2. As long as your client uses the same simulation framerate as the server, you don't need to send a delta time or any scalar value. Just send the move identifier, which appears to be your frame ID. To avoid missing simulation frames (assuming the server simulation runs in one update call rather than per-object, though it is still good practice to use this technique), store inputs in a dejitter buffer that has a minimum length. You fill the buffer to that length at the start of the game, and only stop to refill it if it reaches zero items. You have said you can process many frames in one server frame; this would make speedhacking difficult to monitor. You would have to maintain some form of ratio between the server time in ticks and the client frames in ticks, and notice if this value starts to creep above 1, but this is less than perfect.
3. You will notice that the buffer slowly decreases in length over time, due to inevitable packet loss. To compensate, sending the N previous moves with every move (frame) will let the server recover, provided you have enough time to look up the missing move. The way I handle this is to check that the previous move is a single step away from the current move taken from the buffer for simulation, and if it is not, find the missing moves between the move (frame) IDs. For previous moves I do not send the physics results for correction purposes, because it would cost more bandwidth, and other valid moves that do include this information should be received anyway. Sending less data means less connection overhead, which reduces the likelihood of flooding the connection (within reason).
4. Surely you should be able to load the map asynchronously? If you have to block while loading, it should be trivial to tell the server that you're doing so. It seems illogical for the server to expect moves from a client who is technically not involved in the gameplay yet.
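Point 3 above (piggybacking the N previous moves on every packet) can be sketched like this; the packet layout, names, and redundancy count are invented for illustration:

```python
# Each outgoing packet carries the current move plus up to N previous moves,
# so a lost packet can be recovered from any later one.
REDUNDANCY = 3  # how many previous moves to piggyback

def build_packet(history, move_id, move):
    """history: list of (id, move) already sent, newest last."""
    history.append((move_id, move))
    return history[-(REDUNDANCY + 1):]  # current move + up to N previous

def apply_packet(received, packet):
    """received: dict of id -> move. Fills any gaps the packet can cover."""
    for move_id, move in packet:
        received.setdefault(move_id, move)
```

The server-side gap check would compare adjacent IDs (as described in point 3) and pull the missing moves out of the redundant portion of a later packet.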

Edited by Angus Hollands, 23 May 2014 - 06:57 AM.

### #3 Mussi - Crossbones+ - Reputation: 2191

Posted 23 May 2014 - 07:05 AM

What happens if the client suddenly lags and the server ends up skipping all the received frames? Do you adjust for this in real time?

If your physics are simple enough, you can store all your states for the amount of lag you want to be able to compensate for, rewind x steps, and then simulate forward again. The rest of the clients have to do this as well. Or you could just let the server correct the client.
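The rewind-and-resimulate idea can be sketched as follows (a toy model with made-up names and a trivial stand-in for the physics step):

```python
def simulate(state, inp):
    # Trivial stand-in physics: position advances by the input velocity
    return state + inp

class Rewindable:
    def __init__(self, state=0):
        self.states = {0: state}   # step -> state snapshot
        self.inputs = {}           # step -> input applied at that step
        self.step = 0

    def advance(self, inp):
        self.inputs[self.step] = inp
        self.states[self.step + 1] = simulate(self.states[self.step], inp)
        self.step += 1

    def apply_late_input(self, at_step, inp):
        """Rewind to at_step, replace the input, and resimulate to the present."""
        self.inputs[at_step] = inp
        for s in range(at_step, self.step):
            self.states[s + 1] = simulate(self.states[s], self.inputs[s])

    @property
    def state(self):
        return self.states[self.step]
```

A real implementation would cap the history to the amount of lag being compensated for and discard older snapshots.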

In my game the server sends the map and a player-creation event all at once. I'm using a fixed timestep, but when the client receives and processes the map it takes around one second, and in the same frame it sends its first inputs. The next frame, the fixed loop does its job and tries to catch up on the lag introduced in the previous loop, so I end up sending X events to compensate for something the server isn't aware of, which introduces a large amount of lag into the game.

What does your fixed timestep look like? Can't you load beforehand, or send a message when you're ready to take on input? Why do you end up sending X events (and who is "you" here)?

Still related to time: what happens if the client or the server breaks out of the fixed loop, i.e. by being prevented from entering the spiral of death?

There's not much you can do about this; if the client doesn't meet the minimum required system specs, they simply can't play.

### #4 Farkon - Members - Reputation: 192

Posted 23 May 2014 - 07:18 AM

(Quoting Angus's points 1-4 from post #2.)

1. True, but lag spikes happen and I might need to recover somehow.
2. I'm using the same framerate for both client and server, and I'm not sending deltas. I'm not processing several frames per server frame; that's actually what I want to avoid. What I have at the moment is a circular frame buffer from 0 to 255 whose indexes correspond to client frame IDs. Whenever I receive a message from a client, I put it in this buffer at array index = client frame ID. My dejitter logic basically processes from the frame buffer at the local (server) copy of the client frame ID - X. Maybe that's where I'm doing something wrong!?
3. I'm using TCP.
4. I could load it asynchronously, but I don't really care about the lag so much as the fact that I can't recover from it; I'm not comfortable with my system not being solid, since this might happen in other cases too. And in my case the server logically expects the client to send inputs at that time. If I understand correctly, that's more a consequence of how a fixed-timestep loop works, so it doesn't mean I can't fix it. But again, I'm wondering how to recover from such cases.
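For what it's worth, the 0-255 circular buffer described in point 2 might look something like this (an illustrative sketch only; the dejitter offset X and the names are arbitrary):

```python
SIZE = 256       # frame IDs wrap at 256, matching the buffer size
DEJITTER_X = 2   # how many frames behind the server reads

class RingBuffer:
    def __init__(self):
        self.slots = [None] * SIZE
        self.client_frame = None  # server's running copy of the client frame ID

    def receive(self, frame_id, payload):
        if self.client_frame is None:
            self.client_frame = frame_id  # first message sets the baseline
        self.slots[frame_id % SIZE] = payload

    def step(self):
        """One server tick: read X frames behind, then advance the counter."""
        if self.client_frame is None:
            return None
        index = (self.client_frame - DEJITTER_X) % SIZE
        self.client_frame += 1
        payload, self.slots[index] = self.slots[index], None
        return payload
```

Note how the first message pins the baseline for the whole session, which is exactly the initialization problem raised in the questions above.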

### #5 Farkon - Members - Reputation: 192

Posted 23 May 2014 - 07:43 AM

If your physics are simple enough, you can store all your states for the amount of lag you want to be able to compensate for, rewind x steps, and then simulate forward again. The rest of the clients have to do this as well. Or you could just let the server correct the client.

If I'm understanding correctly, you're describing the rewinding technique for clients. If so, that's lag compensation, while I'm talking about raw lag that *shouldn't* be there.

What does your fixed timestep look like? Can't you load beforehand, or send a message when you're ready to take on input? Why do you end up sending X events (and who is "you" here)?

Definitely; that's something I imagined as a workaround, though. I'm still a bit confused about the fact that the first message I send defines the lag for the rest of the game. The server uses that ID and increments it at each server step, so if I'm lagging more, the server will receive messages that are too late to process. I guess there is some kind of dynamic mechanism to build relative to the latency, but I'm not sure how.

"You" is any client in this case. The client at t1 receives the map + player-creation event; that loop takes one second; the client sends its first inputs. The server receives those inputs and sets the first frame ID. The client is at t2 and catches up for the laggy previous frame, hence spamming the server buffer with X inputs, which leaves the server with longer buffers than expected. I'm aware that it's a corner case and that I can fix it, but I still don't know how to set the first timestamp from which the server will iterate (if that's even how I should do it).
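One common guard against the catch-up burst described here is to clamp the fixed-timestep accumulator, so a long stall (like a blocking map load) is dropped instead of replayed as a flood of input frames. A sketch, with arbitrary constants:

```python
DT = 1.0 / 60.0         # fixed simulation step
MAX_CATCH_UP_STEPS = 5  # beyond this, drop the stalled time rather than replay it

def fixed_steps(accumulator, frame_time):
    """Returns (steps_to_run, new_accumulator) for one rendered frame."""
    accumulator += frame_time
    steps = int(accumulator // DT)
    if steps > MAX_CATCH_UP_STEPS:
        steps = MAX_CATCH_UP_STEPS
        accumulator = 0.0           # forgive the stall entirely
    else:
        accumulator -= steps * DT
    return steps, accumulator
```

With this, a one-second stall runs at most MAX_CATCH_UP_STEPS simulation steps instead of sixty, so the server is never flooded; the dropped time then has to be reconciled some other way (e.g. the server correcting the client).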

There's not much you can do about this; if the client doesn't meet the minimum required system specs, they simply can't play.

I admit I didn't really visualize the scenario that way. Angus pointed out the same thing. In my mind, the idea was that sometimes people lag so much that I have to force them out of a potential infinite loop, and that those cases will happen no matter what; but I guess I can just disconnect the player (or something in that vein) once the number of loop iterations passes a certain limit. Which I guess is more or less what you were thinking. This makes sense.

Edited by Farkon, 23 May 2014 - 07:45 AM.

### #6 Angus Hollands - Members - Reputation: 765

Posted 23 May 2014 - 07:51 AM

(Quoting Angus's points from post #2 and Farkon's replies from post #4.)

1. Yes, but provided that you are using some real measure of time, your client should simulate n additional frames, and the server will still have moves in the buffer to draw from while it waits to receive the late ones.

2. Assuming that you initially synchronised the clocks, x would represent the length of the buffer in ticks. However, this does require you to synchronise the clocks. Alternatively, you could take the first move ID and use it as an offset, such that:

```cpp
if (!this->calibrated)
{
    this->id_difference = this->current_id - move->id;
    this->calibrated = true;
}

dejittered_index = move->id + this->id_difference + this->offset_ticks;
this->move_array[dejittered_index] = move;
```

(I think, at least.)

What type of game are you developing? It might be that TCP is not the best choice for your inputs. When I attempted to avoid sending previous moves by relying on reliable delivery, moves arrived too late to be simulated at the correct time - which means either lengthening the buffer past 150 ms (already quite a long time) or dropping moves.

If lag happens, it means information is arriving late. You need to ensure that information always arrives early, so that "late" is still in the future, or just in time - which is the whole purpose of the buffer. It doesn't matter whether the delay is caused by the network or by the client itself, provided the delay doesn't exceed the length of the buffer.

However, delays of one second are not arbitrarily small, and should be handled specifically. It doesn't matter if they aren't - the command buffer will just treat it as extreme latency, refill, and continue.

One thing you might be concerned about is pushing the simulation into the past on the server if your move buffer is unbounded; yours is circular, though, so there are no problems there. You will have to drop input while a map is loading, but what, realistically, is the client going to be doing during that time? Players won't care whether their inputs are simulated on the server if they cannot see them affecting their local game state (to them, the notion of networking is non-existent). They will be more concerned that their game froze than that technically 1000 ms was missing from the simulation. As far as that client is concerned, the game paused during that time.

Because of this, I would recommend handling map loading more gracefully - not because it takes a while to load a map, but because it seems illogical to have the client waiting on the server doing nothing while a map is loading. A map-load event is something that should happen for all clients, unless it is a zone-like transition, which should not be handled in a blocking manner if at all possible.

Edited by Angus Hollands, 23 May 2014 - 07:52 AM.

### #7 Farkon - Members - Reputation: 192

Posted 23 May 2014 - 08:26 AM

1. Yes, but provided that you are using some real measure of time, your client should simulate n additional frames, and the server will still have moves in the buffer to draw from while it waits to receive the late ones.

I do, but what if the client lags beyond the number of cached frames?

(Quoting the clock-calibration snippet from post #6.)

How do you get offset_ticks here?

What type of game are you developing? It might be that TCP is not the best choice for your inputs. When I attempted to avoid sending previous moves by relying on reliable delivery, moves arrived too late to be simulated at the correct time - which means either lengthening the buffer past 150 ms (already quite a long time) or dropping moves.

This is action-RPG-ish, and if I had the choice I would go with UDP, but I'm using Flash.

If lag happens, it means information is arriving late. You need to ensure that information always arrives early, so that "late" is still in the future, or just in time - which is the whole purpose of the buffer. It doesn't matter whether the delay is caused by the network or by the client itself, provided the delay doesn't exceed the length of the buffer.

So you set a fixed buffer size, the same for all clients? And no matter the latency, if the client exceeds that size, they're not meeting the requirements to play the game? If so, how big is that buffer usually?

I'm still in the mist about setting the first frame ID (client to server). Let's say I have a one-second lag when setting the ID - no map loading, just bad luck. I'm going to send frame ID 0 at t1 and then frame IDs 1 2 3 4 5 ... all at once at t2. The server will increment ID 0 by one server step at t1+lag, then receive IDs 1 2 3 4 5 while only incrementing ID 0 to ID 1 at t2+lag, when the ID should be much further along. There's something I'm really not getting here :S

Edited by Farkon, 23 May 2014 - 08:38 AM.

### #8 Farkon - Members - Reputation: 192

Posted 23 May 2014 - 10:37 AM

I guess I could calculate the latency (in ticks) server-side and adjust the client frame ID (still server-side) dynamically...

### #9 Mussi - Crossbones+ - Reputation: 2191

Posted 23 May 2014 - 11:18 AM

I'm finding it a bit hard to understand what your problem is, so I'll just describe a typical scenario, and you can tell me which part you're deviating from or having trouble with.

• The client (C) sends a join request to the server (S); S accepts and responds with info about the other players and spawns C into the world. C receives the response and spawns into the world along with the other players.
• C starts moving and sends input along with its frame ID, as you call it (assuming this is a counter for the fixed steps).
• S receives input and stores both its own and C's frame ID; now, for every packet received onward, S can calculate the difference between the steps it took and the steps C took. S buffers the received input so that it can simulate the input X steps in the future. S notifies the other clients.
• If S receives input from C where the difference between the steps they took is larger than X, S corrects C.

That's the gist of it - which part are you having trouble with?
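The step-difference bookkeeping in the last two bullets could be sketched like this (names and the value of X are illustrative): the server remembers the offset between its own step counter and the client's frame ID at first contact, and flags any packet that drifts more than X steps behind.

```python
X = 4  # how many steps into the future the server buffers input

class StepTracker:
    def __init__(self):
        self.offset = None  # server_step - client_frame at first input

    def on_input(self, server_step, client_frame):
        """Returns True if the client has drifted beyond X and needs correction."""
        if self.offset is None:
            self.offset = server_step - client_frame
            return False
        drift = (server_step - client_frame) - self.offset
        return drift > X
```
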

Edited by Mussi, 23 May 2014 - 11:19 AM.

### #10 Angus Hollands - Members - Reputation: 765

Posted 23 May 2014 - 11:47 AM

I do, but what if the client lags beyond the number of cached frames?

If by that you mean it lags for longer than the server can compensate for, then you fail hard: the server waits for moves and, in the meantime, any client moves that are missing are "corrected" by the server state - meaning the player is stuck in place. You could tell the server to defer simulation (in other words, not update the move ID, since it didn't actually simulate), but that would lead to the server holding too many moves and the client remaining "lagged" (they would not feel the lag, but their events would happen later and later). You want to drop moves as the buffer fills up and then simulate as normal.

How do you get offset_ticks here?

offset_ticks is essentially:

```cpp
roundf(buffer_time * TICK_RATE)
```

This is action-RPG-ish, and if I had the choice I would go with UDP, but I'm using Flash.

Hmm, in which case you really have little choice at the moment. I did stumble across http://stackoverflow.com/questions/7728623/getting-started-with-rtmfp but otherwise you'll be sort of stuck, unless you ensure your move buffer is long enough to hide resend latency (it might be best to ask others familiar with TCP whether you need to change your approach).

So you set a fixed buffer size, the same for all clients? And no matter the latency, if the client exceeds that size, they're not meeting the requirements to play the game? If so, how big is that buffer usually?

I don't actively set the buffer size as a requirement. In my FPS, I have found that anything over 150-200 ms is uncomfortable to play with, but with anything less it becomes a waste of time to play. I still question whether that is due to other effects or just network latency. Play-testing will tell.

I'm still in the mist about setting the first frame ID (client to server). Let's say I have a one-second lag when setting the ID - no map loading, just bad luck. I'm going to send frame ID 0 at t1 and then frame IDs 1 2 3 4 5 ... all at once at t2. The server will increment ID 0 by one server step at t1+lag, then receive IDs 1 2 3 4 5 while only incrementing ID 0 to ID 1 at t2+lag, when the ID should be much further along. There's something I'm really not getting here :S

The way I handle it, which is nicer to do in Python, is to have a sorted list (heap). Dropped packets are recovered and inserted at the correct point in the list when recovery occurs (when we ask for the next move and NEXT_MOVE_ID - PREVIOUS_MOVE_ID is greater than one).

I don't maintain a server-side value for the move ID. The server just consumes one move per frame.

Here's my pop method for the jitter buffer:

```python
def pop(self):
    if self._filling:
        return None

    result = self._buffer[0]
    previous_item = self._previous_item
    self._buffer.remove(result)

    # Account for lost items
    can_check_missing_items = previous_item is not None and not self._overflow

    # Perform checks
    if can_check_missing_items and self.check_for_lost_item(result, previous_item):
        missing_items = self.find_lost_items(result, previous_item)

        if missing_items:
            new_result, *remainder = missing_items
            # Add missing items back to the buffer
            for item in remainder:
                self.append(item)
            self.append(result)
            # Take the first recovered item instead
            result = new_result

    # If the buffer is now empty, refill before serving more moves
    if not self._buffer:
        empty_callback = self.on_empty
        if callable(empty_callback):
            empty_callback()

        self._filling = True

    self._previous_item = result
    self._overflow = False

    return result
```


Essentially what happens is: if the buffer overflows (is overfilled), we remove a lot of items, and we will then see a jump in the next ID, which makes it look like a lost ID - hence we have to account for that.

• check_for_lost_item does the ID check I mentioned
• find_lost_items calls into the recovery system and returns the intermediate moves that were lost

Moves are noticed to have been dropped when we see non-adjacent moves:

(e.g. T0, T1, **T2**, [missing moves], T5, where bold indicates the previous move)

We do the following:

1. Get the missing moves.
2. The first missing move is our current move; the rest are added to the move buffer.
3. The move we removed that indicated we lost moves is also a later move, so we add it back; my buffer automatically sorts it to the correct position.
4. Return the aforementioned first recovered move.

You're not doing it like this, so it might not be so useful, but it demonstrates the idea.

If you simply maintain an index into the lookup buffer and increment it every time you request an item (making sure it wraps around), you will be fine. Any items that would overfill the buffer are dropped. You only want to fill the buffer to a fraction of its full capacity, so that if you suddenly receive too many moves they won't be dropped (let's say you fill it 3/4 full). That fraction must correspond to the amount of delay you want to use (my 150-200 ms). Use the stored move's ID for any ack/correction procedures. Just think of the server value as a lookup only.

This way, if a client tries to speed-hack, their extra moves are simply dropped, because the buffer eventually becomes saturated. The server consumes moves at the permitted tick rate, which the client cannot influence.
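As a rough illustration of these last two paragraphs (capacity and names are invented): the server reads exactly one move per tick through its own index, so a flood of moves from a speed-hacking client just saturates the buffer and gets dropped.

```python
CAPACITY = 16  # in practice, sized so ~3/4 full matches the desired delay

class MoveBuffer:
    def __init__(self):
        self.slots = [None] * CAPACITY
        self.read_index = 0
        self.count = 0

    def store(self, move_id, move):
        if self.count >= CAPACITY:
            return False  # saturated: drop the move (speed-hack resistant)
        self.slots[move_id % CAPACITY] = move
        self.count += 1
        return True

    def consume(self):
        """Exactly one move per server tick, no matter how fast moves arrive."""
        move, self.slots[self.read_index] = self.slots[self.read_index], None
        self.read_index = (self.read_index + 1) % CAPACITY
        if move is not None:
            self.count -= 1
        return move
```
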

Edited by Angus Hollands, 23 May 2014 - 11:54 AM.

### #11 Farkon - Members - Reputation: 192

Posted 24 May 2014 - 11:48 AM

Thank you both for taking the time to answer.

(Quoting Mussi's scenario from post #9.)

I think I understand the steps you're showing, and I don't think I have trouble with what I understand from them, BUT I'm still wondering how variation in latency is handled here. I think it all comes down to me not understanding the part of your implementation that I don't have!?

My big misunderstanding boils down to this (warning: what I'm going to say is probably wrong, but that's how I understand it): the server iterates over the client frame buffer starting from the first frame it received from the client, and processes one frame per server loop. So once the first frame is received, that frame defines the latency for the rest of the game.

In my mind, if a client sends the first frame at high latency, then since the server takes that frame as the first one to process (and after that, one per frame), I will always keep that lag even if my latency later drops: all I would do is add frames to the server buffer in a more responsive manner, but the frames are still consumed at 60 fps, so the lag won't be eliminated. That's what I don't understand.

If by that you mean it lags for longer than the server can compensate for, then you fail hard: the server waits for moves and, in the meantime, any client moves that are missing are "corrected" by the server state - meaning the player is stuck in place. You could tell the server to defer simulation (in other words, not update the move ID, since it didn't actually simulate), but that would lead to the server holding too many moves and the client remaining "lagged" (they would not feel the lag, but their events would happen later and later). You want to drop moves as the buffer fills up and then simulate as normal.

That was an extreme case to help me understand what I'm doing wrong here. I agree that it shouldn't happen, and it actually doesn't in my case.

Hmm, in that case you really have little choice at the moment. I did stumble across this: http://stackoverflow.com/questions/7728623/getting-started-with-rtmfp But otherwise you'll be sort of stuck, unless you ensure your move buffer is long enough to hide resend latency (it might be best to ask others more familiar with TCP if you need to change your approach).

Thanks, I wasn't aware of RTMFP! Even if it seems to be aimed at P2P connections, I guess I could make one client a server... that's worth digging into. I'll stick with the flash.net.Socket approach for now, though.

If you simply maintain an index into the lookup buffer and increment it every time you request an item (ensuring it wraps around), you will be fine. Any items that would overfill the buffer are dropped. You only want to fill the buffer to a fraction of its full capacity, so that if you suddenly receive too many moves, they won't be dropped (let's say you fill it 3/4 full). That fraction must correspond to the amount of delay you want to use (150-200ms in my case). Use the stored move's ID for any ack/correction procedures; just think of the server value as a lookup only.

This way, if a client tries to speed-hack, their extra moves are simply dropped, because the buffer eventually becomes saturated. The server consumes moves at the permitted tick rate, which the client cannot influence.
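The fixed-size lookup buffer with a wrapping read index might be sketched like this. The capacity of 16 and the 3/4 target fill are taken from the figures in the post; everything else (names, the assumption that move IDs start at 0) is an illustration:

```python
CAPACITY = 16
TARGET_FILL = (CAPACITY * 3) // 4  # only ever fill 3/4 of the buffer

class RingMoveBuffer:
    def __init__(self):
        self.slots = [None] * CAPACITY
        self.read_index = 0  # incremented once per requested item, wraps around
        self.count = 0

    def store(self, move_id, move):
        if self.count >= TARGET_FILL:
            return False  # overfilling moves are dropped (speed-hack protection)
        self.slots[move_id % CAPACITY] = (move_id, move)
        self.count += 1
        return True

    def next_move(self):
        # Called once per permitted server tick; the index is a lookup only
        entry = self.slots[self.read_index]
        self.slots[self.read_index] = None
        self.read_index = (self.read_index + 1) % CAPACITY
        if entry is not None:
            self.count -= 1
        return entry  # (move_id, move) or None; use move_id for acks/corrections
```

The slack between `TARGET_FILL` and `CAPACITY` is what absorbs a sudden burst of moves without dropping them.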

Thanks for that precise explanation, that's how I understand it as well. But I'm still confused about my lag-variance issue (see the first answer in this post).

### #12 Angus Hollands  Members - Reputation: 765

Posted 24 May 2014 - 12:52 PM

Essentially the dejittering time is approximately a fixed value. Whenever you "run out of moves" (e.g. when you first start), you want to accumulate at least n ms of moves, which might be 6 ticks (an arbitrary figure). The trick is that if the connection is bursty, you might see the buffer go 2, 3, 4, 8 as four moves suddenly arrive, so you need to tolerate a certain amount of overshoot.

Your buffer's minimum length is the minimum dejitter time; this should be as low as possible whilst still protecting the client against connection jitter. The upper bound is the same idea: the maximum jitter you can tolerate. The client *might* find itself delayed (on the server) by more than it needs to be, but that's just a consequence of the uncertainty of connection latency. As long as the dejitter window is narrow enough, that is fine.

The window width should also respect the client's transmission rate. For example, if you are batching inputs and send 3 inputs per network tick, your window needs to be at least 3, but it's safer to make it at least double that (so that if you receive two packets in one go, you don't drop them).
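The accumulate-then-play-out behaviour described above can be sketched as follows. The figures (6 ticks minimum, 12 maximum) and the class itself are illustrative assumptions, not prescribed values:

```python
MIN_TICKS = 6   # minimum dejitter depth: accumulate this much before playout
MAX_TICKS = 12  # maximum jitter tolerated; moves beyond this are dropped

class DejitterBuffer:
    def __init__(self):
        self.moves = []
        self.playing = False

    def receive(self, move):
        # Tolerate bursts up to the cap; anything beyond it is dropped
        if len(self.moves) < MAX_TICKS:
            self.moves.append(move)

    def tick(self):
        if not self.playing:
            if len(self.moves) >= MIN_TICKS:
                self.playing = True  # enough moves accumulated: start playout
            else:
                return None          # still accumulating
        if not self.moves:
            self.playing = False     # ran out of moves: re-accumulate
            return None
        return self.moves.pop(0)
```

Note that `MIN_TICKS` is exactly the artificial lag being paid for jitter protection: a move received during accumulation waits up to `MIN_TICKS` ticks before it is simulated.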

The jitter buffer is artificial lag; that's just a consequence of its design. The purpose is finding a trade-off between too much artificial lag and too little connection stability.

### #13 Mussi  Crossbones+ - Reputation: 2191

Posted 24 May 2014 - 02:31 PM

> My big misunderstanding comes down to this (warning: what I'm going to say is probably wrong, but that's how I understand it): the server iterates over the client frame buffer starting from the first frame it received from the client, and processes one frame per server loop. So once the first frame is received, that frame defines the latency for the rest of the game.
>
> In my mind, if a client sends its first frame at high latency, the server takes that frame as the first one to process (and then one per tick), so I will always keep that lag even if my latency later drops. All I would be doing is adding frames to the server buffer more responsively, but the frames are still consumed at 60fps, so the lag is never eliminated. That's what I don't understand.

I understand what you're getting at now. You can't let the server consume more ticks to catch up, because that would allow speedhacks. What you can do is perform the initial syncing over a period of, let's say, 3 seconds instead of basing it on a single packet; that way you can smooth out the lag spikes. The client could still have high latency for the first 3 seconds and very low latency after that, but it's less likely.
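One hypothetical way to implement that windowed sync: collect offset samples for the first few seconds and keep the smallest one, on the assumption that the smallest offset came from the fastest-delivered packet. The class and all names are illustrative, not from the thread:

```python
SYNC_WINDOW = 3.0  # seconds of samples before the offset is locked in

class OffsetEstimator:
    def __init__(self, start_time):
        self.start_time = start_time
        self.best_offset = None
        self.locked = False

    def sample(self, now, server_tick, client_frame):
        if self.locked:
            return self.best_offset
        offset = server_tick - client_frame
        # The smallest offset corresponds to the least-delayed packet seen so far
        if self.best_offset is None or offset < self.best_offset:
            self.best_offset = offset
        if now - self.start_time >= SYNC_WINDOW:
            self.locked = True  # stop adapting: adapting forever reopens speedhacks
        return self.best_offset
```

After the window closes the offset is frozen, so a client can no longer shrink it by manipulating send times.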

### #14 Angus Hollands  Members - Reputation: 765

Posted 24 May 2014 - 06:15 PM

As long as the buffer has reasonable bounds, dropping any extra inputs that exceed the maximum buffer length will prevent the client from feeling lagged; instead, they will notice server correction. This case will only occur if you send an excessive number of inputs, so you have to tune your buffer's upper bound with the trade-off between command latency and connection quality in mind.

Best to ensure that you start sending inputs at a consistent rate, though; don't try to account for the time spent loading the map. That time is "dead time", meaning that user input wasn't useful or valid during it.
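One way to treat load time as dead time is to clamp the fixed-timestep accumulator, so a single very long frame (such as processing the map) never produces a burst of catch-up ticks and their inputs. This is a generic fixed-timestep sketch, not code from the thread; the cap of 0.25s is an arbitrary assumption:

```python
TIMESTEP = 1.0 / 60.0   # fixed simulation step (60 Hz)
MAX_ACCUMULATED = 0.25  # never try to catch up more than ~15 ticks

def ticks_to_run(frame_time, accumulator):
    """Returns how many fixed ticks to simulate this frame, and the
    leftover accumulator. Time beyond the cap is simply discarded."""
    accumulator = min(accumulator + frame_time, MAX_ACCUMULATED)
    ticks = 0
    while accumulator >= TIMESTEP:
        accumulator -= TIMESTEP
        ticks += 1
    return ticks, accumulator
```

With this, a 1-second load stall yields at most ~15 ticks of catch-up instead of 60, so the client never floods the server with events the server isn't expecting.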

Edited by Angus Hollands, 30 May 2014 - 05:55 AM.

### #15 Farkon  Members - Reputation: 192

Posted 25 May 2014 - 02:12 PM