Timestamping & processing inputs


Thank you both for taking the time to answer.

I'm finding it a bit hard to understand what your problem is, so I'll just describe a typical scenario and you can tell me what part you're deviating from or having trouble with.

  • The client (C) sends a join request to the server (S); S accepts and responds with info about the other players and spawns C into the world. C receives the response and spawns into the world along with the other players.
  • C starts moving and sends input along with its frame ID, as you call it (assuming this is a counter for the fixed steps).
  • S receives the input and stores its own and C's frame IDs; for every packet received onward, S can calculate the difference between the steps it has taken and the steps C has taken. S buffers the received input so that it can simulate it X steps in the future (a rough sketch of this bookkeeping follows the list). S notifies the other clients.
  • If S receives input from C where the difference between the steps they took is larger than X, S corrects C.
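
Here is a minimal sketch of that bookkeeping on the server, assuming a fixed-step loop; every name in it (InputMsg, ClientSlot, DEJITTER_STEPS) is invented for illustration, not taken from a real engine:

    // Per-client bookkeeping for the scenario above.
    interface InputMsg {
        clientFrameId: number;   // C's fixed-step counter
        buttons: number;         // whatever the input payload is
    }

    const DEJITTER_STEPS = 6;    // "X": how many steps in the future S simulates

    class ClientSlot {
        private frameOffset: number | null = null;  // serverFrame - clientFrameId at the first packet
        readonly pending: InputMsg[] = [];          // buffered, not yet simulated

        receive(msg: InputMsg, serverFrame: number): void {
            if (this.frameOffset === null) {
                // First packet: remember how far apart the two counters are.
                this.frameOffset = serverFrame - msg.clientFrameId;
            }
            // How many extra steps C has fallen behind since that first packet.
            const drift = serverFrame - msg.clientFrameId - this.frameOffset;
            if (drift > DEJITTER_STEPS) {
                // Difference larger than X: correct the client instead of buffering (not shown).
                return;
            }
            this.pending.push(msg);  // consumed DEJITTER_STEPS later by the tick loop
        }
    }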

That's the gist of it; what part are you having trouble with?

I think I understand the steps you are showing, and I don't think I'm having trouble with the parts I do understand, BUT I'm still wondering how variation in latency is handled here. I think it still all comes down to me not understanding the part of your implementation that I'm missing!?

My big misunderstanding comes down to this (warning: what I'm going to say is probably wrong, but that's how I get it): the server iterates over the client frame buffer starting from the first frame it has received from the client, and processes one frame per server loop. So once the first frame is received, that frame defines the latency for the rest of the game.

In my mind, if a client sends its first frame at high latency, the server will take that frame as the first one to process (and after that, one per frame), so I will always keep that lag even if my latency is later reduced: all I'd be doing is adding frames to the server buffer more responsively, but the frames are still consumed at 60 fps, so the lag won't be eliminated. That's what I don't understand.

If by that you mean it lags for longer than the server can compensate for, then you fail hard: the server waits for moves, and in the meantime any client moves that are missing are "corrected" by the server state, meaning the client is stuck in place. You could tell the server to defer simulation (in other words, not update the move ID since it didn't actually simulate), but that would lead to the server accumulating too many moves and the client remaining "lagged" (they would not feel the lag, but their events would happen later and later). You want to drop moves as the buffer fills up and then simulate as normal.
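
A rough sketch of that drop-as-the-buffer-fills policy; the cap and the types are invented for illustration:

    type Move = { clientFrameId: number; buttons: number };
    type PlayerState = { x: number; y: number };

    const MAX_BUFFERED = 12;                  // arbitrary cap on queued moves

    function enqueueMove(buffer: Move[], move: Move): void {
        buffer.push(move);
        // Dropping keeps the client's effective delay bounded; deferring
        // simulation instead would let the backlog (and their lag) grow forever.
        while (buffer.length > MAX_BUFFERED) {
            buffer.shift();                   // discard the oldest move
        }
    }

    function serverTick(buffer: Move[], state: PlayerState): PlayerState {
        const move = buffer.shift();
        if (!move) {
            // No move arrived in time: keep the last state; the client is
            // "corrected" (stuck in place) until its moves show up again.
            return state;
        }
        // Stand-in for the real fixed-step integration.
        return { x: state.x + (move.buttons & 1 ? 1 : 0), y: state.y };
    }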

That was an extreme case to help me understand what I am doing wrong here. I agree that it shouldn't happen, and it actually doesn't in my case.

Hmm, in that case you really have little choice at the moment. I did stumble across this: http://stackoverflow.com/questions/7728623/getting-started-with-rtmfp. But otherwise you'll be sort of stuck, unless you ensure your move buffer is long enough to hide resend latency (it might be best to ask others more familiar with TCP if you need to change your approach).

Thanks, I wasn't aware of RTMFP! Even if it seems to be for P2P connections, I guess I could make one client a server... that's worth digging into. I'll still stick with the flash.net.Socket thingy for now, though.

If you simply maintain an index into the lookup buffer and increment it every time you request an item (making sure it wraps around), you will be fine. Any items that would overfill the buffer are dropped. You only want to fill the buffer to a fraction of its full capacity, so that if you suddenly receive too many moves they won't be dropped (let's say you fill it to 3/4). That fraction must correspond to the amount of delay you want to use (my 150-200 ms). Use the stored move's ID for any ack/correction procedures, and just think of the server value as a lookup only.

This way, if a client tries to speed-hack, their extra moves simply get dropped because the buffer eventually becomes saturated. The server consumes moves at the permitted tick rate, which the client cannot influence.
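
As a sketch only (the capacity just reflects the 150-200 ms / 60 Hz numbers above, and all names are invented), the lookup could look like this:

    type Move = { id: number; buttons: number };

    class MoveRing {
        private slots: (Move | null)[];
        private readIndex = 0;                // incremented on every lookup, wraps around
        private count = 0;

        constructor(private capacity: number) {
            this.slots = new Array(capacity).fill(null);
        }

        // Called when a packet arrives; anything that would overfill is dropped,
        // which is what defeats a speed-hacking client once the buffer saturates.
        push(move: Move): boolean {
            if (this.count >= this.capacity) return false;
            this.slots[(this.readIndex + this.count) % this.capacity] = move;
            this.count++;
            return true;
        }

        // Called exactly once per server tick, so the consume rate is fixed
        // and the client cannot influence it.
        next(): Move | null {
            if (this.count === 0) return null;
            const move = this.slots[this.readIndex];
            this.readIndex = (this.readIndex + 1) % this.capacity;
            this.count--;
            return move;
        }
    }

    // 150-200 ms at 60 Hz is roughly 9-12 moves, so a capacity of 16 keeps the
    // normal fill level at about 3/4 with headroom for bursts.
    const ring = new MoveRing(16);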

Thanks for that precise explanation, that's how I understand it as well. But I'm still confused about my lag-variance issue (see the first answer in this post).


Essentially the dejittering time is approximately a fixed value. Whenever you "run out of moves" (e.g. when you first start), you want to accumulate at least n ms of moves, which might be 6 ticks (an arbitrary figure). The trick is that if the connection is bursty, the buffer might sit at 2, 3, 4 and then jump to 8 as 4 moves suddenly arrive, so you need to tolerate a certain amount of overshoot. Your minimum buffer length is the minimum dejitter time; this should be as low as possible while still protecting the client against connection jitter. The upper bound is likewise the maximum jitter you can tolerate. The client *might* find itself delayed (on the server) by more than it needs to be, but that's just a consequence of the uncertainty of connection latency; as long as the dejitter window is narrow enough, that's fine. The window width should also respect the client's transmission rate: for example, if you are batching inputs and send 3 inputs per network tick, your window needs to be at least 3, but it's safer to make it at least double that (if you receive two packets in one go, you don't want to drop them).
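
For what it's worth, here is one way the accumulate-then-drain behaviour could look, using the arbitrary figures from above (6 ticks minimum, inputs batched 3 per packet); treat it as a sketch, not a recipe:

    type Move = { id: number; buttons: number };

    const MIN_DEJITTER_MOVES = 6;          // "accumulate at least n ms of moves"
    const INPUTS_PER_PACKET = 3;           // if you batch 3 inputs per network tick...
    const MAX_DEJITTER_MOVES =             // ...leave room for a double delivery on top of the minimum
        MIN_DEJITTER_MOVES + INPUTS_PER_PACKET * 2;

    let refilling = true;                  // true while rebuilding the cushion

    function consume(buffer: Move[]): Move | null {
        if (refilling && buffer.length < MIN_DEJITTER_MOVES) {
            return null;                   // keep accumulating, don't simulate yet
        }
        refilling = false;
        const move = buffer.shift() ?? null;
        if (move === null) refilling = true;   // ran dry: start accumulating again
        return move;
    }

    function receive(buffer: Move[], incoming: Move[]): void {
        for (const move of incoming) {
            if (buffer.length >= MAX_DEJITTER_MOVES) break;  // beyond the tolerated overshoot
            buffer.push(move);
        }
    }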

The jitter buffer is artificial lag; that's just a consequence of its design. The purpose is finding a trade-off between too much artificial lag and too little connection stability.


My big misunderstanding comes down to this (warning: what I'm going to say is probably wrong, but that's how I get it): the server iterates over the client frame buffer starting from the first frame it has received from the client, and processes one frame per server loop. So once the first frame is received, that frame defines the latency for the rest of the game.

In my mind, if a client sends its first frame at high latency, the server will take that frame as the first one to process (and after that, one per frame), so I will always keep that lag even if my latency is later reduced: all I'd be doing is adding frames to the server buffer more responsively, but the frames are still consumed at 60 fps, so the lag won't be eliminated. That's what I don't understand.

I understand what you're getting at now. You can't let the server consume more ticks to catch up, because that would allow speed-hacks. What you can do is perform the initial syncing over a period of, let's say, 3 seconds instead of basing it on a single packet; that way you can smooth out the lag spikes. The client could still have high latency during the first 3 seconds and very low latency after that, but it's less likely.
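
One possible (purely illustrative) way to do that: collect the frame offset from every packet during the first ~3 seconds and only then lock it in, rather than trusting the very first packet:

    const SYNC_WINDOW_MS = 3000;               // the "3 seconds" above
    const offsetSamples: number[] = [];
    let lockedOffset: number | null = null;
    let syncStartMs: number | null = null;

    function observeOffset(serverFrame: number, clientFrameId: number, nowMs: number): void {
        if (lockedOffset !== null) return;     // already synced
        if (syncStartMs === null) syncStartMs = nowMs;
        offsetSamples.push(serverFrame - clientFrameId);
        if (nowMs - syncStartMs >= SYNC_WINDOW_MS) {
            // Keying off the smallest offset means the fastest packet wins,
            // which smooths out a slow first packet; a median or an average
            // would also be reasonable.
            lockedOffset = Math.min(...offsetSamples);
        }
    }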

As long as the buffer has reasonable bounds, dropping any extra inputs that exceed the maximum buffer length will prevent the client from feeling lagged; instead they will notice server corrections. This case will only occur if you send an excessive number of inputs, so you have to tune the upper bound of your buffer with the tradeoff between command latency and connection quality in mind.
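
In back-of-the-envelope terms (the figures below are assumptions, not recommendations), the upper bound is just the worst jitter you choose to absorb, expressed in ticks:

    const TICK_MS = 1000 / 60;                 // fixed-step length
    const MAX_JITTER_MS = 200;                 // worst connection hiccup you choose to absorb

    // Every extra slot the buffer is allowed to hold adds one tick of command
    // latency, so this one constant *is* the latency/quality trade-off.
    const MAX_BUFFER_LEN = Math.ceil(MAX_JITTER_MS / TICK_MS);   // = 12 moves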

Best make sure that you start sending inputs at a consistent rate, though; don't try to account for the time spent loading the map - that time is "dead time", meaning user input wasn't useful or valid during it.

Thanks, your last two answers are what I needed!

