
Slow response in State interpolation when time gap is big between client and server


Recommended Posts

I implemented my first ever state interpolation in my game demo a few days ago. If the client time and the server time differ by less than 10ms, the player interpolates very smoothly. However, if the difference is bigger, such as 40ms or 90ms, the player suddenly stops and then, after a while, just flicks to the target position as if there were no interpolation at all.

My game demo is very simple: it only shows players moving around on the map. My current interpolation function looks like this (written in JavaScript):

interpolateState(previousState, currentState, renderTime) {
  if (renderTime >= previousState.timestamp && renderTime <= currentState.timestamp) {
      const total = currentState.timestamp - previousState.timestamp;
      const portion = renderTime - previousState.timestamp;
      const ratio = portion / total;
      // Interpolate every player between the two states
      for (const player of this.players) {
          const prev = previousState.players[player.id];
          const curr = currentState.players[player.id];
          player.x = lerp(prev.x, curr.x, ratio);
          player.y = lerp(prev.y, curr.y, ratio);
          player.rotation = lerp(prev.rotation, curr.rotation, ratio);
      }
  } else {
      this.sync(currentState);
  }
}

sync(state) {
    for (const player of this.players) {
        const playerState = findPlayerState(state, player.id);
        player.x = playerState.x;
        player.y = playerState.y;
    }
}

Here the server runs at 60fps and sends an update every 50ms, and renderTime is 200ms before the server time, which I get by syncing the clock with the server on the client side.

On the client I put the states I receive from the server into a queue with 3 slots for temporarily storing states, so that on every network update the new state is pushed into the queue and an existing state is popped out for use. The queue only pops a state when it's full, and to interpolate I need two states; that's why I chose 200ms before the present time. This simple and naive approach seems to work, but I think there is a lot to improve to make my game demo playable in a real-world situation. Is there anything I can work on to fix the lag caused by the server/client time difference?
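For concreteness, the 3-slot queue described above might be sketched like this (the name `StateQueue` and the exact shape are invented for illustration):

```javascript
// Sketch of the described buffer: states go in on every network update,
// but one is only released for use once the queue is full.
class StateQueue {
  constructor(capacity = 3) {
    this.capacity = capacity;
    this.states = [];
  }
  // Push a new state; returns the oldest state once the queue is full,
  // otherwise null (not enough buffered data yet).
  push(state) {
    this.states.push(state);
    if (this.states.length > this.capacity) {
      return this.states.shift();
    }
    return null;
  }
}
```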

Edited by caymanbruce


Have you tried the things I mentioned in the other thread? I'll reword them to make it explicit:

1) You must not simply overwrite currentState and previousState whenever a new state comes in. Doing so will cause jerkiness. The states you want to interpolate between depend entirely on the render time, not on which states have arrived recently. You need to keep several states to make sure you have enough to work with.

You receive State x=1 at Time 500, then State x=2 at Time 1000. If render time is currently 750, then you treat x as 1.5. That's all good. Then you receive a new state, x=3 at Time 1500. But render time is still something like 900 (for example), and it's outside your 2 latest state timestamps, so you immediately snap up to x=3 from the previous value of 1.5 or whatever. That's obviously wrong. Keep all your past states until the render time has become large enough that you know you'll never need them.

If you keep all past states for as long as they're needed, the only time interpolateState won't have valid data to work with is (a) right at the start, when 0 or 1 states have been received, or (b) if no state has been received for at least 200ms. In the first situation, you can skip rendering. In the second situation, extrapolating is probably fine (i.e. lerp, with a ratio > 1) until a new state comes in and you can snap the character back into place. This will be a rare occurrence if everything else works.
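Point 1 could be sketched in JavaScript like this (my own illustration, not from the thread; `states` is assumed to be an array of snapshots sorted by `timestamp`, and `selectSnapshots` is an invented name):

```javascript
// Pick the pair of snapshots that bracket renderTime, discarding snapshots
// that can never be needed again (assumes renderTime >= states[0].timestamp
// once the buffer has warmed up).
function selectSnapshots(states, renderTime) {
  // Drop old states while the second one is still at or before renderTime.
  while (states.length > 2 && states[1].timestamp <= renderTime) {
    states.shift();
  }
  if (states.length < 2) return null; // case (a): 0 or 1 states so far - skip rendering
  const [prev, next] = states;
  if (renderTime > next.timestamp) {
    // case (b): no state for a while - caller can lerp with ratio > 1
    return { prev, next, extrapolating: true };
  }
  return { prev, next, extrapolating: false };
}
```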

2) You mustn't just change renderTime or serverTime immediately based on data from the server. If you do, then whenever there is a slow network packet, your rendering will not be smooth, because time is no longer smooth. The time-sync algorithm you're using is not designed to give you smooth changes. To begin with, you should consider using it once at the start of play to synchronise clocks, and from that point onwards use the local client time. This should work well enough for almost all situations. In future, you might consider performing clock adjustments during play, but smoothing those changes out over time.
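Smoothing a clock adjustment rather than snapping could look something like this (a sketch; the function name and the slew rate are arbitrary assumptions):

```javascript
// Slew the local clock offset towards a newly measured offset instead of
// snapping to it, so render time stays smooth. Offsets are in milliseconds;
// maxSlewPerSecond caps how fast the clock is allowed to drift per second.
function slewOffset(currentOffset, measuredOffset, dtSeconds, maxSlewPerSecond = 5) {
  const maxStep = maxSlewPerSecond * dtSeconds;
  const error = measuredOffset - currentOffset;
  // Move at most maxStep milliseconds towards the measurement this update.
  return currentOffset + Math.max(-maxStep, Math.min(maxStep, error));
}
```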

I'll add the following: you don't need 200ms of delay. Try 100ms, if lag is an issue.

Edited by Kylotan


Typically, when you receive a new state from the server, you want to capture the current displayed position as the "checkpoint in time" (your "previous state") and then fill in the server state as the "future state to interpolate towards."

Typically, you'll set the server state to be the target at time T + 1 frame, where T is the time between server packets, giving you one extra frame of latency to allow for server jitter and such.

Very simplified:

State prevState;
State nextState;
State toRender;
onRender(Time t) {
  if (t < prevState.time) {
    toRender = prevState;
    warn("time too early");
  } else if (t > nextState.time) {
    toRender = nextState;
    warn("time too late");
  } else {
    toRender = lerp(prevState, nextState, (t - prevState.time)/(nextState.time - prevState.time));
  }
  renderState(toRender);
}

onReceive(State s) {
  if (s.time <= nextState.time) {
    warn("received bad time");
  } else {
    prevState = toRender;    // <--- IMPORTANT!
    nextState = s;
  }
}
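A runnable JavaScript rendition of the pseudocode above might look like this (the state shape `{ time, x, y }` and the factory function are my assumptions, not part of the original post):

```javascript
function lerp(a, b, t) { return a + (b - a) * t; }

function lerpState(a, b, t) {
  return { time: lerp(a.time, b.time, t), x: lerp(a.x, b.x, t), y: lerp(a.y, b.y, t) };
}

function makeInterpolator(prevState, nextState) {
  let toRender = prevState;
  return {
    onRender(t) {
      if (t < prevState.time) {
        toRender = prevState;            // time too early: hold the old state
      } else if (t > nextState.time) {
        toRender = nextState;            // time too late: hold the newest state
      } else {
        const ratio = (t - prevState.time) / (nextState.time - prevState.time);
        toRender = lerpState(prevState, nextState, ratio);
      }
      return toRender;
    },
    onReceive(s) {
      if (s.time > nextState.time) {     // otherwise: received bad time, ignore
        prevState = toRender;            // <--- capture what is on screen now
        nextState = s;
      }
    },
  };
}
```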


One key difference to note between the two answers above is that hplus0603 is advocating a 1-frame buffer - i.e. 50ms - rather than the original 200ms, which basically mandates holding on to a queue of the last 4 or 5 states.


Thanks, so why do you suggest 100ms and not 50ms or 200ms?

I am still digesting what you said in the previous post. The first problem I have is syncing the time. Now I try to sync the time only once at the beginning of the game instead of continuously syncing it. Say I sync the time 5 times at an interval of 3 seconds and take the mean offset. That's a total of 15 seconds before entering my game, which doesn't seem to be what other io games do. I wonder if there is a better way of syncing the time between the server and clients.


I suggested 100ms just as something that is less than 200ms. You could try lower than that. Ideally your time buffer is only as large as it needs to be to ensure that you always have some server data to render. In many games you don't really need to know or care what the server time is; when you get a new state, assume that is valid for exactly one frame from now and interpolate accordingly.

Even if you want to do server time sync, I don't see why you need to do it 5 times over 3 seconds. Once should suffice and it shouldn't even take a second.

I suspect trying to synchronise clocks and implement smooth rendering between past snapshots is overkill for the sort of game you're making. It's common for FPS games. You started out with the Valve and Quake 3 docs, but you're (apparently) making some sort of Agar.io-type game, which clearly doesn't need that sort of approach. I would be willing to bet that typical games like that are just broadcasting out updates and interpolating towards the latest state, which is exactly what hplus0603 has recommended above. (So, ignore what I said about multiple snapshots; just ignore the server time and treat each update received as being due at currentClientTime + 1 frame.)
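That "interpolate towards the latest state, due one frame from now" idea could be sketched like this (`frameMs`, the function names, and the entity fields are my assumptions):

```javascript
// When a state arrives, aim to reach it one frame from now, blending from
// whatever is currently displayed. frameMs ~ the server's send interval.
function onStateReceived(entity, serverState, now, frameMs = 50) {
  entity.from = { x: entity.x, y: entity.y };
  entity.to = { x: serverState.x, y: serverState.y };
  entity.startTime = now;
  entity.endTime = now + frameMs;
}

// Called every render frame: move towards the target, clamping at the end.
function updateEntity(entity, now) {
  const span = entity.endTime - entity.startTime;
  const ratio = Math.min(1, (now - entity.startTime) / span);
  entity.x = entity.from.x + (entity.to.x - entity.from.x) * ratio;
  entity.y = entity.from.y + (entity.to.y - entity.from.y) * ratio;
}
```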


Oh, I am going back to the beginning now. But what makes the difference between these two approaches? Wouldn't using the server time be more accurate for positioning and collision detection? Even that io game has very bad latency sometimes.

I do the syncing 5 times over 3 seconds because in this thread:

https://www.gamedev.net/topic/376680-time-synchronization-in-multiplayer/

someone said he does 10 syncs over an interval of several seconds. I can't tell how many clock syncs is best, because it seems everyone here is more experienced than me.

Edited by caymanbruce


The main difference between the 2 approaches is that the FPS approach is much more complex because it's trying much harder to keep everybody's perceived states equal.

In theory, respecting the server time-stamp on received state changes could make some movements appear smoother. Whether it's more accurate or not is a matter of opinion; the rendered value might more accurately represent where the object was at whatever the current 'rendertime' is, but the tradeoff is that you're rendering further in the past so it is a worse representation of where the object is now.

Compare that to the simpler approach where you start blending towards the new state immediately. The rendered positions may not be guaranteed to be exactly where the object was at some given point in the past, but they are moving towards the newly reported position right away. Rendered latency is actually lower in this situation because instead of waiting for a state to be 200ms or 100ms old (for example), you start factoring it in immediately.

Collision detection shouldn't matter because those decisions are handled by the server anyway (or at least they should be).

Regarding the time syncing: the anonymous poster in the other thread said "10 or so of these packets during your login process, over an interval of several seconds", by which he/she means all those packets are exchanged during those seconds - not one each time the period elapses. Note however that hplus0603 suggested doing timesyncs by measuring the round-trip time of regular game messages, which is a better approach if you need to keep clocks synced over time, but again, you can't just snap to the new value if you want smooth rendering.
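The round-trip measurement referred to here is the usual NTP-style estimate; a sketch (assuming symmetric latency, with invented names):

```javascript
// Estimate the server clock offset from one request/response pair.
// sendTime and recvTime are read from the client clock; serverTime is the
// timestamp the server put in its reply.
function estimateOffset(sendTime, serverTime, recvTime) {
  const rtt = recvTime - sendTime;
  // Assume the server stamped the reply halfway through the round trip.
  return serverTime - (sendTime + rtt / 2);
}
```

In practice you would feed each new estimate into a smoothing step rather than adopting it directly, for the reasons given above.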


Wouldn't using the server time be more accurate for positioning and collision detection?


If you actually had instant server time on the client, then yes.
But you don't.


Thanks so much. Since I have already implemented some kind of time syncing, I decided to use a combination of these two approaches, or something in the middle. I follow the code structure of hplus0603's post above and still use a queue to store the incoming states from the server. The game render time is clientLocalTime + offset - 200. The offset comes from syncing the time between the server and client 5 times in 5 seconds. Now the movement of players is indeed very smooth with interpolation. However, the positions are far from accurate. On Player A's screen A may be above B, but on Player B's screen A is underneath B. Also, if I change 200 to 100, the render time won't fall in the range of the two states popped from the queue. This is so strange. I think I still have many bugs to fix, but you guys have really given me a lot of confidence.

Edited by caymanbruce
