Slow response in state interpolation when the time gap between client and server is large

22 comments, last by Kylotan 7 years, 1 month ago

On Player A's screen maybe A is above B, but on Player B's screen A is underneath B. - This is to be expected. If Player A and Player B start moving at the same time, they will see themselves move before news of the other player's motion reaches them. What each player sees is accurate, for them. If you can reduce latency, this effect will decrease.

Also if I change 200 to 100 the render time won't fall in the range of the two states popped from the queue - Then don't pop them from the queue. Look in the queue for the 2 relevant states either side of the time you want to render for and use them.
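For illustration, here is a minimal sketch of that lookup (the StateSnapshot shape and function names are assumptions, not from the original code):

```typescript
// A snapshot of one entity's state at a given server timestamp (ms).
interface StateSnapshot {
  time: number; // server time this state refers to
  x: number;
  y: number;
}

// Linear interpolation between two numbers.
function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}

// Find the two buffered snapshots that bracket renderTime and blend between
// them. Returns null if the buffer doesn't span renderTime yet.
function sampleAt(buffer: StateSnapshot[], renderTime: number): { x: number; y: number } | null {
  for (let i = 0; i + 1 < buffer.length; i++) {
    const from = buffer[i];
    const to = buffer[i + 1];
    if (from.time <= renderTime && renderTime <= to.time) {
      const t = (renderTime - from.time) / (to.time - from.time);
      return { x: lerp(from.x, to.x, t), y: lerp(from.y, to.y, t) };
    }
  }
  return null; // renderTime falls outside the buffered range
}
```

Snapshots older than the render time can then be discarded lazily, instead of popping states on a fixed schedule.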

The hard-coded "200" value probably needs to be actually determined by measuring round-trip time to the server.
Or start with 0, and when you find that you don't have the data yet, increase the value to the point where you do.
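A rough sketch of that second approach (the step size, cap, and function names below are assumptions): start with zero delay and grow it whenever the buffered data doesn't reach the time you want to render.

```typescript
// Adaptive interpolation delay: start at 0 and grow it whenever the buffered
// data doesn't yet reach the time we want to render at.
let interpolationDelayMs = 0;
const DELAY_STEP_MS = 10;  // how much to grow the delay when we come up short
const MAX_DELAY_MS = 500;  // safety cap

function chooseRenderTime(estimatedServerTimeMs: number, newestBufferedMs: number): number {
  let renderTime = estimatedServerTimeMs - interpolationDelayMs;
  // If we would need data newer than anything received so far, render
  // further in the past from now on.
  if (renderTime > newestBufferedMs) {
    interpolationDelayMs = Math.min(interpolationDelayMs + DELAY_STEP_MS, MAX_DELAY_MS);
    renderTime = estimatedServerTimeMs - interpolationDelayMs;
  }
  return renderTime;
}
```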

If you want players to seem "even" then you need to extrapolate. Extrapolation is like interpolation, except you go some time further out past the "end" sample.
A long time ago, I wrote a simple library called EPIC for doing such extrapolation, and a simple demo app (in C++, for Windows.)
You can check it out at http://www.mindcontrol.org/~hplus/epic/
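As a concrete (if simplistic) illustration of extrapolation, not of EPIC itself: project the newest sample forward along its last known velocity.

```typescript
// One received sample: position plus velocity at a server timestamp (ms).
interface Sample {
  time: number;
  x: number; y: number;
  vx: number; vy: number; // units per millisecond
}

// Project the newest sample forward to targetTime. Unlike interpolation,
// targetTime is allowed to lie past the "end" sample.
function extrapolate(latest: Sample, targetTime: number): { x: number; y: number } {
  const dt = targetTime - latest.time; // how far past the last sample we are
  return {
    x: latest.x + latest.vx * dt,
    y: latest.y + latest.vy * dt,
  };
}
```

The cost is that the guess can be wrong when the remote player changes direction, so the rendered position usually has to be blended back toward the next real sample when it arrives.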
enum Bool { True, False, FileNotFound };


This was tested on the same network in my room, with two computers. If that latency makes a big difference in player positions, I guess it will only get worse over the Internet.

The hard-coded "200" value probably needs to be actually determined by measuring round-trip time to the server.
Or start with 0, and when you find that you don't have the data yet, increase the value to the point where you do.

If you want players to seem "even" then you need to extrapolate. Extrapolation is like interpolation, except you go some time further out past the "end" sample.
A long time ago, I wrote a simple library called EPIC for doing such extrapolation, and a simple demo app (in C++, for Windows.)
You can check it out at http://www.mindcontrol.org/~hplus/epic/

Sorry, I am confused by the highlighted sentence in your post. I thought the hard-coded value was decided by how far behind the server time I am.

I chose 200ms because my queue is implemented like this:

  • It has 3 elements/states.
  • It only pops a state out of the queue when it's full.
  • The network update rate is 20 times per second, which gives a 50ms interval.
  • When I pop the first state out of the queue I don't interpolate, because there is no previous state to interpolate from. I start interpolating when another state reaches the client and one more state is popped out as the current state.

Now, from the time the queue starts filling with states to the time I pop out the first state to be used for interpolation, 200ms has passed. But I still want to start moving the player from the first state I received from the server, so I subtract 200ms from clientLocalTime + offset (see the sketch below).
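In other words, roughly this (a sketch, with names assumed from the description above):

```typescript
// Assumed numbers, taken from the description above.
const QUEUE_SIZE = 3;          // states buffered before the first pop
const UPDATE_INTERVAL_MS = 50; // 20 network updates per second

// Filling the queue and popping the first two states takes roughly
// (QUEUE_SIZE + 1) * UPDATE_INTERVAL_MS = 200ms in this setup, which is
// where the hard-coded delay comes from.
const INTERPOLATION_DELAY_MS = (QUEUE_SIZE + 1) * UPDATE_INTERVAL_MS;

// The time entities are rendered at: local clock plus the estimated
// client/server offset, pulled back by the buffering delay.
function renderTime(clientLocalTime: number, offset: number): number {
  return clientLocalTime + offset - INTERPOLATION_DELAY_MS;
}
```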

This was tested on the same network in my room, with two computers. If that latency makes a big difference in player positions, I guess it will only get worse over the Internet.

It will get worse, yes. But latency also depends on how quickly your software writes to the network and how quickly it reads from it. That code isn't shown above.

I thought the hard-coded value was decided by how far behind the server time I am.


You may be measuring something differently then. I don't know how far any player will be from the server. They may be 1 millisecond away. They may be 500 milliseconds away.
You are measuring something else here, probably "queue depth," which you have full control over, and should try to minimize as much as possible.
Once you have a new snapshot from the server, why would you use any older data other than as "come from" information?
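For contrast, here is a rough sketch of measuring the actual network delay with a hypothetical ping/pong message (none of these message or field names come from the project), as opposed to queue depth, which is simply queue size times update interval:

```typescript
// Hypothetical ping/pong over the existing connection: the client stamps a
// ping, the server echoes the stamp back, and half the round trip is a crude
// estimate of one-way latency. Queue depth, by contrast, is just
// queueSize * updateInterval and is entirely under the client's control.

interface Pong { clientSentAt: number } // assumed to be echoed back by the server

let estimatedOneWayLatencyMs = 0;

function sendPing(socket: WebSocket): void {
  socket.send(JSON.stringify({ type: "ping", clientSentAt: Date.now() }));
}

function onPong(pong: Pong): void {
  const roundTripMs = Date.now() - pong.clientSentAt;
  estimatedOneWayLatencyMs = roundTripMs / 2; // assumes symmetric up/down paths
}
```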
enum Bool { True, False, FileNotFound };


I think I understand now. If I don't use a queue to store the states, I will only need two states for interpolation: one is the previous state, and the other is the future (or current) state. I just need to swap the state buffer when a new state comes in. If I use a queue, I need to find the state in the queue that is just before the render time and the closest state just after it, so that the render time, which is clientLocalTime + offset - 100, lies between the two states I find in the queue. The key value that matters most is the render time. Am I correct?


I think I know the reason the positions looked different on each player's screen. Before, I was also running a physics simulation on the client. When the client received updates from the server, I excluded my own player and only interpolated the other players. That's why it never looked the same as the positions coming from the server. Now that I have removed the simulation on the client and use only the states from the server (the server is totally authoritative now), the positions look just fine on all players' screens. I may still need other techniques such as client prediction/extrapolation for lag compensation, though, otherwise the latency on the Internet will make it unplayable.

If I don't use a queue to store the states, I will only need two states for interpolation: one is the previous state, and the other is the future (or current) state. I just need to swap the state buffer when a new state comes in. If I use a queue, I need to find the state in the queue that is just before the render time and the closest state just after it, so that the render time, which is clientLocalTime + offset - 100, lies between the two states I find in the queue. The key value that matters most is the render time. Am I correct?

The last part is correct. But the question is not really "whether you use a queue or not" - that depends on whether you want a queue, or whether you need one for what you want to do.

There are 2 strategies to choose from here:

1) Collect sufficient past states so that you can render at some arbitrary time in the past (e.g. 200ms), usually measured relative to a timer somewhat-synchronised with the server. A queue is a good way of storing these states, as you don't know exactly how many you're going to have when covering the necessary time period. It's possible to change the time buffer from 200ms to whatever you like, providing you do it smoothly, and that it always stays large enough that you have 2 states 'either side' of it, to interpolate between.

2) Keep 2 states, so that you can render between the last received position, and some position previous to that. Ideally that previous position is whatever you were rendering when the latest position came in, because that guarantees smooth rendering on the client. The render time here is not attempting to match any particular server time; it's just attempting to provide smooth rendering that closely follows what the server is sending.
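A minimal sketch of that second strategy (hypothetical names, assuming positions are simple 2D vectors):

```typescript
// Keep only the position currently being shown ("come from") and the most
// recently received position ("go to"), and restart the blend every time a
// new snapshot arrives.
interface Vec2 { x: number; y: number }

let comeFrom: Vec2 = { x: 0, y: 0 };
let goTo: Vec2 = { x: 0, y: 0 };
let blend = 1;                 // 0..1 progress from comeFrom towards goTo
const BLEND_DURATION_MS = 100; // assumed; roughly one or two update intervals

// Called whenever a snapshot arrives from the server.
function onSnapshot(newPosition: Vec2, currentlyRendered: Vec2): void {
  comeFrom = currentlyRendered; // start from whatever is on screen right now
  goTo = newPosition;
  blend = 0;
}

// Called every frame with the elapsed time since the previous frame.
function renderPosition(frameDeltaMs: number): Vec2 {
  blend = Math.min(1, blend + frameDeltaMs / BLEND_DURATION_MS);
  return {
    x: comeFrom.x + (goTo.x - comeFrom.x) * blend,
    y: comeFrom.y + (goTo.y - comeFrom.y) * blend,
  };
}
```

The appeal of this version is that it never needs a synchronised clock; the trade-off is that what you render lags the server slightly and doesn't correspond to any exact server timestamp.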

What you can't do - but were doing, initially - is trying to use the 2nd strategy's data structure for the 1st strategy's algorithm, and that can never work because you couldn't guarantee that your stored positions spanned the time you wanted to render at.

Regarding client prediction, you will quickly realise that 'running a client simulation locally' and 'other techniques such as client prediction/extrapolation' are actually the same thing, with the same symptoms of showing different things on different screens. The best solution is to reduce latency so that the differences are minimised.


Thanks for the detailed explanation. I am using WebSocket, so I guess the client will always receive packets from the server in order, and there should be no packet loss in transmission because it's TCP. So I think it might be enough to just use the second strategy -- just two states?

