Robust algorithm to determine prediction time



#1 Gumgo   Members   -  Reputation: 964


Posted 22 June 2013 - 01:53 AM

I'm performing client side prediction, so the clients need to run ahead of the server. The clients need to run ahead far enough so that their input packets are received by the server slightly before they actually need to be executed. If they arrive too late, the server would either need to back up the world (which I'm not doing) or make some estimations (e.g. the player was moving forward so they probably still are) and try to fix things up as much as possible once the input packet does arrive (e.g. the player might jump 1 frame late).

 

I'm having some trouble determining the amount to predict by in a robust way that deals well with jitter and changing network conditions. Sometimes I "get it right" and the game runs well, but other times the prediction is a bit too short and a lot of input packets arrive too late, resulting in client/server discrepancies (where the server is actually "wrong" in this case) and lots of client correction.

 

Call the prediction time P. A summary of the ideal behavior is:

- P should be the smallest value that is large enough so that, under "stable" conditions (even stable bad conditions), all input packets arrive at the server in time.

- If there is a brief network glitch, it should be ignored and P should not be readjusted.

- If network conditions change (e.g. RTT or jitter goes up) P should be readjusted.

 

Currently, my algorithm to determine prediction time P is as follows:

 

- Packets are sent at a fixed rate R. Let S = 1/R be the spacing between two packets (e.g. R = 30 packets/sec means S = 33ms).

- The ping time of each outgoing packet is measured, and jitter is derived from the ping samples. The ping time is measured by taking the difference between the time the packet was sent and the time the ACK for that packet was received. Note that since packets (which contain ACKs) are sent with a spacing of S, the ping times will be somewhat "quantized" by S. To get an average ping time (RTT), I use an exponential moving average with a ratio of 0.1. To compute the average jitter J, I take the difference between each packet's ping time and the current average RTT and compute the exponential moving variance (section 9 of this article).

- I have RTT and J - even though they are averages they still fluctuate a bit, especially J.

- In each packet, there is a server time update message. Call this T_p, for packet time. I could just let P = T_p + RTT, but then P would be jittery, because RTT is changing and T_p itself has jitter. Instead, to compute P I do the following:

- I compute P_f, the prediction value associated with this single frame, which will be jittery. My formula is P_f = T_p + RTT + 2*J + S. My reasoning for each term: RTT because we want to predict ahead of the server's time by RTT/2 (and T_p is already about RTT/2 old by the time it arrives), 2*J because we want to add some padding for jitter, and S because we want to add some more padding for the quantization error described above.

- Each time I compute a new P_f for the frame, I update P_a, which is an exponential moving average of P_f with ratio = 0.1.

- On the very first frame, I set P to P_f, since I only have one data point. On subsequent frames, I increment P by 1, so that it is constantly and smoothly increasing, and I check whether P needs to be "re-synced":

- I take the value D = abs(P - P_a), the difference between the prediction time P and the average of the per-frame prediction times P_a, and test whether D > 2*J + S. If that is true, I "re-sync" P by assigning P = P_a. I picked the threshold 2*J + S with the reasoning that if our predicted time differs from the average per-frame predicted time by more than twice the jitter (with padding for quantization), then it's likely wrong.
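
To make that concrete, here is a simplified C++ sketch of the whole scheme above. The class and function names, the millisecond units, and the per-frame advance amount are just shorthand for illustration, not my actual code:

#include <cmath>

// Simplified sketch of the scheme described above. All times are in milliseconds.
class PredictionClock {
public:
    explicit PredictionClock(double packetSpacingMs) : S(packetSpacingMs) {}

    // Called once per ACK with that packet's measured ping time.
    void onPingSample(double pingMs) {
        if (!hasRtt) { rtt = pingMs; hasRtt = true; }
        else         { rtt += alpha * (pingMs - rtt); }    // exponential moving average (RTT)
        double diff = pingMs - rtt;
        variance += alpha * (diff * diff - variance);      // exponential moving variance
        J = std::sqrt(variance);                           // average jitter
    }

    // Called once per received server time update T_p.
    void onServerTime(double T_p) {
        double P_f = T_p + rtt + 2.0 * J + S;              // per-frame prediction value
        if (!hasP) { P = P_f; P_a = P_f; hasP = true; return; }
        P_a += alpha * (P_f - P_a);                        // moving average of the P_f values
    }

    // Called once per frame: advance P smoothly, then re-sync if it drifted too far.
    void advance(double frameMs) {
        if (!hasP) return;
        P += frameMs;
        double D = std::fabs(P - P_a);
        if (D > 2.0 * J + S) P = P_a;                      // re-sync
    }

    double predictionTime() const { return P; }

private:
    double S;                                  // packet spacing, S = 1/R
    double alpha = 0.1;                        // moving average ratio
    double rtt = 0.0, variance = 0.0, J = 0.0;
    double P = 0.0, P_a = 0.0;
    bool hasRtt = false, hasP = false;
};
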

 

Can anyone suggest improvements to this algorithm, or a better algorithm altogether? My main problem right now seems to be determining when to re-sync P. Performing too many re-syncs results in glitches and correction, but if P is too low and I don't re-sync (because it is "close enough" to P_a), that also results in glitches and correction.




#2 hplus0603   Moderators   -  Reputation: 5725


Posted 22 June 2013 - 05:14 PM

Let the server send a signal to the client indicating how late or early its input is, and let the client adjust. If the server has data for step N+1 when it takes step N, tell the client it's early by 1 step. If the server doesn't have data for step N when taking step N, tell the client it's late. The client can then adjust as it needs to. A client with lots of jitter can choose to always be at least 2 steps early if it wants to.
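
A rough sketch of that feedback loop (the names and the smoothing constant are illustrative only):

// Server side, when executing simulation step `step`: report how many steps
// early (+) or late (-) the newest input received from the client is.
int inputTimingForClient(int newestInputStep, int step) {
    return newestInputStep - step;   // +1: data for N+1 already here; negative: late
}

// Client side, on receiving that report: nudge the prediction amount smoothly
// so a single jittery report doesn't cause a snap.
struct ClientClock {
    double predictionSteps = 2.0;    // how many steps ahead of the server to run
    double targetEarly     = 2.0;    // e.g. aim to always be ~2 steps early

    void onTimingReport(int stepsEarly) {
        double error = targetEarly - static_cast<double>(stepsEarly);
        predictionSteps += 0.1 * error;
        if (predictionSteps < 1.0) predictionSteps = 1.0;
    }
};
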

 

When data is not available for a client, assume that the command is the same movement as for the previous step, and that no new actions are taken. This will be correct roughly 90% of the time (somewhat depending on game specifics) and is thus a pretty good guess.
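
For example, something along these lines (field names are illustrative only):

struct PlayerInput {
    float moveX = 0.0f, moveY = 0.0f;   // held movement carries over
    bool  jump  = false;                // one-shot actions do not
};

PlayerInput inputForStep(const PlayerInput* received, const PlayerInput& previous) {
    if (received) return *received;     // input arrived in time: use it as-is
    PlayerInput guess = previous;       // otherwise repeat the last movement...
    guess.jump = false;                 // ...and assume no new actions are taken
    return guess;
}
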

 

When the server doesn't get the commands in time, or when a packet is lost (assuming you use UDP), the client may end up being de-synced and have to "snap." There's no way around that, other than pausing the entire game state until all clients have provided their input, which it sounds like you don't want to do.


enum Bool { True, False, FileNotFound };

#3 Gumgo   Members   -  Reputation: 964


Posted 22 June 2013 - 07:36 PM

Ah thanks, that sounds like a way better idea! It makes more sense for the server to simply tell each client how far off they are, rather than the clients trying to guess. I'll go ahead and implement that!





