
A question of input batching


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
7 replies to this topic

#1 Angus Hollands   Members   -  Reputation: 715


Posted 04 January 2013 - 07:36 AM

Hey all! I've been through a really lengthy process of trying to work around the inevitable problem of being unable to rewind physics states. Currently, I forward-predict the client inputs so that they arrive "in time" on the server (running the agent ahead). In theory this looked like a reasonable solution, and any minor error correction would be acceptable and playable (the best I could do!). But when I started ripping apart the movement code to find the unknown source of error, I hit a snag. Simply assuming the inputs arrive at time current_time + rtt/2 is a perfect concept until input batching is used: when you batch your inputs into a packet, they are delayed. So, first, here is a visual representation of my problem.

[Attached diagram: error.png]
Now, if I assume that the formula written at the bottom of the diagram is correct, can anyone suggest better alternatives or offer further insights? I feel less comfortable the more variables there are in the prediction, and whilst the batch size should not change, it still unnerves me! Thank you in advance for your time.
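The arrival-time estimate being discussed could be sketched like this (a minimal illustration, not the poster's actual code; the names batch_size and sample_interval are my own, and all times are in game ticks):

```python
def estimated_arrival_tick(current_tick, rtt_ticks, batch_size, sample_interval):
    """Estimate the server tick at which a freshly sampled input arrives.

    one_way        : half the measured round-trip time, in ticks.
    batching_delay : worst-case wait while the client fills the batch --
                     the first input in a batch waits (batch_size - 1)
                     sample intervals before the packet is even sent.
    """
    one_way = rtt_ticks // 2
    batching_delay = (batch_size - 1) * sample_interval
    return current_tick + one_way + batching_delay

# e.g. current tick 100, RTT of 6 ticks, batches of 5 inputs sampled every 2 ticks:
# 100 + 3 + (5 - 1) * 2 = 111
print(estimated_arrival_tick(100, 6, 5, 2))  # -> 111
```

Note that without batching (batch_size = 1) this collapses back to the plain current_tick + rtt/2 estimate the post starts from.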


Edited by Angus Hollands, 04 January 2013 - 07:37 AM.



#2 Kylotan   Moderators   -  Reputation: 3338


Posted 04 January 2013 - 10:54 AM

Simply assuming the inputs arrive at time current_time + rtt/2 is a perfect concept until input batching is used.

 

No, it's a perfect concept until your latency changes, which can and does happen all the time. Deliberately holding back data to batch it up is just one of several ways in which your data could arrive later than you hope. Any estimate of time differences between client and server are always just estimates and are subject to error and future change.

 

It's not clear to me what you mean by 'forward simulate the client inputs' - usually inputs are the only things that cannot be predicted (and thus predicted behaviour after an input usually has to be discarded or blended towards a corrected value).

 

If the underlying problem is that the Server receives input 0 at time 7 and assumes it happened at time 4 due to the batching latency, then have the client include timestamps alongside its inputs so the server can replay them in a way that matches the client. Obviously this introduces the capability for the client to lie to the server, so server-side validation would be necessary.
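The timestamping idea above could be sketched as follows (an illustrative sketch under my own assumptions; the class name, the validation window, and the batch format are all invented for the example):

```python
from collections import deque

MAX_INPUT_AGE_TICKS = 20  # hypothetical server-side validation window

class InputQueue:
    """Server-side queue of (client_tick, input) pairs, replayed in client order."""

    def __init__(self):
        self.pending = deque()

    def receive_batch(self, server_tick, batch):
        """batch: list of (client_tick, input) pairs.

        Reject timestamps the client could not honestly have produced:
        claimed-future inputs, or inputs older than the validation window.
        """
        for client_tick, move in batch:
            if client_tick > server_tick:
                continue  # client claims an input from the future: discard
            if server_tick - client_tick > MAX_INPUT_AGE_TICKS:
                continue  # too stale: discard
            self.pending.append((client_tick, move))

    def drain(self):
        while self.pending:
            yield self.pending.popleft()
```

The validation step is the important part: once the client stamps its own inputs, the server must bound how far in the past (or future) those stamps are allowed to be.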



#3 Angus Hollands   Members   -  Reputation: 715


Posted 04 January 2013 - 12:20 PM

Herein lies the problem: I can't replay inputs.



#4 hplus0603   Moderators   -  Reputation: 5303


Posted 04 January 2013 - 06:26 PM

First, the arrival time is actually "+ length" not "+ length - 1," unless the processing time for a single step is infinitely fast.

Second, you have to buffer (delay) received data on the server to compensate for the maximum possible jitter (change in latency) on the receiving server. This adds latency to the "average" case, and there's really no way around it.
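The de-jitter buffer described above might look something like this (a minimal sketch under my own assumptions; real implementations would size the delay adaptively per client, as the next paragraph suggests):

```python
import heapq

class JitterBuffer:
    """Hold received inputs until delay_ticks after their stamped tick,
    so packets that arrive late (but within the jitter allowance) still
    play back in order at a steady rate."""

    def __init__(self, delay_ticks):
        self.delay = delay_ticks
        self.heap = []  # min-heap of (playback_tick, input)

    def push(self, stamped_tick, move):
        heapq.heappush(self.heap, (stamped_tick + self.delay, move))

    def pop_ready(self, current_tick):
        """Return all inputs whose scheduled playback tick has arrived."""
        ready = []
        while self.heap and self.heap[0][0] <= current_tick:
            ready.append(heapq.heappop(self.heap)[1])
        return ready
```

The cost is exactly what the post says: every input is held for delay_ticks even when the network is behaving, which is added latency in the average case.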

You can do various adaptive mechanisms to provide the best possible experience to each client (the amount of buffering and batching doesn't need to be the same for all clients,) and then send a server-side correction if something changes.

You can correct even if you can't rewind; simply weld the player in place for X steps, where X is enough to cover the timestep from where the server applies the correction, to when the player receives the correction. Basically, the server says "at step X (in the future,) you will be in state S." The client can then apply that state when that time arrives, as long as the server sends it early enough. The server also needs to discard any input intended for the player before timestep X.
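The "pin in place until step X" correction could be sketched on the client side like this (an illustrative sketch; the class and method names are my own, and the state is an opaque value):

```python
class CorrectionSchedule:
    """Client-side: apply a server-dictated state exactly at the promised step.

    The server sends "at step X (in the future) you will be in state S";
    the client stores it and welds itself to S when step X arrives,
    with no rewind required.
    """

    def __init__(self):
        self.corrections = {}  # step -> authoritative state

    def on_server_correction(self, step, state):
        self.corrections[step] = state

    def tick(self, step, predicted_state):
        # Use the authoritative state if one is scheduled for this step;
        # otherwise keep the locally simulated state.
        return self.corrections.pop(step, predicted_state)
```

As the post notes, the server-side half of this is discarding any input stamped before step X, since the correction supersedes it.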
enum Bool { True, False, FileNotFound };

#5 Angus Hollands   Members   -  Reputation: 715


Posted 05 January 2013 - 12:02 PM

I'm pretty sure it's length - 1?

Although, I did miss something from the diagram. Basically, if the packet holds 5 inputs, and each input is sampled every 2 game ticks (the game tick rate being the benchmark rate for all game data; usually the server runs at that tick rate, while the client might run at half of it), then it took a total of 5 * 2 game ticks' worth of time to populate the packet. Any input in that packet has to wait on the other entries before it takes effect. If it is the last input in the packet, the server still has to process all the earlier inputs first, whereas if it is the first in the packet, it had to wait while the client filled the rest of the batch. Taking the first entry, it waits (n_entries - 1) * sample_interval ticks, because the batch is sent on the same tick as the last input is sampled.
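The arithmetic in that paragraph can be checked with a few lines (a worked example using the numbers from the post; variable names are my own):

```python
batch_size = 5       # inputs per packet
sample_interval = 2  # game ticks between input samples

# Total time to fill one batch:
fill_time = batch_size * sample_interval  # 5 * 2 = 10 ticks

# Extra wait for the i-th input (0-indexed) before the packet is sent:
# the first input waits longest, the last one not at all, because the
# batch is sent on the same tick the last input is sampled.
waits = [(batch_size - 1 - i) * sample_interval for i in range(batch_size)]

print(fill_time)  # 10
print(waits)      # [8, 6, 4, 2, 0]
```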



#6 hplus0603   Moderators   -  Reputation: 5303


Posted 05 January 2013 - 01:54 PM

I'm pretty sure it's length - 1?

I'm pretty sure that, in reality, it isn't, because sampling and dealing with the input has non-zero duration.

However, that wasn't the main gist of my suggestion. The main gist of my suggestion was that you can authoritatively correct a client, if you can accept that the simulated entity is "pinned in place" on the server for a duration that is approximately RTT + batching size + jitter.
enum Bool { True, False, FileNotFound };

#7 Kylotan   Moderators   -  Reputation: 3338


Posted 05 January 2013 - 08:08 PM

Herein lies the problem: I can't replay inputs.

 

By 'replay inputs' I literally meant applying the client input to the server model, which you must already do in some form. If you have a server-side buffer (as hplus0603 suggests) then you can soak up some of that latency jitter. But it's not theoretically possible to completely solve this problem, as all it takes is one message to get held up for longer than your buffer and the system breaks, so you must always have the capability to either ignore a late message entirely or accommodate it. (And besides which, you wouldn't want an oversized buffer anyway because that gives you too much latency.)
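The accept-or-drop decision described above is a one-liner in principle (a sketch under my own naming; all times in ticks):

```python
def accept_or_drop(stamped_tick, current_tick, buffer_ticks):
    """A message is usable only if its stamped tick still falls inside the
    de-jitter window; anything later must be dropped outright or handled
    as a special late case -- no buffer size can prevent this entirely."""
    return current_tick - stamped_tick <= buffer_ticks

print(accept_or_drop(10, 12, 3))  # True:  2 ticks late, within the window
print(accept_or_drop(10, 15, 3))  # False: 5 ticks late, window missed
```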

 

In general terms the whole batch issue is irrelevant here - it's just another form of jitter, albeit one you introduce yourself via the client.



#8 hplus0603   Moderators   -  Reputation: 5303

Like
1Likes
Like

Posted 06 January 2013 - 12:12 AM

it's not theoretically possible to completely solve this problem

Another reason why it's not possible is because of multiple players acting on scarce resources. For example, if two players run for a door, and both try to close it for the other player, one of them will win, and the other won't, but on each local machine, each local player will think they came first.

You must be able to apply object state at a particular time step in the future, or any "correction" method will not work anyway and you might as well give up now :-)
enum Bool { True, False, FileNotFound };



