# Network input handling

## Recommended Posts

Telanor    1486
I suppose that could work if we set a max supported number of actions. I'm still not sure what's causing the issue with the server simulating the input state for a shorter period of time than the client, though.

Inferiarum    739
You could also make the bitmask more flexible to allow for an arbitrary number of actions, but you have to decide whether it is worth the effort.

If you time-stamp the input states with the simulation tick they should be used for, there is a one-to-one mapping between input state and simulation tick, and, as long as the input packets arrive at the server in time, the same input should be used for the same simulation tick. So if this is what you are doing, then I am not sure where the issue is.
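A minimal sketch of this tick-stamping scheme (the names are illustrative, not from any particular engine): the client stamps each input state with the tick it is meant for, and the server looks inputs up by tick rather than by arrival order, falling back to the last known input if a packet is late.

```python
class InputBuffer:
    """Server-side buffer mapping simulation ticks to input states."""

    def __init__(self):
        self.by_tick = {}          # tick -> input bitmask
        self.last_applied = None   # last input actually used

    def receive(self, tick, input_state):
        # Store by the client's tick stamp, not by arrival time.
        self.by_tick[tick] = input_state

    def input_for(self, tick):
        # One-to-one mapping: tick N always uses the input stamped N.
        # If the packet has not arrived in time, reuse the last known input.
        if tick in self.by_tick:
            self.last_applied = self.by_tick.pop(tick)
        return self.last_applied

buf = InputBuffer()
buf.receive(150, 0b0001)             # "move forward" stamped for tick 150
assert buf.input_for(150) == 0b0001
assert buf.input_for(151) == 0b0001  # packet for 151 missing: reuse last input
```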

Telanor    1486
The states aren't being tagged with the simulation tick. I'm not sure I understand how that's supposed to work either. If I assume both the server and client are perfectly in sync and the client sends out a "move forward" command for tick 150, by the time the server gets it, it will already be past that point. If the client is meant to be behind the server, as some articles have suggested, then it still doesn't make sense, because the server will have already simulated the tick before the client has even issued the command.

Inferiarum    739
If you want to do client-side prediction, the client actually runs ahead of the server (by something slightly more than half the round trip time) because, as you mentioned, you want the input packets to reach the server in time.

If you get a game state update from the server, you simulate up to the current client time with local inputs, and if you are the only player, both client and server come to the same result for the game state at the same tick (assuming the client does a full simulation).

Of course, if there is a lot of interaction with other players (like in a multiplayer FPS), you have a problem because you have no information about the inputs of the other players. That is why in this type of game the predicted state is only used to determine the camera position, and all other players are rendered based on information from the past.

E.g. if we have client time CT and round trip time RTT, then the camera is placed using the predicted state at time CT, but all other players are placed using the state at time
CT - RTT - IP, where IP is the interpolation time (https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking).

You could also try to predict the input of the other players and draw everything at time CT (http://www.gamasutra.com/view/news/177508/The_lagfighting_techniques_behind_GGPOs_netcode.php#.UMB7HdPjlh4).
This is possible if actions have something like a charge-up.
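With made-up numbers (all in milliseconds), the two timelines described above look like this; `CT`, `RTT`, and `IP` are the symbols from the post:

```python
# Toy values, purely for illustration.
CT = 10_000    # current (predicted-ahead) client time
RTT = 100      # estimated round trip time
IP = 50        # interpolation buffer

local_render_time = CT              # own camera: predicted state at CT
remote_render_time = CT - RTT - IP  # other players: state from the past

assert remote_render_time == 9_850  # rendered 150 ms behind the local player
```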

Telanor    1486
How do you place the client ahead of the server? Do you just set the client's time = server time + round trip time when the client joins? What happens if their round trip time fluctuates? Or if they happen to connect in the middle of a lag spike, giving them a 400ms ping which later settles down to 50ms?

Also, that Valve article doesn't seem to be feasible in my situation. They're relying on being able to rewind the state of all the players to do calculations: "This doesn't mean you have to lead your aiming when shooting at other players since the server-side lag compensation knows about client entity interpolation and corrects this error." What do you do when it proves to be too expensive to rewind and resimulate the entire state, and when you have to deal with more than just players in the simulation?

Edit: I'd like to add that while I haven't tested whether we can actually afford to rewind and resimulate the entire state, the author of the physics engine advises against it in [url="http://www.bepu-games.com/forum/viewtopic.php?f=4&t=1633"]this thread[/url]

Inferiarum    739
Regarding the timing problem, here is how I do it:

Every time you get an update from the server, you can calculate the server time ST corresponding to that update. The target client time targCT would be
targCT = ST + RTT + c
where c is a constant that accounts for jitter in the RTT (so that input packets arrive in time with high probability) and RTT is the current estimate of the round trip time.
The actual client time CT is advanced with the internal clock between server packets and then adjusted according to something like
CT = 0.99*CT + 0.01*targCT

Note that these calculations are done in (more or less) continuous time.

Regarding client-side prediction: I guess that if you have an expensive physics simulation, it might be infeasible to calculate the prediction steps. In this case, as mentioned by hplus, you could just render everything at a render time
RT = CT - RTT - IP
and try to mask the control delay somehow.
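A sketch of that adjustment rule, with times in milliseconds and a hypothetical `c` of 30 ms (the real client clock also advances with the internal timer between updates; that part is omitted here). The exponential smoother means a single jittery RTT sample, or a 400 ms lag spike at connect time, nudges the clock instead of yanking it:

```python
ALPHA = 0.01  # the 0.99/0.01 split from the post

def adjust_client_time(ct, st, rtt, c=30.0, alpha=ALPHA):
    """One update step: pull CT gently toward targCT = ST + RTT + c."""
    targ_ct = st + rtt + c
    return (1.0 - alpha) * ct + alpha * targ_ct

# Convergence demo: a clock that starts 100 ms off closes most of the gap
# over a few hundred server updates rather than jumping instantly.
ct = 0.0
for _ in range(300):
    ct = adjust_client_time(ct, st=50.0, rtt=20.0, c=30.0)  # targCT = 100
assert abs(ct - 100.0) < 10.0
```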

riuthamus    8361
The main issue is our interaction with the world. The player has the ability to modify the world's contents via blocks, while the game must maintain the players' locations and interactions as well. Creating a responsive combat system (which we hope to do) while managing all of that seems to be a daunting task. Maybe if you knew what we were doing, you would better know how to help. Btw, thank you for all of this help... it is simply amazing to have this much guidance and knowledge to assist.

[b]Game goal:[/b]
Our main drive is to have players fighting it out over land. This land is nearly 100% dynamic and can be modified. So instead of player vs. player, you have to calculate and manage player vs. player with block entity interaction as well. Depending on how we render things, this could be several thousand functions firing at one time if a player used an explosive that blew up x blocks and hit all players in the area of the blast. I'm not saying that whatever was already discussed won't manage that, just giving you a bit of scope. We want war and land control to be the central focus.

ApochPiQ    22999
Here's how I'd approach this:

[list][*]All clients report at a fixed rate, say 20-30Hz
[*]Client and server contain the exact same prediction/extrapolation logic
[*]As clients report to the server, the server corrects its local simulation (moving forwards only!) to account for the inputs
[*]There is no requirement for timing lockstep; the server waits for no one
[*]Once the server has received some inputs for a tick, it relays the results of its simulation to the appropriate clients
[*]This relay happens at the end of the server tick regardless of who has reported in
[*]The server tracks the delay between when it expected inputs to be reported and when it actually sees them
[*]This is used to inform extrapolation on both the server and other clients
[*]Since everyone does the same extrapolation logic, all clients will appear to be in sync but actually lag behind the server due to relay time[/list]

When major state changes are relayed from the server to clients, you compute the last known transmission delay (based on the tracked latency) and tell [i]clients[/i] to fast-forward their simulation to match. The result is that you might "miss" the first few rendered frames of the world state changing, but the result will be accurate and mostly correctly timed.

The solution to this is to [i]delay[/i] local actions by the transmission delay factor, and hide the delay using animations. For example, suppose you have a rocket launcher that can radically alter terrain/buildings/etc. When player A fires a rocket, he sees an [i]instant[/i] animation/sound effect/etc. of the launcher charging up to fire. At the same instant, you tell the server to fire the rocket. When the server responds that it has done so, you actually do the rocket/explosion calculations on the client.

This keeps everyone in sync, keeps the game [i]feeling[/i] fast, and accurately hides the latency issues involved in distributed simulation.
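A sketch of that delayed-fire idea, with invented names and stubbed-out animation/network calls: the client gets instant audiovisual feedback, but the expensive, world-altering simulation only runs once the server has confirmed the shot.

```python
events = []  # records what happened, for illustration

def play_windup_animation():
    events.append("windup")          # instant local feedback, no state change

def simulate_explosion(impact):
    events.append(("boom", impact))  # heavy terrain/physics work

class FakeNetwork:
    def send(self, msg):
        events.append(("sent", msg))

class RocketLauncher:
    def __init__(self, network):
        self.network = network
        self.pending = False

    def on_fire_pressed(self):
        play_windup_animation()            # mask the round trip with animation
        self.network.send("fire_rocket")   # ask the authoritative server
        self.pending = True

    def on_server_confirmed(self, impact):
        if self.pending:                   # run the real effect only now
            simulate_explosion(impact)
            self.pending = False

launcher = RocketLauncher(FakeNetwork())
launcher.on_fire_pressed()
launcher.on_server_confirmed({"pos": (3, 4)})
assert events == ["windup", ("sent", "fire_rocket"), ("boom", {"pos": (3, 4)})]
```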

riuthamus    8361
[quote name='ApochPiQ' timestamp='1354813321' post='5007809']
Here's how I'd approach this:
[list]
[*]All clients report at a fixed rate, say 20-30Hz
[*]Client and server contain the exact same prediction/extrapolation logic
[*]As clients report to the server, the server corrects its local simulation (moving forwards only!) to account for the inputs
[*]There is no requirement for timing lockstep; the server waits for no one
[*]Once the server has received some inputs for a tick, it relays the results of its simulation to the appropriate clients
[*]This relay happens at the end of the server tick regardless of who has reported in
[*]The server tracks the delay between when it expected inputs to be reported and when it actually sees them
[*]This is used to inform extrapolation on both the server and other clients
[*]Since everyone does the same extrapolation logic, all clients will appear to be in sync but actually lag behind the server due to relay time
[/list]
When major state changes are relayed from the server to clients, you compute the last known transmission delay (based on the tracked latency) and tell [i]clients[/i] to fast-forward their simulation to match. The result is that you might "miss" the first few rendered frames of the world state changing, but the result will be accurate and mostly correctly timed.

The solution to this is to [i]delay[/i] local actions by the transmission delay factor, and hide the delay using animations. For example, suppose you have a rocket launcher that can radically alter terrain/buildings/etc. When player A fires a rocket, he sees an [i]instant[/i] animation/sound effect/etc. of the launcher charging up to fire. At the same instant, you tell the server to fire the rocket. When the server responds that it has done so, you actually do the rocket/explosion calculations on the client.

This keeps everyone in sync, keeps the game [i]feeling[/i] fast, and accurately hides the latency issues involved in distributed simulation.
[/quote]

Not bad. The only fear now is what people will come up with for hacks. I suppose that is a problem to address once the system is in place and being tested!

ApochPiQ    22999
If you have the server validate everything a client asks to do, it's pretty foolproof.

riuthamus    8361
Not that I am going to do this, but say I wanted to hire somebody to just look over our stuff: do you have any estimate of what I should expect to pay to have somebody look at it? Like a consultant? I mean, we may just figure it all out from talking like this, but I like to have backup plans just in case.

ApochPiQ    22999
Consulting for this scale of a project would be expensive. Look for something on the order of $150-$200 an hour and a several-week process.

riuthamus    8361
*nods* indeed, thanks for the heads up!

Telanor    1486
[quote]As clients report to the server, the server corrects its local simulation (moving forwards only!) to account for the inputs[/quote]

Corrects it how? Can you explain in more detail what happens here?

[quote]This is used to inform extrapolation on both the server and other clients[/quote]

Can you expand on this point too? I don't understand what you mean by "inform extrapolation".

[quote]The solution to this is to delay local actions by the transmission delay factor, and hide the delay using animations.[/quote]

We can do that for some things, but some might need to be instant actions. What do you do about those? Even World of Warcraft has some instant-cast spells, so how do they handle that?

Telanor    1486
[quote name='Inferiarum' timestamp='1354800585' post='5007748']
Regarding the timing problem, here is how I do it:

Every time you get an update from the server, you can calculate the server time ST corresponding to that update. The target client time targCT would be
targCT = ST + RTT + c
where c is a constant that accounts for jitter in the RTT (so that input packets arrive in time with high probability) and RTT is the current estimate of the round trip time.
The actual client time CT is advanced with the internal clock between server packets and then adjusted according to something like
CT = 0.99*CT + 0.01*targCT

Note that these calculations are done in (more or less) continuous time.

Regarding client-side prediction: I guess that if you have an expensive physics simulation, it might be infeasible to calculate the prediction steps. In this case, as mentioned by hplus, you could just render everything at a render time
RT = CT - RTT - IP
and try to mask the control delay somehow.
[/quote]

I keep trying to figure out how this would play out and I can't see how it could work:

Client A has RTT of 200ms
Client B has RTT of 200ms
Interpolation time of 50ms

Server Tick: 200, Client render: 150, Client Tick 400: Client B moves forward
Server Tick: 300, Client render: 250, Client Tick 500: Client B moves forward again. Server receives move command for tick 400
Server Tick: 400, Client render: 350, Client Tick 600: Server applies move and sends out world state. Server receives second move command for tick 500
Server Tick: 450, Client render: 400, Client Tick 650: At this point client A should see client B move, but the world state still has another 50ms before it reaches client A
Server Tick: 500, Client render: 450, Client Tick 700: Client A receives world state for tick 400. Now what?

Valve defaults to an interpolation time of 100ms. In this situation if the interp time was set to that, the client would have just barely received it in time. If the packet took a little longer than 100ms, it would have still been too late.

Inferiarum    739
[quote name='Telanor' timestamp='1354861614' post='5008008']
[quote name='Inferiarum' timestamp='1354800585' post='5007748']
Regarding the timing problem, here is how I do it:

Every time you get an update from the server, you can calculate the server time ST corresponding to that update. The target client time targCT would be
targCT = ST + RTT + c
where c is a constant that accounts for jitter in the RTT (so that input packets arrive in time with high probability) and RTT is the current estimate of the round trip time.
The actual client time CT is advanced with the internal clock between server packets and then adjusted according to something like
CT = 0.99*CT + 0.01*targCT

Note that these calculations are done in (more or less) continuous time.

Regarding client-side prediction: I guess that if you have an expensive physics simulation, it might be infeasible to calculate the prediction steps. In this case, as mentioned by hplus, you could just render everything at a render time
RT = CT - RTT - IP
and try to mask the control delay somehow.
[/quote]

I keep trying to figure out how this would play out and I can't see how it could work:

Client A has RTT of 200ms
Client B has RTT of 200ms
Interpolation time of 50ms

Server Tick: 200, Client render: 150, Client Tick 400: Client B moves forward
Server Tick: 300, Client render: 250, Client Tick 500: Client B moves forward again. Server receives move command for tick 400
Server Tick: 400, Client render: 350, Client Tick 600: Server applies move and sends out world state. Server receives second move command for tick 500
Server Tick: 450, Client render: 400, Client Tick 650: At this point client A should see client B move, but the world state still has another 50ms before it reaches client A
Server Tick: 500, Client render: 450, Client Tick 700: Client A receives world state for tick 400. Now what?

Valve defaults to an interpolation time of 100ms. In this situation if the interp time was set to that, the client would have just barely received it in time. If the packet took a little longer than 100ms, it would have still been too late.
[/quote]

What I meant was that the difference between the client time and the time corresponding to the latest update from the server is the RTT, and the difference between the client time and the simultaneous server time is RTT/2. With this timing, and assuming a constant RTT, the client's input packets arrive at exactly the tick when they are needed.

At the server you do not necessarily have to take the time stamp of the input packet into account; you can also just use the latest packet from each user for the update (edit: you still use the time stamp to detect out-of-order packets). The RTT tells you how far you have to extrapolate at the client (if you want to use prediction).

ApochPiQ    22999
[quote name='Telanor' timestamp='1354859291' post='5008004']
[quote]As clients report to the server, the server corrects its local simulation (moving forwards only!) to account for the inputs[/quote]

Corrects it how? Can you explain in more detail what happens here?

[quote]This is used to inform extrapolation on both the server and other clients[/quote]

Can you expand on this point too? I don't understand what you mean by "inform extrapolation".

[quote]The solution to this is to delay local actions by the transmission delay factor, and hide the delay using animations.[/quote]

We can do that for some things, but some might need to be instant actions. What do you do about those? Even World of Warcraft has some instant-cast spells, so how do they handle that?
[/quote]

It's not magic. The server simply changes its state according to received inputs. There's no rewinding, no bizarre math, nothing like that; when you get an input, you change your state to match the input, as if you'd just gotten the keystrokes/etc. from the local game loop.

Extrapolation is correspondingly simple. If you are told "I'm moving North at 2 m/s" then you assume that that remains true until you are told otherwise.

"Instant actions" are a mythological beast in networked games. They do not exist. You can have actions which trigger as fast as you can relay the data around, but nothing is instant. Your design should accommodate this fact.
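The extrapolation being described is plain dead reckoning; a sketch (units and coordinates invented, with +y as north):

```python
def extrapolate(pos, vel, last_report, now):
    """Assume the last reported velocity held for the whole gap."""
    dt = now - last_report
    return tuple(p + v * dt for p, v in zip(pos, vel))

# "Moving North at 2 m/s", last reported half a second ago:
predicted = extrapolate(pos=(10.0, 10.0), vel=(0.0, 2.0),
                        last_report=0.0, now=0.5)
assert predicted == (10.0, 11.0)
```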

riuthamus    8361
[quote name='ApochPiQ' timestamp='1354910241' post='5008208']
"Instant actions" are a mythological beast in networked games. They do not exist. You can have actions which trigger as fast as you can relay the data around, but nothing is instant. Your design should accommodate this fact.
[/quote]

How do you explain actions that have a 0 cast time and 0 delay? Obviously those can only go as fast as the network can transfer them, but they do exist. So the method of hiding the delay with animations falters when that comes into play. For example, say we have a boss fight where one of the mechanics is to cast an interrupt on the boss. To do this you use an instant-cast spell that blocks his ability. If the boss's ability has a long timer, then you should be good despite network delays, but say you have less than 1 or 2 seconds, or even half a second, to get it all done... what then? Priority timing? Track when the boss started his skill and when the player started his skill, and time it out from there?

ApochPiQ    22999
You either use various techniques to mask the latency, or you do retroactive cancels (e.g. you cast an "instant" interrupt on a spell with a 0.5-second windup and a 0.5-second effect time, and if the interrupt registers anywhere during that 1-second window, the entire cast is nullified, even if the interrupt "hit" at, say, 0.75 seconds).

Keep in mind that updating the UI to show damage numbers/etc. is also an opportunity to mask latency.
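A sketch of that retroactive cancel window, using the 0.5 s windup and 0.5 s effect time from the example (server-side timestamps assumed):

```python
WINDUP = 0.5   # cast windup, in seconds
EFFECT = 0.5   # effect time that can still be rolled back

def cast_is_nullified(cast_start, interrupt_time):
    """True if an interrupt stamped anywhere in the window cancels the cast."""
    return cast_start <= interrupt_time < cast_start + WINDUP + EFFECT

assert cast_is_nullified(cast_start=0.0, interrupt_time=0.75)      # mid-effect
assert not cast_is_nullified(cast_start=0.0, interrupt_time=1.25)  # too late
```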

hplus0603    11347
[quote]How do you explain actions that have a 0 cast time and 0 delay?[/quote]

You say "bad, designer, no cookie!" and smack them over the fingers with a ruler.
After ten or twelve times, they start learning!
:-)

riuthamus    8361
[quote name='hplus0603' timestamp='1354931918' post='5008316']
[quote]How do you explain actions that have a 0 cast time and 0 delay?[/quote]

You say "bad, designer, no cookie!" and smack them over the fingers with a ruler.
After ten or twelve times, they start learning!
:-)
[/quote]

*nods* Well... okay, sounds like a plan. Thanks again, we should have something worth showing this weekend for all of this information.

Telanor    1486
[quote name='Inferiarum' timestamp='1354904633' post='5008185']

What I meant was that the difference between the client time and the time corresponding to the latest update from the server is the RTT, and the difference between the client time and the simultaneous server time is RTT/2. With this timing, and assuming a constant RTT, the client's input packets arrive at exactly the tick when they are needed.

At the server you do not necessarily have to take the time stamp of the input packet into account; you can also just use the latest packet from each user for the update (edit: you still use the time stamp to detect out-of-order packets). The RTT tells you how far you have to extrapolate at the client (if you want to use prediction).
[/quote]

Sorry if I'm being dense, but I'm still not getting it. If the server time is 300 and the client time is 500, and it takes 100ms for the update to get to the client, then when the update from tick 300 arrives, the client's time will be 600. You're saying that RTT = CT - (time stamp on last update), so RTT = 600 - 300 = 300. And you said that CT - current server time = RTT/2. When the client is at 600, the server is at 400, so RTT/2 = 600 - 400 = 200. But if RTT = 300, RTT/2 should be 150...

Let's assume that the client and server times are synchronised, so that at the same moment in the real world they equal each other.

If the inputs are sampled at t=100 and the server receives them at t=200, then, assuming they were sent at sample time, upstream_latency = recv_time - sample_time.
We can then predict that future inputs will be received by the server at sample_time + upstream_latency.
On the client we will rarely be told this by the server, so instead we can use the RTT (the time between sending a packet and receiving the reply) to get the total trip time, and halve it for a rough approximation of the upstream latency. We can use the same method to determine when the inputs will be received.

What has been said is that if the client runs its local simulation at server_time + (rtt / 2) then the inputs will be received in time for intended processing on the server.

Inferiarum    739
You have to estimate the RTT separately (e.g. include the latest time stamp of the input packets from the client in the server packet). If the server time is 300 and it takes 100 ms for the update to get to the client, then the client has the update with time stamp 300 when it is already 100 ms old; now it adds the RTT to the packet time stamp and is 100 ms ahead (if the estimate is correct). If the actual client time differs from this target time, you adjust it slightly.

OK, so here is how I do it (slightly different from what I described above, since I do not use the RTT explicitly):

On the client we have the internal clock CLK and an offset OS, such that ideally the client time CT = CLK + OS = ST + RTT/2

The client sends an input packet with its current time stamp CT1.
The server receives the input, updates the simulation, and sends out the new game state, including the current server time ST and the stamp CT1 of the latest input packet.
Ideally CT1 and ST should be the same.
When we receive the game state update on the client (which includes CT1 and ST), we update
OS = OS - alpha*(CT1 - ST)
with some small alpha > 0, and the new client time would be
CT2 = CLK + OS

You can still keep an estimate of the RTT if you need it somewhere
RTT = (1-alpha)*RTT + alpha*(CT2-CT1) Edited by Inferiarum

Telanor    1486
OK, I think I finally understand. This will mostly keep the clocks in sync, but I'm not sure it will help with the issue I'm having of the server and client not computing the same result. I've done some more testing, and I believe the issue comes from the input not being applied for the same duration of time. For example, in the test I just did, the client moved for 5468.7882ms while the server only moved for 5438.1314ms. Should I just accept that that's how things are going to work out and correct the error with some interpolation, or is this something that needs to be fixed?