
## Network input handling

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

124 replies to this topic

### #41 Inferiarum (Members)

Posted 07 December 2012 - 12:23 PM

Considering the timing problem, here is how I do it:

Every time you get an update from the server you can calculate the server time ST corresponding to the update. The target client time targCT would be
targCT = ST + RTT + c
where c is a constant that accounts for jitter in the RTT (so that input packets arrive in time with high probability) and RTT is the current estimate of the round-trip time.
The actual client time CT is updated with the internal clock between server packets and then adjusted according to something like
CT = 0.99*CT + 0.01*targCT

Note that these calculations are done in (more or less) continuous time.

Considering client-side prediction: I guess if you have an expensive physics simulation, it might be infeasible to calculate the prediction steps. In that case, as mentioned by hplus, you could just render everything at a render time
RT = CT - RTT - IP
(where IP is the interpolation period) and try to mask the control delay somehow.
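As a rough Python sketch of the adjustment above (the function names and the constants are illustrative, not from the post):

```python
# Nudge the client clock toward targCT = ST + RTT + c on every server
# update, rather than snapping to it.

JITTER_MARGIN = 0.05   # "c": slack for jitter in the RTT estimate, in seconds
BLEND = 0.01           # how aggressively CT chases targCT each update

def target_client_time(server_time, rtt_estimate):
    """targCT = ST + RTT + c: far enough ahead that inputs arrive in time."""
    return server_time + rtt_estimate + JITTER_MARGIN

def adjust_client_time(client_time, server_time, rtt_estimate):
    """CT = (1 - BLEND) * CT + BLEND * targCT, applied once per server packet."""
    targ_ct = target_client_time(server_time, rtt_estimate)
    return (1.0 - BLEND) * client_time + BLEND * targ_ct
```

The small blend factor means a mis-estimated RTT shifts the clock gradually instead of causing a visible jump.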

> I keep trying to figure out how this would play out and I can't see how it could work:
>
> Client A has an RTT of 200ms
> Client B has an RTT of 200ms
> Interpolation time of 50ms
>
> Server tick 200, client render 150, client tick 400: Client B moves forward
> Server tick 300, client render 250, client tick 500: Client B moves forward again. The server receives the move command for tick 400
> Server tick 400, client render 350, client tick 600: The server applies the move and sends out the world state. The server receives the second move command for tick 500
> Server tick 450, client render 400, client tick 650: At this point client A should see client B move, but the world state still has another 50ms to go before it reaches client A
> Server tick 500, client render 450, client tick 700: Client A receives the world state for tick 400. Now what?
>
> Valve defaults to an interpolation time of 100ms. In this situation, if the interp time were set to that, the client would have only just received the state in time; if the packet took a little longer than 100ms, it would still have been too late.

What I meant was that the difference between the client time and the time corresponding to the latest update from the server is the RTT, and the difference between the client time and the simultaneous server time is RTT/2. With this timing, and assuming a constant RTT, the client's input packets arrive at exactly the tick at which they are needed.

At the server you do not necessarily have to take the time stamp of the input packet into account; you can also just use the latest packet from each user for the update (edit: you still use the time stamp to detect out-of-order packets). The RTT tells you how far you have to extrapolate at the client (if you want to use prediction).

Edited by Inferiarum, 07 December 2012 - 12:27 PM.

### #42 ApochPiQ (Moderators)

Posted 07 December 2012 - 01:57 PM

> > As clients report to the server, the server corrects its local simulation (moving forwards only!) to account for the inputs
>
> Corrects it how? Can you explain in more detail what happens here?
>
> > This is used to inform extrapolation on both the server and other clients
>
> Can you expand on this point too? I don't understand what you mean by "inform extrapolation".
>
> > The solution to this is to delay local actions by the transmission delay factor, and hide the delay using animations.
>
> We can do that for some things, but some might need to be instant actions. What do you do about that then? Even World of Warcraft has some instant-cast spells, so how do they handle that?

It's not magic. The server simply changes its state according to received inputs. There's no rewinding, no bizarre math, nothing like that; when you get an input, you change your state to match the input, as if you'd just gotten the keystrokes/etc. from the local game loop.

Extrapolation is correspondingly simple. If you are told "I'm moving North at 2 m/s" then you assume that that remains true until you are told otherwise.
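The "assume it stays true until told otherwise" rule is plain dead reckoning. A minimal sketch (names are illustrative):

```python
# Dead reckoning: keep moving a remote entity along its last reported
# velocity until a newer state update arrives.

def extrapolate(last_pos, last_vel, last_update_time, now):
    """Predicted 2D position, assuming the last reported velocity held."""
    dt = now - last_update_time
    return (last_pos[0] + last_vel[0] * dt,
            last_pos[1] + last_vel[1] * dt)
```

So an entity reported at the origin moving north at 2 m/s would be drawn 1 m further north half a second later, until the next state packet corrects it.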

"Instant actions" are a mythological beast in networked games. They do not exist. You can have actions which trigger as fast as you can relay the data around, but nothing is instant. Your design should accommodate this fact.
Wielder of the Sacred Wands

### #43 riuthamus (Moderators)

Posted 07 December 2012 - 03:26 PM

> "Instant actions" are a mythological beast in networked games. They do not exist. You can have actions which trigger as fast as you can relay the data around, but nothing is instant. Your design should accommodate this fact.

How do you explain actions that have a 0 cast time and 0 delay? Obviously those can only go as fast as the network can transfer them, but they do exist, so the method of hiding the delay with animations falters when that comes into play. For example, say we have a boss fight where one of the mechanics is to cast an interrupt on the boss. To do this you use an instant-cast spell that blocks his ability. If the boss's ability has a long timer, you should be fine despite network delays, but say you have less than 1 or 2 seconds, or even half a second, to get it all done... what then? Priority timing? Track when the boss started his skill and when the player started his, and time it out from there?

### #44 ApochPiQ (Moderators)

Posted 07 December 2012 - 04:37 PM

It's either using various techniques to mask latency, or doing retroactive cancels (e.g. you cast an "instant" interrupt on a spell with a 0.5-second windup and a 0.5-second effect time, and if the interrupt registers anywhere during that 1 second window, the entire thing is nullified even if it "hit" at, say, 0.75 seconds).

Keep in mind that updating the UI to show damage numbers/etc. is also an opportunity to mask latency.
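A toy sketch of that retroactive cancel window (the 0.5-second numbers mirror the example above; the function names are mine):

```python
# Retroactive cancel: an "instant" interrupt that lands anywhere inside the
# spell's windup + effect window nullifies the whole spell, even if it
# technically arrived after the effect began.

WINDUP = 0.5   # seconds of cast windup
EFFECT = 0.5   # seconds during which the effect can still be nullified

def spell_resolves(cast_start, interrupt_time):
    """True if the spell goes through; False if the interrupt cancels it."""
    window_end = cast_start + WINDUP + EFFECT
    return not (cast_start <= interrupt_time <= window_end)
```

An interrupt arriving at 0.75 s still cancels a cast started at 0 s; one arriving at 1.2 s is too late.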
Wielder of the Sacred Wands

### #45 hplus0603 (Moderators)

Posted 07 December 2012 - 07:58 PM

> How do you explain actions that have a 0 cast time and 0 delay?

You say "bad, designer, no cookie!" and smack them over the fingers with a ruler.
After ten or twelve times, they start learning!
:-)

enum Bool { True, False, FileNotFound };

### #46 riuthamus (Moderators)

Posted 07 December 2012 - 08:02 PM

> > How do you explain actions that have a 0 cast time and 0 delay?
>
> You say "bad, designer, no cookie!" and smack them over the fingers with a ruler.
> After ten or twelve times, they start learning!
> :-)

*nods* Well... okay, sounds like a plan. Thanks again; with all of this information we should have something worth showing this weekend.

### #47 Telanor (Members)

Posted 07 December 2012 - 08:21 PM

> What I meant was that the difference between the client time and the time corresponding to the latest update from the server is the RTT, and the difference between the client time and the simultaneous server time is RTT/2. With this timing, and assuming a constant RTT, the client's input packets arrive at exactly the tick at which they are needed.
>
> At the server you do not necessarily have to take the time stamp of the input packet into account; you can also just use the latest packet from each user for the update (edit: you still use the time stamp to detect out-of-order packets). The RTT tells you how far you have to extrapolate at the client (if you want to use prediction).

Sorry if I'm being dense, but I'm still not getting it. If the server time is 300, the client time is 500, and it takes 100ms for the update to reach the client, then when the update from tick 300 arrives, the client's time will be 600. You're saying that RTT = CT - (time stamp on last update), so RTT = 600 - 300 = 300. And you said that CT - (current server time) = RTT/2. When the client is at 600 the server is at 400, so RTT/2 = 600 - 400 = 200. But with RTT = 300, RTT/2 should be 150...

### #48 Angus Hollands (Members)

Posted 08 December 2012 - 07:06 AM

Let's assume that the client and server clocks are synchronised, so that at any instant in the real world they read the same value.

If the inputs are sampled at t=100 and the server receives them at t=200, then, assuming they were sent at sample time, upstream_latency = recv_time - sample_time. We can then predict that such inputs will be received by the server at sample_time + upstream_latency. On the client we will rarely be told this by the server, so instead we can use the RTT (the time between sending a packet and receiving the reply to it) to get the total trip time, and halve it for a rough approximation of the upstream latency. We can use the same method to determine when the inputs will be received.

What has been said is that if the client runs its local simulation at server_time + (rtt / 2), then its inputs will be received in time for their intended processing on the server.

### #49 Inferiarum (Members)

Posted 08 December 2012 - 07:26 AM

You have to estimate the RTT separately (e.g. include the latest time stamp of the client's input packets in the server packet). If the server time is 300 and it takes 100ms for the update to get to the client, then the client receives the update stamped 300 when it is already 100ms old; it adds the RTT to the packet's time stamp and is then 100ms ahead (if the estimate is correct). If the actual client time differs from this target time, you adjust it slightly.

Ok, so here is how I do it (slightly different from what I described above, since I do not use the RTT explicitly).

On the client we have the internal clock CLK and an offset OS, such that ideally the client time CT = CLK + OS = ST + RTT/2.

1. The client sends an input packet with the current time stamp CT1.
2. The server receives the input, updates the simulation, and sends out the new game state, including the current server time ST and the stamp of the latest input packet, CT1. Ideally CT1 and ST should be the same.
3. When the client receives the game state update (which includes CT1 and ST), it updates
   OS = OS + alpha*(CT1 - ST)
   with some small alpha > 0, and the new client time is
   CT2 = CLK + OS

You can still keep an estimate of the RTT if you need it somewhere:
RTT = (1-alpha)*RTT + alpha*(CT2-CT1)
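A sketch of this offset scheme in Python (the names are mine; note that I flip the sign of the correction relative to the formula above, since with CT = CLK + OS the offset has to move opposite to the error CT1 - ST for the adjustment to converge: an input stamped ahead of the server tick it lands on means the client clock is running fast):

```python
ALPHA = 0.05  # small adaptation gain

class ClientClock:
    def __init__(self):
        self.offset = 0.0   # OS
        self.rtt = 0.0      # optional smoothed RTT estimate

    def now(self, clk):
        """CT = CLK + OS"""
        return clk + self.offset

    def on_server_update(self, clk, ct1, st):
        """Called when a game-state packet arrives, echoing our input stamp
        CT1 together with the server time ST it was processed at."""
        self.offset -= ALPHA * (ct1 - st)   # drive CT1 toward ST
        ct2 = self.now(clk)
        # CT2 - CT1 spans a full round trip, so it feeds an RTT estimate:
        self.rtt = (1 - ALPHA) * self.rtt + ALPHA * (ct2 - ct1)
```

With a small alpha the clock converges gradually, so jitter in individual packets does not make the client time jump around.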

Edited by Inferiarum, 09 December 2012 - 04:15 AM.

### #50 Telanor (Members)

Posted 08 December 2012 - 08:02 PM

Ok, I think I finally understand. This will mostly keep the clocks in sync, but I'm not sure that will help with the issue I'm having of the server and client not computing the same result. I've done some more testing and I believe the issue is coming from the fact that the input isn't being applied for the same duration of time. For example, in the test I just did, the client moved for 5468.7882ms while the server only moved for 5438.1314ms. Should I just accept that that's how things are going to work out and correct the error with some interpolation or is this something that needs to be fixed?

### #51 hplus0603 (Moderators)

Posted 08 December 2012 - 09:10 PM

> In the test I just did, the client moved for 5468.7882ms while the server only moved for 5438.1314ms

Why are you measuring milliseconds? You should be measuring number of steps.

Also, the inputs (on and off) should be time stamped with step numbers when they take effect. That way, they will run exactly the same number of steps on client and server.
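The step-stamping idea can be sketched like this (a toy simulation of my own devising; the point is that only step numbers, not wall-clock milliseconds, determine the result):

```python
# Each input change carries the simulation step on which it takes effect,
# so client and server both apply it for exactly the same number of steps.

def simulate(events, total_steps, speed=2.0, dt=1.0 / 60.0):
    """events maps a step number to the new on/off state of the move key."""
    pos, moving = 0.0, False
    for step in range(total_steps):
        if step in events:
            moving = events[step]
        if moving:
            pos += speed * dt
    return pos

# Move is pressed on step 10 and released on step 40: exactly 30 steps of
# movement on any machine that replays the same stamped events.
events = {10: True, 40: False}
```

Replaying the same event list always yields the same position, which is exactly the property the client/server comparison needs.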
enum Bool { True, False, FileNotFound };

### #52 Telanor (Members)

Posted 08 December 2012 - 09:49 PM

What do you mean by steps? Actual character animation steps? Or "press W, move forward 1 foot" steps? The character movement as it's set up right now simply applies a velocity to the player for the duration the key is held. The character isn't moved in discrete amounts. Should it be?

### #53 Inferiarum (Members)

Posted 09 December 2012 - 04:07 AM

Well, I guess it was mentioned earlier that it is a good idea to do the simulation/game-state updates with a fixed time step.

What you are doing right now is probably something like this (written as a pure function):

newState = update( oldState, inputs, deltaTime)

whereas with a fixed time step you would only have

newState = fixedUpdate( oldState, inputs)

and the duration of the update is effectively hard-coded.

edit: with the fixed update, two simulations only need the same set of inputs at each update to stay consistent. And since this is now a discrete system, time is represented by an integer counting the number of updates.
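The usual way to drive fixedUpdate from a variable frame rate is an accumulator; a common sketch (names and the 60 Hz rate are illustrative):

```python
# Fixed-timestep accumulator: real frame time is banked, and the simulation
# only ever advances in whole, fixed-size steps, so "time" is just an
# integer step counter.

FIXED_DT = 1.0 / 60.0   # the hard-coded step duration

class FixedStepper:
    def __init__(self):
        self.accumulator = 0.0
        self.step_count = 0   # discrete simulation time

    def advance(self, frame_dt, fixed_update):
        """Bank the frame's real elapsed time, then run whole steps only."""
        self.accumulator += frame_dt
        while self.accumulator >= FIXED_DT:
            fixed_update()              # newState = fixedUpdate(oldState, inputs)
            self.accumulator -= FIXED_DT
            self.step_count += 1
```

Leftover time smaller than FIXED_DT stays in the accumulator for the next frame, so no partial steps ever run.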

Edited by Inferiarum, 09 December 2012 - 04:11 AM.

### #54 Telanor (Members)

Posted 09 December 2012 - 05:18 AM

Well, I've done as much as I can think of to get it to a proper fixed timestep, and the issue hasn't been fixed at all. All of the updating, both on the client and the server, is now run from an event that the physics engine fires whenever it runs a timestep. The physics engine internally accumulates the time and runs timesteps at fixed intervals. I still haven't added the time stamping of input states, but I don't think that would fix this issue: since I'm running both locally, it's not like there's any lag involved.

### #55 hplus0603 (Moderators)

Posted 09 December 2012 - 07:53 PM

> I still haven't added the time stamping of input states, but I don't think that would fix this issue: since I'm running both locally, it's not like there's any lag involved.

You should add the timestamping, and assert that it is correct. When it isn't, you can then start debugging why.
enum Bool { True, False, FileNotFound };

### #56 Angus Hollands (Members)

Posted 10 December 2012 - 01:03 PM

Are you polling the input at the same tick rate as the server, or applying it for the same duration as a poll tick interval?

### #57 Telanor (Members)

Posted 10 December 2012 - 05:41 PM

Yes, everything is run at the same tick rate and, at a minimum, applied for the duration of a tick.

### #58 ApochPiQ (Moderators)

Posted 10 December 2012 - 06:01 PM

Do I interpret that correctly in that things can be applied for a non-whole number of ticks? E.g. is 1.03 ticks possible?
Wielder of the Sacred Wands

### #59 Telanor (Members)

Posted 10 December 2012 - 06:14 PM

As it is right now, they shouldn't be able to, assuming I haven't screwed up somewhere. It should all be applied for a whole number of ticks. Should it not be like that?

Edited by Telanor, 10 December 2012 - 06:18 PM.

### #60 ApochPiQ (Moderators)

Posted 10 December 2012 - 07:21 PM

You can go either way, depending on the resolution of your physics integrator, but applying consistent whole numbers of ticks is generally safest if your simulation rate is high enough that this doesn't cause jittery interactions.

If you are applying, say, 3 ticks on the client and 4 on the server, you've found your bug. If both agree that an action is applied for 3 ticks but the actual number of milliseconds applied differs, you have a bug in your timestep fixation code.
Wielder of the Sacred Wands
