Inferiarum

Posted 07 December 2012 - 12:27 PM


Considering the timing problem, here is how I do it:

Every time you get an update from the server you can calculate the server time ST corresponding to the update. The target client time targCT would be
targCT = ST + RTT + c
where c is a constant that accounts for jitter in the RTT (such that input packets arrive in time with high probability) and RTT is the current estimate for the round trip time.
The actual client time CT is updated with the internal clock between each server packet and then adjusted according to something like
CT = 0.99*CT + 0.01*targCT

Note that these calculations are done in (more or less) continuous time.

Considering client-side prediction: I guess if you have an expensive physics simulation, it might be infeasible to calculate the prediction steps. In this case, as mentioned by hplus, you could just render everything at a render time
RT = CT - RTT - IP
(where IP is the interpolation period) and try to mask the control delay somehow.
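
To make the scheme concrete, here is a minimal Python sketch of the clock adjustment and render-time calculation described above; the names (ClientClock, JITTER_MARGIN, and so on) and the constant values are illustrative assumptions, not part of the original description.

# Sketch of the timing scheme above (illustrative, not a drop-in implementation).
# ST  = server time stamped on the latest update
# RTT = current round-trip-time estimate
# c   = safety margin for RTT jitter

JITTER_MARGIN = 0.05   # "c" in seconds, an assumed value
SMOOTHING = 0.01       # weight of the correction applied per server update

class ClientClock:
    def __init__(self):
        self.client_time = 0.0          # CT, advanced by the local clock

    def advance(self, dt):
        # Between server packets the clock simply runs on the local timer.
        self.client_time += dt

    def on_server_update(self, server_time, rtt):
        # targCT = ST + RTT + c
        target = server_time + rtt + JITTER_MARGIN
        # CT = 0.99*CT + 0.01*targCT  (gentle correction, no visible jumps)
        self.client_time = (1.0 - SMOOTHING) * self.client_time + SMOOTHING * target

    def render_time(self, rtt, interp_period):
        # RT = CT - RTT - IP: render remote entities slightly in the past so
        # there is always a pair of snapshots to interpolate between.
        return self.client_time - rtt - interp_period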


I keep trying to figure out how this would play out and I can't see how it could work:

Client A has RTT of 200ms
Client B has RTT of 200ms
Interpolation time of 50ms

Server tick 200, client render 150, client tick 400: Client B moves forward.
Server tick 300, client render 250, client tick 500: Client B moves forward again. Server receives the move command for tick 400.
Server tick 400, client render 350, client tick 600: Server applies the move and sends out the world state. Server receives the second move command, for tick 500.
Server tick 450, client render 400, client tick 650: At this point client A should see client B move, but the world state still has another 50 ms before it reaches client A.
Server tick 500, client render 450, client tick 700: Client A receives the world state for tick 400. Now what?


Valve defaults to an interpolation time of 100 ms. In this situation, if the interp time were set to that, the client would have only just received it in time. If the packet took a little longer than 100 ms, it would still have been too late.


What I meant was that the difference between the client time and the time corresponding to the latest update from the server is the RTT, while the difference between the client time and the simultaneous server time is RTT/2. With this timing, and assuming a constant RTT, the client input packets arrive at exactly the tick when they are needed.
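To make that concrete with the 200 ms RTT from the scenario above (but using the timing described here rather than the one in that scenario): when the client clock reads tick 600, the server clock reads roughly tick 500 (RTT/2 = 100 ms behind), and the newest update the client holds is stamped with server tick 400 (one full RTT behind). An input stamped for tick 600 and sent now takes another RTT/2 = 100 ms to travel, so it reaches the server just as the server's clock hits tick 600, exactly when it is needed.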

At the server you do not necessarily have to take the time stamp of the input packet into account; you can also just use the latest packet from each user for the update (edit: you still use the time stamp to detect out-of-order packets). The RTT tells you how far you have to extrapolate at the client (if you want to use prediction).
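
A rough Python sketch of that server-side rule, keeping only the newest input per client and using the stamp solely to reject out-of-order packets; the names (latest_input, apply_input, on_input_packet) are illustrative assumptions, not an existing API.

# Keep only the most recent input from each client; the stamp is used only to
# reject packets that arrive out of order. (Illustrative sketch.)

latest_input = {}   # client_id -> (stamp, input_data)

def apply_input(client_id, input_data):
    # Placeholder for the game-specific movement/physics update.
    pass

def on_input_packet(client_id, stamp, input_data):
    prev = latest_input.get(client_id)
    if prev is not None and stamp <= prev[0]:
        return                              # older or duplicate packet, drop it
    latest_input[client_id] = (stamp, input_data)

def server_tick():
    # Each tick, apply the newest input we have from every client, regardless
    # of which tick it was originally stamped for.
    for client_id, (stamp, input_data) in latest_input.items():
        apply_input(client_id, input_data)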
