Posted 06 December 2012 - 03:37 AM
Well, I think we are doing all of those things. The physics has its own internal stepping: I call update with how much time has passed, and it accumulates that time and runs as many steps as needed. Also, the input isn't being consumed. Like I said, the server combines the states: if the current state is W down, D down and it receives a Space down message, then the state becomes W down, D down, Space down.
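The state-combining described there can be sketched as a set of held keys that the server merges each incoming key-down/key-up message into (the message shape and names here are illustrative, not the actual implementation):

```python
# Sketch of server-side input-state merging, as described above: the server
# keeps the latest known key state per client and folds each key-down /
# key-up message into it rather than consuming the input.

def apply_input_message(state, message):
    """Merge one (key, is_down) message into the current key state."""
    key, is_down = message
    new_state = set(state)
    if is_down:
        new_state.add(key)
    else:
        new_state.discard(key)
    return new_state

state = {"W", "D"}                               # current state: W down, D down
state = apply_input_message(state, ("Space", True))
# state is now {"W", "D", "Space"}
```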
Posted 06 December 2012 - 01:25 PM
Here's how I'd approach this:
When major state changes are relayed from the server to clients, you compute the last known transmission delay (based on the tracked latency) and tell clients to fast-forward their simulation to match. This means you might "miss" the first few rendered frames of the world state changing, but the result will be accurate and mostly correctly timed.
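The fast-forward step might be sketched like this, assuming a deterministic fixed-step simulation (the `Simulation` stub and the 60Hz step size are illustrative assumptions):

```python
class Simulation:
    """Minimal stand-in for a deterministic, fixed-step simulation."""
    def __init__(self):
        self.time = 0.0

    def step(self, dt):
        self.time += dt

def fast_forward(sim, transmission_delay, step_dt=1.0 / 60.0):
    """Advance the local simulation by the estimated transmission delay,
    stepping in fixed increments so the physics stays deterministic."""
    remaining = transmission_delay
    steps = 0
    while remaining >= step_dt:
        sim.step(step_dt)
        remaining -= step_dt
        steps += 1
    return steps
```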
- All clients report at a fixed rate, say 20-30Hz
- Client and server contain the exact same prediction/extrapolation logic
- As clients report to the server, the server corrects its local simulation (moving forwards only!) to account for the inputs
- There is no requirement for timing lockstep; the server waits for no one
- Once the server has received some inputs for a tick, it relays the results of its simulation to the appropriate clients
- This relay happens at the end of the server tick regardless of who has reported in
- The server tracks the delay between when it expected inputs to be reported and when it actually sees them
- This is used to inform extrapolation on both the server and other clients
- Since everyone does the same extrapolation logic, all clients will appear to be in sync but actually lag behind the server due to relay time
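The per-tick server flow in the list above could be sketched as follows; the tick rate, data shapes, and callback hooks are assumptions for illustration, not a definitive design:

```python
class ServerTick:
    """Sketch of the per-tick server flow described above: apply whatever
    inputs have arrived, relay at the end of the tick regardless of who
    reported in, and track how late each client's reports are."""
    def __init__(self, tick_rate=30):
        self.tick_dt = 1.0 / tick_rate
        self.tick = 0
        self.input_delay = {}      # client_id -> last observed report delay (s)
        self.pending_inputs = []   # (client_id, keys) tuples

    def receive_input(self, client_id, tick_sent, keys, now_tick):
        # Track the gap between when the input was expected and when it
        # actually arrived; this feeds extrapolation on the server and
        # on the other clients.
        self.input_delay[client_id] = (now_tick - tick_sent) * self.tick_dt
        self.pending_inputs.append((client_id, keys))

    def run_tick(self, simulate, relay):
        # The server waits for no one: simulate with whatever inputs exist.
        inputs, self.pending_inputs = self.pending_inputs, []
        simulate(self.tick_dt, inputs)
        # Relay results at the end of the tick regardless of who reported.
        relay(self.tick, self.input_delay)
        self.tick += 1
```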
The solution to this is to delay local actions by the transmission delay factor, and hide the delay using animations. For example, suppose you have a rocket launcher that can radically alter terrain/buildings/etc. When player A fires a rocket, he sees an instant animation/sound effect/etc. of the launcher charging up to fire. At the same instant, you tell the server to fire the rocket. When the server responds that it has done so, you actually do the rocket/explosion calculations on the client.
This keeps everyone in sync, keeps the game feeling fast, and accurately hides the latency issues involved in distributed simulation.
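The delayed-action pattern from the rocket-launcher example might look like this; the message format and the animation/network hooks are made-up stand-ins:

```python
class RocketLauncher:
    """Sketch of hiding round-trip latency behind a charge-up animation:
    instant local feedback on fire, authoritative effects only once the
    server confirms."""
    def __init__(self, send_to_server, play_animation, spawn_rocket):
        self.send_to_server = send_to_server
        self.play_animation = play_animation
        self.spawn_rocket = spawn_rocket

    def on_fire_pressed(self):
        # Instant local feedback: the launcher visibly charges up...
        self.play_animation("charge_up")
        # ...while the fire request travels to the server in the meantime.
        self.send_to_server({"type": "fire_rocket"})

    def on_server_confirmed_fire(self, position, direction):
        # Only when the server confirms do we run the real rocket/explosion
        # calculations, so every client stays in sync.
        self.spawn_rocket(position, direction)
```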
Posted 07 December 2012 - 12:26 AM
Considering the timing problem, here is how I do it:
Every time you get an update from the server, you can calculate the server time ST corresponding to the update. The target client time targCT would be
targCT = ST + RTT + c
where c is a constant that accounts for jitter in the RTT (such that input packets arrive in time with high probability) and RTT is the current estimate for the round trip time.
The actual client time CT is updated with the internal clock between each server packet and then adjusted according to something like
CT = 0.99*CT + 0.01*targCT
Note that these calculations are done in (more or less) continuous time.
As for client-side prediction: if you have an expensive physics simulation, it might be infeasible to calculate the prediction steps. In this case, as mentioned by hplus, you could just render everything at a render time
RT = CT - RTT - IP
and try to mask the control delay somehow.
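The three relations above can be put together in a small sketch. The function and parameter names are assumptions, and `interp_delay` stands in for the "IP" term (which I read as a small interpolation/render delay):

```python
def target_client_time(server_time, rtt, jitter_margin):
    """targCT = ST + RTT + c: run the client clock far enough ahead of
    the server that input packets arrive in time with high probability.
    jitter_margin is the constant c accounting for RTT jitter."""
    return server_time + rtt + jitter_margin

def adjust_client_time(client_time, target, gain=0.01):
    """Nudge the locally advanced clock toward the target on each server
    packet: CT = (1 - gain)*CT + gain*targCT."""
    return (1.0 - gain) * client_time + gain * target

def render_time(client_time, rtt, interp_delay):
    """RT = CT - RTT - IP: render slightly in the past when client-side
    prediction is too expensive to run."""
    return client_time - rtt - interp_delay
```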