
## Network input handling


### #21 hplus0603 (Moderators)

Posted 05 December 2012 - 12:51 AM

I think the server and client should run physics at the same number of steps per second. This means that you may need to step physics more than once between each rendered frame to "catch up" if rendering is slow. I don't think using a "tick rate" that is different from physics rate is all that useful on the server; in fact, I think it would just get in the way.
Instead, I would use a high-precision timer (QueryPerformanceCounter on Windows, clock_gettime(CLOCK_REALTIME) or similar on Linux) and calculate how much I need to wait until the next physics tick, and then pass that as the timeout delay to the call to select() (assuming you use select().) If you get data before it's time to run physics, select() unblocks, you decode the data, and go back to select(). Once it's time (which you detect by the time-to-sleep calculation returning 0 or a negative number,) you step physics, then stay in the same loop. Thus, there is no fixed clock rate at the server, other than the rate at which it steps physics.
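As a rough illustration of that loop (in Python rather than C for brevity; `step_physics` and `handle_packet` are hypothetical callbacks, and 60 Hz is an assumed tick rate):

```python
import select
import time

TICK_RATE = 60
TICK_DT = 1.0 / TICK_RATE  # fixed physics step, same on client and server

def timeout_until(next_tick, now):
    """Seconds select() may block before the next physics step is due."""
    return max(0.0, next_tick - now)

def run_server(sock, step_physics, handle_packet, total_ticks):
    # Block in select() until either data arrives or the next physics
    # step is due; then step physics as many times as needed to catch up.
    next_tick = time.monotonic()
    ticks_done = 0
    while ticks_done < total_ticks:
        wait = timeout_until(next_tick, time.monotonic())
        readable, _, _ = select.select([sock], [], [], wait)
        for s in readable:
            handle_packet(s.recv(1500))      # decode data, then loop back
        while time.monotonic() >= next_tick:  # "catch-up" mode
            step_physics(TICK_DT)
            next_tick += TICK_DT
            ticks_done += 1
```

Note there is no sleep-compensation anywhere: a late wake-up just means the inner `while` runs more than once, exactly as described above.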

If you use another API, you may have to do other things to arrange for waking up each time it's time for physics, but no matter what it is, using any deadline other than "the time I know I need to run the next physics tick" is unlikely to be optimal. Also, don't try to "anticipate" any latency in wake-up -- just do the math as if the thread will wake up perfectly, and when it wakes up a little bit late, you're effectively running in "catch-up" mode with less than a full frame to catch up, so everything's still fine.

Also, I would not "consume" input, but instead "set" input. Each packet, send the state of all input to the server. The server sets "the input of the client is X until I hear differently" in its receiving. If you drop a packet, chances are, that input was the same as the previous input, and the server and client don't de-sync because of it. Also, you can stuff multiple frames into a single packet. Say you run at 60 Hz, and send packets at 20 Hz. Then you include the input for the last three frames in each packet you send.
enum Bool { True, False, FileNotFound };
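The stuff-multiple-frames-into-one-packet scheme above can be sketched as follows; the wire format (a 4-byte tick plus one bitmask byte per frame) is an assumption for illustration, not from the post:

```python
import struct

FRAMES_PER_PACKET = 3  # 60 Hz simulation, 20 Hz send rate

def pack_inputs(first_tick, masks):
    """Client: pack the sim tick of the first frame plus one input
    bitmask per frame into a little-endian wire format."""
    assert len(masks) == FRAMES_PER_PACKET
    return struct.pack("<I3B", first_tick, *masks)

def unpack_inputs(data):
    """Server: recover the first tick and the per-frame bitmasks."""
    first_tick, *masks = struct.unpack("<I3B", data)
    return first_tick, masks

def apply_inputs(known_inputs, first_tick, masks):
    """Server: record one mask per tick. If a packet is dropped, the
    server keeps using the last state it heard, which usually matches."""
    for i, mask in enumerate(masks):
        known_inputs[first_tick + i] = mask
```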

### #22 Telanor (Members)

Posted 05 December 2012 - 08:49 PM

Well I think we are doing all those things. The physics has its own internal stepping. I call update and tell it how much time has passed and then it will accumulate the time and run as many steps as needed. Also, the input isn't being consumed. Like I said, the server combines the states. So if it has W down, D down as its current state and it receives a Space down message, then the state becomes W down, D down, Space down.

### #23 Inferiarum (Members)

Posted 06 December 2012 - 03:37 AM

> Well I think we are doing all those things. The physics has its own internal stepping. I call update and tell it how much time has passed and then it will accumulate the time and run as many steps as needed. Also, the input isn't being consumed. Like I said, the server combines the states. So if it has W down, D down as its current state and it receives a Space down message, then the state becomes W down, D down, Space down.

So you kind of delta encode the packets? I think this is not really practical for input packets, since you have to deal with lost packets, and it may actually need more data than sending the whole state at each game-logic tick. As hplus said, most of the time a small integer is enough to send all key-state information as a bitmask.

### #24 Telanor (Members)

Posted 06 December 2012 - 03:41 AM

Well the thing is, I'm not actually sending keys, I'm sending actions. And since mods can add new actions, I can't use a bitmask. Also, we're using lidgren to handle the low level networking details. There's no worry of lost packets.

### #25 Inferiarum (Members)

Posted 06 December 2012 - 03:54 AM

You can still send a bitmask; you just have to have the same information on the server and the client about which bit maps to which action. For example, you could write all actions in a text file and assign IDs successively while reading it. Of course, on the client you need additional information about which key maps to which action, but you should have that already.

That is, the client produces the bitmask out of information from the input devices and the mapping from actions to IDs, sends it to the server and the server gets the bitmask from the network interface.

The part of the program that presents the input to the game logic (e.g. a class which stores the current bitmask internally and also knows the mapping from actions to bit IDs) is the same for client and server.

Edit: also, in my opinion, resending lost input packets in this scenario does not make any sense, because by the time the lost packet arrives at the server it is probably out of date anyway.
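The shared action-to-bit mapping described above might look like this sketch; the action names are hypothetical stand-ins for whatever the game (or its mods) registers, and in practice the list would come from a text file shared by client and server:

```python
# Hypothetical action names; reading them from a shared file in a fixed
# order guarantees client and server assign the same bit IDs.
ACTIONS = ["move_forward", "move_back", "strafe_left", "strafe_right", "jump"]
ACTION_ID = {name: i for i, name in enumerate(ACTIONS)}  # action -> bit index

def encode_actions(active):
    """Client: fold the set of active action names into one bitmask."""
    mask = 0
    for name in active:
        mask |= 1 << ACTION_ID[name]
    return mask

def decode_actions(mask):
    """Server: recover the set of active action names from the bitmask."""
    return {name for name, bit in ACTION_ID.items() if mask & (1 << bit)}
```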

Edited by Inferiarum, 06 December 2012 - 04:01 AM.

### #26 Telanor (Members)

Posted 06 December 2012 - 03:57 AM

I suppose that could work if we set a maximum supported number of actions. I'm still not sure what's causing the issue with the server simulating the input state for a shorter period of time than the client, though.

### #27 Inferiarum (Members)

Posted 06 December 2012 - 04:15 AM

You could also make the bitmask more flexible to allow for an arbitrary number of actions; you'd have to decide whether that is worth the effort.

If you time stamp the input states with the simulation tick they should be used for, there is a one to one mapping between input state and simulation tick, and, as long as the input packets arrive at the server in time, the same input should be used for the same simulation tick. So if this is what you are doing then I am not sure where the issue is.
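One way to realize that one-to-one mapping is a server-side buffer keyed by simulation tick, falling back to the most recent known input when a packet is late or lost. A minimal sketch (not taken from the thread):

```python
class InputBuffer:
    """Server-side buffer mapping simulation tick -> input bitmask.
    If the input for a tick never arrived, reuse the last known input,
    which matches the 'set input until I hear differently' idea above."""

    def __init__(self):
        self.by_tick = {}
        self.last_known = 0  # assume "no input" before the first packet

    def store(self, tick, mask):
        self.by_tick[tick] = mask

    def for_tick(self, tick):
        if tick in self.by_tick:
            self.last_known = self.by_tick.pop(tick)
        return self.last_known
```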

### #28 Telanor (Members)

Posted 06 December 2012 - 04:23 AM

The states aren't being tagged with the simulation tick. I'm not sure I understand how that's supposed to work either. If I assume both the server and client are perfectly in sync and the client sends out a "move forward" command for tick 150, by the time the server gets it, it will already be past that point. If the client is meant to be behind the server, as some articles have suggested, then it still doesn't make sense, because the server will have already simulated the tick before the client has even issued the command.

Edited by Telanor, 06 December 2012 - 04:23 AM.

### #29 Inferiarum (Members)

Posted 06 December 2012 - 05:02 AM

If you want to do client side prediction, the client actually runs ahead of the server (for something slightly more than half the round trip time) because like you mentioned you want the input packets to reach the server in time.

If you get a game state update from the server, you simulate up to the current client time with local inputs and if you are the only player both client and server come to the same result for the game state at the same tick (assuming client does a full simulation).
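That "simulate up to the current client time" step is the usual client-side prediction reconciliation. A rough sketch, with `step` standing in for one tick of the real simulation and a plain number standing in for the game state:

```python
def reconcile(server_state, server_tick, pending_inputs, step):
    """On a server update: start from the authoritative state and
    re-simulate all local inputs the server has not applied yet.

    pending_inputs: list of (tick, input) pairs kept by the client.
    step(state, input) -> state: one tick of the simulation."""
    state = server_state
    for tick, inp in pending_inputs:
        if tick > server_tick:  # only inputs newer than the server's state
            state = step(state, inp)
    return state
```

With a single player and a deterministic `step`, this reproduces exactly the state the server will reach once those inputs arrive, which is the point made above.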

Of course, if there is a lot of interaction with other players (like in a multiplayer FPS) you get a problem, because you have no information about the inputs of other players. That is why in this type of game the predicted state is only used to determine the camera position, and all other players are rendered based on information from the past.

E.g., if we have client time CT and round-trip time RTT, then the camera is placed using the predicted state at time CT, but all other players are placed using the state at time
CT - RTT - IP, where IP is the interpolation time (https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking).

You could also try to predict the input of the other players and draw everything at time CT (http://www.gamasutra.com/view/news/177508/The_lagfighting_techniques_behind_GGPOs_netcode.php#.UMB7HdPjlh4).
This is possible if actions have something like a charge-up.

### #30 Telanor (Members)

Posted 06 December 2012 - 06:05 AM

How do you place the client ahead of the server? Do you just set the clients time = server time + round trip time when the client joins? What happens if their round trip time fluctuates? Or if they happen to connect in the middle of a lag spike, giving them a 400ms ping which later settles down to 50ms?

Also, that Valve article doesn't seem to be feasible in my situation. They're relying on being able to rewind the state of all the players to do calculations: "This doesn't mean you have to lead your aiming when shooting at other players since the server-side lag compensation knows about client entity interpolation and corrects this error." What do you do when it proves to be too expensive to rewind and resimulate the entire state, and when you have to deal with more than just players in the simulation?

Edit: I'd like to add that while I haven't tested whether we can actually afford to rewind and resimulate the entire state, the author of the physics engine advises against it in this thread.

Edited by Telanor, 06 December 2012 - 06:26 AM.

### #31 Inferiarum (Members)

Posted 06 December 2012 - 07:29 AM

Regarding the timing problem, here is how I do it:

Every time you get an update from the server you can calculate the server time ST corresponding to the update. The target client time targCT would be
targCT = ST + RTT + c
where c is a constant that accounts for jitter in the RTT (such that input packets arrive in time with high probability) and RTT is the current estimate for the round trip time.
The actual client time CT is updated with the internal clock between each server packet and then adjusted according to something like
CT = 0.99*CT + 0.01*targCT

Note that these calculations are done in (more or less) continuous time.

Regarding client-side prediction: I guess if you have an expensive physics simulation, it might be infeasible to calculate the prediction steps. In this case, as mentioned by hplus, you could just render everything at a render time
RT = CT - RTT - IP
and try to mask the control delay somehow.
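The two formulas above translate almost directly into code. In this sketch the 0.05 s jitter margin (the constant c) and the 0.01 gain are assumed values, not taken from the post:

```python
def target_client_time(server_time, rtt, jitter_margin=0.05):
    """targCT = ST + RTT + c, where c absorbs jitter in the RTT
    estimate so input packets arrive in time with high probability."""
    return server_time + rtt + jitter_margin

def adjust_client_time(ct, targ_ct, gain=0.01):
    """Nudge the locally advanced clock toward the target each update:
    CT = (1 - gain)*CT + gain*targCT. The small gain smooths out RTT
    fluctuations instead of snapping the clock around."""
    return (1.0 - gain) * ct + gain * targ_ct
```

Because the gain is small, a lag spike at connect time (say a 400 ms RTT estimate that later settles to 50 ms) is corrected gradually rather than as one visible jump.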

Edited by Inferiarum, 06 December 2012 - 07:42 AM.

### #32 riuthamus (Moderators)

Posted 06 December 2012 - 09:09 AM

The main issue is our interaction with the world. The player has the ability to modify the world's contents via blocks, while the server must maintain the players' locations and interactions as well. Creating a responsive combat system (which we hope to do) and managing all of that seems to be a daunting task. Maybe if you knew what we were doing you would better know how to help. By the way, thank you for all of this help; it is simply amazing to have this much guidance and knowledge to assist.

Game goal:
Our main drive is to have players fighting it out over land. This land is the same land that is nearly 100% dynamic and can be modified. So instead of player vs. player, you have to calculate and manage player vs. player with block-entity interaction as well. Depending on how we render things, this could be several thousand functions firing at one time if a player used an explosive that blew up X blocks and hit all players in the area of the blast. I'm not saying that whatever was already discussed won't manage that; I'm just giving you a bit of scope. We want war and land control to be the central focus.

### #33 ApochPiQ (Moderators)

Posted 06 December 2012 - 11:02 AM

Here's how I'd approach this:

• All clients report at a fixed rate, say 20-30Hz
• Client and server contain the exact same prediction/extrapolation logic
• As clients report to the server, the server corrects its local simulation (moving forwards only!) to account for the inputs
• There is no requirement for timing lockstep; the server waits for no one
• Once the server has received some inputs for a tick, it relays the results of its simulation to the appropriate clients
• This relay happens at the end of the server tick regardless of who has reported in
• The server tracks the delay between when it expected inputs to be reported and when it actually sees them
• This is used to inform extrapolation on both the server and other clients
• Since everyone does the same extrapolation logic, all clients will appear to be in sync but actually lag behind the server due to relay time

When major state changes are relayed from the server to clients, you compute the last known transmission delay (based on the tracked latency) and tell clients to fast-forward their simulation to match. The result is that you might "miss" the first few rendered frames of the world state changing, but the result will be accurate and mostly correctly timed.

The solution to this is to delay local actions by the transmission delay factor, and hide the delay using animations. For example, suppose you have a rocket launcher that can radically alter terrain/buildings/etc. When player A fires a rocket, he sees an instant animation/sound effect/etc. of the launcher charging up to fire. At the same instant, you tell the server to fire the rocket. When the server responds that it has done so, you actually do the rocket/explosion calculations on the client.

This keeps everyone in sync, keeps the game feeling fast, and accurately hides the latency issues involved in distributed simulation.
Wielder of the Sacred Wands
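The rocket-launcher example above can be sketched as a tiny client-side state machine. The message string and callback names here are made up for illustration, not a real networking API:

```python
from enum import Enum, auto

class RocketState(Enum):
    IDLE = auto()
    CHARGING = auto()  # the charge-up animation masks the round trip
    FIRED = auto()

class RocketLauncher:
    """Client-side sketch of hiding latency behind a charge-up
    animation: react locally at once, but defer the real effect
    until the server confirms."""

    def __init__(self, send_fn):
        self.state = RocketState.IDLE
        self.send = send_fn  # stand-in for the real network layer

    def trigger(self):
        # Player pressed fire: start the local animation immediately
        # and, at the same instant, ask the server to fire.
        self.state = RocketState.CHARGING
        self.send("fire_rocket")

    def on_server_confirm(self):
        # Server says the rocket fired: now run the local
        # rocket/explosion calculations.
        self.state = RocketState.FIRED
```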

### #34 riuthamus (Moderators)

Posted 06 December 2012 - 01:25 PM

> Here's how I'd approach this:
>
> • All clients report at a fixed rate, say 20-30Hz
> • Client and server contain the exact same prediction/extrapolation logic
> • As clients report to the server, the server corrects its local simulation (moving forwards only!) to account for the inputs
> • There is no requirement for timing lockstep; the server waits for no one
> • Once the server has received some inputs for a tick, it relays the results of its simulation to the appropriate clients
> • This relay happens at the end of the server tick regardless of who has reported in
> • The server tracks the delay between when it expected inputs to be reported and when it actually sees them
> • This is used to inform extrapolation on both the server and other clients
> • Since everyone does the same extrapolation logic, all clients will appear to be in sync but actually lag behind the server due to relay time
>
> When major state changes are relayed from the server to clients, you compute the last known transmission delay (based on the tracked latency) and tell clients to fast-forward their simulation to match. The result is that you might "miss" the first few rendered frames of the world state changing, but the result will be accurate and mostly correctly timed.
>
> The solution to this is to delay local actions by the transmission delay factor, and hide the delay using animations. For example, suppose you have a rocket launcher that can radically alter terrain/buildings/etc. When player A fires a rocket, he sees an instant animation/sound effect/etc. of the launcher charging up to fire. At the same instant, you tell the server to fire the rocket. When the server responds that it has done so, you actually do the rocket/explosion calculations on the client.
>
> This keeps everyone in sync, keeps the game feeling fast, and accurately hides the latency issues involved in distributed simulation.

Not bad, I suppose the only fear now is what people will come up with for hacks. I suppose that is a problem to address when the system is in place and being tested!

### #35 ApochPiQ (Moderators)

Posted 06 December 2012 - 01:38 PM

If you have the server validate everything a client asks to do, it's pretty foolproof.
Wielder of the Sacred Wands

### #36 riuthamus (Moderators)

Posted 06 December 2012 - 01:49 PM

Not that I am going to do this, but say I wanted to hire somebody to just look over our stuff: do you have any estimate of what I should expect to pay to have somebody look at it? Like a consultant? I mean, we may just figure it all out from talking like this, but I like to have backup plans just in case.

### #37 ApochPiQ (Moderators)

Posted 06 December 2012 - 02:48 PM

Consulting for this scale of a project would be expensive. Look for something on the order of $150-$200 an hour and a several-week process.
Wielder of the Sacred Wands

### #38 riuthamus (Moderators)

Posted 06 December 2012 - 03:38 PM

*nods* indeed, thanks for the heads up!

### #39 Telanor (Members)

Posted 06 December 2012 - 11:48 PM

> As clients report to the server, the server corrects its local simulation (moving forwards only!) to account for the inputs

Corrects it how? Can you explain in more detail what happens here?

> This is used to inform extrapolation on both the server and other clients

Can you expand on this point too? I don't understand what you mean by "inform extrapolation".

> The solution to this is to delay local actions by the transmission delay factor, and hide the delay using animations.

We can do that for some things, but some might need to be instant actions. What do you do about those, then? Even World of Warcraft has some instant-cast spells, so how do they handle that?

### #40 Telanor (Members)

Posted 07 December 2012 - 12:26 AM

> Regarding the timing problem, here is how I do it:
>
> Every time you get an update from the server you can calculate the server time ST corresponding to the update. The target client time targCT would be
> targCT = ST + RTT + c
> where c is a constant that accounts for jitter in the RTT (such that input packets arrive in time with high probability) and RTT is the current estimate for the round trip time.
> The actual client time CT is updated with the internal clock between each server packet and then adjusted according to something like
> CT = 0.99*CT + 0.01*targCT
>
> Note that these calculations are done in (more or less) continuous time.
>
> Regarding client-side prediction: I guess if you have an expensive physics simulation, it might be infeasible to calculate the prediction steps. In this case, as mentioned by hplus, you could just render everything at a render time
> RT = CT - RTT - IP
> and try to mask the control delay somehow.

I keep trying to figure out how this would play out and I can't see how it could work:

Client A has RTT of 200ms
Client B has RTT of 200ms
Interpolation time of 50ms

Server Tick: 200, Client render: 150, Client Tick 400: Client B moves forward
Server Tick: 300, Client render: 250, Client Tick 500: Client B moves forward again. Server receives move command for tick 400
Server Tick: 400, Client render: 350, Client Tick 600: Server applies move and sends out world state. Server receives second move command for tick 500
Server Tick: 450, Client render: 400, Client Tick 650: At this point client A should see client B move, but the world state still has another 50ms before it reaches client A
Server Tick: 500, Client render: 450, Client Tick 700: Client A receives world state for tick 400. Now what?

Valve defaults to an interpolation time of 100ms. In this situation, if the interp time were set to that, the client would have just barely received it in time. If the packet took a little longer than 100ms, it would still have been too late.
