Client agent ahead of server

5 comments, last by Angus Hollands 11 years, 7 months ago
Hey everyone.
I am working on a multiplayer FPS, and I need to handle input packets that arrive delayed by latency; otherwise, client-side prediction doesn't match the server simulation. My question is, how should one do this? My thought is simply to determine the upstream latency and then shift the stored input results forward by that time. In other words, store the client simulation slightly in the future, so that it matches the time at which the server receives the inputs.
However, I'm not sure if this is the best method. Are there any better solutions?
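As a rough sketch of the offset idea above, the client could stamp each input with the server tick it should be simulated on, shifted by an estimated upstream latency, and nudge that estimate from server feedback. This is a minimal, hypothetical sketch (the names and the adjustment rule are illustrative, not an established API):

```cpp
#include <cassert>

// Hypothetical sketch: stamp inputs for a future server tick, offset by the
// estimated one-way client->server latency in ticks, and adjust the estimate
// based on the server reporting how early/late each packet arrived.
struct TickEstimator
{
    int estimatedUpstreamTicks; // one-way client->server latency, in ticks

    // The server tick this input should be simulated on.
    int serverTickForInput(int currentClientTick) const
    {
        return currentClientTick + estimatedUpstreamTicks;
    }

    // Server told us our packet for `intendedTick` arrived at `arrivalTick`;
    // nudge the estimate so future packets arrive just in time.
    void adjust(int intendedTick, int arrivalTick)
    {
        if (arrivalTick > intendedTick)          // arrived late: send earlier
            estimatedUpstreamTicks += (arrivalTick - intendedTick);
        else if (arrivalTick < intendedTick - 1) // comfortably early: relax
            estimatedUpstreamTicks -= 1;
    }
};
```

The client then runs its own prediction at the current tick as usual, but records and transmits each input under the future server tick.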
Typically, you run the client events immediately, but keep a log of what the outcome was, so that when you receive data from the server, you can compare. If there's a discrepancy, snap the client to the old state, and tell the server you did a snap to state X so it knows to re-apply that as well.

For remote entities, you need a little bit of buffering as well, to de-jitter incoming packets. Then you display the entities based on data after the de-jitter delay. This may be displayed as-is, or with forward extrapolation.
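A minimal sketch of such a de-jitter buffer for a remote entity might look like this (names are illustrative and the position is 1D for brevity; the render delay and fallback behavior are assumptions, not from the post):

```cpp
#include <cassert>
#include <iterator>
#include <map>

// Buffer remote-entity snapshots keyed by server tick, then render a couple
// of ticks behind the newest snapshot, interpolating between the two
// surrounding snapshots to hide network jitter.
struct SnapshotBuffer
{
    std::map<int, float> positions; // tick -> position (1D for brevity)
    int renderDelayTicks = 2;       // de-jitter margin behind newest data

    void store(int tick, float position) { positions[tick] = position; }

    // Interpolated position at the (possibly fractional) render tick.
    // Holds the nearest snapshot at the edges instead of extrapolating.
    float sample(float renderTick) const
    {
        if (positions.empty())
            return 0.0f;
        // first snapshot strictly after the render tick
        auto hi = positions.lower_bound(static_cast<int>(renderTick) + 1);
        if (hi == positions.end())
            return std::prev(hi)->second; // past newest: hold last value
        if (hi == positions.begin())
            return hi->second;            // before oldest: hold first value
        auto lo = std::prev(hi);
        float t = (renderTick - lo->first) / float(hi->first - lo->first);
        return lo->second + t * (hi->second - lo->second);
    }
};
```

The render tick would typically be `latestReceivedTick - renderDelayTicks`, advanced fractionally by real time; forward extrapolation would replace the hold-last-value fallback.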
enum Bool { True, False, FileNotFound };
That is what I'm doing. However, if the upstream latency is significant, input packets intended for tick X, and stored as the client's simulation for tick X, are actually received and processed by the server at tick X + upstream tick delay. Thus, according to the server state, the client-side prediction is wrong by upstream_delay * movement velocity.
How does one overcome this?
Each packet needs to be timestamped with the (global) tick it is intended for.

The client needs to adapt to how late packets typically arrive from the server. Packets that don't arrive in time are assumed to be lost. If a packet arrives after it's considered lost, bump up the estimation of the server latency by some amount (say, the amount packet is late + 10 ms) but cap the amount of bumping allowed per packet to something reasonable like 50 ms.

On the client, things will be displayed (and client prediction resolved) at time (estimated global tick time + estimated transmission latency). Meanwhile, client commands will be issued at tick (estimated global tick + estimated transmission latency in ticks). I.e., the server needs to tell the client how late (or early) packets arrive, to update the estimate, and the client needs to measure how late (or early) server packets arrive. There are then two functions on the client: turn a client-based timestamp into an estimate of the global tick, and turn a server-based global tick into an estimated client timestamp.
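The two mappings and the clamped latency bump described above could be sketched like this (a rough sketch in milliseconds; the struct and field names are illustrative, not from the post):

```cpp
#include <algorithm>
#include <cassert>

// Sketch of client-side time sync: an estimated clock offset to the server
// plus an estimated one-way transmission latency, with the "late packet"
// rule of bumping the latency by (lateness + 10 ms), capped at 50 ms.
struct ClockSync
{
    long long clockOffsetMs; // estimated (server clock - client clock)
    long long latencyMs;     // estimated one-way transmission latency

    // client timestamp -> estimated global (server) time
    long long clientToGlobal(long long clientMs) const
    {
        return clientMs + clockOffsetMs;
    }

    // server timestamp -> estimated client time at which that data is usable
    long long globalToClient(long long serverMs) const
    {
        return serverMs - clockOffsetMs + latencyMs;
    }

    // A packet expected by `deadlineMs` (client clock) arrived at `arrivedMs`
    // after being considered lost: bump the latency estimate, capped per packet.
    void onLatePacket(long long deadlineMs, long long arrivedMs)
    {
        long long bump = (arrivedMs - deadlineMs) + 10; // lateness + 10 ms
        latencyMs += std::min(bump, 50LL);              // cap at 50 ms
    }
};
```

A real implementation would also slowly shrink the latency estimate when packets consistently arrive early, so the delay doesn't only ratchet upward.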
A very rough example of what client-side prediction could look like (code not tested; it just shows the process of how client prediction / server correction would work).


// player actions.
struct PlayerInputs
{
    float dt; // frame timestep
    Vector mouselook;
    Vector keymove;
    unsigned int actions;
};

struct PlayerState
{
    // current update sequence number.
    int sqn;
    Vector position;
    Vector velocity;
    Quaternion orientation;
    Vector angularVelocity;

    // update player state with a set of inputs.
    void update(PlayerInputs inputs)
    {
        sqn++;
        // do stuff with inputs. Move player position, jump, etc...
        //.....
    }
};

// a bundle of player inputs, and the subsequent player state calculated from those inputs.
struct PredictionPacket
{
    PlayerInputs inputs;
    PlayerState state;
};
struct ClientPrediction
{
    // list of local predictions (newest at the front, oldest at the back).
    std::list<PredictionPacket> packets;

    // record a new update.
    void recordUpdate(PlayerInputs inputs, PlayerState state)
    {
        packets.push_front(PredictionPacket{inputs, state});
    }

    // server acknowledged our prediction.
    void applyAcknowledgement(int ack)
    {
        // discard older packets that have been acknowledged.
        while(!packets.empty() && (packets.back().state.sqn <= ack))
            packets.pop_back();
    }

    // server sent us a correction.
    PlayerState applyCorrection(PlayerState correction)
    {
        // discard older packets.
        while(!packets.empty() && (packets.back().state.sqn < correction.sqn))
            packets.pop_back();

        // apply the correction for that sequence number.
        if(!packets.empty() && (packets.back().state.sqn == correction.sqn))
        {
            // that's the state we should be in at that sequence number.
            packets.back().state = correction;

            // re-simulate the later packets from the corrected state,
            // skipping the corrected packet itself: its inputs are already
            // reflected in the correction.
            std::list<PredictionPacket>::reverse_iterator it = packets.rbegin();
            for(++it; it != packets.rend(); ++it)
            {
                correction.update(it->inputs);
                it->state = correction;
            }
        }
        // that's the latest state we should be in.
        return correction;
    }
};
void UpdateClient(PlayerInputs inputs)
{
    // player update.
    m_currentState.update(inputs);
    m_prediction.recordUpdate(inputs, m_currentState);

    // send prediction packets not yet acknowledged by the server.
    sendPrediction();

    // process server acknowledgements, if we received any.
    int ack;
    if(receivedAcknowledgement(ack))
        m_prediction.applyAcknowledgement(ack);

    // process server corrections, if we received any.
    PlayerState server_correction;
    if(receiveCorrectionPacket(server_correction))
        m_currentState = m_prediction.applyCorrection(server_correction);
}
struct ServerCorrection
{
    // list of server-validated updates (newest at the front).
    std::list<PredictionPacket> packets;

    // what we need to do after receiving a new update.
    bool has_ack;
    bool has_correction;
    PlayerState server_correction;

    // apply new client packet.
    void applyPrediction(PredictionPacket client_prediction)
    {
        // discard packets older than the one preceding this update;
        // only the most recent state is needed to simulate the new inputs.
        while(!packets.empty() && (packets.back().state.sqn + 1 < client_prediction.state.sqn))
            packets.pop_back();

        // empty queue.
        if(packets.empty())
        {
            // not the first packet.
            if(client_prediction.state.sqn != 0)
                return;

            // first packet in the list. always accept.
            packets.push_front(client_prediction);

            // reset flags.
            has_ack = true;
            has_correction = false;
        }
        // part of an unbroken queue.
        else
        {
            // not the packet we expected. Block until the client sends us that packet.
            if(client_prediction.state.sqn != (packets.front().state.sqn + 1))
                return;

            // calculate the state on the server.
            PlayerState server_prediction = packets.front().state;
            server_prediction.update(client_prediction.inputs);

            // no corrections currently pending.
            // see if we can ack or re-correct.
            if(!has_correction)
            {
                // new correction needs to be sent.
                // (assumes PlayerState has an operator!=, e.g. comparing
                // position/velocity within some tolerance.)
                if(server_prediction != client_prediction.state)
                {
                    has_correction = true;
                    server_correction = server_prediction;
                }
                // we can acknowledge the client prediction.
                else
                {
                    has_ack = true;
                }
            }

            // add the server-validated state at the front of the queue.
            packets.push_front(PredictionPacket{client_prediction.inputs, server_prediction});
        }
    }

    // get correction, if any are pending.
    bool getCorrection(PlayerState& correction)
    {
        // no corrections pending.
        if(!has_correction)
            return false;

        // send latest correction.
        correction = server_correction;
        has_correction = false;
        return true;
    }

    // get ack, if any are pending.
    bool getAcknowledgement(int& ack)
    {
        // no ack pending.
        if(!has_ack)
            return false;

        // send latest ack.
        ack = packets.front().state.sqn;
        has_ack = false;
        return true;
    }
};

void UpdateServer()
{
    // process client packets.
    PredictionPacket client_prediction;
    while(receivePredictionPacket(client_prediction))
    {
        // validate the client-side prediction.
        m_server_correction.applyPrediction(client_prediction);
    }

    // we have corrected the client prediction.
    PlayerState server_correction;
    if(m_server_correction.getCorrection(server_correction))
        sendCorrection(server_correction);

    // client prediction is good.
    int ack;
    if(m_server_correction.getAcknowledgement(ack))
        sendAcknowledgement(ack);
}

Everything is better with Metal.

So the client will be running behind the server, and the server will be running behind the client.

Then you will probably need some global time sync to smooth everything.



After accounting for the RTT and storing client predictions in the future, it seems to be good!

This topic is closed to new replies.
