Coordinating Time and Determining Latency

7 comments, last by misterPhyrePhox 17 years, 11 months ago
A bit of background: When the server sends a state update packet to the client, I need to account for the latency of that client; if the server sends a state update packet that encompasses the world state at time 0, the packet may arrive at time 5, and so the client must do a bit of calculation to advance the received state up to the current client time. Doing this requires that I synchronize time across the client and the server.

The easiest solution I thought of was to have the server ping the client at certain intervals; the client would respond, and the server would divide the round-trip time by 2 to ascertain the approximate time it takes for a packet to reach the client. The server could then send a packet to synchronize time that accounted for the latency of the client.

But then I thought of what might be a more efficient solution: the server would send a state update, and the client would respond with its current input. The state update packet would act as a ping from the server, and the client input packet would act as a ping response from the client, thus eliminating the need for a separate automatic pinging mechanism. But, on average, the state update packet will be much larger than the client input packet.

So that brings me to my question: does it take longer to transmit larger packets? For example, say the state update packet is 10 bytes, and the client input packet is 5 bytes. It takes the client 15 ms to respond to the state update packet with an input packet. Does that mean that it took 10 ms (= 2/3) to send the state update packet, and 5 ms (= 1/3) to send the client input packet? Or does that mean it took 7.5 ms (= 1/2) to send the state update packet?

Edit: I'm not sure if it was clear, but this is a real-time game and I am using UDP.
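To make the "advance the received state up to the current client time" step concrete, here is a minimal sketch, assuming the snapshot carries a position and a velocity and that the client already has an estimate of the current server time; the struct and field names are illustrative, not from any particular engine:

```cpp
// Hypothetical snapshot layout -- the fields are illustrative, not from any engine.
struct StateUpdate {
    double serverTime;  // server timestamp of the snapshot, in seconds
    float  posX, posY;  // entity position in the snapshot
    float  velX, velY;  // entity velocity in the snapshot
};

// Advance a received snapshot to the client's current estimate of server time,
// so it can be rendered "now" rather than as it was when the server sent it.
void ExtrapolateState(StateUpdate& s, double estimatedServerNow)
{
    double dt = estimatedServerNow - s.serverTime;  // how stale the snapshot is
    if (dt < 0.0) dt = 0.0;  // ignore apparent negative staleness from clock jitter
    s.posX += s.velX * static_cast<float>(dt);
    s.posY += s.velY * static_cast<float>(dt);
    s.serverTime = estimatedServerNow;
}
```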
What you could do is have the server include a timestamp in its message indicating the time when it is just about to send its packet, and have the client include a timestamp in its response based on the time at which it first received the packet (before it starts writing the input data for the response packet). This way, you get (as close as possible) the time difference between the server send and the client receive.

Of course, you'll have to do a little handshaking in the beginning, when initially connecting, to figure out what the difference is between the client's "now" time and the server's "now" time.
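A rough sketch of what that exchange might look like on the wire; the packet layouts, field names, and the GetTimeMs() helper are assumptions for illustration only, not part of any specific protocol:

```cpp
#include <chrono>
#include <cstdint>

// Assumed helper: local monotonic clock in milliseconds.
static uint32_t GetTimeMs()
{
    using namespace std::chrono;
    return static_cast<uint32_t>(
        duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count());
}

struct ServerStateHeader {
    uint32_t serverSendTimeMs;    // stamped immediately before the server sends
    // ... state data follows ...
};

struct ClientInputHeader {
    uint32_t clientReceiveTimeMs; // stamped when the state packet first arrived
    uint32_t echoedServerTimeMs;  // copy of serverSendTimeMs from that packet
    // ... input data follows ...
};

// Client side: stamp the receive time before doing anything else with the packet,
// so the measurement excludes the time spent building the response.
void OnStateReceived(const ServerStateHeader& state, ClientInputHeader& reply)
{
    reply.clientReceiveTimeMs = GetTimeMs();
    reply.echoedServerTimeMs  = state.serverSendTimeMs;
    // ... fill in the input data, then send the reply ...
}
```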
Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
Quote: Original post by void*
What you could do is have the server include a timestamp in its message indicating the time when it is just about to send its packet, and have the client include a timestamp in its response based on the time at which it first received the packet (before it starts writing the input data for the response packet). This way, you get (as close as possible) the time difference between the server send and the client receive.

Of course, you'll have to do a little handshaking in the beginning, when initially connecting, to figure out what the difference is between the client's "now" time and the server's "now" time.


The server is going to send some kind of time with its state update packet, that's a given -- it's needed, at the very least, so that the client can reject out-of-order packets. I don't think the client timestamp would be very useful for the server, though; the server doesn't really care about the client's time -- it just wants the client's time to be accurate, and in order to do that, it does the latency calculations and coordinates the time.
Maybe I don't completely understand what you're saying?
Thanks a bunch for the response, though!
You use a timestamp from the client and you also do a ping to the client. The ping will give you a round-trip time, and you can adjust the client time by taking half of the ping. You need to keep sending pings and averaging the ping times to get the most accurate adjustment. This is not a perfect solution, but it works pretty well.
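A minimal sketch of that averaging scheme, assuming round-trip times are measured in milliseconds; the class name and the window size are arbitrary choices for illustration:

```cpp
#include <cstddef>

class PingTracker {
public:
    // Feed in each measured round-trip time, in milliseconds.
    void AddSample(double rttMs)
    {
        samples_[next_ % kWindow] = rttMs;
        ++next_;
    }

    // Average round-trip time over the most recent samples.
    double AverageRttMs() const
    {
        const std::size_t n = next_ < kWindow ? next_ : kWindow;
        if (n == 0) return 0.0;
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i) sum += samples_[i];
        return sum / static_cast<double>(n);
    }

    // Estimated one-way latency: half of the averaged round trip. This is the
    // adjustment applied when mapping server timestamps onto the client clock.
    double ClockAdjustmentMs() const { return AverageRttMs() * 0.5; }

private:
    static constexpr std::size_t kWindow = 16;
    double samples_[kWindow] = {};
    std::size_t next_ = 0;
};
```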

theTroll
The client sends its timestamp to the server. The server records the client timestamp, and the server time at which that was received. The server then sends its server time, plus the client timestamp and server time from the last received packet, to the client. (Or it can send just the server time delta)

The client can then record its timestamp when it received the packet, and can now calculate the following quantities:

1. Round-trip time including server processing (new client timestamp minus last client timestamp).
2. Server processing time (server send time minus server receive time).

From this, the round-trip time without processing time can be calculated. Divide by two to estimate the one-way transmission latency.

You can improve this by having the client send the server time it predicts for when its packet will arrive; the server compares that prediction to its actual server time and sends the delta back to the client for adjustment.
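Here is a sketch of that calculation as it might look on the client when the server's reply arrives; all names are illustrative, and times are assumed to be in milliseconds on each machine's local clock:

```cpp
// Times are in milliseconds; each machine reads its own local clock.
struct SyncReply {
    double echoedClientTimeMs;   // the client timestamp the server is echoing back
    double serverReceiveTimeMs;  // server clock when that client packet arrived
    double serverSendTimeMs;     // server clock when this reply was sent
};

struct SyncResult {
    double oneWayLatencyMs;      // estimated one-way transmission latency
    double serverClockOffsetMs;  // add this to the client clock to estimate server time
};

SyncResult OnSyncReply(const SyncReply& r, double clientNowMs)
{
    // 1. Round trip including server processing: reply arrival minus the original
    //    send time, both measured on the client clock.
    const double rttWithProcessing = clientNowMs - r.echoedClientTimeMs;

    // 2. Server processing time: send minus receive, both on the server clock.
    const double serverProcessing = r.serverSendTimeMs - r.serverReceiveTimeMs;

    // Remove the processing time, then halve to estimate one-way latency.
    double oneWay = (rttWithProcessing - serverProcessing) * 0.5;
    if (oneWay < 0.0) oneWay = 0.0;  // guard against jitter producing a negative value

    // The reply left the server at serverSendTimeMs and took roughly oneWay to
    // arrive, so the server clock "now" is about serverSendTimeMs + oneWay.
    const double estimatedServerNow = r.serverSendTimeMs + oneWay;

    return SyncResult{ oneWay, estimatedServerNow - clientNowMs };
}
```

Averaging this result over several exchanges, as suggested earlier in the thread, smooths out the jitter in individual samples.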
enum Bool { True, False, FileNotFound };
Quote: Original post by hplus0603
The client sends its timestamp to the server. The server records the client timestamp, and the server time at which that was received. The server then sends its server time, plus the client timestamp and server time from the last received packet, to the client. (Or it can send just the server time delta)

The client can then record its timestamp when it received the packet, and can now calculate the following quantities:

1. Round-trip time including server processing (new client timestamp minus last client timestamp).
2. Server processing time (server send time minus server receive time).

From this, the round-trip time without processing time can be calculated. Divide by two to estimate the one-way transmission latency.

You can improve this by having the client send the server time it predicts for when its packet will arrive; the server compares that prediction to its actual server time and sends the delta back to the client for adjustment.


I'm sorry, but I don't completely understand that; it seems a bit confusing!
Is there anything wrong with my original idea?
Could someone please answer my original question: does it take longer to transmit larger packets?
http://www.codewhore.com/howto1.html
You can split the time in your example into several parts:

1) Transmission time -- this time is per-byte, and has some constant Tt in your system.

2) Routing time -- this is per-packet, and has some other constant Tr in your system.

3) Propagation time -- this is the "speed of light" between you and your destination. This is a constant per transaction (and thus, typically per packet), call it Tp.

4) Server processing time -- this is per-packet, and probably varies wildly depending on what the packet contains. Call it Ts.

In addition, there may be queuing time on the server if the server uses a time-stepped response model, for example -- I lump that into "server processing time".

The round trip of your packet, where the query is size Sq and the response is size Sr, would be something like:

Tt * Sq + Tt * Sr + 2 * Tr + 2 * Tp + Ts

On modems, Tt is fairly big, and you really need to worry about it (when it comes to latency). On broadband, Tt becomes a lot smaller, and can be approximated or ignored to a much greater extent.

Also, you should know that a UDP packet with IP headers has 28 bytes of packet overhead, if you want to measure things at this level.
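As a worked example, here is that formula with the 10-byte and 5-byte packets from the original post plugged in; the constants below are made up for the sake of illustration, not measurements:

```cpp
#include <cstdio>

int main()
{
    // Illustrative constants only -- not measurements.
    const double Tt = 0.18;   // transmission time per byte (ms), very roughly a 56k modem
    const double Tr = 1.0;    // routing time per packet (ms)
    const double Tp = 30.0;   // one-way propagation time (ms)
    const double Ts = 5.0;    // server processing time per packet (ms)

    // Packet sizes from the original post, plus 28 bytes of UDP/IP overhead each.
    const double Sq = 10.0 + 28.0;  // "query": the state update from the server
    const double Sr = 5.0  + 28.0;  // "response": the client input packet

    const double roundTrip = Tt * Sq + Tt * Sr + 2.0 * Tr + 2.0 * Tp + Ts;
    std::printf("estimated round trip: %.1f ms\n", roundTrip);
    return 0;
}
```

With broadband-like values of Tt, the two Tt terms shrink to well under a millisecond, which is why the size difference between the two packets barely affects the latency estimate there.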
enum Bool { True, False, FileNotFound };
Thank you very very much, hplus0603! That's a great explanation and formula. Many thanks, also, to the AP, whoever he is; a great article!
I'm actually using enet, which adds an additional 12 bytes to the packet, making the overhead 40 bytes per packet, so I'll take that into account as well.

This topic is closed to new replies.
