Negative ping? Or how do I sync time with client and server?

Hi,
I'm setting a timestamp (System.currentTimeMillis()) in a packet that the server sends. To get the ping, I do this when the packet arrives:

    long ping = System.currentTimeMillis() - packet.timestamp;
I'm testing now on a few mobile devices over an HSPA (or HSDPA) network, and I get a negative ping! What the heck? It's around -300 ms. On iPhone/desktop it seems to be fine. Is there any other way I could get the ping? I suspect the phone's time is not really correct. What do you guys suggest?

Game I'm making - GANGFORT, Google Play, iTunes


I suspect phone's time is not really correct.


Bingo. Clocks aren't perfect, analog or digital. Think of a mechanical watch, always slipping a few minutes ahead or behind over time. Digital clocks suffer the same problem, but on a much smaller scale (see: how can a digital clock drift in accuracy?). These days, your system periodically syncs its time with a nearby time server. This helps, but you still don't have the exact time, unless you can guarantee that all time servers have the exact same time.

What do you guys suggest?


Just ask the server what time it is, and adjust accordingly:

1. Client requests server time.
2. Client adjusts the value returned by the server, according to the time it took for that request to return.
3. Use this value as a baseline for all timestamps returned by the server, or repeat the exchange and average out the error.

The best way to figure out the actual time is to send a packet, and then check how long it takes you to get the response to that packet. That way you are using the same clock for both the send and the receive part of your check.
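A rough sketch of that exchange, using the same monotonic clock (System.nanoTime()) for both the send and the receive side of the measurement. The sendTimeRequestAndWait() call is a made-up placeholder for however you fetch the server's clock, not a real API:

    // Sketch: estimate the offset between the server's clock and a local
    // monotonic clock from one request/response pair.
    long estimateServerOffsetMillis() {
        long requestSentNanos = System.nanoTime();
        long serverTimeMillis = sendTimeRequestAndWait();      // 1. ask the server for its time
        long responseReceivedNanos = System.nanoTime();

        long roundTripMillis = (responseReceivedNanos - requestSentNanos) / 1_000_000;

        // 2. Assume the server answered roughly halfway through the round trip.
        long estimatedServerNowMillis = serverTimeMillis + roundTripMillis / 2;

        // 3. Baseline: serverTime ~= localMonotonicMillis + offset.
        //    Repeat the exchange a few times and average to reduce the error.
        return estimatedServerNowMillis - responseReceivedNanos / 1_000_000;
    }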

You need to establish a baseline. Typically, what you do is put timing information in your header.
It might look something like:

this-packet-my-id
current-my-timestamp
last-received-packet-your-id
last-received-packet-your-timestamp
last-received-packet-my-timestamp

Each side then remembers the data from the last packet it received from the other end, so it can fill out the "your" parts.

This will allow you to derive whatever information you need about the round-trip time, because you can look at the last timestamp YOU YOURSELF sent, and compare it to what the current time is.
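A minimal sketch of such a header and the round-trip derivation, assuming millisecond timestamps taken from each side's own monotonic clock. The class and field names are adapted from the list above, not from any actual library:

    // Fields are written from the sender's point of view, so on the receiving
    // side "your" refers to you, the receiver.
    class PacketHeader {
        int  thisPacketMyId;                  // this-packet-my-id
        long currentMyTimestamp;              // current-my-timestamp (sender's clock)
        int  lastReceivedPacketYourId;        // last-received-packet-your-id
        long lastReceivedPacketYourTimestamp; // the timestamp YOU put in that packet
        long lastReceivedPacketMyTimestamp;   // sender's clock when that packet arrived
    }

    // On receiving a header, round-trip time is the time since I sent the packet
    // being echoed back, minus how long the other side held onto it. Each
    // difference stays on a single machine's clock, so clock drift cancels out.
    static long estimateRttMillis(PacketHeader h, long myNowMillis) {
        long sinceISentIt = myNowMillis - h.lastReceivedPacketYourTimestamp;         // my clock only
        long peerHoldTime = h.currentMyTimestamp - h.lastReceivedPacketMyTimestamp;  // peer clock only
        return sinceISentIt - peerHoldTime;
    }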
enum Bool { True, False, FileNotFound };

In reference to the time aspect rather than the networking aspect...

Computers change their clocks ALL THE TIME.

Computer clock time does not necessarily match wall clock time.

Computer clocks can move faster than the wall clock time.

Computer clocks can move slower than the wall clock time.

Computer clocks can move backward in time.

Computer clocks can run backwards multiple times in a row.

A day may have more than 24 hours.

A day may have less than 24 hours.

You can experience the same time more than once.

You can skip over blocks of time.

An hour may not contain 60 minutes.

A minute may not contain 60 seconds.

A second may not contain 1000 milliseconds.

The clocks on any two computers are almost certainly set to different times.

And another fun one:

The clock may be adjusted from one time to any other time behind your back without notice, at any time.

To help combat some of these real life difficulties with time, most operating systems provide a way to get elapsed time from an arbitrary event, such as elapsed time from the start of your program or since the last restart.

Be very careful when it comes to looking at real life clocks. Elapsed time since an arbitrary timestamp is a more reliable timekeeping system, although it can also suffer from drift and other problems.

Sometimes using a custom time unit can work, e.g. N physics steps, if the client and server agree to step their physics at the same rate.
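A small sketch of that step-counter idea, assuming both sides agree on a fixed step length (the 16 ms here is just an example value, not something from the thread):

    // Counts fixed-length simulation steps from a monotonic elapsed-time source.
    class StepClock {
        static final long STEP_MILLIS = 16;                 // agreed step length (example value)
        private final long startNanos = System.nanoTime();  // arbitrary local epoch

        long currentStep() {
            long elapsedMillis = (System.nanoTime() - startNanos) / 1_000_000;
            return elapsedMillis / STEP_MILLIS;              // whole steps since this clock started
        }
    }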

Clocks still drift; my clock can run at a slightly different speed than the server's clock for many reasons.

This is one reason the header mentioned by hplus above is such a good practice. Every message that is received includes both a step-counter and an elapsed-time-counter. The drift between machines is constantly being corrected and accounted for in the simulation.

System.currentTimeMillis() uses wall-clock time. Specifically, it returns the number of milliseconds elapsed since midnight, January 1, 1970 UTC. This means that times are subject to NTP correction, interference from users changing their system clocks, and other issues. You will have better luck with System.nanoTime() (divide by 1,000,000 to get milliseconds). Although the documentation suggests that implementations will be imperfect, it is at least required to be a monotonic clock. On modern Windows, Linux, OS X, iOS, and Android, it should work suitably for your purposes.
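For example, the ping measurement from the original post can be kept entirely on one machine's monotonic clock, so neither device's wall-clock adjustments can produce a negative value (a sketch, not the original code):

    // Record when the ping request leaves, using the local monotonic clock.
    long pingRequestSentNanos = System.nanoTime();

    // ... later, when the matching response arrives on the same machine ...
    long pingMillis = (System.nanoTime() - pingRequestSentNanos) / 1_000_000;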

The important thing is that the clients have to synchronize to the server's time. The first packet the server sends should contain a base timestamp, which the clients use as a reference to what the current time is on the server. Depending on your needs, you may need to send a timestamp with every packet for correction, or you might only need to send a time synchronization packet every few frames. This will be to adjust for communication latency and packet misordering, not for any inaccuracy in how the server is measuring its time.

It is important that the client also uses System.nanoTime() for prediction and correction. It's not the precision you need (millisecond precision is enough for game logic) but the regularity of the results.

This is like what frob mentioned in one of his previous posts:

To help combat some of these real life difficulties with time, most operating systems provide a way to get elapsed time from an arbitrary event, such as elapsed time from the start of your program or since the last restart.

Although the actual value returned by System.nanoTime() may be radically different from operating system to operating system, System.nanoTime() at least guarantees that time does not jump backwards or otherwise change unpredictably, so it is usable for your purposes in a way that System.currentTimeMillis() is not.

So how do I send the server's timestamp without losing accuracy due to latency?

Game I'm making - GANGFORT, Google Play, iTunes

The clock may be adjusted from one time to any other time behind your back without notice, at any time


You'll probably want to use a monotonic clock rather than the wall clock for that reason.

Linux: clock_gettime(CLOCK_MONOTONIC_RAW)
Windows: QueryPerformanceCounter()

how do I send the server's timestamp without losing accuracy due to latency?


You don't. Your job is to make sure that events happen in the same order, and at the same relative rate, on all the observer's computers.

Translating start-time plus time-since-start into current-physics-timestep-number is a good start in doing that.
Also, calculating delta-time between when you get a packet, and when the server said it sent the packet, to estimate an approximate offset between your-clock-time and actual-simulation-step-number is common.
Those estimates are usually filtered and adjusted over time, to compensate for clock skew, and also fudged a bit to compensate for jitter.
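A rough sketch of that estimate-and-filter step, assuming a fixed step length and an exponential smoothing factor chosen arbitrarily for illustration:

    // Tracks an estimated offset between the server's clock and the local
    // monotonic clock, smoothed to absorb jitter and slow clock skew.
    class ServerStepEstimator {
        static final long STEP_MILLIS = 16;      // agreed simulation step length (example value)
        private double offsetMillis;             // estimated (serverTime - localTime)
        private boolean initialized = false;

        // Call for every packet that carries the server's send timestamp.
        void onPacket(long serverSentMillis, long localReceiveMillis) {
            double sample = serverSentMillis - localReceiveMillis;  // ignores one-way latency
            if (!initialized) {
                offsetMillis = sample;
                initialized = true;
            } else {
                offsetMillis += 0.1 * (sample - offsetMillis);      // exponential smoothing
            }
        }

        long currentServerStep(long localNowMillis) {
            return (long) ((localNowMillis + offsetMillis) / STEP_MILLIS);
        }
    }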
enum Bool { True, False, FileNotFound };

