Negative ping? Or how do I sync time with client and server?

zgintasz

Hi,
 
I'm setting a timestamp (System.currentTimeMillis()) in a packet that the server is sending. To get the ping, I do this:
 
    long ping = System.currentTimeMillis() - packet.timestamp;
 
I'm testing now on a few mobile devices on an HSPA (or HSDPA) network and I get a negative ping! What the heck? It's around -300. On iPhone/desktop it seems to be fine. Is there any other way I could get the ping? I suspect the phone's time is not really correct. What do you guys suggest?


I suspect the phone's time is not really correct.


Bingo. Clocks aren’t perfect, analog or digital. Think of a mechanical watch, always slipping either a few minutes ahead or behind over time. Digital clocks suffer the same problem, but on a much smaller scale (see: how can a digital clock drift in accuracy?). These days, your system periodically syncs the time with a nearby time server. This helps, but you still don’t have the exact time, unless you can guarantee that all time servers have the exact same time.
 

What do you guys suggest?


Just ask the server what time it is, and adjust accordingly:

1. Client requests server time.
2. Client adjusts the value returned by the server, according to the time it took for that request to return.
3. Use this value as a baseline for all timestamps returned by the server, or repeat and average the error.
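
For concreteness, here is a minimal sketch of that exchange in Java. sendTimeRequestBlocking() is a hypothetical stand-in for however your networking layer asks the server for its current time; the point is just the bookkeeping around it.

    long requestSentAt = System.currentTimeMillis();      // 1. request the server's time
    long serverTime = sendTimeRequestBlocking();          // hypothetical blocking call
    long roundTrip = System.currentTimeMillis() - requestSentAt;

    // 2. adjust for (roughly half of) the round trip
    long estimatedServerNow = serverTime + roundTrip / 2;

    // 3. keep the offset as a baseline for interpreting later server timestamps
    long clockOffset = estimatedServerNow - System.currentTimeMillis();

Repeating the exchange and averaging (or keeping the best sample) reduces the error, as discussed further down the thread.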


The best way to figure out the actual time is to send a packet, and then check how long it takes you to get the response to that packet. That way you are using the same clock for both the send and the receive part of your check.
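
A rough sketch of that measurement in Java, using one monotonic clock on the client for both ends of the round trip (sendPing() and awaitPong() are hypothetical placeholders for your networking code):

    long sentAt = System.nanoTime();                             // reading 1, just before sending
    sendPing();                                                  // hypothetical send
    awaitPong();                                                 // hypothetical blocking wait for the reply
    long rttMillis = (System.nanoTime() - sentAt) / 1_000_000;   // reading 2, same clock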


In reference to the time aspect rather than the networking aspect...

 

Computers change their clocks ALL THE TIME.

Computer clock time does not necessarily match wall clock time.
Computer clocks can move faster than the wall clock time.
Computer clocks can move slower than the wall clock time.
Computer clocks can move backward in time.
Computer clocks can run backwards multiple times in a row.
A day may have more than 24 hours.
A day may have less than 24 hours.
You can experience the same time more than once.
You can skip over blocks of time.
An hour may not contain 60 minutes.
A minute may not contain 60 seconds.
A second may not contain 1000 milliseconds.
The clocks on any two computers are almost certainly set to different times.
 
And another fun one:
The clock may be adjusted from one time to any other time behind your back without notice, at any time.
 
 
To help combat some of these real life difficulties with time, most operating systems provide a way to get elapsed time from an arbitrary event, such as elapsed time from the start of your program or since the last restart. 
 
Be very careful when it comes to looking at real-life clocks. Elapsed time since an arbitrary timestamp is a more reliable timekeeping system, although it can also suffer from drift and other problems.
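
In Java, that elapsed-time-from-an-arbitrary-event clock is exactly what System.nanoTime() gives you, for example:

    // Elapsed time since an arbitrary event (here, whenever startNanos is initialized),
    // measured with the monotonic clock rather than the wall clock.
    final long startNanos = System.nanoTime();

    // ... later ...
    long elapsedMillis = (System.nanoTime() - startNanos) / 1_000_000;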


Clocks still drift; my clock can run at a slightly different speed than the server's clock for many reasons.

 

This is one reason the header mentioned by hplus above is such a good practice.  Every message that is received includes both a step-counter and an elapsed-time-counter. The drift between machines is constantly being corrected and accounted for in the simulation.
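
As an illustration only (the field names here are mine, not from the thread), such a header might look something like this:

    // Every packet carries the sender's simulation step and its local elapsed time,
    // so the receiver can keep re-estimating drift instead of trusting a single sample.
    final class PacketHeader {
        int sequenceNumber;    // monotonically increasing per packet
        long simulationStep;   // sender's current fixed-timestep number
        long elapsedMillis;    // sender's monotonic elapsed time when the packet was sent
    }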


System.currentTimeMillis() uses wall-clock time. Specifically, it returns the number of milliseconds elapsed since midnight, January 1, 1970 UTC. This means the times are subject to NTP correction, interference from users changing their system clocks, and other issues. You will have better luck with System.nanoTime() (divide by 1,000,000 to get milliseconds). Although the documentation suggests that implementations will be imperfect, it is at least required to be a monotonic clock. On modern Windows, Linux, OS X, iOS, and Android it should work suitably for your purposes.

The important thing is that the clients have to synchronize to the server's time. The first packet the server sends should contain a base timestamp, which the clients use as a reference to what the current time is on the server. Depending on your needs, you may need to send a timestamp with every packet for correction, or you might only need to send a time synchronization packet every few frames. This will be to adjust for communication latency and packet misordering, not for any inaccuracy in how the server is measuring its time.

It is important that the client also uses System.nanoTime() for prediction and correction. It's not the precision you need (millisecond precision is enough for game logic) but the regularity of the results that matters.

This is like what frob mentioned in one of his previous posts:

To help combat some of these real life difficulties with time, most operating systems provide a way to get elapsed time from an arbitrary event, such as elapsed time from the start of your program or since the last restart.

Although the actual value returned by System.nanoTime() may be radically different from operating system to operating system, System.nanoTime() at least guarantees that time does not jump backwards or otherwise change unpredictably, so it is usable for your purposes in a way that System.currentTimeMillis() is not.
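
A small sketch of that base-timestamp idea combined with System.nanoTime(), assuming the first server packet carries the server's currentTimeMillis(). Latency correction is deliberately left out here; the rest of the thread covers it.

    class ServerClock {
        private long serverBaseMillis;   // server's clock, taken from the first packet
        private long baseNanos;          // local monotonic reading when that packet arrived

        void onFirstServerPacket(long serverTimestampMillis) {
            serverBaseMillis = serverTimestampMillis;
            baseNanos = System.nanoTime();
        }

        long estimatedServerMillis() {
            return serverBaseMillis + (System.nanoTime() - baseNanos) / 1_000_000;
        }
    }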


The clock may be adjusted from one time to any other time behind your back without notice, at any time


You'll probably want to use a monotonic clock rather than the wall clock for that reason.

Linux: clock_gettime(CLOCK_MONOTONIC_RAW)
Windows: QueryPerformanceCounter()

how do I send server's timestamp without losing accuracy due to latency?


You don't. Your job is to make sure that events happen in the same order, and at the same relative rate, on all the observers' computers.

Translating start-time plus time-since-start into current-physics-timestep-number is a good start in doing that.
Also, calculating delta-time between when you get a packet, and when the server said it sent the packet, to estimate an approximate offset between your-clock-time and actual-simulation-step-number is common.
Those estimates are usually filtered and adjusted over time to compensate for clock skew, and also fudged a bit to compensate for jitter.
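
A sketch of both ideas, with arbitrary numbers of my own choosing (50 ms steps, a 0.1 smoothing factor):

    class StepClock {
        static final long STEP_MILLIS = 50;   // 20 simulation steps per second (arbitrary)
        double stepOffset = 0;                // server step minus local step, filtered

        // start-time plus time-since-start -> current physics timestep number
        long localStep(long elapsedMillisSinceStart) {
            return elapsedMillisSinceStart / STEP_MILLIS;
        }

        // fold each server packet into the filtered offset estimate
        void onServerPacket(long serverStep, long elapsedMillisSinceStart) {
            long measured = serverStep - localStep(elapsedMillisSinceStart);
            stepOffset = 0.9 * stepOffset + 0.1 * measured;   // simple low-pass filter
        }
    }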


You'll probably want to use a monotonic clock rather than the wall clock for that reason.

Linux: clock_gettime(CLOCK_MONOTONIC_RAW)
Windows: QueryPerformanceCounter()

OP is using Java; System.nanoTime() provides this functionality (as of JDK 8, the implementation uses QPC on Windows, CLOCK_MONOTONIC on Linux, and mach_absolute_time on OS X and iOS).
 

 

So how do I send server's timestamp without losing accuracy due to latency?

It's impossible to avoid inaccuracy from latency. You could try to compensate for it, but depending on your needs, it might be a waste of effort. Your focus should be on making sure things are happening in the right order, objects are moving at the right rate, and that all of this is happening within an acceptable time difference between clients.


Are you sure System.nanoTime() isn't affected by clock changes? I moved the system's clock one day forward and it changed. But I'm not using JDK 8, I'm using Java 1.7. And I tested on desktop. Not sure yet how it goes on Android, but I know Android does not support Java 8 yet, so this probably will not work.


Are you sure System.nanoTime() isn't affected by clock changes? I moved the system's clock one day forward and it changed. But I'm not using JDK 8, I'm using Java 1.7. And I tested on desktop. Not sure yet how it goes on Android, but I know Android does not support Java 8 yet, so this probably will not work.

The Android reference specifically says System.nanoTime is equivalent to CLOCK_MONOTONIC: http://developer.android.com/reference/java/lang/System.html#nanoTime%28%29
JDK 7's implementation for Mac OS X (and by extension, iOS) was using wall-clock time because the porters for OS X weren't aware of mach_absolute_time. But on a modern version of Windows (since Windows NT) or Linux kernel (2.6+), JDK 7's implementation should have been fine. Older versions of Windows and Linux lack QPC and CLOCK_MONOTONIC, so other fallbacks would have to be used, which probably produce wall-clock time.



I just tested on an Android 5.0 Genymotion emulator. Why is System.nanoTime() so different? They're printed at the same time. Date/time is set to auto in settings.

 

millis: 1461352093202; nano: 239310141883



I just tested on an Android 5.0 Genymotion emulator. Why is System.nanoTime() so different? They're printed at the same time. Date/time is set to auto in settings.

 

millis: 1461352093202; nano: 239310141883

On Android, nanoTime is implemented with clock_gettime(CLOCK_MONOTONIC), which provides the time elapsed in nanoseconds since some arbitrary fixed point, typically system boot (on an emulator, probably since you started the emulator).
currentTimeMillis provides the system wall-clock time, as milliseconds, in POSIX time format.
That's why it's not currentTimeNano, but nanoTime. You're not getting the current wall-clock time. You're getting an arbitrary, implementation-defined elapsed-time value.
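
A quick way to convince yourself the two clocks still agree on elapsed time, even though their absolute values differ wildly:

    public static void main(String[] args) throws InterruptedException {
        long wall0 = System.currentTimeMillis();
        long mono0 = System.nanoTime();
        Thread.sleep(1000);
        // Both should print roughly 1000, despite the very different absolute values.
        System.out.println("wall elapsed: " + (System.currentTimeMillis() - wall0) + " ms");
        System.out.println("mono elapsed: " + (System.nanoTime() - mono0) / 1_000_000 + " ms");
    }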

I'm interested in what desktop you were testing on that produced a jump in nanoTime after changing the wall-clock time.


Hmm, so how am I supposed to use it if every device's time base is different (including the server's)? I thought it would be universal and that replacing currentTimeMillis with nanoTime would solve everything.

 

 

Just ask the server what time it is, and adjust accordingly:


1. Client requests server time.
2. Client adjusts the value returned by the server, according to the time it took for that request to return.
3. Use this value as a baseline for all timestamps returned by the server, or repeat and average the error.

 

 

So it goes like this, right?

1. Client requests the server's timestamp; requestTime = System.currentTimeMillis().

2. Server sends its timestamp.

3. Client receives the server's timestamp:

    delay = (System.currentTimeMillis() - requestTime) / 2f;

    currentServerTime = serverTimestamp + delay;

Am I correct to divide it by two?

4. The server's update packet includes sendTimestamp. Latency for the client = currentServerTime - sendTimestamp.

Could the first packet's delay sometimes be too big? Is this a good solution?



You could have a look at how, e.g., NTP (Network Time Protocol) does it, which aims to synchronize local time with a reliable remote atomic clock.

 

I think your idea is reasonable, except network delays are not constant either, so you'd need to do this on a regular basis.
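
One common simplification of what NTP does more carefully is to repeat the exchange a few times and keep the sample with the smallest round trip, since that one suffered the least queuing delay. A rough sketch, where requestServerTimeBlocking() is a hypothetical call returning the server's current milliseconds:

    long bestRtt = Long.MAX_VALUE;
    long bestOffsetMillis = 0;   // estimated (server clock - local clock)
    for (int i = 0; i < 8; i++) {
        long sentAt = System.currentTimeMillis();
        long serverTime = requestServerTimeBlocking();   // hypothetical blocking request
        long rtt = System.currentTimeMillis() - sentAt;
        if (rtt < bestRtt) {
            bestRtt = rtt;
            bestOffsetMillis = serverTime - (sentAt + rtt / 2);
        }
    }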

That's the essential gist of it. You'll want to keep that calculation running as part of your regular protocol headers -- you don't need to "request" anything; each network packet should start with a header that contains sequence-number and timing information (last-seen, my-current).

You can then use statistical methods to estimate the jitter/loss/delay of the link, or you can use some simpler function like "new_jitter = (old_jitter - 0.03) * 0.9 + measured_jitter * 0.1 + 0.03 seconds"
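
For example, that filter translates almost directly into code (values in seconds, matching the formula above):

    double jitter = 0.0;   // running jitter estimate, in seconds

    void onTimingSample(double measuredJitter) {
        jitter = (jitter - 0.03) * 0.9 + measuredJitter * 0.1 + 0.03;
    }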

