Syncing clocks the impossible way and doing it with Winsock.

Original post by 65536:


Some background to the problem: I have two computers with monitors side by side. I wrote a screen saver that scrolls images across the screens, written so that the images look like they are scrolling from one monitor to the other. After all the images have been displayed it randomly shuffles them. Everything is choreographed using the current system time. After a few days the clocks on the two computers drift apart enough that it is easy to see the images are not aligned. Every time the screen saver starts I want it to try to connect to the other computer and sync the clocks. For my purposes, if the clocks are within 50ms of each other it will generate pixel-perfect results. However, I want to sync the clocks as well as I reasonably can, not because I need to, but just as a challenge.

As I understand it, the accepted approach is to find the total latency and divide that by two, then send the current time and account for the latency just calculated. That works fine if the latency is symmetrical, and I believe it is in most cases, but I'm trying to account for every variable I reasonably can. I read that it is impossible to calculate the one-way latency without the clocks being in sync; the source also goes on to say that is why the NTP protocol doesn't bother with it. I'm not sure if that's true or not. After a bit of experimentation and reasoning I agree it's impossible, on one condition: if the one-way latency and the time differential between the two clocks are independent, it is unsolvable.

You have two computers, A and B. A's time is the accepted time. A sends the current time to B. B receives it and sets that to its current time. The latency and the time differential are now no longer independent; they are the same. B's clock is off by the exact amount of time it takes to get a message from A to B. If A sends the current time to B again you can now solve for the one-way latency and also the time differential. Using these principles it can be solved mathematically like so:

Assumed:
Latency_ab = Time_offsetb
Time_b = Time_a + Latency_ab + Time_offsetb

Solution:
Time_b - Time_a = 2 * Time_offsetb
Time_offsetb = (1/2) * (Time_b - Time_a)

The solution makes sense: if the time it takes to send a message from A to B is the same amount of time that B's clock is off from A, then when A sends the time to B, the amount the clock is off is half of the difference between the time B says it received the message and the time A sent it.

In the implementation you obviously wouldn't need to change the clock on B until you have the correct time. You could, though, if you were concerned about the latency of setting the system time; in most applications I doubt it would be significant. You would also probably repeat both parts of the process several times and average the results, since latency can vary by a relatively large amount under fairly common conditions.

Basically, what I'm wondering is whether my logic is correct. Every other approach I tried did not give a solution, and I'm not entirely sure this approach is valid. It's also hard to test how accurate it is when you are working with milliseconds or smaller :). I would have posted this in the math section, but I have some questions about implementing the network side of it. I have very little experience with Winsock. I would like to implement it with TCP to simplify the coding, since it is just a simple screen saver.
What I'm wondering is: if you use the send() function (or similar) with TCP, is it possible that multiple send()s might be combined into a single packet? If so, is there any way to flush the network buffers or something similar, so that B doesn't get multiple send()s from A in the same packet or too close together? That would definitely skew your average. Or would I just be better off using UDP? The primary reason I want to avoid UDP is that I want to send a specific number of packets to find average times; if I have B simply wait for that many packets it might wait forever if there is any packet loss. Granted, over a wired LAN packet loss is pretty rare, but Murphy's Law and all, it's going to happen. Not that it's the end of the world if I have to use UDP; there are other ways of structuring it. I'm just wondering if there is any simple way of getting TCP to work.

Sorry about the book; I didn't want to give too little info. Thanks in advance for any help.
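For illustration, here is a minimal, untested sketch of the RTT/2 approach described above, done over UDP with Winsock (link with ws2_32.lib). The address, port, and packet layout are made up, and it assumes machine B replies to each request with the echoed timestamp followed by its own clock reading.

#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdint>
#include <cstdio>

// Milliseconds of wall-clock time, taken from the system clock.
static uint64_t WallClockMs()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart / 10000;   // FILETIME is in 100ns units
}

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    sockaddr_in peer = {};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(40000);                     // placeholder port
    inet_pton(AF_INET, "192.168.1.2", &peer.sin_addr);  // placeholder address of B

    // Send our current time; B is assumed to reply with {echoed t0, B's clock}.
    uint64_t t0 = WallClockMs();
    sendto(s, (const char*)&t0, sizeof(t0), 0, (sockaddr*)&peer, sizeof(peer));

    uint64_t reply[2];
    int fromlen = sizeof(peer);
    recvfrom(s, (char*)reply, sizeof(reply), 0, (sockaddr*)&peer, &fromlen);
    uint64_t t2 = WallClockMs();

    uint64_t rtt = t2 - t0;
    // Assume half the round trip was spent on the way to B.
    int64_t offset = (int64_t)reply[1] - (int64_t)(t0 + rtt / 2);
    printf("rtt = %llu ms, estimated offset of B = %lld ms\n",
           (unsigned long long)rtt, (long long)offset);

    closesocket(s);
    WSACleanup();
    return 0;
}

In practice you would repeat the exchange several times and combine the samples, as discussed in the replies below.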

The one problem I can see you facing is to do with Nagle's algorithm. It can be turned off, though (I think). I've not really done any Winsock programming, but I remember reading about this when doing network programming before and thought it might cause problems with your algorithm. Hope this helps.
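If you do stay with TCP, the option in question is TCP_NODELAY, set per socket with setsockopt. An untested fragment along these lines should disable Nagle's algorithm on an already-connected SOCKET named sock:

#include <winsock2.h>

// Disable Nagle's algorithm so small sends go out immediately.
BOOL noDelay = TRUE;
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
           (const char*)&noDelay, sizeof(noDelay));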

Quote:
Original post by 65536
You have two computers A and B. A's time is the accepted time. A sends the current time to B. B receives it and sets that to its current time. The latency and the time differential are now no longer independent. They are the same. B's clock is off by the exact amount of time it takes to get a message from A to B. If A sends the current time to B again you can now solve for the one way latency and also the time differential.

Except one way latency is not a constant. You don't have a Latency_ab, you have Latency_ab1 and Latency_ab2, which are likely to be similar, but you don't know that. Averaging it doesn't necessarily help, although perhaps using the median would work well. Still, this is why it's impossible to calculate the one way latency. It's certainly possible to estimate it, but that's not the same thing.
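A small fragment of the kind of median selection mentioned above (illustrative names; assumes at least one sample has been collected):

#include <algorithm>
#include <cstdint>
#include <vector>

// Returns the median of the collected samples so that a single delayed
// packet does not drag the estimate around. Takes a copy so the caller's
// ordering is preserved; samples must be non-empty.
int64_t MedianMs(std::vector<int64_t> samples)
{
    std::nth_element(samples.begin(),
                     samples.begin() + samples.size() / 2,
                     samples.end());
    return samples[samples.size() / 2];
}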

There may be some other mathematical reason why it can't be strictly calculated, but I'm operating on little sleep right now and can't reason very clearly. :)

Quote:
What I'm wondering is if you use the send() function or similar with TCP is it possible that multiple send()s might be combined into a single packet?
Definitely.
Quote:
If so, is there any way to flush the network buffers or something similar, so that B doesn't get multiple send()s from A in the same packet or too close together?
No. If you don't want buffering and the like, you don't want TCP.
Quote:
Or would I just be better off using UDP? The primary reason why I want to avoid it is that I want to send a specific number of packets to find average times. If I have B simply wait for that many packets it might wait forever if there is any packet loss with UDP.

Just have A keep sending packets until asked by B to stop.
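One possible shape for that, sketched from B's side over UDP: keep reading datagrams carrying A's timestamp until enough samples have arrived, then tell A to stop. This is untested; the message layout, the "STOP" marker, and the WallClockMs helper from the earlier sketch are made up for illustration. A lost STOP only means A sends a few extra packets.

#include <winsock2.h>
#include <cstdint>
#include <vector>

std::vector<int64_t> CollectTimeSamples(SOCKET s, int wanted)
{
    std::vector<int64_t> diffs;   // local receive time minus A's send time
    sockaddr_in from = {};
    int fromlen = sizeof(from);

    while ((int)diffs.size() < wanted)
    {
        uint64_t senderTime;
        int n = recvfrom(s, (char*)&senderTime, sizeof(senderTime), 0,
                         (sockaddr*)&from, &fromlen);
        if (n != sizeof(senderTime))
            continue;                        // ignore short or garbled datagrams
        uint64_t localTime = WallClockMs();  // same helper as the earlier sketch
        diffs.push_back((int64_t)localTime - (int64_t)senderTime);
    }

    // Ask A to stop sending.
    const char stop[] = "STOP";
    sendto(s, stop, sizeof(stop), 0, (sockaddr*)&from, fromlen);
    return diffs;
}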

My method is not a valid way of syncing the clocks. It is impossible to determine the one way latency unless the clocks are synced, even ignoring latency jitter.

If you set the clock on B to A's time when B receives the message, then
Time_a = Time_b + Latency_ab

Now suppose A sends the current time to B again (event 1 is the send at A, event 2 is the receipt at B):

A -----> B
1        2

Unknowns: Time_a2, Time_b1, Latency_ab
Time_a2 = Time_a1 + Latency_ab
Time_b2 = Time_b1 + Latency_ab

From the equation above,
Time_a1 = Time_b1 + Latency_ab
Time_a2 = Time_b2 + Latency_ab

You cannot solve for any of the unknowns.


For my purposes sending the time and adding half of the total latency should be fine. Thanks for the comments.

I implemented a client for this at Uni. You could do the same as part of your application and sync the clocks.

Quote:
Original post by Kylotan
There may be some other mathematical reason why it can't be strictly calculated, but I'm operating on little sleep right now and can't reason very clearly. :)


While not a strictly formal argument, I believe a good intuition comes from looking at it as a game: A and B are trying to figure out the latency, and you can choose the latencies at will. Then you can define different scenarios in which the two-way latencies are the same but the one-way latencies differ. A and B will be unable to tell the difference between those scenarios, because the scenarios simply look the same from both points of view.
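For example (with made-up numbers): in one scenario latency_ab = latency_ba = 50ms and B's clock agrees with A's; in another latency_ab = 10ms, latency_ba = 90ms and B's clock runs 40ms ahead of A's. In both cases A sends at 0 by its own clock, B's clock reads 50 when the message arrives, and the reply gets back to A when A's clock reads 100. Every timestamp either machine can actually observe is identical, so no exchange of messages can tell the two scenarios apart.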

I'm trying to understand what you said there, and it doesn't make sense to me.

In fact, I agree with Prefect above that it's probably not possible to calculate one-way latency (rather than round-trip latency) reliably, other than to assume it's half RTT. This is why I'm interested in making sense of your post.

Quote:
Original post by 65536
You have two computers A and B. A's time is the accepted time. A sends the current time to B. B receives it and sets that to its current time. The latency and the time differential are now no longer independent. They are the same. B's clock is off by the exact amount of time it takes to get a message from A to B. <snip>

Assumed:
Latency_ab = Time_offsetb
Time_b = Time_a + Latency_ab + Time_offsetb

Namely, I'm not sure if I agree with that last assumption.

You said A sends its own current time to B. When B receives it, it sets its clock to match. Now it's true the difference between their clocks will be equal to their one-way latency (from A to B).

So, shouldn't this be true:

Time_b = Time_a + Latency_ab

Why exactly do you say it is 'Time_b=Time_a+Latency_ab+Time_offsetb' instead?

I apologize if this is a mistake on my part, but I'd like to figure out what's going on here.

Quote:
Original post by shurcool
I apologize if this is a mistake on my part, but I'd like to figure out what's going on here.


You're right. I revised my argument a couple of posts above. I do not believe it is possible to solve for the one-way latency when the clocks are not in sync.

Assume the latency in either direction is fixed (it doesn't vary), but that the latencies in the two directions are not necessarily the same.
First A sends the time to B.
time_b1 = time_a1 + latency_ab
i.e., B's clock is now latency_ab milliseconds behind A's clock.
Next, at some point, server B sends its time back to A, giving a second time at A.
time_a2 = time_b1 + latency_ba
Now substitute the original formula for time_b1:
time_a2 = time_a1 + latency_ab + latency_ba
All server A sees is that the time it receives back is out by the sum of the latency in both directions, i.e. the rtt. So no luck there.
A can thereafter send the time to B again:
time_b2 = time_a2 + latency_ab
Substitute the previous formula for time_a2:
time_b2 = time_a1 + 2*latency_ab + latency_ba
B can't distinguish how much of the lag came from latency_ab and how much came from latency_ba. Everything on the right-hand side of the equation is an unknown to server B, and in fact, if you look back over every equation so far, all variables on the right-hand side are unknown to the server on the left-hand side of the equation. It just isn't possible. Your formula with a half can never work.

The best you can get is to send many packets and take the future-most time as being correct. It must be the packet that experienced the least lag.
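A fragment showing that selection rule: of all the (local receive time minus sender timestamp) differences collected, keep the smallest one, since it is the sample that carried the least one-way latency. Names are illustrative and match the collection sketch further up; the vector is assumed to be non-empty.

#include <algorithm>
#include <cstdint>
#include <vector>

// The smallest difference is the one least inflated by network lag,
// so it is the closest available estimate of the true clock offset.
int64_t BestOffsetEstimate(const std::vector<int64_t>& diffs)
{
    return *std::min_element(diffs.begin(), diffs.end());
}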
