Latency and server/client-time



#1 Max Power   Members   -  Reputation: 314


Posted 20 August 2014 - 06:36 AM

Hi,

 

I am working on a UDP client/server setup with boost::asio at the moment. I've got a basic ack/reliability system in place, and yesterday I tested a connection over the internet for the first time (instead of running on ports on the same machine). So far there's only chat, sending of input, and syncing of a constant number of game objects.

 

Against my expectations, everything is already working pretty nicely. Latency was around 120 ms. I guess that's OK, given that I'm not running an actual server, just two ordinary PCs. I simply measure on the local machine how long it takes until I get an (immediate) reply to a sent packet.

 

Now I'm wondering if there's a way to split this ping into separate send and receive travel times. Most of the time I want to know how old a packet is that was sent one-way, like a server snapshot/update.

 

I could just compare simulation times for that, if they were running synchronously. But the way I see it, the only way to sync timers on two machines is by knowing how long a message is underway in ONE direction.

 

Any advice?



#2 wodinoneeye   Members   -  Reputation: 1546


Posted 20 August 2014 - 08:42 AM

The best you might be able to do is keep track of the differences in time. Send your 'ping' message with your departure clock time, have the other side stamp it with its own departure time, and when it arrives back, record the packet's incoming time from your local clock.

 

You can then tell when the round trip and each half varies (by doing various difference calculations with the timestamps), to see whether they get longer or shorter, even if not the exact time they took. The clock values (performance-timer times) passed from either side should at least remain fairly consistent relative to each other.

 

Once the relative clock values are known, the changing time of each leg of the round trip can be figured from other packets' timestamps.
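The timestamp exchange described above is essentially what NTP does with four clock readings. A minimal sketch (the struct, function names, and millisecond units are mine, not from the post):

```cpp
#include <cstdint>

// Four timestamps, NTP-style, all in milliseconds:
//   t0 = our send time (our clock)
//   t1 = the other side's receive time (their clock)
//   t2 = the other side's reply time (their clock)
//   t3 = our receive time (our clock)
struct PingSample {
    std::int64_t t0, t1, t2, t3;
};

// Round-trip time, excluding the remote processing delay between t1 and t2.
std::int64_t roundTripMs(const PingSample& s) {
    return (s.t3 - s.t0) - (s.t2 - s.t1);
}

// Estimated offset of the remote clock relative to ours. Exact only if both
// legs take equally long -- the asymmetry is precisely the part that cannot
// be recovered from these four numbers alone.
std::int64_t clockOffsetMs(const PingSample& s) {
    return ((s.t1 - s.t0) + (s.t2 - s.t3)) / 2;
}
```

Tracking these two values over many pings is what lets you see each half of the trip getting longer or shorter, even though the absolute split stays unknowable.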


--------------------------------------------Ratings are Opinion, not Fact

#3 frob   Moderators   -  Reputation: 38076


Posted 20 August 2014 - 01:33 PM

Why are you doing that? In practice there is so much noise that per-direction latency is very rarely used. Synchronizing machines requires many measurements taken over time.

 

 

Games generally use relative time from the moment they start, as observed by whoever is in control. The server says this is simulation step 2143, or that you are 34213ms into the game, and that's the end of it. They might use the round-trip time estimate and take half of it as the approximation, but trying to determine the exact per-direction latency at any time is a much harder problem.

 

Latency is constantly changing. Some other process opens an Internet connection and you are fighting over bandwidth. Or the neighbor starts a file download and your upstream connection deprioritizes your traffic very slightly for a while. Few games need more precision than you can find with the round trip time.
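The half-round-trip approximation mentioned above can be sketched as follows (the function and parameter names are illustrative, not from the post):

```cpp
#include <cstdint>

// Rough estimate of the current server simulation time, as seen from the
// client: the last stamp the server sent was already about rtt/2 old when
// it arrived, and has aged further since then. All values in milliseconds.
std::int64_t estimateServerTimeMs(std::int64_t lastServerStampMs,
                                  std::int64_t roundTripMs,
                                  std::int64_t msSinceStampArrived) {
    return lastServerStampMs + roundTripMs / 2 + msSinceStampArrived;
}
```

With a 120 ms round trip, a stamp of 34213 ms that arrived 50 ms ago yields an estimate of 34323 ms of server time.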


Check out my book, Game Development with Unity, aimed at beginners who want to build fun games fast.

Also check out my personal website at bryanwagstaff.com, where I occasionally write about assorted stuff.


#4 hplus0603   Moderators   -  Reputation: 9593


Posted 20 August 2014 - 01:43 PM

If the only information you have available is time sent and time received according to an unsynchronized clock on either end, then no, you cannot really split the round-trip time into a "send" and a "receive" part.

However, that doesn't actually matter. What you want to know is "how early should I send commands so that they are at the server at server time T," and "When I receive the state of an object stamped at time T, what time should I display it at?" Both of these questions can be answered with relative measurements, rather than trying to absolutely pin down the server/client send/receive latency. And both of those answers typically include some amount of latency compensation (de-jitter buffer) that make an absolute measurement less useful anyway.
enum Bool { True, False, FileNotFound };

#5 wodinoneeye   Members   -  Reputation: 1546


Posted 21 August 2014 - 06:19 PM

If the only information you have available is time sent and time received according to an unsynchronized clock on either end, then no, you cannot really split the round-trip time into a "send" and a "receive" part.

However, that doesn't actually matter. What you want to know is "how early should I send commands so that they are at the server at server time T," and "When I receive the state of an object stamped at time T, what time should I display it at?" Both of these questions can be answered with relative measurements, rather than trying to absolutely pin down the server/client send/receive latency. And both of those answers typically include some amount of latency compensation (de-jitter buffer) that make an absolute measurement less useful anyway.

 

 

You have the time difference from both sides (and the history from previous transmissions of the same time data).

 

The clocks might be unsynchronized with each other, but isn't each one usually consistent with itself (and thus relatively consistent in its difference to the other clock)?

 

So you keep a history of the data (his send time versus my receive time) and compare that difference to the next send, and so on.

You can build a statistical model of typical transit time (both sides can do this), AND you can communicate the result to the other side of the connection; the difference between those (the ratio of difference of differences) can tell you more.

 

The change in transit time can be valuable (by magnitude at least): when things go downhill, they go very downhill, and it's time for some throttling or other compensation. You can also keep rough averages to see how much the transmission times vary and adapt your strategies accordingly.
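The "statistical model of typical transit time" can be as simple as the smoothed estimators TCP uses for its retransmission timer. A sketch with the classic 1/8 and 1/4 gains (the constants and names are my choice, not from the post):

```cpp
#include <cmath>

// Exponentially weighted moving averages of an observed transit delta,
// in the spirit of TCP's SRTT/RTTVAR estimators (RFC 6298). Values in ms.
struct TransitStats {
    double smoothed = 0.0;  // running average of the delta
    double jitter   = 0.0;  // running average deviation from that average
    bool   primed   = false;

    void addSample(double deltaMs) {
        if (!primed) {
            smoothed = deltaMs;  // first sample seeds the average
            primed = true;
            return;
        }
        // Update the deviation before the mean, as TCP does.
        jitter   += 0.25  * (std::fabs(deltaMs - smoothed) - jitter);
        smoothed += 0.125 * (deltaMs - smoothed);
    }
};
```

A sample landing far beyond `smoothed` plus a few times `jitter` is the "things go very downhill" signal that warrants throttling or other compensation.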


--------------------------------------------Ratings are Opinion, not Fact

#6 hplus0603   Moderators   -  Reputation: 9593


Posted 21 August 2014 - 07:37 PM

The only data you need to keep (and communicate) is whether the data made it too late, much too early, or about right. If the data made it too late, tell the client to send it sooner (you can say by how much.) If the data made it much too early, tell the client to send it later (by about half of the difference.) And, when the data arrives within some target window (say, between 0 and 100 ms before it's needed,) then you tell the client it's doing OK.
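The feedback loop above can be sketched as follows; the 0-100 ms window is the example from the post, while the sign convention and names are mine (positive means "send this much earlier"):

```cpp
#include <cstdint>

// How much the client should shift its sending, given when a command
// arrived versus when the server needed it. All values in milliseconds.
std::int64_t sendShiftMs(std::int64_t arrivedAtMs, std::int64_t neededAtMs) {
    const std::int64_t earlyMs = neededAtMs - arrivedAtMs;
    if (earlyMs < 0)
        return -earlyMs;             // too late: send sooner by the shortfall
    if (earlyMs > 100)
        return -(earlyMs - 100) / 2; // much too early: back off by about half
    return 0;                        // inside the target window: doing OK
}
```

The server periodically sends this one number back to the client; no shared clock is ever needed.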
enum Bool { True, False, FileNotFound };

#7 Max Power   Members   -  Reputation: 314


Posted 24 August 2014 - 08:15 AM

Sorry for not replying, but I have been working on the UDP connection code all week. After many hours of frustrating debugging, I can finally guarantee 100% reliability for packets that are flagged as "essential" to come through, and for packets marked "sequence_critical" to come through and be processed before any other packet that isn't flagged "sequence_independent", even if individual packets or the whole data exchange is completely lost in one or both directions for any duration. Yay! But that's just off-topic...

 

Now I guess I won't really know whether I need synced timers until I understand game-syncing and lag-compensation techniques...



#8 hplus0603   Moderators   -  Reputation: 9593


Posted 24 August 2014 - 11:00 AM

I can finally guarantee 100% reliability for packets that are flagged as "essential" to come through


Really? What if I take a pair of scissors to your Ethernet cable?
enum Bool { True, False, FileNotFound };

#9 Max Power   Members   -  Reputation: 314


Posted 24 August 2014 - 03:41 PM

Damn, I knew I had overlooked something!

 

//TODO: get scissor-proof ethernet cables or train ferrets to fight off scissor-wielding intruders



#10 hplus0603   Moderators   -  Reputation: 9593


Posted 24 August 2014 - 05:08 PM

And, more practically: You cannot be 100% certain. Your API/library needs to expose the possibility of failure to the client. Pretending that re-trying will always work is not going to work, and the client is likely more able to make the right determination of what to do than your library is, because it has more information.
enum Bool { True, False, FileNotFound };

#11 wodinoneeye   Members   -  Reputation: 1546


Posted 25 August 2014 - 09:31 AM

And, more practically: You cannot be 100% certain. Your API/library needs to expose the possibility of failure to the client. Pretending that re-trying will always work is not going to work, and the client is likely more able to make the right determination of what to do than your library is, because it has more information.

 

 

I previously built on UDP from the ground up to include features like that alongside the reliable-delivery mechanism: session security, timing statistics, connection keep-alives in lieu of traffic, lower-priority file transfers, app-level throttling notifications, thread/process-friendly minimal-locking features, message aggregation / connection postboxing, connection-disruption notifications, etc. All of it was integrated within the networking thread for efficiency (for example, the connection-timing ping/reply was handled there directly to cut out app-level delays, while maintaining the statistics a higher app level would make use of). Reinventing the wheel, I suppose, but I was attempting to squeeze out as much capacity/efficiency as possible for an application that did a lot of inter-server traffic.


--------------------------------------------Ratings are Opinion, not Fact

#12 Max Power   Members   -  Reputation: 314


Posted 25 August 2014 - 12:16 PM

Well...

 

I have callback function pointers inside my connection-manager class for the user to set. Like onConnect, onTimeout and stuff like that. I know it's very basic, but I'm hopeful it'll suit my needs ^^. If not, I will expand it along the way. It's definitely something to get started with. I've got lots of graphical, physical, audio and game-related things to take care of as well... and now with networking in mind, I'm gonna have to restructure the whole thing more or less completely, I guess.
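For illustration, such a callback surface might look like the following sketch; only onConnect and onTimeout are named in the post, and everything else (including the onError hook for surfacing delivery failure, as suggested earlier in the thread) is invented:

```cpp
#include <functional>
#include <string>

// Hypothetical user-settable hooks on the connection manager.
struct ConnectionCallbacks {
    std::function<void()> onConnect;
    std::function<void()> onTimeout;
    std::function<void(const std::string&)> onError;  // delivery failure surfaced to the user
};

struct ConnectionManager {
    ConnectionCallbacks callbacks;

    // Invoked by the networking code when a peer stops responding.
    void handleTimeout() {
        if (callbacks.onTimeout) callbacks.onTimeout();
    }
};
```

The key point from the earlier posts is that a hook like onError exists at all, so retry failure reaches the user instead of being hidden inside the library.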



#13 wodinoneeye   Members   -  Reputation: 1546


Posted 26 August 2014 - 09:40 PM

Well...

 

I have callback function pointers inside my connection-manager class for the user to set. Like onConnect, onTimeout and stuff like that. I know it's very basic, but I'm hopeful it'll suit my needs ^^. If not, I will expand it along the way. It's definitely something to get started with. I've got lots of graphical, physical, audio and game-related things to take care of as well... and now with networking in mind, I'm gonna have to restructure the whole thing more or less completely, I guess.

 

 

A big 'gaaaah!!' is optimizing for multiple cores (lock issues) when you have only one network thread (with affinity set) that all the higher-level app threads then work through (again, for high-performance needs that might not exist for less-stressed game mechanics).


--------------------------------------------Ratings are Opinion, not Fact



