Latency and server/client time

Hi,

 

I am working on a UDP client/server setup with boost::asio at the moment. I've got a basic ack/reliability system in place, and yesterday I tested a connection over the internet for the first time (instead of running on two ports on the same machine...). There's only chat, sending of input, and syncing of a constant number of game objects so far.

 

Against my expectations, everything is already working pretty nicely. Latency was around 120 ms. I guess that's OK, given that I'm not running an actual server, just two ordinary PCs. I just check on the local machine how long it takes until I get an (immediate) reply to a sent packet.

 

Now I'm wondering if there's a way to split this ping into separate send and receive travel times. Most of the time, what I really want to know is how old a packet is that was sent one-way, like a server snapshot/update.

 

I could just compare simulation times for that, if they were running synchronously. But the way I see it, the only way to sync timers on two machines is by knowing how long a message takes in ONE direction.

 

Any advice?


The best you might be able to do is keep track of the differences in time. Send your 'ping' msg with your departure clock time, have the other side stamp it with its own arrival and departure times, and when the reply arrives, note the incoming time from your local clock.

You can then tell when the round trip and each half vary (by doing difference calculations on those timestamps) to see if they get longer or shorter, if not the exact time they took. The clock values (performance-timer times) passed from either side should at least remain fairly consistent relative to each other.

Once the relative clock values are known, the changing time of each leg of the round trip can be figured from subsequent packets' timestamps.
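
For what it's worth, here is a minimal sketch of that exchange in C++ (the PingSample layout and millisecond units are assumptions of mine; both sides are assumed to read a steady, monotonic local clock):

```cpp
#include <cstdint>

// One ping/reply exchange, NTP-style. All values are local steady-clock
// milliseconds; the two clocks need not be synchronized, only steady.
struct PingSample {
    int64_t t0; // our clock when the ping left
    int64_t t1; // their clock when the ping arrived (echoed back to us)
    int64_t t2; // their clock when the reply left (echoed back to us)
    int64_t t3; // our clock when the reply arrived
};

// Time actually spent on the wire (the remote's processing time removed).
int64_t roundTripMs(const PingSample& s) {
    return (s.t3 - s.t0) - (s.t2 - s.t1);
}

// Estimated offset of their clock relative to ours, assuming both legs
// take equal time (exactly the assumption in question); the error is
// bounded by half the asymmetry between the two legs.
int64_t clockOffsetMs(const PingSample& s) {
    return ((s.t1 - s.t0) + (s.t2 - s.t3)) / 2;
}
```

The individual differences (t1 - t0) and (t3 - t2) are meaningless on their own across unsynchronized clocks, but tracking how they change between pings shows which leg is getting slower.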


Why are you doing that? In practice there is so much noise that it is very rarely used. Synchronizing two machines requires many measurements taken over time.

 

 

Games generally use relative time from the moment they start, as observed by whoever is in control. The server says this is simulation step 2143, or that you are 34213 ms into the game, and that's the end of it. They might take the round-trip time estimate and use half of it as an approximation, but trying to determine the exact per-direction latency at any given time is a much more challenging problem.

 

Latency is constantly changing. Some other process opens an internet connection and you are fighting over bandwidth, or the neighbor starts a file download and your upstream connection deprioritizes your traffic very slightly for a while. Few games need more precision than you can get from the round-trip time.
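
As a sketch of that common convention (localSteadyMs() and the smoothed RTT input are assumed to come from a ping system such as the one above; all names are hypothetical):

```cpp
#include <chrono>
#include <cstdint>

// Local monotonic time in milliseconds.
int64_t localSteadyMs() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()).count();
}

// Tracks an estimate of "what time is it on the server right now",
// treating each received server timestamp as half a round trip old.
struct ServerClock {
    int64_t offsetMs = 0; // estimated (server time - local time)

    void onServerTimestamp(int64_t serverTimeMs, int64_t smoothedRttMs) {
        offsetMs = (serverTimeMs + smoothedRttMs / 2) - localSteadyMs();
    }

    int64_t nowServerMs() const { return localSteadyMs() + offsetMs; }
};
```

In practice you would smooth offsetMs over many samples rather than snapping to the latest one, for exactly the noise reasons described above.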

If the only information you have available is time sent and time received according to an unsynchronized clock on either end, then no, you cannot really split the round-trip time into a "send" and a "receive" part.

However, that doesn't actually matter. What you want to know is "how early should I send commands so that they arrive at the server by server time T?" and "when I receive the state of an object stamped at time T, at what time should I display it?" Both of these questions can be answered with relative measurements, rather than by trying to absolutely pin down the server/client send/receive latency. And both of those answers typically include some amount of latency compensation (a de-jitter buffer) that makes an absolute measurement less useful anyway.
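
A minimal sketch of such a de-jitter buffer (the SnapshotBuffer name and the 100 ms delay are illustrative choices, not a prescribed design); it renders a fixed delay behind the estimated server time, so no absolute one-way latency is ever needed:

```cpp
#include <cstdint>
#include <iterator>
#include <map>

struct Snapshot { float x, y; };

// Buffers snapshots by server timestamp and samples them a fixed
// interpolation delay in the past, so ordinary jitter never stalls
// the presentation.
class SnapshotBuffer {
    std::map<int64_t, Snapshot> snaps_; // keyed by server timestamp (ms)
    static constexpr int64_t kInterpDelayMs = 100; // assumed tuning value

public:
    void add(int64_t serverTimeMs, const Snapshot& s) {
        snaps_[serverTimeMs] = s;
    }

    // Interpolates state at (estimated server time - delay); returns
    // false if the buffer doesn't bracket that moment yet.
    bool sample(int64_t nowServerMs, Snapshot& out) const {
        const int64_t renderTime = nowServerMs - kInterpDelayMs;
        auto after = snaps_.lower_bound(renderTime);
        if (after == snaps_.begin() || after == snaps_.end()) return false;
        auto before = std::prev(after);
        const float t = float(renderTime - before->first) /
                        float(after->first - before->first);
        out.x = before->second.x + t * (after->second.x - before->second.x);
        out.y = before->second.y + t * (after->second.y - before->second.y);
        return true;
    }
};
```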


 

 

You have the time difference from both sides (and the history from previous transmissions of the same timing data).

The clocks might be unsynchronized with each other, but isn't each one usually consistent with itself (and thus relatively consistent in its difference from the other clock)?

So you keep a history of the data (his send time versus my receive time) and compare that difference to the next send, and so on. You can build a statistical model of typical transit time (both sides can do this), AND you can communicate the result to the other side of the connection, where comparing the two differences can tell you more.

The change in transit time can be valuable (by magnitude at least): when things go downhill, they go very downhill (and it's time for some throttling or other compensation), and you can judge rough averages of how much the transmission times vary, to drive some adaptation strategies.
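
A sketch of that bookkeeping (TransitTrend and the window size are hypothetical): each arriving packet carries the sender's clock, and although (receive - send) mixes the unknown clock offset into the transit time, the offset is roughly constant, so changes in that difference track changes in the one-way transit time:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <deque>

class TransitTrend {
    std::deque<int64_t> history_; // recent (local receive - remote send) values
    static constexpr std::size_t kWindow = 64;

public:
    // Returns how many ms slower this packet was than the recent best;
    // 0 means it was as fast as anything we've seen in the window.
    int64_t onPacket(int64_t remoteSendMs, int64_t localRecvMs) {
        history_.push_back(localRecvMs - remoteSendMs);
        if (history_.size() > kWindow) history_.pop_front();
        const int64_t best =
            *std::min_element(history_.begin(), history_.end());
        return history_.back() - best;
    }
};
```

A sudden, sustained jump in that return value is the "things go downhill" signal that would trigger throttling or other compensation.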

The only data you need to keep (and communicate) is whether the data made it too late, much too early, or about right. If the data made it too late, tell the client to send it sooner (you can say by how much.) If the data made it much too early, tell the client to send it later (by about half of the difference.) And, when the data arrives within some target window (say, between 0 and 100 ms before it's needed,) then you tell the client it's doing OK.
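
That rule is simple enough to show directly; a sketch (the names are made up, the 0 to 100 ms window taken from the post):

```cpp
#include <cstdint>

enum class TimingFeedback { SendSooner, SendLater, Ok };

struct Adjustment {
    TimingFeedback what;
    int64_t byMs;
};

// Server-side judgment of one arrival: compare when the data arrived
// against when it was needed, and tell the client how to adjust.
Adjustment judgeArrival(int64_t neededAtMs, int64_t arrivedAtMs) {
    const int64_t earlyMs = neededAtMs - arrivedAtMs;
    if (earlyMs < 0)   return {TimingFeedback::SendSooner, -earlyMs};   // too late
    if (earlyMs > 100) return {TimingFeedback::SendLater, earlyMs / 2}; // much too early
    return {TimingFeedback::Ok, 0}; // inside the target window
}
```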


Sorry for not replying, but I have been working on the UDP connection code all week. After many hours of frustrating debugging, I can finally guarantee that packets flagged "essential" come through 100% reliably, and that packets flagged "sequence_critical" come through and are processed before any other packet that isn't flagged "sequence_independent", even if individual packets or the whole data exchange is completely lost in one or both directions for any duration. Yay! That was just off-topic...
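
For illustration only, such flags might look something like this on the wire (the flag names come from the post above, but the header layout is entirely made up):

```cpp
#include <cstdint>

// Hypothetical per-packet flags matching the guarantees described:
// ESSENTIAL packets are retransmitted until acknowledged, and
// SEQUENCE_CRITICAL packets are processed before any later packet
// that isn't marked SEQUENCE_INDEPENDENT.
enum PacketFlags : uint8_t {
    ESSENTIAL            = 1 << 0,
    SEQUENCE_CRITICAL    = 1 << 1,
    SEQUENCE_INDEPENDENT = 1 << 2,
};

struct PacketHeader {
    uint16_t sequence; // per-connection sequence number
    uint16_t ack;      // newest sequence seen from the other side
    uint8_t  flags;    // combination of PacketFlags
};
```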

 

Now I guess I won't really know whether I need synced timers until I understand game-syncing and lag-compensation techniques...

And, more practically: You cannot be 100% certain. Your API/library needs to expose the possibility of failure to the client. Pretending that re-trying will always work is not going to work, and the client is likely more able to make the right determination of what to do than your library is, because it has more information.



I previously built on UDP from the ground up to include features like that, along with the reliable delivery mechanism: session security, timing statistics, connection keep-alives in lieu of traffic, lower-priority file transfers, app-level throttling notifications, thread/process-friendly minimal-locking features, message aggregation / connection postboxing, connection-disruption notifications, etc. All of it was integrated within the networking thread for efficiency (for example, the connection timing statistics ping/reply was handled directly there, cutting out app-level delays while maintaining the statistics a higher app level would make use of). Reinventing the wheel, I suppose, but I was attempting to squeeze out as much capacity/efficiency as possible for an application that did a lot of inter-server traffic.


Well...

 

I have callback function pointers inside my connection-manager class for the user to set, like onConnect, onTimeout and so on. I know it's very basic, but I'm hopeful it'll suit my needs ^^. If not, I'll expand it along the way. It's definitely something to get started with. I've got lots of graphics, physics, audio and gameplay things to take care of as well... and now, with networking in mind, I guess I'm going to have to restructure the whole thing more or less completely.
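
Presumably something along these lines (a sketch only; std::function stands in for the raw function pointers, and onSendFailed is a hypothetical hook reflecting the advice above about surfacing failure to the caller):

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// User-installable handlers invoked by the networking code.
struct ConnectionCallbacks {
    std::function<void()> onConnect;
    std::function<void()> onTimeout;
    std::function<void(const std::vector<uint8_t>&)> onReceive;
    std::function<void(uint16_t /*sequence*/)> onSendFailed;
};

class ConnectionManager {
    ConnectionCallbacks cb_;

public:
    void setCallbacks(ConnectionCallbacks cb) { cb_ = std::move(cb); }

    // Called internally when the retry budget for a reliable packet is
    // exhausted: the failure is surfaced instead of retried forever.
    void giveUpOn(uint16_t sequence) {
        if (cb_.onSendFailed) cb_.onSendFailed(sequence);
    }
};
```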



 

 

A big "gaaaah!!" is optimizing for multiple cores (lock issues) when you have only one networking thread (with affinity set) that all the higher-level app threads then work through (again, for high-performance needs that might not exist for less stressed game mechanics).

