sufficientreason

Initializing Time Between Server Updates


My authoritative server updates the world state every 33ms and sends that state with a timestamp to every client (pretend we have no flow control). Each client can estimate the current server time (using UDP ping packets sent every second or two*).

How does a client know exactly where it is time-wise between two received states? Obviously it may take slightly more or less than 33ms for an updated world state to be generated, even if the deltaTime is fixed at 33ms regardless of the actual time. I thought about having the server send every client its current time and the time of its most recent update to get an offset, but that offset can drift over time given the overhead of resetting a timer, and so on.

Should I just interpolate using the actual timestamp? As in, I know state 1 has a timestamp of 1023ms, and state 2 has a timestamp of 1058ms (35ms difference), and that the current server time minus 100ms is 1041ms. That doesn't "feel" right, since I'll be interpolating for 35ms over a state difference that was calculated using a delta time of 33ms.

Alternatively, I could just buffer all states in order and take a new one out every 33ms, but how do I know exactly where I am between the current state and the next given the server's remote time?
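
To make the question concrete, this is roughly what the timestamp-based interpolation would look like (a minimal Python sketch; the field names, the flat snapshot list, and the 100ms delay are just placeholders):

[code]
# Minimal sketch of timestamp-based interpolation (placeholder names/values).
# snapshots is a non-empty list of (server_timestamp_ms, position) pairs,
# sorted by timestamp.

RENDER_DELAY_MS = 100  # render this far behind the estimated server time

def lerp(a, b, t):
    return a + (b - a) * t

def interpolated_position(snapshots, estimated_server_time_ms):
    """Pick the two snapshots that bracket (estimated server time - delay) and
    blend between them using their actual timestamps, not a fixed 33ms step."""
    render_time = estimated_server_time_ms - RENDER_DELAY_MS
    for (t_old, pos_old), (t_new, pos_new) in zip(snapshots, snapshots[1:]):
        if t_old <= render_time < t_new:
            alpha = (render_time - t_old) / float(t_new - t_old)
            return lerp(pos_old, pos_new, alpha)
    # render_time falls outside the buffer: clamp to the nearest snapshot
    return snapshots[0][1] if render_time < snapshots[0][0] else snapshots[-1][1]
[/code]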

I apologize if this has been asked before, but I couldn't find a definitive answer. Thank you for all the help!

* -- Another question, is a ~1 second regular update a sufficient estimate of ping? I know I could piggyback ping information on the data I'm already sending back and forth, but the library I'm using (lidgren) has its own ping system that sends packets at regular intervals, so I'd like to use that if possible.

hplus0603
You don't know exactly where you are. All you can do is guess.

What matters for physics/simulation is that there is a definite ordering/sequencing of events. Using step numbers for this is usually the best implementation (there's a small sketch after these points).
What matters for display is that the rate of time progression on the screen doesn't jump around too much. Typically, you'll either display "behind time" so that entities are always displayed in known-good positions, or you'll display "ahead of time" so that positions are roughly correct, modulo changes made during the last RTT. If you play on a lossy connection, masking dropped packets is more important than being 100% correct for display.
In general, you can assume that the client system clock advances at a rate very close to, but not identical to, the server clock. Thus, for frame-to-frame decisions, you can use regular client timers like QueryPerformanceCounter() or microtime().
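
For the first point, "use step numbers" just means every event or input carries the simulation step it applies to, and you apply them in step order. A minimal sketch (my own names):

[code]
# Sketch: events carry the simulation step they apply to, not a wall-clock time.
from dataclasses import dataclass

@dataclass
class InputEvent:
    step: int        # simulation step this input applies to
    payload: bytes   # whatever the game-specific input data is

def apply_in_order(events, simulate_step):
    """Apply queued events strictly in step order, regardless of arrival order."""
    for ev in sorted(events, key=lambda e: e.step):
        simulate_step(ev.step, ev.payload)
[/code]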

You really don't want to use special packets for ping. Instead, include a timestamp in each packet you send to the server. The server then records when it receives that packet and when the next packet back out is sent. The data from the server to the client will then include "the last packet I received from you was timestamped as X, and I received it at my time Y. It is now my time Z." The client, in turn, timestamps the receipt of that packet, let's say as W. Your total (server-inclusive) lag is (W - X). Your network (transmission + network software) lag is roughly ((W - X) - (Z - Y)). If you include this data in each physical network packet header, you can estimate server clocks and lag very often, and smooth the estimate over time -- say, with a running average, or the average over the last 20 packets, or something like that.
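
In code, that bookkeeping might look roughly like this (names and window size are arbitrary):

[code]
from collections import deque

class LagEstimator:
    """Rough sketch of the piggybacked-timestamp scheme; names are placeholders.
    X = client send time, Y = server receive time,
    Z = server send time,  W = client receive time (all in milliseconds)."""

    def __init__(self, window=20):
        self.network_lags = deque(maxlen=window)  # keep the last N measurements

    def on_packet_from_server(self, x, y, z, w):
        total_lag = w - x                 # includes time the server held the data
        network_lag = (w - x) - (z - y)   # transmission + network software only
        self.network_lags.append(network_lag)
        return total_lag, network_lag

    def smoothed_network_lag(self):
        # simple average over the last `window` packets; any smoothing works
        if not self.network_lags:
            return None
        return sum(self.network_lags) / len(self.network_lags)
[/code]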

And, if all events are timed to simulation ticks anyway, you can easily truncate your time measurements to 16 bits of milliseconds (which will wrap), assuming your round-trip-time is expected to be less than 30 seconds :-) That means it's only 6 bytes of overhead per packet.
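
The wrap-around math is just a matter of reinterpreting the 16-bit difference as a signed value, something like:

[code]
def wrap16_diff(newer, older):
    """Signed difference of two 16-bit millisecond timestamps.
    Correct as long as the true difference is under ~32 seconds."""
    d = (newer - older) & 0xFFFF
    return d - 0x10000 if d >= 0x8000 else d
[/code]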

sufficientreason
The library I'm using for reliable UDP (lidgren-gen3) already sends out regular ping packets, so there's nothing I can do about that without drastically changing that code or writing my own reliable UDP library. Trust me, I wish it didn't. With that in mind, do I even really need to calculate RTT for clock estimation? I really only care about S2C latency for clock offset, and we know that RTT/2 is not a fantastic estimate of one-way latency. You presented this algorithm in a post a long time ago:

[quote name='hplus0603' timestamp='1279079978' post='4676771']
1) When connecting, the server sends a baseline value, and the client uses this value.
2) Periodically (with each heartbeat, say, or with each packet), the server sends a new timestamp.
3) The client compares this timestamp with the calculated timestamp. If it is within some allowable jitter range (say, +/- 20 milliseconds) then it's assumed to still be in sync.
4) If two successive server timestamps appear to be out of sync, then adjust the client clock offset, and make a slight adjustment to the client clock rate (skew).

If you need to adjust the clock backwards, you may want to make that adjustment using a large amount of skew, rather than a time jump backwards, to avoid double execution of timed events.

To apply skew, you calculate server time as:

serverTime = (clientTime - offset) * skew

The trick is that you have to adjust the "offset" at the same time you adjust the "skew" to avoid jumping the time estimate at the time of adjustment.
[/quote]
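
To make sure I understand the offset/skew relationship, here's how I picture it (a rough Python sketch; the class and names are mine, not from the library):

[code]
class ServerClock:
    """Sketch of the quoted offset/skew scheme; names are mine."""

    def __init__(self, client_now_ms, server_baseline_ms):
        self.skew = 1.0
        # choose offset so that estimate(client_now_ms) == server_baseline_ms
        self.offset = client_now_ms - server_baseline_ms / self.skew

    def estimate(self, client_now_ms):
        return (client_now_ms - self.offset) * self.skew

    def adjust(self, client_now_ms, new_skew):
        """Change skew without jumping the estimate: recompute offset so the
        estimate is continuous at the moment of adjustment."""
        current = self.estimate(client_now_ms)
        self.skew = new_skew
        self.offset = client_now_ms - current / new_skew
[/code]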

That algorithm doesn't use RTT, correct? And is it suitable for a twitchy FPS-style game? I would still get RTT, updated every 5 seconds or so through the underlying library code, just for displaying to the user, but for interpolation I would estimate the server clock time using this algorithm.

So, as I see it, all of my packets from the server to the client contain [i]both[/i] a sequence number and a timestamp. The timestamp is used for the algorithm above. The sequence number is used for ordering states in an interpolation buffer (I delay by 70ms), so states are ordered in the buffer and fetched every (33ms * skew). Is that correct? Do I use the same skew for calculating serverTime as I do for calculating the game tick time?
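
In other words, I picture pulling from the buffer something like this (rough sketch, assuming a state's server timestamp lines up with its sequence number times 33ms):

[code]
TICK_MS = 33            # server simulation step
INTERP_DELAY_MS = 70    # how far behind the estimated server time I render

def playback_position(estimated_server_time_ms):
    """Map the estimated server time (from the offset/skew clock above) to the
    sequence number of the older state and a 0..1 blend factor toward the next."""
    playback_ms = estimated_server_time_ms - INTERP_DELAY_MS
    tick = int(playback_ms // TICK_MS)
    alpha = (playback_ms - tick * TICK_MS) / float(TICK_MS)
    return tick, alpha
[/code]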

