Starting 2 Clients at the same time

SITUATION

I'm using rollback style netcode (inspired from what I read about GGPO) where the game can rewind, fix errors, and fast forward to the current time. I have a bare bones version of it already working.

As long as the start times of the two clients are as close as possible, the game runs smoothly. For example, if two clients have the same ping (like two computers on the same router) and they luckily don't have a latency spike while the start command is transmitted from the server, all is well. That isn't reliable, of course, and sometimes one client will be behind the other by enough that rollbacks occur far too frequently and objects constantly teleport around the screen.

QUESTION 1

Is there an established method of getting the start time as close as possible? If so, can I get a link to it or a description of how this is done?

QUESTION 2

Alternatively, if no such method exists, I could really use some help in fleshing out a method of doing this. A wall of text describing my current idea follows. Thoughts/critiques/improvements are more than welcome.

Idea to synchronize start times

Assumption: We have the average ping of player 1 and player 2. I'm not sure how to calculate this, but let's say we have it.

Steps (a minimal server-side sketch in Python follows the list):

  1. There is a default value bufferTime of say 3 seconds (subject to change)
  2. Server calculates average ping difference between the two players where pingDifference = p1.averagePing - p2.averagePing
  3. Server sends the faster player a "start attempt" message that includes bufferTime + abs(pingDifference)/2 (half, because the one-way delay difference is roughly half the RTT difference). Server also records the local time that this message was sent
  4. Server sends the slower player a "start attempt" message that includes bufferTime. Server also records the local time that this message was sent
  5. Both clients respond with an "acknowledgement" message the moment they receive that message. They also record the local time that the message was received in startAttemptReceivedTime
  6. Server uses the "acknowledgement" message to calculate the RTT (round trip time) of the "start attempt" message
  7. If either client's RTT deviates more than say 5ms from their average ping, the server restarts the process by sending the "start attempt" message to both clients again with the same values
  8. This process repeats until the RTT of the "start attempt" is acceptable for both clients
  9. Server then sends both clients a "confirm" message
  10. Clients receive the "confirm" message and calculate the actualGameStartTime
  11. The faster client has actualGameStartTime = startAttemptReceivedTime + bufferTime + abs(pingDifference)/2
  12. The slower client has actualGameStartTime = startAttemptReceivedTime + bufferTime
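
For illustration, here is a minimal sketch of that handshake from the server's side in Python. The send callback, the player objects, and wait_for_ack() are hypothetical stand-ins for whatever transport is actually used; this is a sketch of the steps above, not a definitive implementation.

import time

BUFFER_TIME_MS = 3000     # step 1: default bufferTime, subject to change
MAX_DEVIATION_MS = 5      # step 7: allowed RTT deviation from average ping

def now_ms():
    return time.monotonic() * 1000.0

def try_start(p1, p2, send):
    # step 2: one-way delay difference is roughly half the RTT difference
    extra_wait = abs(p1.average_ping - p2.average_ping) / 2.0
    faster, slower = (p1, p2) if p1.average_ping < p2.average_ping else (p2, p1)

    while True:
        # steps 3-4: send "start attempt" messages and record the send times
        sent_at = {faster: now_ms()}
        send(faster, {"type": "start_attempt", "wait": BUFFER_TIME_MS + extra_wait})
        sent_at[slower] = now_ms()
        send(slower, {"type": "start_attempt", "wait": BUFFER_TIME_MS})

        # steps 5-8: measure each "start attempt" RTT via the acknowledgement
        spiked = False
        for p in (faster, slower):
            p.wait_for_ack()                      # hypothetical blocking call
            rtt = now_ms() - sent_at[p]
            if abs(rtt - p.average_ping) > MAX_DEVIATION_MS:
                spiked = True                     # latency spike: retry
        if not spiked:
            break

    # steps 9-12: each client adds its "wait" to startAttemptReceivedTime
    for p in (faster, slower):
        send(p, {"type": "confirm"})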


The idea here is that the server keeps attempting to start the game for the clients, calculating the RTT (round trip time) of each attempt to confirm that there wasn't a latency spike while the message was in flight. Once no latency spike is confirmed, the server tells the clients that it's OK to start the game at the calculated later time. The bufferTime is a value large enough that the OK from the server can arrive even through a normal latency spike.

Even if the OK message is received too late due to a spike, a fail check can catch it and the client in question can send a "failure" message to the server to start the process all over again.

The start times should, in theory, be relatively close. The method assumes that if the RTT is close enough to the average ping, then chances are the client received the "start attempt" message after a delay of RTT/2. That isn't guaranteed (the "start attempt" message could travel to the client slowly and the "acknowledgement" travel back quickly, giving the same RTT), but in theory it will be true often enough, and the worst case scenario shouldn't be that bad.

Help: How do I calculate the average ping?

If the above holds true, then there is still the issue of calculating the average ping of each player. Assuming a typical lobby system like a simplified battle.net, this estimate will have time to build up to a relatively accurate value. To ensure accuracy, the ping should be continually updated every so often.

The average ping should be the most likely/common RTT of a message between the Client and Server. That means it needs to be able to ignore outliers while still adapting to a recent change in ping (like if a roommate started torrenting and the likely ping goes up by like 100ms).

At the same time, the server shouldn't have to record ALL received ping values. A user who stays connected to the server long enough will cause the server to hold way too much data in memory just to calculate ping. Besides, old enough ping data can pretty much be scrapped regardless.

One way to do it is similar to how TCP calculates the EstimatedRTT for its retransmission timer: an exponentially weighted moving average, EstimatedRTT = (1 - a) * EstimatedRTT + a * LatestRTT, where TCP uses a small a (0.125) so the running estimate dominates and a single outlier barely moves it. Or perhaps use more pings in the calculation, like AveragePing = RTT1 * 0.4 + RTT2 * 0.3 + RTT3 * 0.2 + RTT4 * 0.1 where RTT1 is the latest RTT and RTT4 is the oldest, though note that weighting the newest sample most heavily makes the estimate chase outliers rather than ignore them.
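
As a sketch, the TCP-style smoothing looks like this in Python (alpha = 0.125 is TCP's choice; the function name is mine):

ALPHA = 0.125  # small weight on the newest sample, as in TCP

def update_average_ping(average_ping, latest_rtt):
    # the running estimate dominates, so one outlier barely moves it:
    # a single 200ms spike in a steady 50ms stream only pushes the
    # estimate to about 69ms, and it decays back toward 50ms afterwards
    return (1.0 - ALPHA) * average_ping + ALPHA * latest_rtt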

Another way is to maybe store more data and then take the mode of those values. How much data to store to properly calculate the mode while still being able to react to a genuine change, I don't know. Statistics isn't my forte at all.

What would be the best way to calculate the average ping?

END

Thanks in advance for any and all help with this! I appreciate it.
The FAQ for this forum contains a few links about "time management" or "clock management."

In general, you don't actually want exact clock sync; instead you want everybody to agree on the order in which events take effect.

The typical way of doing that is to use a fixed number of "steps" or "updates" or "pulses" per second, with the server deciding where "step 0" starts. Clients can stay in sync by keeping an estimate of RTT, and looking at the step numbers received from the server.
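
As a rough sketch of what keeping a "lock" on the step number progression might look like on the client (the names and the 60 Hz rate here are illustrative, not prescribed):

STEP_HZ = 60                       # fixed simulation steps per second
STEP_MS = 1000.0 / STEP_HZ

def estimate_server_step(last_server_step, ms_since_received, rtt_ms):
    # the server was at last_server_step roughly rtt/2 ago, plus however
    # long that packet has been sitting here since we received it
    elapsed_ms = ms_since_received + rtt_ms / 2.0
    return last_server_step + int(elapsed_ms / STEP_MS)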

You may want to run some number of "empty" steps during game start, until each client has enough data to have a good "lock" on the step number progression, and reports "ready" to the server.

Late joiners can go through the same mechanism, as long as they don't expect that the step number starts at 0.

To answer your later question:

Calculating the average ping is also treated a bit in the FAQ, and you can also look at the algorithms used for NTP. But, in general, you measure time from send of message to receipt of ack, measure processing time on the other end, and subtract the processing time from the wall-clock round trip; that's your estimated network round-trip time. To deal with jitter, you can average the value if you want, but I'd probably say that if the averaged value stays within some reasonable amount (10-30 ms?) of the current estimate, just keep the current estimate (but keep averaging actual measurements.) Continual changes to the estimated RTT are typically worse than the possible latency impact of running a game step or two of extra lag.
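
That subtraction is essentially NTP's four-timestamp delay calculation; a minimal sketch (the t1..t4 naming is borrowed from NTP, where t2 and t3 are on the peer's clock):

def estimated_network_rtt(t1_sent, t2_peer_received, t3_peer_replied, t4_ack_received):
    # wall-clock round trip minus the peer's processing time; t2 and t3
    # only appear as a difference, so the peer's clock offset cancels out
    processing = t3_peer_replied - t2_peer_received
    return (t4_ack_received - t1_sent) - processing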

Another option, that's great for certain use cases, is to have the client predict the step number at which a packet will arrive, and put that into the packet when sending it. The server then updates the client with accuracy data -- "you were X steps off ahead/behind" -- and the client could update its estimate based on that data.
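
A sketch of how a client might act on that accuracy report (the half-step correction factor is an illustrative choice to damp oscillation, not an established constant):

class StepPredictor:
    def __init__(self):
        self.offset = 0.0   # learned correction, in steps

    def predicted_arrival_step(self, current_step, rtt_steps):
        # stamped into each outgoing packet; the server grades it
        return current_step + rtt_steps + round(self.offset)

    def on_accuracy_report(self, steps_off):
        # server says we were steps_off ahead (+) or behind (-);
        # correct by half the error rather than jumping the whole way
        self.offset -= steps_off * 0.5
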
enum Bool { True, False, FileNotFound };
Thanks for the reply hplus0603!

1) Established Method for Synchronizing Start Time

I might be wrong, but I don't think that solution works for rollback style netcode, which I have already implemented and am using (still improving and tweaking it though). I had previously prototyped and play tested a lock step netcode system, which if I understand correctly does something similar to the fixed number of steps thing you described. With lock step I indeed did not have the start time synchronization problem, but it didn't work out since the input delay was pretty much twice your ping. It was unplayable for what I want to do (a fighting game).

Just in case, I'll try to explain my netcode a bit better. If I'm misunderstanding you and this pulse system you described still works with rollback, I'd greatly appreciate clarification on how they would work together since it's just not clicking in my head.

ROLLBACK

There is no fixed number of steps per second. Once the game has started, the clients don't even wait for messages from the server. Clients don't attempt to sync themselves over time. Clients don't have to estimate anything. There is no dead reckoning, no interpolation.

Instead, clients blindly keep playing the game locally until a command (a player input) from the other player is received (I'm using the server as a middleman bounce server). At that point, the client rewinds the game to the frame # of the received command, executes that command, then fast forwards back to the current frame #.
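
For concreteness, a bare-bones sketch of that rewind/fast-forward loop (the snapshot-per-frame storage and the simulate function are illustrative stand-ins; GGPO itself manages snapshots more cleverly):

def on_remote_input(game, frame, remote_input):
    # record the remote input so every future replay includes it
    game.inputs[frame].append(remote_input)

    if frame < game.current_frame:
        # rewind: restore the saved state for the frame the input belongs to
        game.state = game.snapshots[frame].clone()
        # fast forward: re-simulate up to the present with corrected inputs
        for f in range(frame, game.current_frame):
            game.state = simulate(game.state, game.inputs[f])
            game.snapshots[f + 1] = game.state.clone()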

This rollback process corrects errors instead of trying to maintain a perfect state. In the most extreme case, if the other player doesn't input a single command for say an entire minute, your client won't receive a single message for an entire minute. Note that the only data sent between clients is player input. State information (position, velocity, etc.) is NOT sent.

While stuff will teleport on screen each time a rollback occurs, clients can set the input delay to reduce the apparent effect of the rollbacks. Players can set the input delay value based on personal preference; some players prefer having no input delay and are willing to deal with more rollbacks. Note that input delay shouldn't change mid-match. It's a user preference that's set before a match begins.
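
A sketch of how that input delay might be wired up: input sampled on frame f takes effect on frame f + delay locally, and is sent to the peer immediately so it usually arrives before it is due (send_to_peer is a hypothetical transport call):

def on_local_input(game, local_input):
    # defer our own input by the user's chosen delay (in frames); the
    # remote copy has that long to arrive before it would cause a rollback
    target_frame = game.current_frame + game.input_delay_frames
    game.inputs[target_frame].append(local_input)
    send_to_peer(target_frame, local_input)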

The rollback netcode still does work if both clients do not start at the same time. The only problem is that the client that started earlier will experience continuous error corrections, or teleporting around the screen, unless their input delay is set to an absurd value.

Basically, as long as the game starts at the same time between clients, no other clock or time synchronization is necessary, as the error correction will handle the game state for you. Actually, you don't want the clients constantly synchronizing the clock after the game starts, as that can screw up time-sensitive inputs (like if you have a 1 frame window to execute a command). Even if one client has a slightly faster moving clock than the other, the difference shouldn't be so big that it's noticeable by the end of a match. It's more important not to mess up time-sensitive inputs.


2) Average Ping

I'll look into the NTP algorithm. Never heard of NTP before. Maybe it'll do what I want.

I think I wasn't clear. My fault entirely. Just in case I was misunderstood, I'll clarify what I meant by average ping.

When I said average ping, I didn't mean the arithmetic mean. I'm looking for the most likely ping between the client and the server: the mode of the most recent pings, rather than the mean. I don't care what the RTT of the last message was; I care about what it should have been.

For example, if my ping values are [50, 50, 52, 49, 48, 50, 51, 200, 50, 51, 49, 48], then the value I want is 50ms as the 200 is the outlier. However, say my roommate suddenly started torrenting and my ping data is now [50, 50, 52, 49, 48, 50, 51, 200, 50, 51, 49, 48, 101, 99, 100, 100, 102, 98]. I now want the value of 100ms as I can expect future pings to be 100ms until something changes again (like I yell at said roommate to turn the torrenting off).
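
One way to get exactly that behavior is a short sliding window of samples bucketed to a few milliseconds, then taking the mode. A sketch in Python; the window size and bucket width are tuning guesses, not established values:

from collections import Counter, deque

class LikelyPing:
    def __init__(self, window=12, bucket_ms=5):
        self.samples = deque(maxlen=window)  # old samples fall off automatically
        self.bucket_ms = bucket_ms

    def add(self, rtt_ms):
        # round to the nearest bucket so 48..52 all count as "50"
        self.samples.append(round(rtt_ms / self.bucket_ms) * self.bucket_ms)

    def likely_ping(self):
        # mode of the recent window: one 200ms spike among ~50ms samples
        # loses the vote; a sustained shift to ~100ms wins within a window
        return Counter(self.samples).most_common(1)[0][0]

On the first data set above this returns 50; on the second, the ~100ms samples win the vote once six of them are in the window.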

I'm not using this average value to display it to the player. I already have the current ping displayed to the player, which was simple. I want this average value to help synchronize the starting time of the two clients. So calculating one ping isn't enough.
Tried to do some more reading on clock synchronization. I suspect that I just don't understand the concept properly, hence why it's not clicking. There's a lot of stuff that's going over my head after all. Is there a simpler article to read?

I did come across http://www.gamedev.n...programs-r2493. I actually have a similar algorithm already in place in my code, where sync requests are sent to the server. A difference is that instead of a primary client calculating the latency with the other client, I use the server to estimate the latency between both clients and calculate the difference. It sorta works, but I still once managed to get my game to start ahead of the other player's. I suspect that my ping fluctuated during the sync process that one time, hence why I was looking for that special average ping formula.
"There is no fixed number of steps per second."

In my opinion, you need a fixed step size, no matter what kind of game it is. It could be 60 Hz if you want. Rewind and replay can work fine on top of the lockstep step numbering -- you simply rewind the state to the point of the sent message. Specifically, if the client sends at estimated global step number 100, and the message gets to the other player at estimated global step number 107, then the receiving client will rewind to step 100 and apply the input.

The "most likely round trip time" is exactly what NTP calculates, using statistical methods. If that's over your head, then what I'm suggesting is something like this:

current_estimate = 100ms;   # the estimate the game logic actually uses
future_estimate = 100ms;    # smoothed running average of raw measurements

on_new_server_measurement(ping):
    # exponentially weighted moving average of the measured pings
    future_estimate = future_estimate * 0.8 + ping * 0.2;
    # only adopt the new value once it has drifted far enough to matter
    if abs(current_estimate - future_estimate) > 16ms:
        current_estimate = future_estimate


This will not change the estimate if it just varies a little bit around an even mean, but it will adjust in "one big gulp" once the RTT changes enough that it matters, and then it will hopefully stay there for a while, thus keeping the game running at the expected speed for the vast majority of the time.

enum Bool { True, False, FileNotFound };
In a sense then, I do have fixed steps as the framerate is preset. I send frame data between clients instead of timestamps. I still think that synchronizing the start time and not synchronizing the clocks after is the way to go with rollback, but I could be wrong.

As for the pulse thing, I do have a similar system in place where the clients send messages every 250ms to attempt to synchronize the start time, but I lacked the estimated RTT to compare against so it got the wrong start time sometimes.

As for the formula, that's a slick if statement in the algorithm. It doesn't deal with a change of < 16ms too well, but it's unlikely that will happen (going from an average of like 50ms to 60ms) and even if it did, 10ms won't hurt too bad. Weighting the old values stronger than the new ones is something I was thinking about as well but I wasn't sure if that was the proper approach. Very very cool.

I'll do a bit more reading on NTP now that I know that it's what I was looking for. It's worth trying to wrap my head around. If I really can't get it, the algorithm you provided is enough for me.

You've been a lot of help hplus0603. Thanks a bunch! :) It's really helpful getting ideas/thoughts critiqued by someone more experienced.

I have some reading and thinking to do now. :)

EDIT: As a bonus for doing all this reading, I now know that I'm implementing trailing state synchronization. Cool.

EDIT 2: While looking up articles on NTP, I came across http://www.mine-cont...c/timesync.html which details an algorithm to synchronize starting time. It's supposedly similar to SNTP and has been tested to reliably get it within 100ms for an existing RTS. 100ms is a bit much...but even still I might try it out. It's actually the same thing that http://www.gamedev.net/page/resources/_/technical/multiplayer-and-network-programming/clock-synchronization-of-client-programs-r2493 recommended, though it was clearer with the actual steps. Reading reading reading.
"I still think that synchronizing the start time and not synchronizing the clocks after is the way to go with rollback, but I could be wrong."

Do you really trust the refresh rates to be exactly the same on all clients? I wouldn't do that. Run physics at a fixed real-time step rate, but still adjust to the master, for the most robust implementation.
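
A sketch of that combination: a fixed-timestep loop whose step length is nudged very slightly toward the master clock, so each physics step stays a fixed size logically while drift is absorbed over many steps (the 0.1% cap is an illustrative choice):

STEP_MS = 1000.0 / 60.0   # fixed simulation step

def advance(game, frame_dt_ms, drift_from_master_ms):
    # stretch or shrink the effective step by at most 0.1% instead of
    # ever jumping the simulation clock outright
    nudge = max(-0.001, min(0.001, drift_from_master_ms / 1000.0))
    effective_step_ms = STEP_MS * (1.0 + nudge)

    game.accumulator += frame_dt_ms
    while game.accumulator >= effective_step_ms:
        game.step()                        # one deterministic physics step
        game.accumulator -= effective_step_ms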

"It doesn't deal with a change of < 16ms too well"

That's the whole point! You don't want to make small adjustments to the estimated offset, because those small adjustments may push you across tick update boundaries needlessly (dropping/adding ticks compared to frames rendered.) The idea there is to stick with one estimate, as long as it's "good enough." You get to decide what "good enough" means :-)

If you pre-roll the game clock for one or two seconds, then all the clocks will be sufficiently in sync once you say "go" -- it'll certainly be better than 100 ms, assuming that your tick rate is better than 10 Hz. Also, you probably don't want to use separate messages for "probing time" -- put the timing-related fields in the header of each datagram you send each way, together with things like "packet serial number" or whatever. That way, you can keep a running update at the best resolution you have.
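
A sketch of piggybacking those timing fields on every datagram instead of using separate probe messages (the exact header layout here is an assumption, not a fixed protocol):

import struct

# hypothetical per-datagram header: our serial number, our current step,
# the last serial we received from the peer, and how long we held it (ms)
HEADER = struct.Struct("<IIIH")

def pack_header(serial, step, echoed_serial, hold_ms):
    return HEADER.pack(serial, step, echoed_serial, hold_ms)

def rtt_from_header(send_times_ms, echoed_serial, hold_ms, now_ms):
    # every incoming packet doubles as an ack: wall-clock round trip
    # minus the peer's hold time, at whatever rate packets already flow
    return (now_ms - send_times_ms[echoed_serial]) - hold_ms
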
enum Bool { True, False, FileNotFound };
Hmm. I think I'll end up doing continuous clock synchronization in the long run then. I plan on having short matches for my game, so I think I can get by for now.

As for the pre-roll for 2 or so seconds, the article sorta prerolls for even longer to do its calculation. The 100ms might have been a worst case scenario, dunno.

Ended up being busy, but I have some NTP stuff bookmarked.

Once again thanks for the help. :)

