I'm using rollback-style netcode (inspired by what I read about GGPO) where the game can rewind, fix errors, and fast-forward to the current time. I have a bare-bones version of it already working.
As long as the start times for the two clients are as close as possible, the game runs smoothly. For example, if two clients have the same ping (like two computers on the same router) and they luckily don't hit a latency spike while the server transmits the start command, all is well. That isn't reliable, of course, and sometimes one client will be behind the other by enough that rollbacks occur far too frequently and stuff constantly teleports around the screen.
Is there an established method of getting the start time as close as possible? If so, can I get a link to it or a description of how this is done?
Alternatively, if no such method exists, I could really use some help fleshing out a method of doing this. A wall of text describing my current idea follows. Thoughts/critiques/improvements are more than welcome.
Idea to synchronize start times
Assumption: We have the average ping of player 1 and player 2. I'm not sure how to calculate this, but let's say we have it.
- There is a default value bufferTime of say 3 seconds (subject to change)
- Server calculates average ping difference between the two players where pingDifference = p1.averagePing - p2.averagePing
- Server sends the faster player a "start attempt" message that includes bufferTime + abs(pingDifference). Server also records the local time that this message was sent
- Server sends the slower player a "start attempt" message that includes bufferTime. Server also records the local time that this message was sent
- Both clients respond with an "acknowledgement" message the moment they receive it. They also record the local time the message was received as startAttemptReceivedTime
- Server uses the "acknowledgement" message to calculate the RTT (round trip time) of the "start attempt" message
- If either client's RTT deviates by more than, say, 5ms from their average ping, the server restarts the process by sending the "start attempt" message to both clients again with the same values
- This process repeats until the RTT of the "start attempt" is acceptable for both clients
- Server then sends both clients a "confirm" message
- Clients receive the "confirm" message and calculate the actualGameStartTime
- The faster client has actualGameStartTime = startAttemptReceivedTime + bufferTime + abs(pingDifference)
- The slower client has actualGameStartTime = startAttemptReceivedTime + bufferTime
The idea here is that the server keeps attempting to start the game for the clients, then calculates the RTT (round trip time) to confirm that there wasn't a latency spike when that message was sent to the clients. Once no latency spike is detected, the server tells the clients that it's OK to start the game at a calculated later time. The bufferTime is large enough that the OK from the server can still arrive in time even through a normal latency spike.
Even if the OK message is received too late due to a spike, a fail check can be made and the client in question can send a "failure" message to the server to start the process all over again.
The start times should, in theory, be relatively close. This makes the assumption that if the RTT is close enough to the average ping, then chances are the client received the "start attempt" message after a delay of RTT/2. That isn't 100% true (the "start attempt" could have travelled to the client slowly while the "acknowledgement" travelled back quickly, giving the same RTT), but in theory it will be true often enough, and the worst case shouldn't be that bad.
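To make the flow above concrete, here's a minimal sketch of the bookkeeping involved, following the steps exactly as described. Everything in it (the Player class, BUFFER_TIME, MAX_DEVIATION, the function names) is made up for illustration, and the actual sending/receiving of messages is assumed to live in whatever networking layer you already have:

```python
import time
from dataclasses import dataclass

BUFFER_TIME = 3.0      # seconds of slack, "subject to change" as above
MAX_DEVIATION = 0.005  # 5 ms tolerance between an attempt's RTT and the average ping


@dataclass
class Player:
    average_ping: float        # smoothed RTT in seconds
    attempt_sent_at: float = 0.0
    attempt_rtt: float = 0.0


def start_delays(p1, p2):
    """Delay each client waits after receiving "start attempt": returns (p1's, p2's)."""
    ping_difference = abs(p1.average_ping - p2.average_ping)
    if p1.average_ping <= p2.average_ping:   # p1 is the faster player
        return BUFFER_TIME + ping_difference, BUFFER_TIME
    return BUFFER_TIME, BUFFER_TIME + ping_difference


def on_attempt_sent(player):
    """Server records when the "start attempt" left for this player."""
    player.attempt_sent_at = time.monotonic()


def on_ack_received(player):
    """Server measures this attempt's RTT; True means it looks spike-free."""
    player.attempt_rtt = time.monotonic() - player.attempt_sent_at
    return abs(player.attempt_rtt - player.average_ping) <= MAX_DEVIATION


def actual_game_start_time(start_attempt_received_time, delay):
    """Client side: local clock value at which to actually start the game."""
    return start_attempt_received_time + delay
```

If on_ack_received() comes back False for either player, the server would just resend the "start attempt" with the same delays and repeat until both RTTs look clean, then send the "confirm".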
Help: How do I calculate the average ping?
If the above holds true, then there is still the issue of calculating the average ping of each player. Assuming a typical lobby system like a simplified battle.net, this ping will have time to build up to a relatively accurate value. To keep it accurate, the ping should be re-measured every so often.
The average ping should be the most likely/common RTT of a message between the Client and Server. That means it needs to be able to ignore outliers while still adapting to a recent change in ping (like if a roommate started torrenting and the likely ping goes up by like 100ms).
At the same time, the server shouldn't have to record ALL received ping values. A user who stays connected to the server long enough will cause the server to hold way too much data in memory just to calculate ping. Besides, old enough ping data can pretty much be scrapped regardless.
One way to do it is similar to how TCP calculates its EstimatedRTT to decide when it has to resend a packet. Basically, use weighted pings: AveragePing = LatestRTT * 0.9 + LastRTT * 0.1. Or perhaps use more pings in the calculation, like AveragePing = RTT1 * 0.4 + RTT2 * 0.3 + RTT3 * 0.2 + RTT4 * 0.1, where RTT1 is the latest and most important RTT and RTT4 is the oldest one.
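For what it's worth, here's a minimal sketch of that exponentially weighted idea, written the way TCP's EstimatedRTT is usually described: the previous estimate (not just the previous raw sample) gets blended with the newest RTT, so only a single number ever has to be stored. The class name and default alpha are assumptions; note that TCP conventionally gives the newest sample the small weight (around 0.125) so one spike barely moves the estimate, which is the opposite of the 0.9 weighting above.

```python
class PingEstimator:
    """Running smoothed RTT; no history beyond one number is kept."""

    def __init__(self, alpha=0.125):
        self.alpha = alpha       # weight given to the newest sample
        self.average = None      # no estimate until the first sample arrives

    def add_sample(self, rtt):
        if self.average is None:
            self.average = rtt   # first sample seeds the estimate
        else:
            self.average = self.alpha * rtt + (1.0 - self.alpha) * self.average
        return self.average
```

Each new "acknowledgement" or periodic ping just calls add_sample() with the measured RTT; old data fades out on its own, which also covers the memory concern above.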
Another way is to store more data and then take the mode of those values. How much data I'd need to store to properly calculate the mode while still being able to react to a recent change, I don't know. Statistics isn't my forte at all.
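If the mode route sounds appealing, one way to keep memory bounded is a fixed-size window of recent samples, bucketed to whole milliseconds so a mode is meaningful on noisy values. The window size and bucket size below are arbitrary assumptions, and a median over the same window is a common alternative that's similarly resistant to outliers:

```python
from collections import Counter, deque


class ModePingEstimator:
    def __init__(self, window=64):
        # deque(maxlen=...) silently drops the oldest sample, so memory stays bounded
        self.samples = deque(maxlen=window)

    def add_sample(self, rtt_seconds):
        self.samples.append(round(rtt_seconds * 1000))  # bucket to whole milliseconds

    def estimate_ms(self):
        if not self.samples:
            raise ValueError("no ping samples yet")
        # the most common bucket among the recent samples
        return Counter(self.samples).most_common(1)[0][0]
```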
What would be the best way to calculate the average ping?
Thanks in advance for any and all help with this! I appreciate it.