Client timing / SNTP & roundtrip delay

1 comment, last by khelkun 14 years, 1 month ago
Hello, I'd like to present my approach for computing the round-trip delay between a game client and its server. I would treat this delay as the average "player ping". I would be grateful if you could let me know whether anything is wrong with this approach. Thanks. From this topic, hplus0603 wrote:
Quote: Typically, your packet to the server will include "this is my current time". Packets from the server to the player will include "this is your timestamp from the last packet, which I received X time ago". From this data, you can calculate round-trip transmission time as current-time - your-timestamp - X (where X represents "processing time" on the server, loosely). This adds 2 bytes from client->server (assuming you're OK with millisecond precision, and seeing roll-over every 32 seconds) and 4 bytes from server->client (last-timestamp and time-since-received). Note that you typically pack a number of messages into a single packet, so the "overhead" for determining ping is only once per network packet (part of framing). Also, that data is not unnecessary if you actually need the ping, either to present it to the user, or to do interpolation calculation with. I would think it's well worth it to collect this data.
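The scheme in the quote can be sketched as follows. This is a minimal illustration with assumed function names: timestamps are 16-bit millisecond counters, so all arithmetic has to be done modulo 2**16 to survive the roll-over the quote mentions.

```python
# Sketch of the timestamp-echo scheme described above (names assumed).
# The client stamps each packet with a 16-bit millisecond counter; the
# server echoes that stamp back together with X, the time it held the
# packet before replying.

def now_ms16(clock_ms: int) -> int:
    """Truncate a millisecond clock to the 16-bit wire format."""
    return clock_ms & 0xFFFF

def rtt_ms(current: int, echoed: int, server_hold: int) -> int:
    """Round-trip time = current-time - your-timestamp - X,
    computed modulo 2**16 so counter roll-over does not break it."""
    return (current - echoed - server_hold) & 0xFFFF

# Example: client stamped 1000 ms, server held the packet 20 ms,
# and the client sees the echo at 1150 ms.
print(rtt_ms(now_ms16(1150), now_ms16(1000), 20))  # -> 130
```

The masking also handles the case where the counter wrapped between send and receive, which is why the wire format can stay at two bytes.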
Considering the following client/server network communication pipeline:
  • Game actions of the players are sent from clients to server through HTTP web service requests.
  • Game events are sent from server to clients through HTTP message push (Comet protocol).
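The two legs of this pipeline can be sketched as follows. All names here are assumptions for illustration: the client stamps every web service request with T1, and the server records (T1, T2) per client so the next Comet push can echo them back.

```python
# Sketch of the two legs (names are assumptions, not a real API).
import time

def make_request(action: str, clock=time.monotonic) -> dict:
    """Client side: attach T1 to an outgoing game-action request."""
    return {"action": action, "t1": clock()}

# Server side: the last (T1, T2) pair seen per client, keyed by client id.
last_seen: dict = {}

def on_request(client_id: str, request: dict, clock=time.monotonic) -> None:
    """Server side: remember the request's T1 and the receive time T2."""
    last_seen[client_id] = (request["t1"], clock())

req = make_request("move")
on_request("player-1", req)
print(last_seen["player-1"][0] == req["t1"])  # -> True
```

A real server would also clear the stored pair once it has been echoed, so a ping sample is not reused across pushes.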
Here is how I think I can compute the average "player ping" on each client, based on the Simple Network Time Protocol (SNTP):
  • Each web service request sent by the client includes the timestamp at which the request was sent client-side: T1.
  • The server remembers the T1 of each client's most recent web service request, along with the timestamp at which it received that request: T2.
  • The next game event pushed to the client therefore includes T1, T2, and T3, where T3 is the timestamp at which the server sends the game event.
  • Finally, T4 is the timestamp at which the client receives the game event pushed by the server.
So for each game event received by a client we have T1, T2, T3, and T4. These timestamps let us compute the round-trip delay d = (T4 - T1) - (T3 - T2), i.e. the total elapsed time on the client minus the server's processing time. I would treat this round-trip delay as the average "player ping" you see in any online FPS when you press the Tab key to view the in-game players and their pings. Thanks for reading.
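The computation above is just one subtraction; a minimal sketch (function name assumed):

```python
# Minimal sketch of the standard T1..T4 round-trip computation.

def roundtrip_delay(t1: float, t2: float, t3: float, t4: float) -> float:
    """d = (T4 - T1) - (T3 - T2): total elapsed time as seen by the
    client, minus the server's processing time between receiving the
    request (T2) and pushing the event (T3)."""
    return (t4 - t1) - (t3 - t2)

# Example: request sent at t=0 ms, received by the server at t=40 ms,
# event pushed at t=55 ms, received by the client at t=95 ms.
print(roundtrip_delay(0.0, 40.0, 55.0, 95.0))  # -> 80.0
```

Note that T1/T4 and T2/T3 live on different clocks, but the formula only ever subtracts timestamps taken on the same machine, so no clock synchronization between client and server is required.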
That's generally how it's done. Note that, in most cases, this "ping" measurement will include some application-level delays, as well as pure network delays. If you have a "soft real time" networking thread that can timestamp packets as they come in and add the appropriate timestamp when they go out, and you can make sure to keep the TCP send buffer mostly empty for both machines, then you can get close to the network-only "ping" time.
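The "timestamp packets as they come in" advice can be sketched like this (the structure is an assumption, not a prescribed design): the I/O thread takes the timestamp immediately at the socket, so any queueing delay before the game loop processes the packet does not inflate the measured ping.

```python
# Sketch: stamp packets in the networking thread, as close to the
# socket as possible, and hand the (timestamp, payload) pair to the
# game loop through a queue.
import queue
import time

incoming: queue.Queue = queue.Queue()

def on_packet(data: bytes, clock=time.monotonic) -> None:
    """Called from the networking thread: stamp first, queue second."""
    incoming.put((clock(), data))

# The game loop later uses the stored timestamp as T4, not the time at
# which it happened to dequeue the packet.
on_packet(b"event")
t4, payload = incoming.get()
print(payload)  # -> b'event'
```

The same idea applies on the send side: take T3 when the bytes actually go out, not when the event is enqueued for pushing.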
enum Bool { True, False, FileNotFound };
Get the timestamp at the packet level: I'll keep that in mind.

Thanks a lot for your answer.

This topic is closed to new replies.
