How to implement connection percentage?

6 comments, last by Rich76 14 years, 2 months ago
I noticed Poker Stars lets you view other players' connection percentage. How is something like that implemented? The game of course uses TCP to transfer data. I was thinking that maybe the client sends the server UDP messages every fraction of a second, and the server then calculates how many UDP messages it receives per second..? Anyone have any ideas?
What do you mean by "connection percentage"? Do you mean like "100% == perfect connection, 75% == ok connection" or something? Can you describe it in a bit more detail for those of us who don't play Poker Stars :)
Quote:Original post by Codeka
What do you mean by "connection percentage"? Do you mean like "100% == perfect connection, 75% == ok connection" or something? Can you describe it in a bit more detail for those of us who don't play Poker Stars :)


:) Sorry.

Yeah, that's what it means, I think. I noticed players will have a low percentage right before disconnecting. Sometimes they'll have a low percentage and they take forever to respond.

You're probably talking about latency. It can be used to detect when a client is disconnecting. To measure latency, have the server send a ping packet and store the current time in milliseconds. When the client receives the ping, it replies with a pong. When the server receives the pong, it takes the current time and computes pongReceiveTime - pingSendTime. That gives the round-trip latency, which you can turn into a percentage showing how far it deviates from the normal or preferred latency.
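A minimal sketch of that timing logic (the actual socket send/receive calls are left out, and names like sendPing/onPongReceived are just placeholders, not any real library's API):

#include <chrono>
#include <cstdio>

// Sketch of the ping/pong round-trip measurement described above. Only the
// timing logic is shown; the real socket reads/writes are omitted.
using Clock = std::chrono::steady_clock;

static Clock::time_point pingSendTime;

void sendPing()
{
    pingSendTime = Clock::now();
    // ... write a small "ping" message to the client's socket here ...
}

void onPongReceived()
{
    auto rtt = std::chrono::duration_cast<std::chrono::milliseconds>(
        Clock::now() - pingSendTime);
    std::printf("round-trip latency: %lld ms\n",
                static_cast<long long>(rtt.count()));
}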
Sounds like you could also mean packet loss, although the percentage shown in that game is probably some combination of network metrics, maybe including the latency of the last request/response (see Sirisian's post). If it is packet loss over UDP, one way to track it is sketched below.

If you like, you could find out definitively whether it's using UDP by running a network sniffer like Wireshark.
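A rough sketch of tracking loss, assuming each UDP datagram carries an incrementing sequence number (reordering and wrap-around are ignored for brevity):

#include <cstdint>
#include <cstdio>

// Count gaps in the received sequence numbers as lost packets and report the
// delivery rate as a percentage.
struct LossTracker
{
    uint32_t expectedSeq = 0;   // sequence number we expect next
    uint32_t received = 0;
    uint32_t lost = 0;

    void onPacket(uint32_t seq)
    {
        if (seq > expectedSeq)
            lost += seq - expectedSeq;   // gap in the sequence => dropped packets
        ++received;
        expectedSeq = seq + 1;
    }

    double deliveryPercent() const
    {
        uint32_t total = received + lost;
        return total ? 100.0 * received / total : 100.0;
    }
};

int main()
{
    LossTracker tracker;
    const uint32_t seqs[] = {0, 1, 2, 5, 6};   // packets 3 and 4 never arrive
    for (uint32_t seq : seqs)
        tracker.onPacket(seq);
    std::printf("delivery: %.0f%%\n", tracker.deliveryPercent());   // prints 71%
}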
Thank you both. :)
When using TCP, if you know the time at which packets are supposed to be received (say you poll/ping every so often), then you can time the jitter of those packets, and use jitter as a proxy for connection quality.

Another option is to timestamp each packet as it is sent. This lets the server establish a reasonable client/server clock relation, simply as the minimum difference observed between the client's send timestamp and the server's receive time. Then measure how much later each packet is than the fastest packet you've received, and classify it as something like "<100 ms: 100%, <300 ms: 75%, ...."
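A sketch of that classification step (the thresholds are only examples, and clock handling is simplified):

#include <algorithm>
#include <cstdint>
#include <limits>

// Track the smallest client-send-time vs. server-receive-time difference seen
// (the "fastest" packet), treat the excess on later packets as jitter, and
// bucket it into a quality percentage.
struct QualityEstimator
{
    int64_t minOffsetMs = std::numeric_limits<int64_t>::max();

    int onPacket(int64_t clientSendMs, int64_t serverRecvMs)
    {
        int64_t offset = serverRecvMs - clientSendMs;
        minOffsetMs = std::min(minOffsetMs, offset);
        int64_t lateness = offset - minOffsetMs;   // how much later than the fastest packet

        if (lateness < 100)  return 100;
        if (lateness < 300)  return 75;
        if (lateness < 1000) return 50;
        return 25;
    }
};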
enum Bool { True, False, FileNotFound };
Quote:Original post by hplus0603
When using TCP, if you know the time at which packets are supposed to be received (say you poll/ping every so often), then you can time the jitter of those packets, and use jitter as a proxy for connection quality.

Another option is to timestamp each packet as it is sent. This lets the server establish a reasonable client/server clock relation, simply as the minimum difference observed between the client's send timestamp and the server's receive time. Then measure how much later each packet is than the fastest packet you've received, and classify it as something like "<100 ms: 100%, <300 ms: 75%, ...."


Brilliant. Thank you :)

This topic is closed to new replies.
