Tazadar

Ping request update rate?


Recommended Posts

Hello, I am making a UDP server/client.

Ping is computed client-side: the client sends a packet containing the command PING and the server answers with PONG.
With that I can measure the round-trip time easily.
My question is: what is the best update rate for the ping? Every second? Every 500 ms? Especially since there is no ACK on the PING.
The ping is important because I use it when the client sends a packet to the server with a timestamp (which is time + ping).
The time is used to determine packet order and to discard old packets.

Thank you very much.

Taz.


Ping is computed client-side: the client sends a packet containing the command PING and the server answers with PONG.


If you want to keep track of how long a particular PING/PONG sequence took, you should include a unique identifier (such as a serial number) in the PING, and echo that back in the PONG. That way you can match them up, even across network drops etc.
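A minimal client-side sketch of that bookkeeping; the "PING &lt;serial&gt;" / "PONG &lt;serial&gt;" wire format here is only an assumed example, not a fixed protocol:

[code]
import time

# Client-side sketch: remember when each serial was sent, match the echoed
# serial in the PONG, and derive the round-trip time from it.
outstanding_pings = {}   # serial -> local send time (seconds)
next_serial = 0

def make_ping():
    """Build the next PING payload and record its send time."""
    global next_serial
    serial = next_serial
    next_serial += 1
    outstanding_pings[serial] = time.monotonic()
    return ("PING %d" % serial).encode()

def handle_pong(payload):
    """Return the RTT in milliseconds, or None for a stale/unknown PONG."""
    serial = int(payload.decode().split()[1])
    sent_at = outstanding_pings.pop(serial, None)
    if sent_at is None:
        return None
    return (time.monotonic() - sent_at) * 1000.0

# e.g. send make_ping() over your UDP socket; later, handle_pong(b"PONG 7")
# returns the round trip for serial 7 even if earlier PONGs were lost.
[/code]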

However, if you keep sending packets with commands, you don't need a separate PING packet at all.
You should number command packets with a sequence number, and discard any commands whose sequence number is out of order, assuming those are the semantics you want (see the small sketch below).
The ping time should only be necessary for presentation level logic -- how far to extrapolate entities displayed; how far back to interpolate objects if doing server-side hit testing; etc.
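A tiny sketch of that sequence-number discard (last-packet-wins semantics; in a real server the counter would live per connection):

[code]
# Keep only packets newer than anything already seen; late or duplicate
# packets are simply dropped.
newest_sequence = -1

def accept_packet(sequence):
    """True if this packet should be processed, False if it is old or a duplicate."""
    global newest_sequence
    if sequence <= newest_sequence:
        return False
    newest_sequence = sequence
    return True
[/code]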

Typically, you'll schedule network packets once every X milliseconds, and put any and all commands that happened between previous packet and next packet into that packet -- so there are many "messages" per "packet." The tick number and local timing information can go in a header in that UDP message, and be used to calculate timing.
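As a sketch of that batching, assuming a made-up header of sequence number, tick, and local send time in milliseconds:

[code]
import struct
import time

HEADER = struct.Struct("<IIQ")   # sequence, game tick, local send time (ms)

class PacketBuilder:
    """Collects messages between network ticks and flushes them as one datagram."""
    def __init__(self):
        self.sequence = 0
        self.pending = []

    def queue(self, message):
        self.pending.append(message)

    def flush(self, tick):
        """Call once per network tick; returns the bytes for a single UDP send."""
        header = HEADER.pack(self.sequence, tick, int(time.monotonic() * 1000))
        # Prefix each message with a 2-byte length so the receiver can split them.
        body = b"".join(struct.pack("<H", len(m)) + m for m in self.pending)
        self.sequence += 1
        self.pending.clear()
        return header + body

# Usage: builder.queue(cmd) as commands happen, then every X milliseconds
# sock.sendto(builder.flush(current_tick), server_address).
[/code]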

Thank you so much, hplus0603!

So if I understand correctly, the client should send its packets in bulk.
Is it the same for the server? Should the server send all its packets in bulk or one by one?
I.e., sending a PONG packet, then sending an update packet, etc.
And finally, what is the best packet size?

Thanks again!


Is it the same for the server? Should the server send all its packets in bulk or one by one?
...

And finally, what is the best packet size?


Everyone should batch messages into bulk packets. In general, bulk packets should be sent on a time-based schedule -- 10 times a second, or 30 times a second, or whatever. If there's nothing to send, just send the ping/pong stuff and the acknowledgements needed to keep your "reliable" messages happy (if you have any).
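A rough sketch of that time-based schedule, where send_packet, pending_messages, and build_keepalive stand in for whatever your own code does:

[code]
import time

SEND_INTERVAL = 0.1   # 10 packets per second; pick whatever rate fits your game

def network_send_loop(send_packet, pending_messages, build_keepalive):
    next_send = time.monotonic()
    while True:
        if time.monotonic() >= next_send:
            messages = pending_messages()
            if not messages:
                # Nothing game-related to say: still send ping/ack data so the
                # connection and timing estimates stay alive.
                messages = [build_keepalive()]
            send_packet(messages)
            next_send += SEND_INTERVAL
        time.sleep(0.001)   # in a real game the simulation would run here instead
[/code]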

The best packet size is as close to 0 as you can get while still providing the game experience you want to provide.

If you are sending your packets on a fixed interval, like every 15 ms, then you can figure out the ping all the time on the server side.

Server
---------
Packet 1 received at 30 ms
Packet 2 received at 45 ms: (45 - 30) = 15 ms, minus the 15 ms interval = 0 ping
Packet 3 received at 70 ms: (70 - 45) = 25 ms, minus the 15 ms interval = 10 ping
Packet 4 dropped
Packet 5 received at 115 ms: (115 - 70) / 2 packets = 22.5 ms, minus the 15 ms interval = 7.5 ping

You know the player's ping even when they don't actually send you a ping request. Therefore just put the ping message in one of the updates that you send back to the player. This of course only works if you know that the player is supposed to send you another message on a fixed interval. I subtracted 15 ms because that was the interval, so if they send the next packet 15 ms later and I receive it exactly 15 ms after the last packet, then there was 0 ping.
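A sketch of that arithmetic on the server, using sequence numbers to account for dropped packets; note that what it measures is how much the arrival spacing deviates from the client's known send interval:

[code]
SEND_INTERVAL_MS = 15.0   # the client's known fixed send interval

last_arrival_ms = None
last_sequence = None

def on_client_packet(sequence, arrival_ms):
    """Return the delay estimate for this packet in ms (None for the first one)."""
    global last_arrival_ms, last_sequence
    if last_arrival_ms is None:
        last_arrival_ms, last_sequence = arrival_ms, sequence
        return None
    gap = (arrival_ms - last_arrival_ms) / (sequence - last_sequence)
    last_arrival_ms, last_sequence = arrival_ms, sequence
    return gap - SEND_INTERVAL_MS

# Reproducing the numbers above: on_client_packet(1, 30) -> None,
# (2, 45) -> 0.0, (3, 70) -> 10.0, packet 4 dropped, (5, 115) -> 7.5.
[/code]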


If you are sending your packets on a fixed interval, like every 15 ms, then you can figure out the ping all the time on the server side.

Server
---------
Packet 1 received at 30 ms
Packet 2 received at 45 ms: (45 - 30) = 15 ms, minus the 15 ms interval = 0 ping
Packet 3 received at 70 ms: (70 - 45) = 25 ms, minus the 15 ms interval = 10 ping
Packet 4 dropped
Packet 5 received at 115 ms: (115 - 70) / 2 packets = 22.5 ms, minus the 15 ms interval = 7.5 ping

You know the player's ping even when they don't actually send you a ping request. Therefore just put the ping message in one of the updates that you send back to the player. This of course only works if you know that the player is supposed to send you another message on a fixed interval. I subtracted 15 ms because that was the interval, so if they send the next packet 15 ms later and I receive it exactly 15 ms after the last packet, then there was 0 ping.


Hmmm, does it really work?

I mean, if I have a stable ping, let's say around 100 ms:
my Packet 1 will arrive at 130 ms
my Packet 2 will arrive at 145 ms => 0 ping, but it should be 100
my Packet 3 will arrive at 170 ms => 10 ping, but it should be 110

This comes from the fact that my client keeps sending packets every 15 ms (I took the same value as in your example).

Packet 2 received at 45 ms: (45 - 30) = 15 ms, minus the 15 ms interval = 0 ping
Packet 3 received at 70 ms: (70 - 45) = 25 ms, minus the 15 ms interval = 10 ping


That's measuring jitter, not ping.

Generally, you should measure game time in "game ticks." Additionally, on the clients, you need to establish a relation between wallclock time and game tick number, that can change over time.

Typically, the client will send a message something like:
"Here's my packet 10, and my current game tick is 22." The client can additionally remember that it sent packet 10 at tick 22 at my time 15:36:28.445

The server will get that packet, and timestamp it with time and game tick. Let's say the server gets it at server time 17:35:40.872 and game tick 24.

Next, the server processes updates, and sends out a packet to the client. Let's say the server does this at server time 17:35:40.901. This packet contains information:
"Your packet 10 was received at my tick 24. It is now tick 25. Processing time for the packet was 29 milliseconds."

Some time later, the client receives that message. Let's say the client receives it at client tick 27. Additionally, it is then client time 15:36:28.600.

The client can now calculate the time through the network -- 155 milliseconds -- and subtract the amount of processing time -- 29 milliseconds -- for a round-trip ping estimate of 126 milliseconds. Additionally, the client will know that the server tick number was 25 at a time 63 milliseconds ago (half the estimated RTT). With the client currently at tick 27, its estimate is currently alright, so it doesn't need to make any adjustments.
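A client-side sketch of that bookkeeping; the 33 ms tick length and the field names are assumptions for illustration:

[code]
import time

TICK_MS = 33.0   # assumed tick length
sent = {}        # packet number -> wallclock send time

def record_send(packet_number):
    sent[packet_number] = time.monotonic()

def handle_server_reply(packet_number, server_tick_now, processing_ms):
    """The server reported: 'it is now my tick server_tick_now, and processing_ms
    milliseconds passed between receiving your packet and sending this reply'."""
    elapsed_ms = (time.monotonic() - sent.pop(packet_number)) * 1000.0
    rtt_ms = elapsed_ms - processing_ms           # network round-trip estimate
    one_way_ms = rtt_ms / 2.0
    # server_tick_now was current roughly one_way_ms ago, so the server should be
    # about this far along right now; the caller compares this against its own
    # current tick and adjusts its tick/wallclock mapping if they drift apart.
    estimated_server_tick = server_tick_now + one_way_ms / TICK_MS
    return rtt_ms, estimated_server_tick

# With the numbers above (155 ms elapsed, 29 ms processing, server tick 25),
# this yields rtt_ms = 126 and an estimated server tick of about 26.9, so a
# client currently sitting at tick 27 would leave its mapping alone.
[/code]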

This process happens for each and every packet in the communication, so there will be a "pipeline" of outstanding messages. While the client might just have sent packet 10, he may be getting the acknowledgement for packet 8 back from the server (assuming there's one packet every 2 or 3 ticks, in this case).

For a really precise estimate of time, network packet dequeue needs to be decoupled from the main frame loop, rather than polled once per frame. If you poll the network once per frame, you will get jitter in your estimate of up to the duration of a frame. If you use I/O completion ports (or a thread + epoll, or whatever) with high-priority handler threads, you will get better precision in timing -- but only if I/O is _all_ those threads do, and they then hand off the actual processing of the packets to the main thread, or some other regular-priority thread.
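As a simpler stand-in for IOCP/epoll, here is a sketch using a plain background thread that does nothing but receive and timestamp, handing packets to the main thread through a queue (the port number is arbitrary):

[code]
import queue
import socket
import threading
import time

incoming = queue.Queue()

def receive_loop(sock):
    """Dedicated thread: timestamp packets the moment they arrive, nothing else."""
    while True:
        data, addr = sock.recvfrom(2048)
        incoming.put((time.monotonic(), addr, data))

def start_receiver(port=40000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    threading.Thread(target=receive_loop, args=(sock,), daemon=True).start()
    return sock

# Main/game thread, once per frame: drain the queue and process packets using
# the arrival timestamps taken by the receiver thread, not the frame time.
#   while not incoming.empty():
#       arrived_at, addr, data = incoming.get_nowait()
#       process(arrived_at, addr, data)
[/code]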

Thank you very much, hplus0603, for these explanations, it helps a lot!

I have another question about the tick.
Let's say I use my tick as my update rate (say 33 ms, so 30 updates per second).
Since the tick is an integer, a tick's lifetime is 33 ms: I can receive a packet at time 0 or at time 32 and it will have the same tick.
This means the tick implies a time error of up to 32 ms.

So I suppose I should make the tick as small as I can in order to get more accuracy on time, right?

Thanks,

Taz.


So I suppose I should make the tick as small as I can in order to get more accuracy on time, right?


Depends on what time you want. If you want the game to be able to resolve events with a resolution smaller than 33 milliseconds, then yes, you need to have a smaller tick size. This is why "high-performance" FPS game servers sometimes boast a tick rate of 66 or even 100. The possible small gain in game event resolution generally isn't worth the 3x increase in overall CPU and bandwidth cost for most games, though.

If you just want better resolution for the round-trip-time estimation, then no, you don't need a smaller tick size, because that's entirely based on wallclock times. The algorithm I sketched out above does two things at once: estimate the clock/tick relation, and estimate the round-trip-time.
