Update rate vs RTT?


This is actually much more confusing than I thought it would be when I started writing this thread...

I mean, the update rate and the receiving rate are basically the same regardless of latency... If I send 30 packets a second and the latency is 2 seconds, the packets all arrive 2 seconds late, but they still arrive at 30 per second... That makes the whole prediction thing different from what I'm used to thinking... Generally I just think in terms of a single event...

Latency's influence on prediction is the elastic 'delay and catch up' effect you get when predicting (either a slow start followed by a catch-up when going from stopped to moving, or overshooting and coming back when going from moving to stopped). But the update rate is just the granularity of the movement performed by the players: a low rate will make players look like they move in straight lines, while a high rate will describe a zigzagging movement, if any, more accurately. Is that right?

But that's not even what I was thinking about...

Should one try to adjust update rates (packet sending) according to RTT measurements?

I understand that gameplay-wise, the higher the rate the better, and bandwidth-wise, the minimum necessary is better. (Pretty obvious.)

But sending fewer or more packets under different latencies doesn't change anything game-wise; it's the latency itself that changes things (worse prediction)... So is it really just about trying to soften a problem that MAY be caused by congestion? Because if congestion isn't the problem, you're lowering your rates for no reason, right?

Is it best practice to step on the brakes in the case of a bad connection?


Should one try to adjust update rates (packet sending) according to RTT measurements?

Not usually. You typically want to fix your simulation rate (30 Hz, 60 Hz, 90 Hz, 144 Hz, or whatever) and then fix your network rate (10 Hz, 15 Hz, 20 Hz, 30 Hz, or whatever), and in each network update packet send the inputs/commands for the simulation time steps that have happened since the last send.
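A minimal sketch of that arrangement in C++, assuming a 60 Hz simulation and a 20 Hz network send rate; the names (InputState, SimulateTick, SendInputPacket) are placeholders for illustration, not any particular engine's API:

#include <cstddef>
#include <vector>

struct InputState { bool forward = false, fire = false; };

const double      kSimStep        = 1.0 / 60.0; // one simulation tick, in seconds
const std::size_t kTicksPerPacket = 3;          // 60 Hz sim / 20 Hz net = 3 ticks per packet

void SimulateTick(const InputState&)                 { /* advance the game one fixed step */ }
void SendInputPacket(const std::vector<InputState>&) { /* serialize and transmit one update */ }

void Pump(double now, double& nextTickTime,
          std::vector<InputState>& pendingInputs, const InputState& current)
{
    while (now >= nextTickTime) {
        SimulateTick(current);            // the simulation rate never changes with RTT
        pendingInputs.push_back(current); // remember the input used for this tick
        nextTickTime += kSimStep;

        if (pendingInputs.size() >= kTicksPerPacket) {
            SendInputPacket(pendingInputs); // one packet carries several ticks' worth of input
            pendingInputs.clear();
        }
    }
}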

Yes, network games have to adjust to there being a number of time steps of latency between the client and the server.

enum Bool { True, False, FileNotFound };
So is it really just about trying to soften a problem that MAY be caused by congestion? Because if congestion isn't the problem, you're lowering your rates for no reason, right?

There are other reasons, but you are right that generally reducing what you transmit tends to fix assorted problems.

This is why many game clients tend to transmit player input state changes. Those happen less frequently.

A fast clicker may have 5 mouse clicks in a second. Someone focusing on a mouse-clicking test may reach 10 or 12 clicks per second. Five or ten messages per second is better than thirty. Most gameplay is low input. You press and hold the forward button ('W') for several seconds; that is a single event. The player moves the mouse to turn the character every second or so. You may run for 15 or 20 seconds while transmitting only 5 or 10 events.
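A minimal sketch of that send-on-change idea (the later reply explains why most games instead send at a fixed network rate); all names here are made up for illustration:

struct InputState {
    bool forward = false, back = false, fire = false;
    bool operator==(const InputState& o) const {
        return forward == o.forward && back == o.back && fire == o.fire;
    }
};

void SendInputEvent(const InputState&) { /* transmit one input-change message */ }

void MaybeSendInput(const InputState& current, InputState& lastSent)
{
    if (!(current == lastSent)) {   // holding 'W' for several seconds generates no traffic
        SendInputEvent(current);
        lastSent = current;
    }
}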

Data from the server can also be minimized. Clients only need the data they should be using. For example, to avoid things like wall hacks, you shouldn't transmit the positions of things the player shouldn't know about. You can similarly transmit event-based updates for positions and actions rather than frame-by-frame position updates. A player who is a sniper looking over an empty hallway should get no updates other than general status messages and heartbeat events, while a player overlooking a crowded battlefield, observing everything happening, will likely be at risk of overwhelming their network connection.
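A minimal sketch of that kind of server-side filtering, assuming a simple distance-based relevance test; the names and the 100-unit radius are made up for illustration:

#include <vector>

struct Entity { int id; float x, y, z; };

// Made-up relevance test; a real one would also consider visibility, team, etc.
bool IsRelevantTo(const Entity& e, const Entity& viewer)
{
    float dx = e.x - viewer.x, dy = e.y - viewer.y, dz = e.z - viewer.z;
    return dx * dx + dy * dy + dz * dz < 100.0f * 100.0f;
}

void QueueUpdate(int playerId, const Entity&) { /* append to that player's outgoing packet */ }

void BuildUpdatesFor(int playerId, const Entity& viewer, const std::vector<Entity>& world)
{
    for (const Entity& e : world) {
        if (IsRelevantTo(e, viewer))   // the sniper over an empty hallway gets almost nothing
            QueueUpdate(playerId, e);
    }
}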

Computing power continues to increase far more rapidly than network communications. It is generally far cheaper to filter the data down and recompute whatever is needed than to transmit the data.

This is why many game clients tend to transmit player input state changes. Those happen less frequently.

...

Five or ten messages per second is better than thirty.

This sounds like you're recommending only sending packets when the user changes input. That's not how most games work, because that's generally not a great idea.

The client (and the server) will formulate and send a network packet to the other end at a fixed rate. This is the "net send rate" or "network tick rate" or whatever you want to call it.

This message will typically include the commands that the user has generated since the last send, as well as some additional look-back to compensate for possibly dropped packets in the past.

Typically, user input will RLE compress very well. As you say, once I press "W" I will keep it down for multiple simulation ticks.

Thus, if the network tick rate is 30 Hz, and the simulation tick rate is 60 Hz, the client will send input state for two sim ticks in each network packet. Typically, the client will reach back some number of additional ticks (say, up to 500 ms or so), which means that the RLE-encoded data will look something like:

Tick 2025:

W has been down for 3 ticks

W<FIRE> was down for 2 ticks before then

W was down for 10 ticks before then

...
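A minimal sketch of that run-length encoding idea, assuming input is packed into button bit flags per simulation tick; the struct and function names are illustrative:

#include <cstdint>
#include <vector>

struct InputRun {
    std::uint8_t  buttons;   // bit flags, e.g. bit 0 = W, bit 1 = FIRE
    std::uint16_t tickCount; // how many consecutive ticks that state was held
};

// Collapse a per-tick button history (oldest first) into runs for one packet.
std::vector<InputRun> EncodeRuns(const std::vector<std::uint8_t>& perTickButtons)
{
    std::vector<InputRun> runs;
    for (std::uint8_t b : perTickButtons) {
        if (!runs.empty() && runs.back().buttons == b)
            ++runs.back().tickCount;  // same state held: extend the current run
        else
            runs.push_back({b, 1});   // state changed: start a new run
    }
    return runs;                      // e.g. {W, 10 ticks}, {W|FIRE, 2 ticks}, {W, 3 ticks}
}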

enum Bool { True, False, FileNotFound };
Sorry about the confusion.

Yes, there is plenty of data that needs sending from time to time. I didn't mean that it was the ONLY data, just that when you have a choice of what data to send for the same result, it is better to send less data.

But sending fewer or more packets under different latencies doesn't change anything game-wise; it's the latency itself that changes things (worse prediction)... So is it really just about trying to soften a problem that MAY be caused by congestion? Because if congestion isn't the problem, you're lowering your rates for no reason, right?


Everything that goes across the wire gets some additional overhead. Sending a bunch of small packets has more overhead than sending fewer large packets.

There are many reasons to change what you send and when you send it.

Exactly how latency affects your game depends on your game. Transfer time for the data comes on top of latency: it takes time for the message to travel between sites, and it takes additional time to get the bytes onto the wire and read them off as they arrive. A very large packet (for various definitions of large) may take several milliseconds from first byte to last, plus more milliseconds in transit. Smaller packets reduce the time spent on the actual data but don't really affect latency. Network congestion may come from things unrelated to the game, like someone else in the home streaming video while your data gets lower priority.
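As a rough back-of-the-envelope example of that transfer-time point, assuming a 1 Mbit/s uplink, a 50 ms one-way latency, and roughly 28 bytes of IPv4/UDP header overhead per packet (all assumed numbers, for illustration only):

#include <cstdio>

int main()
{
    const double latencyMs     = 50.0;      // assumed one-way propagation delay
    const double uplinkBitsSec = 1000000.0; // assumed 1 Mbit/s uplink
    const int    headerBytes   = 28;        // IPv4 + UDP headers per packet
    const int    payloads[]    = { 64, 512, 1400 };

    for (int payload : payloads) {
        // Serialization time = bits on the wire / uplink speed; latency is added on top.
        double serializeMs = (payload + headerBytes) * 8.0 / uplinkBitsSec * 1000.0;
        std::printf("%4d byte payload: ~%.2f ms on the wire + %.0f ms latency\n",
                    payload, serializeMs, latencyMs);
    }
    return 0;
}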

Send less data generally. Send larger blocks generally. But if the situation warrants it, send blocks as frequently and as small as you need.

Is it best practice to step on the brakes in the case of a bad connection?

Depends on the game. If the game client won't be able to have a good experience, it may be better to terminate the connection outright. Or it might make sense to notify the user and have a degraded experience. Or it might make so little difference in a game that the problem is ignored.
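A minimal sketch of choosing between those three options, with entirely made-up RTT and packet-loss thresholds:

enum class ConnectionAction { Ignore, WarnAndDegrade, Disconnect };

ConnectionAction ClassifyConnection(double rttMs, double packetLossPercent)
{
    if (rttMs > 1000.0 || packetLossPercent > 30.0)
        return ConnectionAction::Disconnect;      // no good experience is possible
    if (rttMs > 250.0 || packetLossPercent > 5.0)
        return ConnectionAction::WarnAndDegrade;  // notify the user, reduce fidelity
    return ConnectionAction::Ignore;              // the game can absorb the difference
}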

