Client-Side Prediction & Packet Loss

Started by
40 comments, last by Heg 11 years, 6 months ago

[quote name='Heg' timestamp='1348052557' post='4981629']
If it feels too unresponsive, I will add some kind of indicator that the command has been registered by the machine.

You should still send the commands on a reliable channel; your UDP stack fits in here. Just remember the last X sent commands plus their sequence numbers, and have the server send acknowledgement messages for the last valid sequence number from time to time (the client can then clean up the command queue up to that number). The server should, however, send resend requests as soon as it detects a gap.

You can even re-send the same command (with its unique sequence number) multiple times, making best use of the space left in a UDP datagram; on the server side duplicates don't matter at all.
[/quote]
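The scheme quoted above (remember the last X sent commands with sequence numbers, let the server acknowledge the last valid sequence number, and repeat unacknowledged commands in each datagram) could be sketched roughly like this. This is a minimal sketch in Python; the class and method names are illustrative, not from the thread:

```python
# Minimal sketch of a reliable command channel on top of UDP.
# Names (CommandChannel, queue_command, ...) are illustrative.

class CommandChannel:
    def __init__(self, redundancy=3):
        self.next_seq = 0      # sequence number for the next command
        self.pending = {}      # seq -> command, not yet acknowledged
        self.redundancy = redundancy

    def queue_command(self, command):
        """Register a new command under a fresh sequence number."""
        self.pending[self.next_seq] = command
        self.next_seq += 1

    def build_packet(self):
        """Fill the outgoing datagram with as many unacked commands as the
        redundancy budget allows (oldest first), so each command is
        typically sent several times before it is acknowledged."""
        oldest = sorted(self.pending)[:self.redundancy]
        return [(seq, self.pending[seq]) for seq in oldest]

    def on_ack(self, last_valid_seq):
        """The server acknowledged everything up to last_valid_seq, so
        drop those commands from the resend queue."""
        self.pending = {s: c for s, c in self.pending.items()
                        if s > last_valid_seq}
```

On the server side, duplicate arrivals are then harmless: commands with a sequence number at or below the last applied one are simply ignored.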

I'll keep that in mind. I'll let you guys know how it turns out ;)
One problem with using TCP and UDP together is that UDP packets tend to be dropped when you start to saturate your bandwidth. If you are nowhere near saturation, then sure. However, I prefer a full UDP solution, as it makes things easier in the long run. In any case, you can re-engineer the lower layers later if you want to move away from mixing UDP and TCP.

Secondly, TCP can fragment your communication. Say you send a 10-byte message + a 20-byte message + a 10-byte message; you may then receive that data as 12 bytes + 12 bytes + 10 bytes + 6 bytes, so you need some reconstruction at the receiving end. A custom reliable message system piggy-backed on UDP avoids that kind of fragmentation, but really, it's not a big problem to fix.
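The reconstruction mentioned above is usually done by framing each message with a length prefix, so the receiver can reassemble whole messages out of the TCP byte stream no matter how it was segmented. A small sketch in Python, with illustrative function names:

```python
import struct

# Length-prefixed framing over a TCP byte stream: each message is
# preceded by its length as a 2-byte big-endian integer.

def encode_message(payload: bytes) -> bytes:
    """Prefix the message with its length."""
    return struct.pack("!H", len(payload)) + payload

def extract_messages(buffer: bytearray):
    """Pull every complete message out of the receive buffer; any
    partial message at the end stays buffered for the next recv()."""
    messages = []
    while len(buffer) >= 2:
        (length,) = struct.unpack_from("!H", buffer)
        if len(buffer) < 2 + length:
            break  # message not fully arrived yet
        messages.append(bytes(buffer[2:2 + length]))
        del buffer[:2 + length]
    return messages
```

However the stream gets chopped up in transit, the receiver appends each chunk to the buffer and calls `extract_messages`, getting back exactly the 10/20/10-byte messages that were sent.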

Everything is better with Metal.


Yeah, I agree with you. I have already programmed a low-level UDP layer, including an acknowledgement system and all that stuff, so I will definitely use UDP only. I just had some issues around client-side prediction and packet loss. For important game decisions, like planting or exploding bombs, I'll just wait for a server message and see how that goes. One thing I have to accept is that my game won't work well when the internet connection is bad; I don't think there is anything you can do about that.
Most games use varying amounts of smoke and mirrors to mask internet issues.

However, if the connection is bad, it's generally better to let the user suffer for it rather than the game. The game should function well under a minimum set of conditions; below that, best forget about it, as long as it doesn't break the game itself.

Planting things, and 'events' in general (requesting a pick-up, climbing into a vehicle, planting a bomb, ...), are generally better handled with a reliable but latency-heavy system. That usually isn't much of a problem: such events are often hidden behind client-side animations started at the moment of the request and acknowledged or cancelled by the server response.


Hm, I could make my guys "bend over" or something in order to plant the bomb :-P A translucent bomb would most likely look odd!
I've got one additional quick question:

Best practice is to send out game updates from the server and input from the client at a fixed rate, say every 100 ms. If we define the "effective latency" as the time between issuing a command and receiving the response from the server, this technique increases that latency quite a bit, doesn't it? For example:

Server: |----------|----------|----------|-----

Client: --|----------|----------|---------|----

The | are the sending points, and the time between them is 100 ms. If the player on the client issues a command right after input has been sent to the server, it takes almost 200 ms to get a response, even if the packets arrive instantly. The first X marks the input time, the second one the server's response:

Server: |----------|----------|----------|-----

Client: --|-X--------|---------X|---------|----

This is noticeable even on a listen server. I fear that once there is actual travel time between the server and the client, the impact will be too big. What are your thoughts on that? Is a 100 ms interval simply too large?
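The worst case sketched in the diagrams above can be written down as a simple back-of-the-envelope bound, assuming processing time is negligible: the command waits up to one client send interval, and the server's reply waits up to one server send interval, on top of the round-trip time. The function name here is illustrative:

```python
# Rough upper bound on "effective latency" with fixed-rate sending.
# All values are in milliseconds; processing time is ignored.

def worst_case_response_ms(client_interval, server_interval, rtt=0.0):
    """Worst case: the command just missed the client's send point
    (wait almost client_interval), and the resulting state update just
    missed the server's send point (wait almost server_interval)."""
    return client_interval + server_interval + rtt
```

With 100 ms intervals on both ends and instant delivery this gives ~200 ms, matching the diagram; halving the intervals to 50 ms halves that part of the latency.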
[quote]this technique does increase the latency by quite a big deal, doesn't it?[/quote]

Yes, it does! This is why CS:S servers running at "tickrate 67" or "tickrate 100" are popular among the low-ping addicts.

On consoles, you typically can't get away with better than 20 times a second (every 50 ms), and many games do it less often (every 67 or 100 ms). It seems to work well for them anyway. The trick is to tune the display and gameplay to work well with whatever network parameters you choose.
enum Bool { True, False, FileNotFound };

Okay, so I'll rely heavily on my client-side prediction, lag compensation and interpolation then. Maybe it's even a good thing that lag affects local games too: it makes testing waaay easier and forces me to work really hard on the networking stuff, otherwise the game will suck for everyone, not just for the guys with slow internet connections.

Yes, and you can also generate arbitrary latency to test how it affects your game.

Usually your game update is staggered: first receive packets, then update your game (server or client), and finally send your commands/states at the end of the update. It's an easy way to manage your game loop without adding unnecessary latency.

You can have a virtual network layer that queues sent and/or received packets, as well as simulating packet loss, duplication and out-of-order delivery (all of which should be invisible outside your lower network layer), for testing purposes or if you lack the resources for proper internet testing. You can then use renderless client bots/servers, or just basic loopback networking with one player, as in Quake 3 and many other Unreal/id/Source games.
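Such a virtual network layer could look something like the sketch below. This is a minimal Python illustration; the class name, parameters, and tick-based timing are all assumptions for the example, and a seeded RNG keeps the injected faults reproducible across test runs:

```python
import random

# Test-only network layer that injects latency, loss, duplication and
# reordering between sender and receiver. Names and defaults are
# illustrative; time is measured in abstract "ticks".

class FaultyLink:
    def __init__(self, seed=0, latency_ticks=2, loss=0.1,
                 duplicate=0.05, reorder=0.05):
        self.rng = random.Random(seed)   # seeded -> reproducible faults
        self.latency_ticks = latency_ticks
        self.loss = loss
        self.duplicate = duplicate
        self.reorder = reorder
        self.in_flight = []              # (deliver_at_tick, packet)
        self.tick = 0

    def send(self, packet):
        if self.rng.random() < self.loss:
            return                       # packet silently dropped
        deliver = self.tick + self.latency_ticks
        if self.rng.random() < self.reorder:
            deliver += self.rng.randint(1, 3)   # arrives late -> reordered
        self.in_flight.append((deliver, packet))
        if self.rng.random() < self.duplicate:
            self.in_flight.append((deliver + 1, packet))  # duplicate copy

    def receive(self):
        """Advance one tick and return every packet due by now."""
        self.tick += 1
        due = [p for t, p in self.in_flight if t <= self.tick]
        self.in_flight = [(t, p) for t, p in self.in_flight if t > self.tick]
        return due
```

The game code talks to `send`/`receive` exactly as it would to the real socket layer, so the faults stay invisible outside the network layer, as described above.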




Yeah, I've already got some testing functionality in my network layer, such as latency and packet loss, but I'm missing functions for duplication and out-of-order delivery. I'll make sure to implement those.

While we're at it, how do you guys handle lost or out-of-order client input? I would say client input needs to be ordered, but that would also imply waiting for a resend of a lost client packet, since the stream isn't ordered while a packet is missing.
Should I really wait for such packets, or should I just let the client input get lost? The latter would result in a correction of the movement on the client, since it predicted the movement wrongly.

Current state of the game: client-side prediction and interpolation seem to work quite well in my current implementation, but I'm not reacting to lost, duplicated or out-of-order packets yet. That's my next step :-P
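One common answer to the lost-input question above, used in Quake-style engines, is to avoid waiting for resends entirely: each input packet redundantly carries the last few inputs with their sequence numbers, and the server applies only inputs newer than the newest one it has executed. Duplicates and stale out-of-order arrivals then fall through harmlessly, and a single lost packet is healed by the next one. A minimal sketch, with illustrative names:

```python
# Loss-tolerant client input: every packet repeats the last few inputs
# with their sequence numbers; the server executes only inputs it has
# not seen yet. HISTORY and all names are illustrative.

HISTORY = 3  # how many recent inputs to repeat per packet

def pack_inputs(input_log, next_seq):
    """Take the most recent HISTORY inputs from the client's log."""
    start = max(0, next_seq - HISTORY)
    return [(seq, input_log[seq]) for seq in range(start, next_seq)]

class ServerInputSink:
    def __init__(self):
        self.last_executed = -1
        self.executed = []

    def on_packet(self, inputs):
        """Apply only inputs newer than the newest executed so far;
        duplicates and stale out-of-order inputs are skipped."""
        for seq, cmd in sorted(inputs):
            if seq > self.last_executed:
                self.executed.append(cmd)
                self.last_executed = seq
```

If a packet is lost, the inputs it carried arrive again inside the next packet's history window, so the server never has to request a resend as long as no more than HISTORY consecutive packets are dropped.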

This topic is closed to new replies.
