
De-jitter buffer on both the client and server?


11 replies to this topic

#1 fholm   Members   -  Reputation: 262


Posted 13 January 2014 - 10:35 AM

Is it common practice to place a de-jitter buffer on both ends of a server<->client connection? I have a de-jitter buffer in place on the client where it receives data, to smooth out packet delivery so I can de-queue packets on time, every time (unless there are a lot of drops or ping fluctuations, but that's just something you have to live with).
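For illustration, a minimal de-jitter buffer of the kind I mean might be sketched like this (Python for illustration; `DejitterBuffer` and the fixed tick delay are my own assumptions, not from any particular engine):

```python
import heapq

class DejitterBuffer:
    """Buffers incoming packets keyed by tick and releases them in tick
    order, a fixed number of ticks behind the newest packet seen, so
    arrival jitter is absorbed by the delay."""

    def __init__(self, delay_ticks=2):
        self.delay_ticks = delay_ticks
        self.heap = []          # min-heap of (tick, payload)
        self.newest_tick = -1

    def push(self, tick, payload):
        heapq.heappush(self.heap, (tick, payload))
        self.newest_tick = max(self.newest_tick, tick)

    def pop_due(self):
        """Return all (tick, payload) pairs that are at least
        `delay_ticks` behind the newest received tick, in tick order."""
        due = []
        release_up_to = self.newest_tick - self.delay_ticks
        while self.heap and self.heap[0][0] <= release_up_to:
            due.append(heapq.heappop(self.heap))
        return due
```

The trade-off is the usual one: a larger `delay_ticks` absorbs more jitter at the cost of added latency.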

 

My question is: is it also common to put a de-jitter buffer on each connection on the server, i.e. for the data sent from the client to the server? The reason I am asking is this case:

 

* Client A produces "MOVE FORWARD" commands to move its avatar through the world by holding down the W key, every frame.

 

* The server receives these commands and processes them as they come in. Sometimes they are exactly on time, sometimes a little early, and sometimes a bit late. When the move commands come in late, the server simulates a step without moving the client's avatar.

 

* Client B receives the updated positions of Client A's avatar. As long as the move commands from Client A to the server arrive early or on time, there's no problem. But when they are late, Client B sees the movement of Client A become a bit "snappy"; it is very marginal, but if you look hard enough you can see the speed of Client A's avatar vary slightly for a few packets.

 

I have noticed the same networking artifact in AAA titles as well (BF4, for example), so is this a case that is generally just ignored? Or are there games which apply a de-jitter buffer on the server too?




#2 hplus0603   Moderators   -  Reputation: 4979


Posted 13 January 2014 - 01:24 PM

The buffer is typically implemented on a per-message basis, not a per-network-packet basis.

 

So, if a message says "at tick X, my input commands are Y," then the server will put that message in the input queue for tick X, and when the simulation gets to tick X, it will execute it. Same thing on the other (client) end.
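A per-tick input queue of that kind might be sketched like this (Python for illustration; the names are made up):

```python
from collections import defaultdict

class TickQueue:
    """Maps tick -> list of input messages; the simulation drains
    exactly the messages stamped for the tick it is executing."""

    def __init__(self):
        self.by_tick = defaultdict(list)

    def enqueue(self, tick, commands):
        # Called whenever a "at tick X, my commands are Y" message arrives.
        self.by_tick[tick].append(commands)

    def drain(self, tick):
        # Called once when the simulation executes `tick`; returns (and
        # forgets) the inputs stamped for it. What to do with inputs for
        # ticks that already passed is a separate policy decision.
        return self.by_tick.pop(tick, [])
```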

 

Some messages can be processed as soon as you see them; for example chat messages.

 

Other "messages" are really part of your packet framing, such as values used for acknowledging packets (if using UDP), measuring connection quality, or keeping time steps in sync.


enum Bool { True, False, FileNotFound };

#3 fholm   Members   -  Reputation: 262


Posted 13 January 2014 - 03:14 PM

So, if a message says "at tick X, my input commands are Y," then the server will put that message in the input queue for tick X, and when the simulation gets to tick X, it will execute it. Same thing on the other (client) end.

 

This is what I was talking about; re-reading my post, I realize I could have worded things better. Messages which can be instant ("chat message", "load map", etc.) are handled as soon as they arrive, and the same goes for packet framing, ticks, etc.

 

But for this specific part: assume the client and server both run at a fixed time step (60Hz), the server sends its current step as the first 4 bytes of every packet, and the client tries to stay in sync. The client will almost always be behind the server. So when the client generates its own "at tick X my input was Y", that tick X will always be "behind" the server. So when it arrives at the server, should it always be processed instantly? Because tick X will always have elapsed on the server. This is what is causing my problem: since the delivery of packets from the client to the server always fluctuates a bit, you don't get an even stream of, say, "MOVE FORWARD" inputs. Some frames you apply 2, some frames you apply 1, and some 0, which gives slightly "jerky" movement on the other clients when they receive the position updates for the first client's avatar.

 

That's what I'm trying to get around.

 

Edit: Or should the server also keep a per-client tick, which synchronizes a tick count local to the client's simulation with the "client" representation on the server, step that tick count as well, and consume the inputs from the client that way?


Edited by fholm, 13 January 2014 - 03:16 PM.


#4 hplus0603   Moderators   -  Reputation: 4979


Posted 13 January 2014 - 05:04 PM

So when the client generates its own "at tick X my input was Y", that tick X will always be "behind" the server


And this is THE MAIN PROBLEM with networked games. There are a few solutions that work pretty well, but you have to choose the right solution for your particular game.

For example, the client can send "for tick X in the future, my commands are ..."
The server will then execute those at tick X, and also let everyone else know that, at tick X, each client's commands were ...
Each client can then run the simulation entirely deterministically; only commands need to be sent. This is known as "lock-step synchronization" and is THE WAY to do real-time strategy games, but has also been used for FPS games, racing games, MMO games, and other kinds of games. The main draw-back is a latency between giving a command, and that command actually taking effect. For MMO games, it may be OK that there's a small lag between starting to move forward, and the character actually accelerating. For an RTS game, the "yes sir!" acknowledgement animation is used to cover the time span. Each game needs a way to deal with this.
The canonical article describing this method is the "1,500 archers on a 28.8 kbps modem" article from lo these many years ago. Still good.
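As a rough sketch of the lock-step idea (Python for illustration; all names are my own):

```python
def try_advance(tick, inputs_by_tick, players, simulate):
    """Advance the deterministic simulation only when commands for
    `tick` have arrived from every player (lock-step). Returns the
    next tick to execute, or the same tick if we must stall."""
    pending = inputs_by_tick.get(tick, {})
    if set(pending) != set(players):
        return tick  # stall: some player's commands are still in flight
    for pid in sorted(players):  # deterministic iteration order matters
        simulate(pid, pending[pid])
    return tick + 1
```

The stall branch is exactly where the command latency shows up, and why games cover it with acknowledgement animations as described above.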

Another option is to immediately move the client, and let the server apply client movement "back in time" as it comes in, with some maximum limit to how far back in time commands will be accepted. Entities from other clients are then forward extrapolated to the same time step based on best-available information, which means entities may be shown in the wrong position. Shooting entities (for an FPS) may also result in the server having to rewind the simulation, and checking what the player actually should have seen, and determine whether it was a hit or not. This is approximately the "Source networking" model; there are many other games, especially twitch FPS games, that do something similar.

Finally, you can display the local client "ahead of time," to give immediate interactivity, and the remote clients "behind time," to give them correct (but late) positions. The main problem here is how to deal with actor/actor interactions; if I shoot you when I'm at time 110 and you are at time 90, by the time you hear about it, you will have gotten to time 130; thus you will be "snapped back in time." (The Source model has some of this problem too, but only half as much, IIRC.)

Those are the three main approaches; you have to pick one and live with the consequences :-)
enum Bool { True, False, FileNotFound };

#5 fholm   Members   -  Reputation: 262


Posted 14 January 2014 - 01:39 AM

Another option is to immediately move the client, and let the server apply client movement "back in time" as it comes in, with some maximum limit to how far back in time commands will be accepted. Entities from other clients are then forward extrapolated to the same time step based on best-available information, which means entities may be shown in the wrong position. Shooting entities (for an FPS) may also result in the server having to rewind the simulation, and checking what the player actually should have seen, and determine whether it was a hit or not. This is approximately the "Source networking" model; there are many other games, especially twitch FPS games, that do something similar.


Finally, you can display the local client "ahead of time," to give immediate interactivity, and the remote clients "behind time," to give them correct (but late) positions. The main problem here is how to deal with actor/actor interactions; if I shoot you when I'm at time 110 and you are at time 90, by the time you hear about it, you will have gotten to time 130; thus you will be "snapped back in time." (The Source model has some of this problem too, but only half as much, IIRC.)

 

This is what I am doing, and this exact case is what my question is about. My current implementation works like this: both client and server run a local simulation at 60Hz, the server sends data to the clients at 20Hz (every 3rd simulation frame), and the clients send data to the server at 30Hz (every 2nd simulation frame).

 

Every packet from the client to the server contains the move commands for the past 4 frames: N - 3, N - 2, N - 1 and N. My question is how to smooth out the application of this data when it arrives at the server. Right now I apply all of the move commands the instant they come in on the server; this gives slightly jerky movement on the other clients, since the movement gets applied in "bursts" of 2+ moves at a time, and this does not align perfectly with the server's send rate to the clients.

 

The move commands in a packet from the client will always be late at the server, no matter what anyone does. My question is how to apply the movement commands so I don't get this jerky movement. My current solution (apply them all at once) is obviously not working properly. So my suggested solution was this: keep a "per-client clock" on the "client object" on the server, which tries to stay in sync with the client's clock, use this clock to apply the input of each client in order, and do the same stalling/fast-forwarding in case the server gets ahead of or lags behind the client.

 

Edit: I suppose there are also two ways we can do this "application" if we are using a "per-client tick clock" to time our movement commands:

 

1. We only use this "per-client clock" to apply the movements in order over a proper sequence of time steps; the movements are timed by the ticks received from the client, but they are applied in the current step "in the future" on the server, and this is how the remote clients see them also.

 

2. We could also rewind the server time to the current "per-client clock", apply all of the movements, and then forward time again. I suppose this would give a simulation which is a bit more accurate, but I'm not sure the extra hassle is worth it? What does it really gain us?

 

 

 

 Entities from other clients are then forward extrapolated to the same time step based on best-available information, which means entities may be shown in the wrong position. 

 

I have one question on this also. The way I read this is that we always try to position all clients on the server according to the latest tick arrived from any client: if data has arrived for that tick from a client, we use that data, but if it has not, we use forward extrapolation to give an approximate position. Or are you talking in the context of a client's local simulation, and mean the usual interpolation/extrapolation solution for displaying remote entities on a client (basically what is described in the Source engine networking article)?


Edited by fholm, 14 January 2014 - 01:44 AM.


#6 hplus0603   Moderators   -  Reputation: 4979


Posted 14 January 2014 - 10:46 AM

I have one question on this also, the way I read this is that we always try to position all clients on the server according to the latest tick arrived from any client?


What I meant was: on each client, each other client's entity is drawn based on a forward guess from whatever data is available.
So if the positions for time 140 and time 141 are available, and the time is now 146, then you could calculate the position to display as P141 + (P141 - P140) * (146 - 141).
This will "snap" when the player is zig-zagging; other options include applying some filtering to clamp the speed with which an entity is allowed to move in the local client; it will then move towards the predicted position with no more than that speed per tick.
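That extrapolation plus the speed clamp might look like this as a sketch (Python for illustration; scalar positions for brevity, names made up):

```python
def extrapolate(p_prev, p_last, t_prev, t_last, t_now):
    """Linear forward extrapolation: P_last + velocity * (t_now - t_last),
    where velocity is estimated from the last two known positions."""
    velocity = (p_last - p_prev) / (t_last - t_prev)
    return p_last + velocity * (t_now - t_last)

def clamped_step(shown, target, max_speed):
    """Move the displayed position toward the predicted one by at most
    `max_speed` units per tick, to soften extrapolation snaps."""
    delta = target - shown
    if abs(delta) <= max_speed:
        return target
    return shown + max_speed * (1 if delta > 0 else -1)
```

With P140 = 10 and P141 = 12 at time 146, the extrapolated position is 12 + 2 * 5 = 22, matching the formula above.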

Personally, I prefer showing entities in the past instead, in positions you know they've been.
enum Bool { True, False, FileNotFound };

#7 fholm   Members   -  Reputation: 262


Posted 14 January 2014 - 12:09 PM

 

I have one question on this also, the way I read this is that we always try to position all clients on the server according to the latest tick arrived from any client?


What I meant was: on each client, each other client's entity is drawn based on a forward guess from whatever data is available.
So if the positions for time 140 and time 141 are available, and the time is now 146, then you could calculate the position to display as P141 + (P141 - P140) * (146 - 141).
This will "snap" when the player is zig-zagging; other options include applying some filtering to clamp the speed with which an entity is allowed to move in the local client; it will then move towards the predicted position with no more than that speed per tick.

Personally, I prefer showing entities in the past instead, in positions you know they've been.

 

 

Ah, then I follow. And yes, I agree; I prefer to show them in the past. By the way, did my general approach to dealing with simulation data using a "client-local" clock seem sound?



#8 hplus0603   Moderators   -  Reputation: 4979


Posted 14 January 2014 - 07:31 PM

I'm not sure that applying player simulation at an "arbitrary" time step is a good idea. It might work for very simple games, but anything with physical simulation or other time-dependent simulation parameters (acceleration, trap doors, etc.) may end up de-syncing and requiring excessive state updates to the clients. As long as only the server makes the final decision it will still "work", but it may be sub-optimal.

Also, if you accept "a large number" of commands from a client at a time, then a client may forge packets that give it an advantage. Back in the day, people used Ethernet hubs with a disconnect switch: they'd flip the switch, shoot everyone on the screen (who would just keep moving straight), and then flip the switch back on.
enum Bool { True, False, FileNotFound };

#9 fholm   Members   -  Reputation: 262


Posted 15 January 2014 - 03:59 AM

I think I failed in my explanation, again. I will explain a simple case and how *I would* solve it:

 

A lag spike happens for a couple of packets from the client to the server, so the server is "missing" input from the client for a couple of simulation frames, and then receives a burst of updates instead. There are two ways to solve this in my mind.

 

Solution 1: Keep a counter of "missed" simulation frames (frames where we had no input from the client) on the server for each client, and allow the server to fast-forward the client input for as many frames as we "missed" (due to lag, etc.). This requires being able to apply several input commands from the client during one simulation step on the server, to "catch up" to the client's input state. Limit this counter to something like ceil((RTT * 2) / stepSize) so that the client can't buffer inputs locally to cheat the way you explained. This is the solution I am using at the moment. We could possibly "rewind" the entire server state when doing these "catch up" simulation steps; by "catch up" steps I just mean applying, in one frame, several of the client's input commands/events/etc. which should have been spread over several frames, so that we "fast forward" the client's input a bit.
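A sketch of Solution 1's capped catch-up might look like this (Python for illustration; all names are hypothetical):

```python
import math

def max_buffered_frames(rtt_seconds, step_size=1.0 / 60.0):
    """Cap how many input frames a client may bank, roughly
    ceil((RTT * 2) / stepSize), so input can't be hoarded locally
    and replayed for an advantage."""
    return math.ceil((rtt_seconds * 2) / step_size)

def catch_up(pending_inputs, missed_frames, rtt_seconds, apply_input):
    """Apply up to `missed_frames` queued inputs in one server step,
    after discarding anything beyond the allowed buffer size.
    Returns how many inputs were applied."""
    limit = max_buffered_frames(rtt_seconds)
    # drop the oldest inputs if the burst exceeded the cap
    del pending_inputs[:max(0, len(pending_inputs) - limit)]
    applied = 0
    while pending_inputs and applied < missed_frames:
        apply_input(pending_inputs.pop(0))
        applied += 1
    return applied
```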

 

Solution 2: If we do end up getting a large burst of updates from a client, just disregard inputs from the client until the server and client are in sync again (the expected tick number for the client on the server is the same as the tick number in the next packet received from the client). Essentially, just drop all packets which are late. This seems by far the easier to implement, and maybe this is the way to go? How have commercial games handled this?

 

Now if the reverse happens and we get a lag spike from the server to the client, this is a bit easier to handle, because we can "trust" the server. We can apply as many updates as we want, as fast as we want, on the client to catch up to the server: basically, for every local simulation step on the client we apply 1 + N extra updates for all remote entities from the server, until our local simulation is up to date with the last data from the server. This is what I'm doing at the moment.

 

Maybe it would be easier to disregard updates received from the server on the client as well, to fast-forward up to the latest state? But this doesn't feel like a clean approach, because it could lead to other weird artifacts when data just gets dropped, etc.


Edited by fholm, 15 January 2014 - 04:12 AM.


#10 fholm   Members   -  Reputation: 262


Posted 15 January 2014 - 04:38 AM

To give an example of how my current solution works, here is some pseudo code with comments:

        // game loop
        while (true) {

            time = get_time();
            delta = time - current_time;
            current_time = time;

            // don't simulate more than half a second
            if (delta > 0.5f) {
                delta = 0.5f; 
            }

            acc += delta;

            // try to grab any data from the network
            poll_network_for_data();

            // run at a fixed step of 60fps
            while (acc >= (1f / 60f)) {

                // step our local simulation
                run_local_simulation();

                // for each step of the local simulation, try to step the remote simulated entities/input/etc.
                foreach (peer in peer_list) {

                    // increment skipped_frames with one
                    // if simulation is ticking along nicely, 
                    // skipped_frames should be just 1 after this call
                    peer.skipped_frames = peer.skipped_frames + 1;

                    if (peer.frames_available > 0) {
                        
                        // calculate how many frames we can have buffered up for a peer, 
                        // on the server it is ceil((peer.rtt * 2) / (1f / 60f)) and on the
                        // client we allow a full two seconds of buffered frames (120 frames)
                        var max_frames = is_server ? ceil((peer.rtt * 2) / (1f / 60f)) : 120;

                        // if we have ended up getting so much data in a "burst" from
                        // a remote peer that we have more buffered up then allowed, discard
                        // frames until we are at max capacity
                        while (peer.frames_available > max_frames) {
                            peer.discard_frame();
                        }

                        var frames_integrated = 0;
                        
                        // for each skipped simulation frame (which should be just "1" if 
                        // things are ticking along nicely) integrate it into our local simulation,
                        // but do at most 5 integrations for each peer every local simulation frame,
                        // which allows running at max 5x normal speed to catch up to the remote
                        while (frames_integrated < 5 && peer.skipped_frames > 0) {
                            peer.integrate_frame_into_local_simulation();
                            peer.skipped_frames -= 1;
                            frames_integrated += 1;
                        }
                    }
                }

                acc -= (1f / 60f);
            }

            // render
            draw_to_screen();
        }

Edited by fholm, 15 January 2014 - 04:43 AM.


#11 fholm   Members   -  Reputation: 262


Posted 15 January 2014 - 10:00 AM

So I realized I had over-complicated things way more than needed. I took a peek at the Quake 3 source code to see how they dealt with bursts of packets, de-synced time, etc., and the basic algorithm is this:

 

1. On every incoming packet, read out the remote tick (or time).

2. Compare the remote tick with our own local tick.

3. If they are close, do nothing.

4. If they are a tiny bit too far apart, slightly nudge our local tick in the right direction.

5. If they are very far apart, reset our local tick to the remote tick.

 

I implemented this myself in < 5 minutes, and it seems to work wonders, handling all cases very well.
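The five steps above can be sketched as follows (Python for illustration; the `near`/`far` thresholds are placeholders I picked, not Quake 3's actual values):

```python
def adjust_local_tick(local_tick, remote_tick, near=2, far=30):
    """Quake-3-style clock sync: leave small offsets alone, nudge
    medium offsets by one tick, hard-reset large ones."""
    diff = remote_tick - local_tick
    if abs(diff) <= near:
        return local_tick                            # close enough: do nothing
    if abs(diff) <= far:
        return local_tick + (1 if diff > 0 else -1)  # nudge gently toward remote
    return remote_tick                               # way off: snap to remote
```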



#12 fholm   Members   -  Reputation: 262


Posted 15 January 2014 - 10:04 AM

The end result is this, a straight copy-paste from my game:

  • RemoteSendRate is the send rate of the other end of the connection, in ticks.
    void IFpsAppTick.Invoke () {
        if (packetsReceived > 0) {
            Tick += 1;

            if (buffer.Count > 0) {

                // our goal is to stay (RemoteSendRate * 2)
                // behind the last received packages tick number

                int diff = buffer.Last.Tick - Tick;

                if (diff >= (RemoteSendRate * 3)) {

                    // If we are 3 or more packets behind
                    // Increment our local tick, at most (RemoteSendRate * 2) extra ticks

                    int incr = Math.Min(diff - (RemoteSendRate * 2), (RemoteSendRate * 2));
                    Tick += incr;

                    FpsLog.Info("Incremented TICK by +{0}", incr);

                } else if (diff >= 0 && diff < RemoteSendRate) {

                    // If we have drifted slightly closer to being ahead
                    // Stall one tick, by decrementing the tick counter

                    Tick -= 1;
                    FpsLog.Info("Stalling TICK at {0}", Tick);

                } else if (diff < 0) {

                    // if we are ahead of the remote tick
                    // we need to step back in time

                    if (Math.Abs(diff) <= (RemoteSendRate * 2)) {

                        // slightly ahead (two packets or less), 
                        // step one packet's worth of ticks closer

                        Tick -= RemoteSendRate;
                        FpsLog.Info("Decremented TICK by -{0}", RemoteSendRate);

                    } else {

                        // if we are way off, just reset completely
                        // and start over, there is no point in trying 
                        // to handle this case nicely

                        Tick = buffer.Last.Tick - (RemoteSendRate * 2);
                        FpsLog.Info("Reset TICK to {0}", Tick);
                    }
                }
            }
        }
    }

Edited by fholm, 15 January 2014 - 10:08 AM.




