# Best practices for sending and receiving game state?


## Recommended Posts

So I've been venturing into game network programming for the first time this past week, and I've got the basic stuff up and running. One major question has come up though: what's the best way to send my game state? I understand this is probably highly dependent on the game you're building, but assume a 10-player FPS-style game here.

I've been reading as much as possible on the internet, and searching on this forum. I've got it broken down to two options (at least from my point of view; I could, of course, be horribly wrong). I'll go into detail on both to explain my reasoning.

Option A: Send game state as soon as possible in small packages

With option A, the server would (in its physics step) read one incoming message for some game entity, update the physics accordingly, and then send that entity's new state back out to all the clients. It would then take the next message (if any) and do the same, and so on. In pseudo-code it'd look something like this:

```
while (anyNewMessages) {
    var msg = readMessage();
    var entity = getEntity(msg.entityId);
    applyMessageToEntity(entity, msg);
    sendUpdatedEntityStateToClients(entity);
}
```

Option B: Send game state at the end of your physics step in a big package

Option B would be more along these lines, again in your physics step: read one incoming message, update the game physics accordingly, then take the next message, update the physics, and so on until you run out of new messages; then serialize your entire game state and send it to the clients. In pseudo-code it would be something like this:

```
while (anyNewMessages) {
    var msg = readMessage();
    var entity = getEntity(msg.entityId);
    applyMessageToEntity(entity, msg);
}
sendUpdatedFullStateToClients();
```

Some other, generic questions that have come to mind:
1. Do you split up things like movement/rotation and commands like "cast spell", "fire", etc. into different packets, or just send them as one big blob (regardless of whether you send them as described in option A or option B)?
2. Is there any order that is considered "best" when applying the received state update (on either the server or the client)? For example, do you first apply _all_ movement/rotation changes for all entities and then apply all other actions like "cast spell", "fire", etc.? Or do you just apply state update T1 for entity 1, then T1 for entity 2, and so on?
3. Do you handle things that aren't directly related to a specific character or player and that aren't as time-critical (spawning, ping updates, the scoreboard, etc.) in a different manner than the game state itself? Maybe in a slower update loop that runs at only 5-10 Hz instead of 20-30 Hz? Do you group this with the main game state or send it separately?

Well, that turned out to be quite a few questions. Any help is most appreciated!

Regards,
Fredrik

##### Share on other sites
Another question regarding server vs. client: no matter whether you use "option A" or "option B" on the server, what's the best practice on the client? Serialize the whole state of your game object and send it (again, a 10-player FPS as the example), or serialize and send each action by itself? I'm assuming you would group things on the client depending on whether they have to be reliably delivered to the server (like a spell cast) or can be delivered unreliably (like movement)?

##### Share on other sites
Start by sending the entire state for each entity which has changed.
Once you have that working, immediately implement "delta compression": send a "bitkey" (a bitmask representing which fields have changed), followed by the data for those fields, packed linearly. Your app knows the size of the fields, so it can easily decode this data when it is received.
Now you're only sending the data that has actually changed, plus a (relatively small) binary key describing exactly which fields are involved: one bit per field present in the original struct.

You can apply delta compression in both directions with respect to client/server topology (of course).
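A minimal sketch of that bitkey idea in Python; the field names and formats here are just an example layout, not something prescribed by the thread:

```python
import struct

# Example field layout for an entity state (names/sizes are assumptions).
FIELDS = [
    ("x", "f"), ("y", "f"), ("z", "f"),
    ("yaw", "f"), ("pitch", "f"),
    ("health", "h"),
]

def delta_encode(prev, curr):
    """Pack a bitmask of changed fields plus only those fields' bytes."""
    mask = 0
    payload = b""
    for i, (name, fmt) in enumerate(FIELDS):
        if prev[name] != curr[name]:
            mask |= 1 << i
            payload += struct.pack("<" + fmt, curr[name])
    # One mask byte covers up to 8 fields; widen for bigger structs.
    return struct.pack("<B", mask) + payload

def delta_decode(prev, data):
    """Rebuild the full state by applying changed fields on top of prev."""
    (mask,) = struct.unpack_from("<B", data, 0)
    offset = 1
    state = dict(prev)
    for i, (name, fmt) in enumerate(FIELDS):
        if mask & (1 << i):
            (state[name],) = struct.unpack_from("<" + fmt, data, offset)
            offset += struct.calcsize(fmt)
    return state
```

Both sides walk the field list in the same order, so the mask alone is enough to decode the packed payload.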

##### Share on other sites

> Start by sending the entire state for each entity which has changed.
> Once you have that working, immediately implement "delta compression": send a "bitkey" (a bitmask representing which fields have changed), followed by the data for those fields, packed linearly. Your app knows the size of the fields, so it can easily decode this data when it is received.
> Now you're only sending the data that has actually changed, plus a (relatively small) binary key describing exactly which fields are involved: one bit per field present in the original struct.
>
> You can apply delta compression in both directions with respect to client/server topology (of course).

So you're saying that sending the state for each entity individually is the way to go? (This is what I'm doing right now.)
I already have delta messages implemented; they only send the fields that have actually changed. The basic pattern is this:

1. Client presses the FORWARD key
2. Client sends the FORWARD command to the server; this is sent ASAP in my fixed update function
3. Client simulates the action of the FORWARD press, using the same code as the server will
4. Server receives (hopefully, it's all unreliable) the FORWARD command
5. Server processes the FORWARD command and moves the player forward

This is the part I'm not sure of:

6)
A: The server sends the updated position to the clients right away when it's done processing this game entity
B: The server processes all other game entities during the current fixed update and then sends the complete game state to the clients

No matter if A or B is done in step 6:

7.1: Remote clients (the ones not sending the FORWARD press) receive the updated position from the server and use it to interpolate (or, if we're dropping a lot of packets, extrapolate) the position of the player
7.2: The owning client (the one that sent the FORWARD press) receives the updated position and uses it as the base for all the (yet to be verified) commands when moving around

##### Share on other sites

> 6)
> A: The server sends the updated position to the clients right away when it's done processing this game entity
> B: The server processes all other game entities during the current fixed update and then sends the complete game state to the clients
>
> No matter if A or B is done in step 6:
>
> 7.1: Remote clients (the ones not sending the FORWARD press) receive the updated position from the server and use it to interpolate (or, if we're dropping a lot of packets, extrapolate) the position of the player
> 7.2: The owning client (the one that sent the FORWARD press) receives the updated position and uses it as the base for all the (yet to be verified) commands when moving around

This part of the system is generally based on what you are balancing for. If you send the data right away, you are balancing towards a lower-latency system but with a generally larger bandwidth requirement. On the other hand, if you wait until the entire frame processes, you are likely to use less bandwidth at a slightly higher latency. Having said that, I would go with B: the ability to gather up more changed data per update message, saving considerable bandwidth over time in exchange for a very small latency increase, is well worth it. Additionally, since you must deal with variable latency anyway (updating the object for 3 different connections means 3 different latency timings), you may as well go for the longest period between update messages and fix the latency problems early on.

Of course when you go for the high latency solution you quickly find you need several things:
1. Interpolation/extrapolation for both client and server sides.
2. The ability to send immediate items, such as the fire button being pressed, outside of the normal update frequency.
3. Correction systems: if the player and server positions are too far off, you don't want to just "snap" to the updated position; sliding into position is generally preferable.
4. Relevance systems. You don't generally want to duplicate all player data to all connected clients at all times. If I can't see a player, I don't need to be informed that they are crouched at some position or that they fired something (unless I see the result of the fire, of course). Only when I get near another player do I need to start receiving all those updates.

Finally, keep in mind that when I say "high" latency, I'm still talking fractions of a second, e.g. 100 ms instead of 10 ms in time to respond to a given player message.
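As an illustration of point 4 (relevance), a distance check is the simplest possible filter; the radius value and helper names below are made up for the sketch:

```python
import math

RELEVANCE_RADIUS = 50.0  # assumed cutoff; tune per game

def is_relevant(viewer_pos, entity_pos, radius=RELEVANCE_RADIUS):
    """Only replicate entities within `radius` of the viewing player."""
    dx = viewer_pos[0] - entity_pos[0]
    dy = viewer_pos[1] - entity_pos[1]
    dz = viewer_pos[2] - entity_pos[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius

def build_update_for(viewer_pos, entities):
    """Gather only the entities this client should hear about this tick."""
    return [e for e in entities if is_relevant(viewer_pos, e["pos"])]
```

Real games usually replace the raw distance test with visibility or spatial-partition queries, but the shape of the filter is the same.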

##### Share on other sites

> 6)
> A: The server sends the updated position to the clients right away when it's done processing this game entity
> B: The server processes all other game entities during the current fixed update and then sends the complete game state to the clients
>
> No matter if A or B is done in step 6:
>
> 7.1: Remote clients (the ones not sending the FORWARD press) receive the updated position from the server and use it to interpolate (or, if we're dropping a lot of packets, extrapolate) the position of the player
> 7.2: The owning client (the one that sent the FORWARD press) receives the updated position and uses it as the base for all the (yet to be verified) commands when moving around
>
> This part of the system is generally based on what you are balancing for. If you send the data right away, you are balancing towards a lower-latency system but with a generally larger bandwidth requirement. On the other hand, if you wait until the entire frame processes, you are likely to use less bandwidth at a slightly higher latency. Having said that, I would go with B: the ability to gather up more changed data per update message, saving considerable bandwidth over time in exchange for a very small latency increase, is well worth it. Additionally, since you must deal with variable latency anyway (updating the object for 3 different connections means 3 different latency timings), you may as well go for the longest period between update messages and fix the latency problems early on.
>
> Of course, when you go for the high-latency solution you quickly find you need several things:
> 1. Interpolation/extrapolation for both client and server sides.
> 2. The ability to send immediate items, such as the fire button being pressed, outside of the normal update frequency.
> 3. Correction systems: if the player and server positions are too far off, you don't want to just "snap" to the updated position; sliding into position is generally preferable.
> 4. Relevance systems. You don't generally want to duplicate all player data to all connected clients at all times. If I can't see a player, I don't need to be informed that they are crouched at some position or that they fired something (unless I see the result of the fire, of course). Only when I get near another player do I need to start receiving all those updates.
>
> Finally, keep in mind that when I say "high" latency, I'm still talking fractions of a second, e.g. 100 ms instead of 10 ms in time to respond to a given player message.

Thanks for your answer, this is exactly the type of answer I was looking for! The current (very rough) implementation uses option A, but I've been looking at as many commercial engines as I can find info about, and option B seems to be a lot more common (Q3 and Source, for example). The implementation seems cleaner and easier to grasp on a conceptual level: you get the game state every X milliseconds from the server, instead of updates for different entities constantly smashing in. It also seems to help with implementing certain techniques like lag compensation, as it's easier to "roll back" the current state on the server to do to-hit calculations back in time if you have a set X states per second and they are all saved, instead of hundreds of small updates sent to different clients at will.

Although the difference between 100 ms and 10 ms seems pretty substantial? 100 ms would mean a 10 Hz update interval on the server? Currently mine runs at 30 Hz, and it seems fine, though I know a lot of commercial engines run at lower rates (Source 20 Hz, Q3 15 Hz IIRC). Is 30 Hz too fast?

I have one question though:

I'm currently reading input in my fixed update function on the client, which ticks at 30 Hz. If I lowered this to something like 10-20 Hz, wouldn't the user notice serious lag on his own machine? Walking forward and then deciding to walk left instead would take 100 ms for the press to register, and a fast enough player might even miss the press altogether?

Or would you run a faster tick locally, say 60 Hz, with 20 Hz towards the server? So locally you would move 1/3 of the distance per tick, send the accumulated command to the server every 3rd update, and have the server move the full length that took 3 ticks locally?
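In rough Python, the split-rate idea I'm picturing would be something like this (all the names and constants are just my sketch):

```python
LOCAL_HZ = 60                 # local simulation rate
NET_DIVISOR = 3               # send to the server every 3rd local tick (20 Hz)
MOVE_PER_LOCAL_TICK = 1.0 / LOCAL_HZ  # units moved per local tick

class InputSender:
    """Accumulate local movement and flush it to the server every Nth tick."""

    def __init__(self, send):
        self.send = send          # callable that ships a command to the server
        self.tick = 0
        self.pending_move = 0.0   # movement simulated locally but not yet sent

    def local_tick(self, forward_pressed):
        if forward_pressed:
            # Simulate a fraction of the per-network-tick movement locally.
            self.pending_move += MOVE_PER_LOCAL_TICK
        self.tick += 1
        if self.tick % NET_DIVISOR == 0 and self.pending_move > 0.0:
            # The server applies the full accumulated movement in one step.
            self.send({"cmd": "MOVE_FORWARD", "amount": self.pending_move})
            self.pending_move = 0.0
```

That way a press held for less than a full network tick still gets accumulated and sent rather than lost.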

I also wonder something about your second point: the ability to send immediate items, such as the fire button being pressed, outside of the normal update frequency.

How would you treat this, especially with a slow tick rate like 20 Hz? Would you read and send it in the update function, or read it in the fixed update function and send it separately, using immediate send/forward on the client/server, without waiting for the tick to complete before forwarding it to the (other) clients?

##### Share on other sites
I've done some bandwidth calculations based on AllEightUp's post, assuming the state we need to synchronize is this:

• Entity ID (integer, 4 bytes) (could possibly be a short)
• Position (vector3, 12 bytes)
• Rotation (2 x float, 8 bytes, only need rotation around 2 axes)
• Health (short, 2 bytes)
• State flags for firing, casting grenades, etc. (byte, 1 byte)
• Animation (1 byte)
That gives a total of 28 bytes of data, plus the IP and UDP headers, another 28 bytes (20 + 8). A total of 56 bytes per packet, ignoring things like timestamps. Again assuming a typical FPS with 10 players.

1. Bandwidth to "send it as it comes in"/low latency: 56 bytes per entity packet * 10 entities * 30 times per second * 10 players/connections = 168,000 B/s ≈ 164 KB/s
2. Bandwidth to "send all the state at the end of each tick"/high latency: (28 bytes per entity * 10 entities + 28 bytes of IP/UDP header) * 30 times per second * 10 players/connections = 92,400 B/s ≈ 90 KB/s

A pretty substantial difference. I expect I'll need to send more data than the above, and the difference will decrease, but this still suggests that sending "low latency" will consume roughly 1.5x to 2x the bandwidth.

However, depending on how much extra data I need to send, I also have to consider the packet size, which it's recommended to keep under about 500 bytes over the internet; one packet currently would be 56 bytes with the "low latency" approach and about 308 (28 * 10 + 28) with the "high latency" one.
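For reference, the arithmetic above in a few lines (using 20-byte IPv4 + 8-byte UDP headers):

```python
DATA_BYTES = 4 + 12 + 8 + 2 + 1 + 1  # id, position, rotation, health, flags, animation
HEADER_BYTES = 20 + 8                 # IPv4 + UDP headers
ENTITIES = 10
TICK_HZ = 30
CONNECTIONS = 10

# Option A: one packet per entity update.
per_entity_packet = DATA_BYTES + HEADER_BYTES            # 56 bytes
low_latency = per_entity_packet * ENTITIES * TICK_HZ * CONNECTIONS

# Option B: one packet per tick carrying all entities.
per_tick_packet = DATA_BYTES * ENTITIES + HEADER_BYTES   # 308 bytes
high_latency = per_tick_packet * TICK_HZ * CONNECTIONS

print(low_latency, high_latency)  # 168000 92400 bytes/s (~164 KB/s vs ~90 KB/s)
```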

Anyone got any pointers on this topic?

##### Share on other sites

> Thanks for your answer, this is exactly the type of answer I was looking for! The current (very rough) implementation uses option A, but I've been looking at as many commercial engines as I can find info about, and option B seems to be a lot more common (Q3 and Source, for example). The implementation seems cleaner and easier to grasp on a conceptual level: you get the game state every X milliseconds from the server, instead of updates for different entities constantly smashing in. It also seems to help with implementing certain techniques like lag compensation, as it's easier to "roll back" the current state on the server to do to-hit calculations back in time if you have a set X states per second and they are all saved, instead of hundreds of small updates sent to different clients at will.
>
> Although the difference between 100 ms and 10 ms seems pretty substantial? 100 ms would mean a 10 Hz update interval on the server? Currently mine runs at 30 Hz, and it seems fine, though I know a lot of commercial engines run at lower rates (Source 20 Hz, Q3 15 Hz IIRC). Is 30 Hz too fast?

Even if you hit 300-400 ms latencies, as long as the engine smooths things out and handles it relatively well, it is completely playable. I used to play a lot of Quake 1 CTF over 28.8 dialup with 250-300 ms average pings. What drives people to drink heavily and quit is rapid changes in latency that the player can't compensate for. Many games keep three average ping times: server to client, client to server, and round trip. (And yes, client to server and server to client can be *very* different, though it's not really common.) Using those averages to keep the latency changing smoothly over time, instead of randomly, is what makes the networking "feel" good to the player even at 300 ms latency. This is also how you control interpolation and such so it feels/looks good to the player; if the latency keeps changing, the discontinuity in movement gets very annoying.

> I have one question though:
>
> I'm currently reading input in my fixed update function on the client, which ticks at 30 Hz. If I lowered this to something like 10-20 Hz, wouldn't the user notice serious lag on his own machine? Walking forward and then deciding to walk left instead would take 100 ms for the press to register, and a fast enough player might even miss the press altogether?
>
> Or would you run a faster tick locally, say 60 Hz, with 20 Hz towards the server? So locally you would move 1/3 of the distance per tick, send the accumulated command to the server every 3rd update, and have the server move the full length that took 3 ticks locally?

While professional gamers will all scream if they hear this, human perception of smooth movement starts at 10 FPS and is fully active at 15 FPS. You can tell the difference between 15 and 30, and it looks better, but given a solid 15 FPS (absolutely NO drops below 15 allowed), people are generally content with the frame rate. For a server to run much faster than that, given all the latency and related problems, is generally not required. Especially with latency, no input from the client or output of the server is likely to ever be used directly without a fair amount of modification to hide latency issues. As such, even if the server is running at 100 ms a loop (10 Hz, of course), you are never going to be able to see/feel that difference on a client. Heck, you can go even lower than that, and because of all the in-between layers it is generally unnoticeable.

Of course, the slower the rate the server runs at, the higher the round-trip latency variation the player will feel. So this is really a balancing act between latency variation and server utilization. I wouldn't go any lower than 10 Hz, but I wouldn't likely go higher than the 15 mentioned above. It just doesn't make much sense, since you are going to be seeing delayed results based on latency anyway.

> I also wonder something about your second point: the ability to send immediate items, such as the fire button being pressed, outside of the normal update frequency.
>
> How would you treat this, especially with a slow tick rate like 20 Hz? Would you read and send it in the update function, or read it in the fixed update function and send it separately, using immediate send/forward on the client/server, without waiting for the tick to complete before forwarding it to the (other) clients?

So, I tend to run the input, sound, renderer and simulator all in completely separate threads/thread teams, at different rates. For input, as an example, I crank it up to 120 Hz when I have devices which are purely polled, because I hate missed inputs. Given that, say my simulator runs at 15 Hz: I might get two fire button presses in a single frame. You have to choose what's most appropriate for your game, but I tend to act on both inputs and just offset them in time appropriately; otherwise you can absorb them into a single fire event if that makes sense. However you end up generating the event, you need to send it to the server immediately and not wait around for your standard network update. For UDP, you can send a packet immediately with the fire event in it all by itself, or you can grab some known out-of-date data and package it up with the fire event. Either way, you bypass the rate at which you normally send packets and send this as soon as the state of your character changes. For TCP, just send the fire message down the OOB channel; it generally gets there quickly enough to be viable.
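The immediate-send part might look like this over UDP (the socket setup and message format here are just an illustrative sketch, not a prescribed wire format):

```python
import json
import socket

class EventSender:
    """Send urgent events (e.g. FIRE) as soon as they happen over UDP,
    bypassing the fixed-rate state update loop."""

    def __init__(self, server_addr):
        self.server_addr = server_addr
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_immediate(self, event, tick):
        # One small datagram per urgent event; the regular snapshot loop
        # keeps running at its own rate independently of this.
        msg = json.dumps({"event": event, "tick": tick}).encode()
        self.sock.sendto(msg, self.server_addr)
```

A real game would use a compact binary encoding rather than JSON, but the key point is simply calling `sendto` the moment the event happens instead of queuing it for the next tick.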

Hope this helps further.

##### Share on other sites

> I've done some bandwidth calculations based on AllEightUp's post, assuming the state we need to synchronize is this:
>
> • Entity ID (integer, 4 bytes) (could possibly be a short)
> • Position (vector3, 12 bytes)
> • Rotation (2 x float, 8 bytes, only need rotation around 2 axes)
> • Health (short, 2 bytes)
> • State flags for firing, casting grenades, etc. (byte, 1 byte)
> • Animation (1 byte)
>
> That gives a total of 28 bytes of data, plus the IP and UDP headers, another 28 bytes. A total of 56 bytes per packet, ignoring things like timestamps. Again assuming a typical FPS with 10 players.
>
> 1. Bandwidth to "send it as it comes in"/low latency: 56 bytes per entity packet * 10 entities * 30 times per second * 10 players/connections = 168,000 B/s ≈ 164 KB/s
> 2. Bandwidth to "send all the state at the end of each tick"/high latency: (28 bytes per entity * 10 entities + 28 bytes of IP/UDP header) * 30 times per second * 10 players/connections = 92,400 B/s ≈ 90 KB/s
>
> A pretty substantial difference. I expect I'll need to send more data than the above, and the difference will decrease, but this still suggests that sending "low latency" will consume roughly 1.5x to 2x the bandwidth.
>
> However, depending on how much extra data I need to send, I also have to consider the packet size, which it's recommended to keep under about 500 bytes over the internet; one packet currently would be 56 bytes with the "low latency" approach and about 308 (28 * 10 + 28) with the "high latency" one.
>
> Anyone got any pointers on this topic?

Well, keep in mind that much of your data can be significantly compressed. For instance, do you *really* need floating-point accuracy in your two rotations? I've compressed axis-based rotations to 256 angles because I simply didn't need them any more accurate. And again, with latency and the various interpolations happening, no one on the other machines will be the wiser.
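Packing an angle into a single byte could look like this (a sketch; 256 steps means a worst-case error of one step, about 1.4 degrees, which interpolation hides easily):

```python
import math

TWO_PI = 2.0 * math.pi

def pack_angle(radians):
    """Quantize an angle to one of 256 steps so it fits in a single byte."""
    wrapped = radians % TWO_PI          # wrap into [0, 2*pi)
    return int(wrapped * 256.0 / TWO_PI) & 0xFF

def unpack_angle(byte_value):
    """Recover the angle from its quantized byte."""
    return byte_value * TWO_PI / 256.0
```

The same trick works for any bounded value: normalize to the range, scale to the integer budget, truncate.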

Additionally, if you are doing delta compression and only sending changed data, position/orientation are generally the only items changing constantly; everything else is modified only occasionally. So you can cut both of the numbers above roughly in half with only a little extra work.

Finally, I tend to work things out in terms of the throughput of a given connection. While nearly everything I do anymore is TCP, UDP is quite a bit of fun because you can constantly trade latency against throughput, balancing on demand. Say a person goes and hides in a corner with a sniper rifle, breaks out the zoom view and starts looking for targets. Since they are no longer moving, they don't need the 12 bytes of position being sent, so there's plenty of spare throughput you can use to give them a higher rate and a more accurate view of other players. As soon as they start moving again, slow the packets back down and pack more data into each one. I had an insanely complicated variation of this at one time; it was experimental, but fun to play with.

##### Share on other sites

> Option A: Send game state as soon as possible in small packages

Generally, that's a bad idea, because each packet has some packet header overhead, and some processing overhead.

In general, you want to establish a packet send rate -- somewhere between 10 and 30 packets per second is common -- and then pack as many messages as you can into each packet, within whatever your bandwidth budget is. The role of the server and the client networking and simulation code is to make those messages give the client the best possible experience given the available resources.
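A sketch of that "pack as many messages as fit" step (the 500-byte budget is just an example figure, echoing the recommendation earlier in the thread):

```python
MAX_PACKET_BYTES = 500  # example per-packet budget; stay under typical MTUs

def build_packet(message_queue, budget=MAX_PACKET_BYTES):
    """Drain as many queued messages as fit into one outgoing packet.

    Messages that don't fit stay queued for the next send tick.
    """
    packet = b""
    while message_queue and len(packet) + len(message_queue[0]) <= budget:
        packet += message_queue.pop(0)
    return packet
```

Run this once per send tick (10-30 times per second) and ship the result as a single datagram; anything left over goes out next tick.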
