# Sending slightly different data to many clients efficiently



# The Problem

In each server update cycle, various data needs to be sent to one or more clients.
State updates for actors, remote procedure calls and assorted housekeeping data are just some
examples of the data that needs to go out.

A lot of the data needs to go to more than one client, but seldom do all clients require
exactly the same data every time. So the bulk of each update is usually identical for all or several
clients, with small per-client variations that almost always exist.

# Example

We have three clients connected: C1, C2 and C3.
There are five actors in the scene: A1, A2, A3, A4 and A5.

All three clients are in range of actor A1 and A2, so their state updates
need to be sent to all clients.

• A3 is only in range of the third client
• A4 and A5 are in range of the second client
• A5 is in range of the first client
• There are also outgoing RPCs to C3, which only that client needs.
With this we end up with the following data breakdown:

C1 needs: A1, A2, A5 data
C2 needs: A1, A2, A4 and A5 data
C3 needs: A1, A2, A3 data and its RPCs

# Solution 1 (naive way)

Send each piece of data as an individual packet. This has very low CPU overhead but very high network overhead, and will not scale to anything remotely big.

# Solution 2

Keep a separate "outgoing message" attached to each client, so a call like "send state for A1" writes the data to the "outgoing message" of each client that needs it.

This is cheap on the network, since we don't waste anything and don't send any UDP headers we don't need to.
But it's expensive on the CPU, as the same data needs to be written to several messages.
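As a sketch of this per-client buffer approach (the class and method names here are my own illustration, not from the post), the same serialized state is appended once into each interested client's outgoing buffer:

```java
import java.io.ByteArrayOutputStream;
import java.util.*;

class OutBufferDemo {
    // One growable outgoing buffer per client
    static Map<String, ByteArrayOutputStream> outgoing = new HashMap<>();

    // Append the same serialized state to every client that needs it
    static void writeState(byte[] state, List<String> interested) {
        for (String client : interested) {
            outgoing.computeIfAbsent(client, c -> new ByteArrayOutputStream())
                    .writeBytes(state); // one copy per interested client
        }
    }

    public static void main(String[] args) {
        byte[] a1 = {1, 1};      // pretend serialized "state for A1" (2 bytes)
        byte[] a3 = {3, 3, 3};   // pretend serialized "state for A3" (3 bytes)
        writeState(a1, List.of("C1", "C2", "C3"));
        writeState(a3, List.of("C3"));
        System.out.println(outgoing.get("C1").size()); // 2
        System.out.println(outgoing.get("C3").size()); // 5
    }
}
```

The copy cost is one buffer append per (message, interested client) pair, which is the CPU overhead the post worries about.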

# Solution 3

This is the most complex, but probably also the best-performing solution. When we need to send the "state for A1", we write this data once to a temporary buffer. Give this buffer a sequence number (reset to zero every time the server completes one update/send loop), so state for A1 gets sequence number 1, state for A2 gets sequence number 2, and so on.

If a client needs the state for A1, we set the corresponding flag in a 256-bit-wide bitmask (4 x 64-bit ints).

When it's time to send the data, group the clients by their bitmask, which tells us which message chunks each client needs.

Create one message for each group of clients and send it to every client in that group.
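The grouping step might look like the following sketch (the names are assumptions, and a single `long` stands in for the post's 4 x 64-bit mask). Note that in the example above every client has a distinct mask, so each "group" holds one client; the scheme only pays off when interest sets coincide:

```java
import java.util.*;

class MaskGroupDemo {
    public static void main(String[] args) {
        // Bit i set => client needs the chunk with sequence number i
        // (0-based here for simplicity).
        Map<String, Long> interest = new LinkedHashMap<>();
        interest.put("C1", 0b10011L); // chunks 0,1,4   (A1, A2, A5)
        interest.put("C2", 0b11011L); // chunks 0,1,3,4 (A1, A2, A4, A5)
        interest.put("C3", 0b00111L); // chunks 0,1,2   (A1, A2, A3)

        // Group clients by identical mask: one message is built per group
        // and sent to every client in that group.
        Map<Long, List<String>> groups = new LinkedHashMap<>();
        for (var e : interest.entrySet())
            groups.computeIfAbsent(e.getValue(), m -> new ArrayList<>())
                  .add(e.getKey());

        for (var e : groups.entrySet())
            System.out.println(Long.toBinaryString(e.getKey()) + " -> " + e.getValue());
    }
}
```

Each distinct mask then drives one serialization pass over the chunks whose bits are set.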

# Conclusion

I'm looking for feedback, ideas, or just straight-up solutions. The three solutions above are the ones I've managed to come up with on my own; how is this normally solved in any remotely big/complex game?

---
You never send messages in separate packets. Batch what you can.

> But it's expensive on the CPU as the same data needs to be written to several messages
Even the most naive implementation of a per-client queue will not show up as a blip in the profiler. It's just a memcpy for each message.

```java
class Client {
    static final int THRESHOLD = 64;   // max messages in flight per client
    List<Message> queue;               // pending outgoing messages
    int pending;                       // sent but not yet acknowledged

    void send() {
        if (pending > THRESHOLD) warn("Client is saturated");
        int available = THRESHOLD - pending;
        Packet p = new Packet(available);
        // batch as many queued messages as fit into one packet
        while (p.hasRoom() && !queue.isEmpty()) {
            p.append(queue.removeFirst());
        }
        p.send();
    }
}
```

Naive approach, but it shows the basics. Bandwidth control can be somewhat more elaborate; maybe one should measure bytes or packets. If a client cannot keep up, higher-level logic may change how many and what type of messages it receives, or it might be disconnected altogether. send() is complemented by receive, which indicates the most recent packet that was received by the peer.

A more elaborate method keeps sent messages in a different queue until they are acked. This allows resending and also per-message latency tracking. It could also be used to divine certain networking characteristics (possibly the amount of buffer combining/fragmentation over TCP), to a limited extent.

A queue flush might also need to be performed periodically, even if nothing is being sent, to avoid long delays. In combination with per-message tracking, the queue can be a priority queue, and the messages put into a packet are selected based on their type, importance or age.
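A minimal sketch of that priority-queue flush, with assumed fields for importance, age and size (none of these names come from the reply):

```java
import java.util.*;

class PriorityFlushDemo {
    record Msg(String name, int importance, long enqueuedAt, int size) {}

    public static void main(String[] args) {
        // Higher importance first; among equals, older messages first.
        PriorityQueue<Msg> q = new PriorityQueue<>(
            Comparator.comparingInt((Msg m) -> -m.importance)
                      .thenComparingLong(Msg::enqueuedAt));
        q.add(new Msg("chat",  1, 100, 40));
        q.add(new Msg("state", 5, 120, 60));
        q.add(new Msg("rpc",   5, 110, 30));

        int room = 100; // bytes left in the outgoing packet
        List<String> packed = new ArrayList<>();
        // Pack the best-ranked messages until the next one no longer fits.
        while (!q.isEmpty() && q.peek().size() <= room) {
            Msg m = q.poll();
            room -= m.size();
            packed.add(m.name);
        }
        System.out.println(packed); // [rpc, state]
    }
}
```

The low-importance chat message simply waits for the next flush, which is exactly the behavior the periodic flush guards against degenerating into a long delay.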

---
Each connected client should have a separate outgoing message queue. You can use reference counting for the messages if you want to save a bit of RAM. Typically, your network code wants to have "pulses," where, each "pulse," you collect the most important messages in the queue and batch them into a UDP datagram to send. If a message waits too long in the queue, you simply drop it (this typically happens if you have very busy areas and strict networking limits). Also, this queue may need to hold on to messages, with some management information, until you get acknowledgement from the other end, if you support reliable messages over UDP.

Now, the problem of "it's too much CPU load to copy the messages once for each client" was true for some network/CPU combinations back in 1985 (4mbit token ring on a VAX 11/750? that's some load!), but it isn't actually true anymore. If you have a 100 mbit internet connection (which will likely cost you between $1k and $3k a month in a typical co-location facility), then you can send at most 10 MB/sec of data. Throughput to RAM on a typical server is 50 GB/sec. This means you can copy each byte that gets sent 5,000 times before you run out of RAM throughput!
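A quick back-of-the-envelope check of that arithmetic, using the numbers above:

```java
class CopyBudget {
    public static void main(String[] args) {
        double linkBytesPerSec = 10e6;  // 100 mbit link, at most ~10 MB/sec of payload
        double ramBytesPerSec  = 50e9;  // ~50 GB/sec RAM throughput
        // How many times each outgoing byte could be copied in RAM
        // before memory bandwidth becomes the bottleneck.
        long copiesPerByte = (long) (ramBytesPerSec / linkBytesPerSec);
        System.out.println(copiesPerByte); // 5000
    }
}
```

So even thousands of per-client copies of every message stay comfortably inside the memory budget.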
