How to keep network traffic down?

5 comments, last by Mirris 3 years, 3 months ago

Hello everyone

I have created an ASP.NET server with a RESTful API to store Unity transforms in an SQL database.

It is possible for different clients to sync data (transforms) with each other.

The problem is that when I scale the scene up to several thousand objects (sometimes even just a couple hundred), performance becomes sluggish. Monitoring the network traffic shows up to 10 Mbps being transferred, while the same kind of test on online multiplayer games shows them sending around 256 Kbps, if not less.

Clearly, sending updates in JSON format is NOT the way to go. What is the usual way of synchronising data, and how can I keep my traffic down?


That's not how game networking works.

First, REST requests have significant per-request overhead. You'll want a custom protocol that discards all of the overhead you don't need.
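To put rough numbers on that overhead, here is a size comparison between a JSON transform update and the same data packed into a fixed binary layout. This is a sketch in Python for brevity; the field layout (a 32-bit object id plus seven 32-bit floats) is a made-up example, not a prescribed wire format:

```python
import json
import struct

# Hypothetical transform update: object id, position (3 floats),
# rotation quaternion (4 floats).
update = {
    "id": 1042,
    "pos": [12.5, 0.0, -3.25],
    "rot": [0.0, 0.707, 0.0, 0.707],
}

# JSON payload, roughly what a REST endpoint would receive per object.
json_bytes = json.dumps(update).encode("utf-8")

# Packed binary: one unsigned 32-bit id + 7 single-precision floats = 32 bytes.
binary_bytes = struct.pack("<I3f4f", update["id"], *update["pos"], *update["rot"])

print(len(json_bytes), len(binary_bytes))
```

And that is before counting HTTP headers, which typically add hundreds of bytes per request. Quantizing further (e.g. positions as 16-bit fixed-point offsets within a known region) shrinks it again.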

Second, sending all the scene transforms as a “snapshot” is super inefficient, both because some objects don't move, and because some objects move more or less predictably, and thus can be simulated on the client and the server and come to (close to) the same result on both sides.
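A common way to exploit that: track which objects are actually "dirty" and replicate only those. A minimal sketch (Python, with hypothetical data and an arbitrary movement threshold):

```python
import math

# Hypothetical: last state acknowledged by the client, keyed by object id.
last_sent = {1: (0.0, 0.0, 0.0), 2: (5.0, 0.0, 1.0), 3: (9.0, 2.0, 4.0)}

# Current server-side positions this tick.
current = {1: (0.0, 0.0, 0.0),    # did not move -> skip
           2: (5.0, 0.0, 1.001),  # moved less than threshold -> skip
           3: (9.5, 2.0, 4.0)}    # moved noticeably -> send

THRESHOLD = 0.01  # minimum movement (world units) worth replicating

def dirty_objects(last, now, threshold):
    """Return ids whose position changed by more than `threshold`."""
    out = []
    for oid, pos in now.items():
        old = last.get(oid)
        if old is None or math.dist(old, pos) > threshold:
            out.append(oid)
    return out

print(dirty_objects(last_sent, current, THRESHOLD))  # -> [3]
```

With thousands of mostly-static objects, this alone cuts the per-tick payload from "everything" to "what changed."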

Thus, a typical networked game doesn't have the client “request” what should be seen, but instead has the client run a general game simulation, and have the server “push” corrections (inputs from other users, server-decided tie breaker outcomes, etc.)
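As a toy illustration of the simulate-and-correct idea, here is a 1-D dead-reckoning sketch (Python; the blend factor is an arbitrary choice, and real games use fancier smoothing):

```python
# The client extrapolates from the last known velocity, and blends toward the
# authoritative position when the server pushes a correction.

def extrapolate(pos, vel, dt):
    # Client-side prediction between server updates.
    return pos + vel * dt

def apply_correction(predicted, authoritative, blend=0.2):
    # Move a fraction of the way toward the server's value each tick,
    # instead of snapping, to hide small errors.
    return predicted + (authoritative - predicted) * blend

pos, vel = 10.0, 2.0
pos = extrapolate(pos, vel, 0.1)   # client predicts 10.2
pos = apply_correction(pos, 10.5)  # server says 10.5; blend toward it
print(round(pos, 3))               # -> 10.26
```

Because both sides run the same simulation, the corrections are small and infrequent compared to streaming full state.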

There's tons of write-ups on this kind of structure, including older threads in this forum, but chances are, you need to take a step back and start over, assuming that all data lives in RAM (not database,) and you only send occasional updates from the server to the client (plus the “events” that really affect simulation, including the inputs from other players.)

enum Bool { True, False, FileNotFound };

Thank you for your input. Regarding the simulation you describe, I already apply some of those techniques, like simulating movement and interpolating between updates, so it seems the per-request overhead you mention is the main culprit. Could you share some keywords, books or links about the custom protocol you mention?

One of the previous topics on this forum had multiple answers suggesting to build an ASP.NET API, which did seem to work for at least some people. Other posts mention Winsock, although those topics are on average 20 years old and therefore might be superseded by more modern techniques. More recently I've heard about Influx, Kafka and Apache Pulsar. I have no experience whatsoever with any of these. Do they have a place in game networking as well?

What technologies would an ideal system consist of, and how would they relate to each other in an environment where multiple spectators should sync with one base game?

“UDP streaming protocol” (although that will get you a lot of RTP-type video streaming)

“UDP state replication” would work.

Read the code for the Quake games – it's open source, and fairly easy to read, assuming you know C.

Which technologies you use depends entirely on what your game is.

If your game is turn-based and asynchronous, a simple REST system might be good enough. A game like chess is turn-based, but synchronous – time matters – and thus something slightly better is likely needed. If your game is a racing game, or a first person shooter, or an action-adventure, or a platformer, you probably need a streaming state replication approach, not request/response.

If your game HAS to be played in a browser, then you will need to use WebSockets. You can “stream” over WebSockets, but TCP head-of-line blocking when packets are dropped will cause lag spikes. There's nothing you can do about that.

If you have a mobile or native client, then UDP sockets are the way to go. On Windows, that's using the Winsock API. On MacOS and Linux and iOS, that's using the UNIX sockets API. On Android, you can choose between their “flavored” sockets (Java) or the underlying UNIX sockets API. Saying the sockets API is “old” is like saying that file I/O is “old” – it's a function, it's performed by the computer, it hasn't changed in forever.
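For reference, that API surface is small. A minimal UDP round trip looks like this (sketched in Python here; the `socket`/`bind`/`sendto`/`recvfrom` calls correspond directly to the Winsock and BSD sockets functions of the same names):

```python
import socket

# Minimal UDP round trip on localhost. Real game code would use non-blocking
# sockets and a binary packet format, but the API shape is the same.

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"\x01\x02\x03", ("127.0.0.1", port))  # fire-and-forget datagram

data, addr = server.recvfrom(1500)       # 1500 ~= typical MTU budget per packet
print(data)                              # -> b'\x01\x02\x03'

server.close()
client.close()
```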

That being said, for large scalability, you will want to use overlapped/asynchronous/evented I/O for your sockets on the server – but you're nowhere near the point where you need to worry about per-socket per-player overhead.
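For a flavor of the evented style, here is a sketch using Python's `selectors` module (which wraps the platform's readiness APIs, e.g. epoll/kqueue/select); one selector can watch many sockets and the server only touches a socket when it is actually readable:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
sock.setblocking(False)                  # never let one socket stall the loop
sel.register(sock, selectors.EVENT_READ)

# Send ourselves a datagram so the event loop has something to report.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.sendto(b"ping", sock.getsockname())

received = []
for key, _events in sel.select(timeout=1.0):
    data, addr = key.fileobj.recvfrom(1500)
    received.append(data)

print(received)  # -> [b'ping']

sel.close()
sock.close()
probe.close()
```

On Windows servers the equivalent would be overlapped I/O / IOCP, but the structure (react to readiness events rather than blocking per socket) is the same idea.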

In addition to the Quake source, you can look for popular networking libraries for games, like ENet (C), Raknet (C++), or Lidgren (C#). Or run through the networking tutorials in the Unreal Engine youtube series, which will give you an idea of the higher-level concepts, and then you can map those to UDP sockets underneath.

Finally, influx, kafka, and pulsar, are systems to deal with streaming at the “seconds of latency” level, for bulk data. They are not suitable for a client/server networking setup for games, both because their latencies are way too high, and because their security models aren't suitable for the needs of most game networking setups. They can, however, be used on the back-end, for doing things like collecting metrics, or notifying “important” game events to central systems, or maybe even forwarding in-game text chat, at least if it needs to travel across server instances.

enum Bool { True, False, FileNotFound };

For internet enabled games played over a WAN, the specifics of endpoint connections generally don't matter.

The one case that might be interesting is if you connect through a mobile phone endpoint, AND if that mobile phone endpoint roams such that it gets assigned a new IP address. In that case, you need to either re-establish the session, or support some mechanism for session migration to a new IP on the server. This is, generally, a pretty rare case, though.
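One way to support that migration is to key sessions by a token carried in every packet rather than by the sender's IP/port. A minimal sketch (Python; the token and addresses are hypothetical):

```python
# Sessions are keyed by a token the client includes in each packet, so a
# roaming client keeps its session even when its source address changes.

sessions = {}  # token -> last known (ip, port)

def handle_packet(token, addr):
    """Look up the session by token; update its address if the client moved."""
    migrated = token in sessions and sessions[token] != addr
    sessions[token] = addr
    return migrated

handle_packet("abc123", ("203.0.113.5", 40000))           # first contact
moved = handle_packet("abc123", ("198.51.100.9", 40211))  # same client, new IP
print(moved)  # -> True
```

In practice the token must be unguessable (issued at login) so one client cannot hijack another's session.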

enum Bool { True, False, FileNotFound };

