Using concurrent TCP/WebSockets to mitigate head-of-line blocking

3 comments, last by jacksaccountongamedev 5 years, 3 months ago

 

Hello. I’m looking for some advice on how my C++ multiplayer shooter should handle WebSocket connections. The game is fast-paced, but movement is slower than in many first-person shooters (and could potentially be slowed further if necessary).

Background/basic design:

My game has two versions. The first is a standalone desktop version. The second is a web version compiled via Emscripten/WebAssembly and running in the browser (see above link). The basic idea is to support both casual players wanting (or needing) to run the game in the browser and players wanting to download and run the standalone version.

Players using the web/browser version connect to the server via WebSockets (i.e. TCP). Players using the standalone version usually connect via UDP. The same server can handle both kinds of connections. In other words, players connecting from the browser via WebSockets can currently play on the same server as those playing via the standalone application over UDP, though a server can be configured to accept only one kind of connection (e.g. UDP-only servers).

A server may be dedicated, or it may be hosted by a (port-forwarding) player using the standalone application and participating in the game.

Questions re. TCP/WebSockets:

Obviously, the UDP players can expect fewer lag issues than the WebSocket players. To mitigate the problem of head-of-line blocking for the WebSocket players, my plan is that each client connecting from the browser will connect to the server through several concurrent WebSockets. A similar system was used in Airmash, a fairly successful action game built on WebSockets that didn't seem to suffer from lag issues. According to its creator, each client in Airmash connected via two WebSockets, with important information being duplicated and sent on both connections.

My plan is that rather than duplicating important information across all open socket connections between the server and a client, the two sides would simply cycle through the open connections. That way, if a packet is lost, the blocked connection will have some time to recover before it is used again. And even if a connection is still blocked by the time its turn comes around again, other data would still be arriving through the other connections.


1. Is this actually a viable way to mitigate head-of-line blocking? Obviously, nothing can be done about a hiccup in the client’s connection to the internet, but would this technique actually help reduce the effect of occasional lost packets? Or would a packet dropped from one connection somehow also block the others such that there's no point in using multiple concurrent connections?

2. If it is viable, then what would be a reasonable number of WebSocket connections to establish between each client and the server? The amount of data that needs to be transmitted would remain mostly the same irrespective of the number of connections, so more connections means less load on each one. If updates are sent at 20 Hz and there are only two connections between the client and the server (as in Airmash), then each connection gets used every 100 ms, which probably isn't enough time to recover from a dropped packet. On the other hand, if there are ten connections between a client and the server, then each connection has half a second to recover from a lost packet before its turn to be used again comes around.

3. Finally, are there any glaring issues regarding the design outlined above? The idea of accepting both UDP and TCP connections on one server to accommodate players in different environments seems novel, but could these different kinds of connections somehow interfere with each other?

Thanks for any input/advice! To be clear, I have a basic understanding of game networking but am certainly not experienced in this area.


Sounds interesting. I would imagine writing the connection-handling bits on the server could get a bit heavy. I'm curious to hear these answers myself.

I played around with this in the past, and there are pluses and minuses.

Yes, you can work around head-of-line blocking for a single packet lost when multiplexing over two connections.

Yes, it's not a lot of overhead to send "the last X inputs" in each packet, if you use run-length encoding (RLE) of the inputs.

No, you're not guaranteed that the browser will create multiple connections; HTTP/2 and QUIC allow browsers to multiplex as much as they want across as many connections as they want. I have no idea what actual browsers and servers end up doing in practice these days.

No, this won't solve all problems, because packet loss often comes in bursts, in which case all of your connections end up being delayed at more or less the same time.

It's super simple to set up once you have the basic system going, though (assuming your code is well structured), so it's certainly worth testing for your particular use case.

enum Bool { True, False, FileNotFound };

Thanks for the comments.

I'm slow in responding to this thread because a discussion has been going on between me and another forum user here. The discussion covers a few points, including the code to handle the connections and the possibility of implementing a system of acknowledgements in order to avoid sending packets down potentially blocked connections.

I've been testing, on a small scale, the idea of rotating across five sockets and it has been working well so far.

