Network developers will agree that TCP and TCP+UDP are not optimal for real-time game/simulation applications; custom UDP protocols can be more efficient. However, if the networked application sends too much data, any advantage provided by a custom UDP protocol is lost. In all cases, if the reliable queues fill faster than they can drain, lag will be present (beyond network latency), up to the point where the channel must be closed due to queue overflow.
A well-designed network game can work fine using TCP or TCP+UDP. Beginning network programmers will have a much easier time using TCP+UDP than trying to create a custom UDP protocol from scratch. The argument that TCP+UDP is overly complex is invalid, as creating a custom UDP protocol is much more complicated. In the context of marketing material for an existing, well-tuned, and debugged custom UDP protocol, such an argument can make sense (more in terms of efficiency than complexity: the developer must now become familiar with third-party code).
The argument (from the TNL site) that unreliable data can arrive out of order, ahead of related reliable data, is easily dealt with by tracking state. Example: a reliable activation message is sent to an object via the reliable channel and gets lost in transit. An unreliable position update packet arrives before the (retransmitted) activation message. The object's state is checked, and since the object is not yet active, the position update is ignored.
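The activation-state check above can be sketched as follows. This is a minimal sketch; the class and handler names are hypothetical, not from any particular engine:

```python
# Hypothetical sketch: drop early unreliable updates for a not-yet-activated
# object, so out-of-order arrival needs no special protocol support.
class GameObject:
    def __init__(self):
        self.active = False          # set only by the reliable channel
        self.position = (0.0, 0.0)

    def on_reliable_activate(self):
        """Handler for the reliable activation message."""
        self.active = True

    def on_unreliable_position(self, pos):
        """Handler for an unreliable position update packet.

        The reliable activation may still be in flight (or lost and awaiting
        retransmit); ignore position updates until the object is active.
        """
        if not self.active:
            return False             # early/out-of-order update: ignored
        self.position = pos
        return True
```

Because unreliable updates are sent continuously, a dropped or ignored update is simply superseded by the next one once activation arrives.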
How valuable is out-of-order reliable support (OOORS)? In a game/simulation where objects move and then stop for long periods, or where non-state-affecting effects/props are toggled on and off, bandwidth can be saved. However, in a game where objects are constantly moving (or stop only briefly), OOORS provides little to no benefit, and if extra packet bits are required to support OOORS, it is a net bandwidth loss.
|Original post by markf_gg|
I'm still amazed that people even debate this. TCP alone is a poor solution for any kind of a realtime game, if only because even a single dropped packet causes a stall in all network data delivery until that data loss is noted and retransmitted.
TCP alone is the only option for games operating in restricted environments (for example, when only HTTP/HTTPS is open at the firewall). As long as the TCP send queue is effectively monitored (the methods differ between *nix and Win32) and decent client-side prediction is implemented, it is possible to work around stall issues.
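On Linux, one way to monitor the TCP send queue is the SIOCOUTQ ioctl (numerically the same as TIOCOUTQ), which reports the bytes still sitting in the kernel's send buffer. A minimal, Linux-specific sketch; the 8 KB threshold is an arbitrary illustration, and Win32 would need a different mechanism:

```python
import fcntl
import socket
import struct
import termios

# Linux: SIOCOUTQ shares its value with TIOCOUTQ (0x5411) and, on a TCP
# socket, returns the number of bytes queued but not yet sent/acknowledged.
SIOCOUTQ = termios.TIOCOUTQ

def unsent_bytes(sock: socket.socket) -> int:
    """Return bytes still queued in the kernel send buffer (Linux only)."""
    buf = struct.pack("i", 0)
    res = fcntl.ioctl(sock.fileno(), SIOCOUTQ, buf)
    return struct.unpack("i", res)[0]

def send_if_not_congested(sock: socket.socket, payload: bytes,
                          limit: int = 8192) -> bool:
    """Skip a non-critical send while the queue is backed up (hypothetical policy)."""
    if unsent_bytes(sock) > limit:
        return False          # queue is deep: let it drain, rely on prediction
    sock.sendall(payload)
    return True
```

The game loop would call `send_if_not_congested` for non-critical updates, letting client-side prediction cover the gaps whenever the queue is deep.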
While TCP will never be ideal for a high-data-rate FPS, it can work fine for RTS games and slower-moving MMOGs. If the game primarily runs lock-step, where everything must be delivered in order and guaranteed, TCP alone will work fine. During periods of high congestion, TCP may, by design, back off faster than a custom UDP protocol. However, this may be an advantage for an MMOG with thousands of players, where a poorly designed custom UDP protocol may fall apart (it keeps sending data at a high(er) rate, preventing the network from recovering). I suspect this is one reason why existing UDP-based MMOGs with thousands of players can fall apart. TCP is well designed to handle this case efficiently.
|Original post by markf_gg|
Hybrid TCP/UDP systems are needlessly complicated and suffer from problems like bandwidth overconsumption by the TCP stream, maintainance of seperate channels for UDP and TCP, misordered delivery of updates, etc.
Bandwidth over-consumption is going to come from the unreliable channel, not the reliable channel. During periods of high congestion, the unreliable channel should be cut until the reliable channel's queue(s) can drain (data that is not truly state-critical should never be added to the reliable queue). All other arguments can be ameliorated at the network-game-design layer.
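The "cut the unreliable channel until the reliable queue drains" policy can be sketched as follows. The names and the backlog threshold are hypothetical; a real game would tune the threshold to its bandwidth budget:

```python
from collections import deque

class ChannelScheduler:
    """Hypothetical sketch: starve the unreliable channel while the
    reliable queue is backed up, so state-critical data drains first."""

    def __init__(self, reliable_limit: int = 4):
        self.reliable = deque()              # pending reliable messages
        self.reliable_limit = reliable_limit # backlog threshold (tunable)

    def queue_reliable(self, msg) -> None:
        """Only truly state-critical data should ever land here."""
        self.reliable.append(msg)

    def pop_reliable(self):
        """Drain one reliable message for (re)transmission."""
        return self.reliable.popleft() if self.reliable else None

    def try_send_unreliable(self, msg, send) -> bool:
        """Drop non-critical traffic while the reliable backlog is deep."""
        if len(self.reliable) > self.reliable_limit:
            return False    # congested: cut the unreliable channel
        send(msg)
        return True
```

Dropped unreliable updates are harmless by definition: the next position/effect update supersedes them once the backlog clears.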
|Original post by markf_gg|
I was going to go into greater detail, but then I realized I already have in the design fundamentals section of the Torque Network Library reference. The packet loss section gives a good explanation for why neither UDP nor TCP provide the right abstraction level for realtime game network programming.
The Torque Network Library looks like a good network toolkit (and, to be fair, so do RakNet and ReplicaNet). While the arguments given do support licensing/purchasing a pre-made, well-tested custom UDP network toolkit, the biggest problem, by far, is network game design, not the underlying network protocol.
I created a custom reliable UDP protocol in a case where the TCP implementation wasn't quite finished. The new UDP protocol ended up being more efficient than a TCP+UDP model (due to packet-overhead savings and retransmit optimizations). Even so, the game would grind to a halt during bursts of reliable state data. This required a significant redesign of the networked game elements. Thus, while every bit of bandwidth helps, the burden of efficiency and gameplay quality resides in the game design, not the network protocol.
This was for the first full Xbox Live-enabled game, and it was finished early (network-enabled games tend to ship late due to underestimation of networking issues). While the game only supported 4 players, many more flying and moving objects were active, along with many rapid reliable state changes (inherent to the nature of the game: too late to completely remedy by the time I joined the project). Voice was enabled for all players, all the time (as opposed to only hearing players near each other). The game played with little to no perceptible lag, even below 64 kbps (voice took ~32 kbps).
Again, I recommend that developers look into developing or licensing/purchasing custom reliable UDP protocols (RakNet, TNL, and ReplicaNet appear to be good choices). However, TCP and TCP+UDP can work fine: the real work in making a game play well under all internet conditions is centered on the network game design itself, not the network protocol. Likewise, if a game plays well (or poorly) on the internet, it can't be credited to (or blamed on) TCP, TCP+UDP, or a custom UDP protocol. It's the network game design itself.

[Edited by - John Schultz on May 16, 2005 5:55:13 PM]