1) No. Yes. "Faster" is the wrong word. UDP as such has less latency because it does not implement a stream with guaranteed in-order delivery. It also has fewer bytes in the header, but this does not matter much. A packet may take a microsecond less to go through the DSL modem, but on the internet it doesn't really matter. Routers forward packets per second, not bytes per second, and their queue depth is measured in packets, too. In that sense, all packets, TCP or UDP, large or small, are generally the same and equally fast.
However, as Orwell taught us, all animals are equal, but some are more equal. In our perfect world, there exists QoS to make the world even more perfect. Which means that any router on the internet may (and usually will) favour one or the other packet type in a more or less fair or unfair manner. It may delay TCP packets in order to deliver the "usually realtime" UDP packets faster. The router in my office does that. When I pick up the phone while a download is running, the router drops TCP packets in favour of VoIP (and congestion control kicks in). I don't even notice that this is happening, other than by the fact that it "just works". Normally, given that the download already fills the cable, making a phone call should be troublesome.
Of course it may just as well work the other way around. A router may give UDP a very short queue depth and drop your packets under the assumption that if they're queueing up, they're not being delivered fast enough anyway. Or, something else. You don't know. A router might even drop both TCP and UDP based on your sender address because you've exceeded your bandwidth guarantee. Many hosting plans have something like "10Mbit/s guaranteed, burst up to 100Mbit/s". Which means no more and no less than that you have a plain normal 100Mbit/s uplink, and the router will usually just forward your packets, and at some more or less unpredictable time (at its own discretion, based on some secret, obscure metrics) start dropping them!
2) UDP gives you exactly one thing: it sends out complete, discrete, independent "messages". The only guarantee you have is that a complete "chunk" of data up to a specific maximum size will go out, and that if it arrives at the other end, the same complete "chunk" of data will be received (never less, either all or nothing). That's all the guarantees you have.
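To make that "all or nothing" message property concrete, here is a minimal sketch (not from the original post) using two UDP sockets on the loopback interface. Each `sendto()` emits one complete datagram, and each `recvfrom()` hands back exactly one complete datagram, never a partial one:

```python
import socket

# Sketch: UDP delivers discrete, complete messages, not a byte stream.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"first chunk", addr)   # one sendto == one datagram
sender.sendto(b"second chunk", addr)

msg1, _ = receiver.recvfrom(2048)     # the whole first chunk, never less
msg2, _ = receiver.recvfrom(2048)     # the whole second chunk

sender.close()
receiver.close()
```

Note that the two chunks stay separate: unlike a TCP stream, they can never be glued together or split in half by the receiving call.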
Everything else you need, you must do yourself. This includes acknowledging (or negatively acknowledging) and maintaining a connection. The easiest way of maintaining a connection is to remember when you've last sent and last received a packet. If you've not sent a packet for, say, a second, send a keepalive packet. This usually won't happen because you usually send some data once or several times every second (but you want to be sure that you're not dropped only because you don't have anything to send for a moment!). If you've not received a packet for, say, 2 seconds, then the other end is "dead". You know this because even if they have nothing to send, they should have sent a keepalive. Consider the connection dropped. More fine-grained control is of course possible, but that's the basic thing.
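The keepalive bookkeeping above can be sketched as a small state class. The interval and timeout values (1 second and 2 seconds) are the ones from the text; in a real game you would tune them:

```python
KEEPALIVE_INTERVAL = 1.0   # send a keepalive after 1 s of send-silence
TIMEOUT = 2.0              # declare the peer dead after 2 s of receive-silence

class ConnectionState:
    """Tracks the last-sent / last-received times described above."""

    def __init__(self, now):
        self.last_sent = now
        self.last_received = now

    def on_send(self, now):
        self.last_sent = now

    def on_receive(self, now):
        self.last_received = now

    def needs_keepalive(self, now):
        # We've had nothing to send for a while, so send an empty
        # packet to make sure the peer doesn't drop us.
        return now - self.last_sent >= KEEPALIVE_INTERVAL

    def is_dead(self, now):
        # Even a peer with nothing to say should have sent a keepalive
        # by now; consider the connection dropped.
        return now - self.last_received >= TIMEOUT
```

Call `on_send`/`on_receive` from your network loop and poll the two checks once per tick.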
3) No. Yes. That's exactly the problem. Packets can arrive out of order. And not only that: the same packet can in theory arrive twice (or more often), or not at all. You just got shot. Twice. You die and lose reputation. Twice. Charming.
If this is a problem (it usually is!), you need something like a sequence number to detect both missing and duplicate packets. Note that by implementing packet ordering yourself, you lose most of UDP's "extra speed" advantage.
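A bare-bones sketch of such sequence-number bookkeeping (my own illustration, not from the post; a real protocol would also have to handle sequence-number wraparound and bound the `seen` set):

```python
class SequenceTracker:
    """Detects duplicate and missing packets via a per-packet sequence number."""

    def __init__(self):
        self.highest = -1   # highest sequence number seen so far
        self.seen = set()   # all sequence numbers received

    def accept(self, seq):
        if seq in self.seen:
            return "duplicate"        # same packet arrived twice: drop it
        self.seen.add(seq)
        # Anything between the old highest and this packet is lost
        # or still in flight.
        missing = [s for s in range(self.highest + 1, seq) if s not in self.seen]
        self.highest = max(self.highest, seq)
        if missing:
            return f"gap:{missing}"
        return "ok"
```

With this in place, the "shot twice by the same packet" scenario becomes a dropped duplicate instead of a double death.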
4) Packet loss is relatively rare (around 0.5% here), but it happens. To everyone, all the time. Packet loss is a normal condition, not an error. You must be able to deal with it (or it must be OK to ignore it) because it will happen.
Packet loss does not only happen because someone uses a wireless connection or because of noise on a cheap copper cable. Packets are dropped both unintentionally (because a router queue is full) and intentionally as a means of congestion control. For example, TCP uses a rather complicated congestion control algorithm that uses packet loss as the indicator of how much it can push onto the wire at a given time.
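If a particular message must survive loss, the usual approach is to keep unacknowledged packets around and resend them after a timeout. A sketch of that idea (the class name and the 0.2 s timeout are my own assumptions for illustration):

```python
class ReliableSender:
    """Keeps unacknowledged packets and reports which need resending."""

    RETRANSMIT_AFTER = 0.2   # assumed retransmission timeout, in seconds

    def __init__(self):
        self.unacked = {}    # seq -> (payload, time of last send attempt)

    def send(self, seq, payload, now):
        # Remember the packet until the peer acknowledges it,
        # then hand the payload to the actual UDP socket.
        self.unacked[seq] = (payload, now)
        return payload

    def on_ack(self, seq):
        # The peer confirmed receipt: stop retransmitting this one.
        self.unacked.pop(seq, None)

    def due_for_retransmit(self, now):
        # Everything still unacknowledged after the timeout was
        # (probably) lost and should be sent again.
        return [seq for seq, (_, t) in self.unacked.items()
                if now - t >= self.RETRANSMIT_AFTER]
```

This is essentially a tiny slice of what TCP does for you; the point of doing it by hand over UDP is that you only pay the cost for the few messages that really need it.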
5) Making the server authoritative is the only safe way of ensuring that people can't trivially cheat. Otherwise, I could just send you "I've knocked out your guy, I win" when I'm in fact 15 meters away and could not possibly reach him.
Doing some correctness checks on the client first can greatly reduce your server load. If you rule out every impossible move before sending it to the server, the server does not need to process what's not possible anyway. Regardless, everything that comes from a client machine must be validated and must not be trusted.
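The "15 meters away" example boils down to a plausibility check that both sides can run: the client as a pre-filter to save server load, the server as the final word. A sketch under assumed game rules (the function name and the 2-meter melee range are invented for illustration):

```python
import math

MAX_MELEE_RANGE = 2.0   # assumed maximum melee reach, in meters

def melee_hit_is_plausible(attacker_pos, target_pos):
    """Server-side sanity check: a melee hit is only possible in reach."""
    dx = attacker_pos[0] - target_pos[0]
    dy = attacker_pos[1] - target_pos[1]
    return math.hypot(dx, dy) <= MAX_MELEE_RANGE

# The client may run the same check before sending the action, but the
# server must repeat it: never trust anything from the client machine.
```

A claimed knockout from 15 meters away fails this check and is simply discarded by the server.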
Edited by samoth, 21 November 2012 - 07:41 AM.