UDP network layer must-haves?

For metrics, you want various kinds of parameters for gross troubleshooting:

- customer class (if you have it)

- session state (not established, negotiating, lobby, game, etc.)

- size of packet

- number of payload messages per packet

- payload message types

- packet direction (to server or from server)

- number of dropped packets detected

- number of duplicate packets detected

- number of reordered packets detected

- measured round-trip latency

- number of malformed packets

In the best of worlds, you chuck all of this at some reporting infrastructure and generate an online cube where you can slice your traffic across arbitrary dimensions (each of the things above classifies packets along some dimension.) This doesn't necessarily need to be real-time, because it's useful for finding things you didn't know about your game. Each individual dimension could separately be real-time, and the drill-down would be relegated to a baked cube, for example.
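
As a sketch of what one such per-packet record might look like in C++ (all names here are hypothetical, not from any particular engine), something like this, emitted once per packet and shipped off to the reporting pipeline, covers the dimensions above:

    // Hypothetical per-packet metrics record; every field becomes a
    // dimension the reporting cube can slice on.
    #include <cstdint>

    enum class Direction : uint8_t { ToServer, FromServer };
    enum class SessionState : uint8_t { NotEstablished, Negotiating, Lobby, Game };

    struct PacketMetrics {
        uint32_t     customerClass;      // 0 if you don't have one
        SessionState sessionState;
        Direction    direction;
        uint16_t     packetSize;         // bytes on the wire
        uint8_t      messageCount;       // payload messages in this packet
        uint8_t      messageTypes[16];   // first few payload message type ids
        uint32_t     droppedDetected;    // counters since the last report
        uint32_t     duplicatesDetected;
        uint32_t     reorderedDetected;
        float        roundTripMs;        // latest measured round-trip latency
        uint32_t     malformedDetected;
    };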

At that point, you can get reports like "number of packets that are re-ordered, containing the 'fire weapon' payload."

Now, there's a second level of diagnosis, where you can get this data sliced, in real-time, based on specific identifiers. Specific IP. Specific game instance. Specific player. Random sampling. Etc. Being able to have a customer on the line and turn on tracing of that particular customer's traffic is super helpful while debugging.
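
As a sketch of how that toggle might look (the names and predicate are made up for illustration):

    // Hypothetical trace filter: support flips a flag for a particular
    // player or game instance, and every metrics record matching it is
    // also streamed to a verbose real-time trace, not just the cube.
    #include <cstdint>
    #include <unordered_set>

    struct TraceConfig {
        std::unordered_set<uint32_t> tracedPlayers;    // player ids under trace
        std::unordered_set<uint32_t> tracedInstances;  // game instance ids
        double sampleRate = 0.0;                       // random sampling, 0..1

        bool shouldTrace(uint32_t playerId, uint32_t instanceId, double rnd) const {
            return tracedPlayers.count(playerId) > 0
                || tracedInstances.count(instanceId) > 0
                || rnd < sampleRate;
        }
    };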

Another useful feature is the ability to capture and play back traffic, both for traffic analysis and for debugging the system itself. If you can capture all packets that have come in since server start, then you can reproduce any server state offline for debugging!
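
The key to making playback work is recording every inbound packet with its arrival time, so the server can be re-run deterministically. A rough sketch, with an invented on-disk format:

    // Hypothetical capture log: length-prefixed records of every packet
    // received since server start. Feeding the file back through the same
    // receive path reproduces server state offline.
    #include <cstdint>
    #include <cstdio>

    struct CaptureRecord {
        uint64_t recvTimeMicros;  // arrival time since server start
        uint32_t sourceAddr;      // IPv4 address of the sender
        uint16_t sourcePort;
        uint16_t payloadLen;      // bytes of packet data that follow
    };

    void capturePacket(FILE* log, const CaptureRecord& hdr, const void* payload) {
        fwrite(&hdr, sizeof hdr, 1, log);
        fwrite(payload, 1, hdr.payloadLen, log);
    }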

enum Bool { True, False, FileNotFound };

hplus0603: Thanks, that's a great list to start out with.

Such a system should also, ideally, have the ability to simulate various network conditions and failures (a sketch follows this list). Examples include:

  • Being able to simulate various forms of latency (client to server, server to client, bidirectional)
  • Dropped packets
  • Partial or corrupted packets (it can happen, even with TCP; the CRC only has so many bits)
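
A bare-bones version of such a simulator can sit between the socket and the rest of the stack; a sketch, with invented parameters:

    // Hypothetical packet mangler inserted between recvfrom() and the
    // game: drops, corrupts, or delays packets at configured rates.
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    struct SimConfig {
        double dropRate     = 0.0;  // fraction of packets to discard
        double corruptRate  = 0.0;  // fraction of packets to flip a bit in
        int    extraDelayMs = 0;    // added one-way latency
    };

    // Returns false if the packet should be dropped entirely.
    bool simulate(const SimConfig& cfg, std::vector<uint8_t>& packet, int& delayMs) {
        if (rand() / (double)RAND_MAX < cfg.dropRate)
            return false;                                         // simulated loss
        if (rand() / (double)RAND_MAX < cfg.corruptRate && !packet.empty())
            packet[rand() % packet.size()] ^= 1 << (rand() % 8);  // simulated corruption
        delayMs = cfg.extraDelayMs;  // caller holds the packet this long
        return true;
    }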

Obviously, all these things should work with the metrics that are gathered, to allow you to diagnose and mitigate any issues found.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

it can happen, even with TCP; the CRC only has so many bits


Although, luckily, pretty much all physical transport layers have their own detection/correction codes that significantly improve the robustness. TCP over a physical link with a 1/10,000 bit error rate would be terrible. Luckily, typical links have bit error rates much, much lower than that.
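
To put a number on that: at a bit error rate of 1/10,000, a 1500-byte packet is 12,000 bits, so it arrives undamaged with probability (1 - 10^-4)^12000, which is about e^-1.2, or roughly 30%. Two out of three packets corrupted; no transport protocol saves you from that. Real links have error rates many orders of magnitude lower.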
enum Bool { True, False, FileNotFound };

hplus0603: Although, luckily, pretty much all physical transport layers have their own detection/correction codes that significantly improve the robustness.

Yes, and no...

Older data-link-layer protocols have quite significant error-checking capabilities, being from a time when data lines were usually noisy and not very good. Newer data-link-layer protocols, however, have significantly reduced error correction, thanks to the increased quality of the lines and equipment, preferring to defer that to higher-layer protocols. They haven't completely eliminated it; you usually still have fairly simple error checking (like parity bits). So yes, there is error checking on many different levels. Nevertheless, it is something you should be able to test on your platform, to ensure that your code handles it correctly.
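
One cheap way to test that end to end is to carry your own checksum on every payload and count mismatches in your metrics. A sketch (the choice of CRC32 is arbitrary):

    // Hypothetical integrity check: the sender appends a CRC32 over the
    // payload; receiver-side failures feed the "malformed packets"
    // counter from the metrics list earlier in the thread.
    #include <cstddef>
    #include <cstdint>

    uint32_t crc32(const uint8_t* data, size_t len) {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; ++i) {
            crc ^= data[i];
            for (int b = 0; b < 8; ++b)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    bool payloadIntact(const uint8_t* payload, size_t len, uint32_t expected) {
        return crc32(payload, len) == expected;
    }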

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

Protocol prefixes exist for a couple of reasons (neither of which is to block a malicious attacker).

One is to easily filter out another program which happens to be squatting on your registered port (you are going to register your port, right?). People should not re-use IANA-registered ports, but they do.

The other is to allow you to do breaking revisions to your own protocol later, and have that simply filtered out.
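
In code, that filtering amounts to a check at the very top of the receive path; a sketch, with a made-up magic value and version:

    // Hypothetical protocol prefix: a magic number identifying the
    // protocol plus a version for breaking revisions. Anything that
    // doesn't match is dropped before further parsing. (Byte order is
    // ignored here for brevity.)
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    static const uint32_t kProtocolMagic   = 0x47414D45u;  // "GAME", made up
    static const uint16_t kProtocolVersion = 3;

    bool acceptDatagram(const uint8_t* data, size_t len) {
        if (len < 6) return false;  // too short to hold the prefix
        uint32_t magic;
        uint16_t version;
        std::memcpy(&magic, data, 4);
        std::memcpy(&version, data + 4, 2);
        return magic == kProtocolMagic && version == kProtocolVersion;
    }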

you are going to register your port, right?


You're kidding, right? If every custom protocol invented was registered, and ports were not re-used, we would have run out of ports in 1982. (Or earlier.)

And, given that you provide the server, and manage the server, how could another service be squatting on the same port? The only reason that could happen would be if some third-party user accidentally puts in the wrong hostname/port information in some other program. Or they did it maliciously -- this is known as a "fuzz attack."

It does make sense to include protocol version information in your preamble, though, so you can reject clients that are too old. This may not need to be part of every packet -- making it part of the initial credentials packet, and making successive packets just rely on successful authentication, might be good enough.
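
A sketch of that variant, where only the first packet carries the version (names hypothetical):

    // Hypothetical connect handshake: the initial hello carries the
    // protocol version and credentials; once authentication succeeds,
    // later packets rely on the established session instead of being
    // re-checked.
    #include <cstdint>

    struct HelloPacket {
        uint16_t protocolVersion;  // checked once, at connect time
        uint8_t  credentials[64];  // auth token, login, etc.
    };

    static const uint16_t kMinSupportedVersion = 3;

    bool acceptHello(const HelloPacket& hello) {
        return hello.protocolVersion >= kMinSupportedVersion;  // reject too-old clients
    }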

enum Bool { True, False, FileNotFound };

And, given that you provide the server, and manage the server, how could another service be squatting on the same port? The only reason that could happen would be if some third-party user accidentally puts in the wrong hostname/port information in some other program. Or they did it maliciously -- this is known as a "fuzz attack."

It is more of an issue for LAN play, where broadcast packets become problematic if multiple applications are using the same port. But I've seen lots of cases where companies (including large companies) decide to use a port that is already registered by someone else for some completely different purpose and just set up shop.

As to your first question, if you go look at the IANA list, you'll see that I registered a set of ports for all the Red Storm games back in the '90s.

Both the Q3 networking code and ENet have their own fragmentation code, fragmenting things below an arbitrary, guessed MTU you can set up. Isn't the advantage that if you do your own fragmentation, you can be fairly sure (given that you've taken care to select a good MTU) there won't be any unnecessary defragmentation-and-refragmentation happening anywhere except at the final networking layer? If each fragment is correctly tagged, it might also be possible to avoid wasting time waiting for the remaining fragments of an out-of-date message.

Maybe there are other reasons as well.

There are mainly two reasons why you would implement your own fragmentation layer:

1. You have no clue (rather unlikely for Q3)

2. You know that IP does fragmentation but want to avoid it

Why would you want to avoid it? There is, at least in theory, one good reason: IP discards the whole datagram if one of its fragments is lost. Assume you send a 4-kilobyte datagram with an MTU of 1280, i.e. 4 fragments. If you do your own fragmentation, those "fragments" are complete datagrams. If one is lost, you still get the other three. Relying on IP fragmentation means that if one is lost, you lose all four.
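
A sketch of what those self-contained fragments might carry (field sizes arbitrary):

    // Hypothetical application-level fragment header: each fragment is a
    // complete datagram, so losing one only costs that fragment, and the
    // message id lets the receiver discard leftovers of out-of-date
    // messages instead of waiting for them.
    #include <cstdint>

    struct FragmentHeader {
        uint16_t messageId;      // which large message this belongs to
        uint8_t  fragmentIndex;  // 0-based index of this fragment
        uint8_t  fragmentCount;  // total fragments in the message
    };

    // The 4-kilobyte example above, with an MTU budget of 1280 bytes,
    // becomes four datagrams tagged {id,0,4}, {id,1,4}, {id,2,4}, {id,3,4}.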

So much for the theory. In reality, you do not lose datagrams at all. Except when you lose them, and then you lose them in dozens, not just one.

Losing individual datagrams because of "noise" just doesn't happen nowadays (except maybe on a pathetic low-signal wireless link, but you wouldn't want to play a game in such a setup anyway). When you lose packets, it's because some router's queue is temporarily full and it discards every incoming packet until it gets a breather, maybe 0.1 seconds or so later. In that light, there is no visible difference between losing one fragment and losing all of them; the observable end result is the same either way.

then you lose them in dozens, not just one

Very true!

In fact, most networking hardware seems to have too much buffering, rather than too little, these days, which leads to all kinds of bad oscillation behavior. (Google "buffer bloat" for examples from the TCP world.)

you wouldn't want to play a game in such a setup anyway

You might not want to, but your users are quite likely to try. (And over a 3G mobile connection. And over satellite internet. And over tin cans with string, with a carrier-pigeon backup.)

Guess who they will blame when the game doesn't work? Unless you have a very clear meter for connection quality, one that very clearly shows how that quality impacts whether the game works, they will blame you.
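
A minimal sketch of such a meter, with made-up smoothing and scoring constants:

    // Hypothetical connection-quality meter: exponentially smoothed RTT
    // and loss rate, collapsed into a 0..100 score the UI can show, so
    // players can see when the link, not the game, is the problem.
    struct QualityMeter {
        double smoothedRttMs = 0.0;
        double smoothedLoss  = 0.0;  // 0..1

        void onSample(double rttMs, bool packetLost) {
            const double a = 0.1;    // smoothing factor, tune to taste
            smoothedRttMs = (1 - a) * smoothedRttMs + a * rttMs;
            smoothedLoss  = (1 - a) * smoothedLoss  + a * (packetLost ? 1.0 : 0.0);
        }

        int score() const {          // 100 = great, 0 = unplayable
            double s = 100.0 - smoothedRttMs / 5.0 - smoothedLoss * 200.0;
            return s < 0 ? 0 : s > 100 ? 100 : (int)s;
        }
    };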

enum Bool { True, False, FileNotFound };
