lerno

UDP network layer must-haves?



For metrics, you want various kinds of parameters for gross troubleshooting:

- customer class (if you have it)

- session state (not-established, negotiating, lobby, game, etc)

- size of packet

- number of payload messages per packet

- payload message types

- packet direction (to server or from server)

- number of dropped packets detected

- number of duplicate packets detected

- number of reordered packets detected

- measured round-trip latency

- number of malformed packets

 

In the best of worlds, you chuck all of this at some reporting infrastructure and generate an online cube where you can slice your traffic across arbitrary dimensions (each of the items above classifies packets along some dimension). This doesn't necessarily need to be real-time, because it's useful for finding things you didn't know about your game. Each individual dimension could separately be real-time, and the drill-down could be relegated to a baked cube, for example.

At that point, you can get reports like "number of packets that are re-ordered, containing the 'fire weapon' payload."
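
A concrete way to think about it: each packet that passes through the layer gets tagged with one record carrying all of those dimensions. A minimal sketch, where the names and types are illustrative and not taken from any particular engine:

```cpp
#include <cstdint>

// One record per observed packet; every field is a dimension the
// reporting cube can later slice on.
enum class SessionState : uint8_t { NotEstablished, Negotiating, Lobby, InGame };
enum class Direction    : uint8_t { ToServer, FromServer };

struct PacketMetrics {
    uint32_t     customerClass   = 0;   // if you have one
    SessionState sessionState    = SessionState::NotEstablished;
    Direction    direction       = Direction::ToServer;
    uint16_t     packetSize      = 0;   // bytes on the wire
    uint16_t     payloadMessages = 0;   // messages bundled in this packet
    uint16_t     payloadTypes[8] = {};  // message type ids seen in the packet
    // Per-connection counters, sampled at the time the packet is logged:
    uint32_t     droppedDetected   = 0;
    uint32_t     duplicateDetected = 0;
    uint32_t     reorderedDetected = 0;
    uint32_t     malformedDetected = 0;
    float        roundTripMs       = 0.0f; // latest measured round-trip latency
};
```

Shipping one such record per packet (or a sampled subset) to the reporting pipeline is what makes queries like the re-ordered "fire weapon" example possible.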

 

Now, there's a second level of diagnosis, where you can get this data sliced, in real-time, based on specific identifiers. Specific IP. Specific game instance. Specific player. Random sampling. Etc. Being able to have a customer on the line and turn on tracing of that particular customer's traffic is super helpful while debugging.
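
One possible shape for that kind of switchable tracing, assuming hypothetical identifiers such as a numeric player id and an IPv4 address:

```cpp
#include <cstdint>
#include <random>
#include <unordered_set>

// Decides, per packet, whether a full trace should be emitted. Support
// tooling can add or remove ids at runtime while a customer is on the line.
class TraceFilter {
public:
    void tracePlayer(uint64_t playerId) { players_.insert(playerId); }
    void traceIp(uint32_t ip)           { ips_.insert(ip); }
    void setSampleRate(double rate)     { sampleRate_ = rate; } // e.g. 0.001 = 0.1%

    bool shouldTrace(uint64_t playerId, uint32_t ip) {
        if (players_.count(playerId) || ips_.count(ip)) return true;
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        return dist(rng_) < sampleRate_;  // random sampling
    }

private:
    std::unordered_set<uint64_t> players_;
    std::unordered_set<uint32_t> ips_;
    double sampleRate_ = 0.0;
    std::mt19937 rng_{std::random_device{}()};
};
```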

 

Another useful feature is the ability to capture and play back traffic, both for offline analysis and for reproducing actual system behavior. If you can capture all packets that come in since server start, then you can reproduce any server state offline for debugging!
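
A capture log only needs the raw inbound bytes plus enough metadata to replay them in order. Something along these lines (illustrative, and it assumes the rest of the server is deterministic given the same inputs and seeds):

```cpp
#include <cstdint>
#include <vector>

// One entry per received datagram, appended to a log from server start.
struct CapturedPacket {
    uint64_t             receivedAtMicros;  // monotonic time since server start
    uint32_t             sourceIp;
    uint16_t             sourcePort;
    std::vector<uint8_t> payload;           // raw UDP payload bytes
};

// Offline, feed the entries back through the normal receive path in
// timestamp order to reproduce the server state at any point in time.
```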


Ideally, such a system should also be able to simulate various network conditions and failures. Examples include:

  • Being able to simulate various forms of latency (client to server, server to client, bidirectional)
  • Dropped packets
  • Partial or corrupted packets (it can happen, even with TCP; the CRC only has so many bits)

Obviously, all these things should work with the metrics that are gathered, to allow you to diagnose and mitigate any issues found.
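
A rough sketch of such a simulated link, wrapped around the real send path. The parameters and structure are illustrative; a real tool would make them configurable per direction and per connection:

```cpp
#include <cstdint>
#include <queue>
#include <random>
#include <vector>

struct DelayedPacket {
    uint64_t             deliverAtMicros;
    std::vector<uint8_t> bytes;
};

struct DeliverLater {
    bool operator()(const DelayedPacket& a, const DelayedPacket& b) const {
        return a.deliverAtMicros > b.deliverAtMicros;  // min-heap on delivery time
    }
};

class SimulatedLink {
public:
    double   dropRate      = 0.02;    // 2% simulated loss
    double   corruptRate   = 0.001;   // occasional single-bit corruption
    uint64_t latencyMicros = 80'000;  // 80 ms one-way latency
    uint64_t jitterMicros  = 20'000;  // up to 20 ms extra, can reorder packets

    void send(uint64_t nowMicros, std::vector<uint8_t> bytes) {
        std::uniform_real_distribution<double> chance(0.0, 1.0);
        if (chance(rng_) < dropRate) return;                             // drop
        if (chance(rng_) < corruptRate && !bytes.empty())
            bytes[rng_() % bytes.size()] ^= uint8_t(1u << (rng_() % 8)); // corrupt
        std::uniform_int_distribution<uint64_t> jitter(0, jitterMicros);
        pending_.push({nowMicros + latencyMicros + jitter(rng_), std::move(bytes)});
    }

    // Call every tick; returns packets whose simulated delay has elapsed.
    std::vector<std::vector<uint8_t>> poll(uint64_t nowMicros) {
        std::vector<std::vector<uint8_t>> ready;
        while (!pending_.empty() && pending_.top().deliverAtMicros <= nowMicros) {
            ready.push_back(pending_.top().bytes);
            pending_.pop();
        }
        return ready;
    }

private:
    std::mt19937 rng_{12345};  // fixed seed so test runs are reproducible
    std::priority_queue<DelayedPacket, std::vector<DelayedPacket>, DeliverLater> pending_;
};
```

Running traffic through something like this and then checking the gathered metrics lets you verify that the dropped, duplicate, reordered, and malformed counters actually report what you injected.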


it can happen, even with TCP; the CRC only has so many bits


Although, luckily, pretty much all physical transport layers have their own detection/correction codes that significantly improve the robustness. TCP over a physical link with a bit error rate of 1 in 10,000 would be terrible. Luckily, typical links have bit error rates much, much lower than that.


it can happen, even with TCP; the CRC only has so many bits


Although, luckily, pretty much all physical transport layers have their own detection/correction codes that significantly improve the robustness. TCP over a physical link with a bit error rate of 1 in 10,000 would be terrible. Luckily, typical links have bit error rates much, much lower than that.


Yes, and no...

Older data link layer protocols have quite significant error-checking capabilities, being from a time when data lines were usually noisy and not very good. However, newer data link layer protocols, due to the increased quality of lines and equipment, have significantly reduced error correction, preferring to defer that to higher-layer protocols. They haven't completely eliminated it; you usually still have fairly simple error checking (like parity bits). But yes, there is error checking on many different levels. Nevertheless, it is something you should be able to test on your platform, to ensure that your code handles it correctly.


Protocol prefixes exist for a couple of reasons (neither of which is to block a malicious attacker).

 

One is to easily filter out another program that happens to be squatting on your registered port (you are going to register your port, right?). People should not re-use IANA-registered ports, but they do.

 

The other is to allow you to do breaking revisions to your own protocol later, and have that simply filtered out.
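
In packet terms that usually means a fixed preamble at a known offset. A minimal sketch, with a made-up magic value and version number:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

constexpr uint32_t kProtocolMagic   = 0x554C4E31; // arbitrary made-up constant
constexpr uint16_t kProtocolVersion = 3;          // bump on breaking protocol changes

struct PacketPreamble {
    uint32_t magic;    // filters out strangers squatting on the port
    uint16_t version;  // filters out clients speaking an older revision
};

// Reject anything that doesn't start with our magic and a supported version.
// (Real code would also pick a fixed wire byte order before comparing.)
bool acceptDatagram(const uint8_t* data, std::size_t size) {
    if (size < sizeof(PacketPreamble)) return false;
    PacketPreamble p;
    std::memcpy(&p, data, sizeof p);
    return p.magic == kProtocolMagic && p.version == kProtocolVersion;
}
```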


you are going to register your port, right?


You're kidding, right? If every custom protocol invented were registered, and ports never re-used, we would have run out of ports in 1982. (Or earlier.)

And, given that you provide the server, and manage the server, how could another service be squatting on the same port? The only reason that could happen would be if some third-party user accidentally puts in the wrong hostname/port information in some other program. Or they did it maliciously -- this is known as a "fuzz attack."

It does make sense to include protocol version information in your preamble, though, so you can reject clients that are too old. This may not need to be part of every packet -- making it part of the initial credentials packet, and making successive packets just rely on successful authentication, might be good enough.



And, given that you provide the server, and manage the server, how could another service be squatting on the same port? The only reason that could happen would be if some third-party user accidentally puts in the wrong hostname/port information in some other program. Or they did it maliciously -- this is known as a "fuzz DDoS attack."

 

It is more of an issue for LAN play, where broadcast packets become problematic if multiple applications are using the same port. But I've seen lots of cases where companies (including large companies) decide to use a port that is already registered by someone else for some completely different purpose and just set up shop.

 

As to your first question, if you go look at the IANA list, you'll see that I registered a set of ports for all the Red Storm games back in the '90s.

Both the Q3 networking code and Enet have their own fragmentation code, fragmenting packets below an arbitrary, guessed MTU that you can configure. Isn't the advantage that, if you do your own fragmentation, you can be fairly sure (given that you've taken care to select a good MTU) there won't be any unnecessary defragmentation and refragmentation happening anywhere except at the final networking layer? And if each fragment is correctly tagged, it might be possible to avoid wasting time waiting for the remaining fragments of an out-of-date message.

 

Maybe there are other reasons as well.

 

There are mainly two reasons why you would implement your own fragmentation layer:

1. You have no clue (rather unlikely for Q3)

2. You know that IP does fragmentation but want to avoid it

 

Why would you want to avoid it? There is, at least in theory, one good reason: IP discards the whole datagram if any one of its fragments is lost. Assume you send a 4-kilobyte datagram with an MTU of 1280, i.e. four fragments. If you do your own fragmentation, those "fragments" are complete datagrams; if one is lost, you still get the other three. Relying on IP fragmentation means that if one is lost, you lose all four.
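
A sketch of what "doing your own fragmentation" can look like, where each piece is a self-contained datagram tagged with a message id, its index, and the total count. The header layout and the 1200-byte limit are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

constexpr std::size_t kMaxFragmentPayload = 1200; // conservative, below typical path MTU

struct FragmentHeader {
    uint32_t messageId;      // identifies the original message
    uint16_t fragmentIndex;  // which piece this datagram carries
    uint16_t fragmentCount;  // total pieces in the message
};

// Splits one message into datagrams that each fit under the chosen limit.
std::vector<std::vector<uint8_t>> fragmentMessage(uint32_t messageId,
                                                  const std::vector<uint8_t>& message) {
    const std::size_t count =
        (message.size() + kMaxFragmentPayload - 1) / kMaxFragmentPayload;
    std::vector<std::vector<uint8_t>> datagrams;
    for (std::size_t i = 0; i < count; ++i) {
        const std::size_t begin = i * kMaxFragmentPayload;
        const std::size_t len   = std::min(kMaxFragmentPayload, message.size() - begin);
        FragmentHeader header{messageId,
                              static_cast<uint16_t>(i),
                              static_cast<uint16_t>(count)};
        std::vector<uint8_t> datagram(sizeof header + len);
        std::memcpy(datagram.data(), &header, sizeof header);
        std::memcpy(datagram.data() + sizeof header, message.data() + begin, len);
        datagrams.push_back(std::move(datagram));
    }
    return datagrams;
}
```

The receiver reassembles by (messageId, fragmentIndex), and because it knows which message every fragment belongs to, it can throw away partial messages that are already out of date instead of waiting for the rest.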

 

So much for the theory. In reality, you do not lose datagrams at all. Except when you lose them, and then you lose them in dozens, not just one.

 

Losing individual datagrams because of "noise" just doesn't happen nowadays (except maybe on a pathetic low-signal wireless link, but you wouldn't want to play a game in such a setup anyway). When you lose packets, it's because some router's queue is temporarily full and it discards every incoming packet until it gets a breather, maybe 0.1 seconds or so later. In that respect, there is no visible difference between losing one fragment and losing all of them; the observable end result is the same either way.

then you lose them in dozens, not just one

 

Very true!

 

In fact, most networking hardware seems to have too much buffering, rather than too little, these days, which leads to all kinds of bad oscillation behavior. (Google "buffer bloat" for examples from the TCP world.)

 

 

 

you wouldn't want to play a game in such a setup anyway

 

You might not want to, but your users are quite likely to try. (And over a 3G mobile connection. And over satellite internet. And over tin-cans-with-string using a carrier pigeon back-up.)

Guess who they will blame when the game doesn't work? Unless you have a very clear meter for connection quality, and very clearly show how that quality affects whether the game works, they will blame you.

