Gumgo

Deciding when packets are dropped

12 posts in this topic

I've implemented a system on top of UDP similar to the one described [url="http://gafferongames.com/networking-for-game-programmers/"]here[/url]. I have the system working well, and now I'm trying to add some "higher level functionality" on top of it to make it easier to do things like send guaranteed messages. I have a few questions about how dropped packets are handled. (Sorry they're a bit lengthy, but network code has lots of subtle issues that can be hard to explain!)

[b]1) [/b]First of all, what is the best approach to detect likely dropped packets? The article's suggestion is basically the following: if a packet can't be ACKed anymore (i.e. the ACK bitfield window has moved past that packet), it's probably been dropped. So for example, suppose packets are being sent at a rate of 30 per second and the ACK bitfield contains 30 ACKs. If you send packet A at time t = 1, it will take about 1 second to receive ACKs for the last 30 packets. If packet A is never ACKed in this 1 second, it can NEVER be ACKed (the ACK bitfield window no longer contains packet A), so A should be considered dropped.

But this seems weird to me. If you were sending packets at a rate of 10 per second and the ACK bitfield was still 30 long, it would take 3 seconds for packet A to fall off the ACK bitfield. But 3 seconds seems too long to wait for detecting dropped packets. Or if you were sending packets at a rate of 60 per second and the ACK bitfield was 30 long, packet A would only be on the bitfield for 0.5 seconds.

The article doesn't mention using RTT to determine dropped packets, but that seems like a better indicator to me. Obviously if a packet is no longer in the ACK bitfield window it should be considered dropped (since it CANNOT be ACKed), but maybe an additional criterion of, say, a maximum ACK wait of RTT*2 should be included as well? How is this usually done?
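To make that hybrid criterion concrete, here is a minimal Python sketch (all names here are made up for illustration, not from the article): a packet counts as dropped either when it has fallen off the ACK bitfield window, or when it has been outstanding longer than some multiple of the measured RTT, with a floor so a tiny RTT doesn't cause spurious drops.

```python
class SentPacket:
    def __init__(self, seq, sent_at):
        self.seq = seq          # sequence number of this outgoing packet
        self.sent_at = sent_at  # timestamp when it was sent

def is_probably_dropped(packet, newest_acked_seq, ack_window, rtt, now,
                        rtt_factor=2.0, min_timeout=0.1):
    """Combine the two criteria discussed above:
    1) the packet has fallen off the ACK bitfield window, so it can
       never be ACKed, or
    2) it has been outstanding longer than rtt_factor * RTT (floored
       at min_timeout so jitter or a briefly tiny RTT doesn't flag
       in-flight packets prematurely)."""
    fell_off_window = newest_acked_seq - packet.seq >= ack_window
    timed_out = (now - packet.sent_at) > max(rtt * rtt_factor, min_timeout)
    return fell_off_window or timed_out
```

The floor on the timeout is one way to soften the problem of a sudden RTT spike causing a burst of false drops.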

[b]2) [/b]When you detect a packet that's likely dropped, how is that usually treated? I can see two options here:
a) Decide that the packet IS dropped and ignore it, even if it comes in later
b) If the packet comes in later, still process it
Obviously, option a) could be more wasteful, since once you've decided that a packet is dropped, you can't ever "change your mind" upon receiving an ACK. However, the reason I'm considering it is that it simplifies things a lot.

For example, suppose you send guaranteed message M in packet A, and after RTT*2 (or whatever your criterion is), you haven't received an ACK and decide it's probably dropped. You then resend M in packet B. If you go with option a), you can forget all about packet A and you only need to wait for an ACK from packet B. The downside is that if you end up receiving an ACK for packet A after you've determined it's been dropped, you can't actually use this ACK. A potential issue I see with this is that if the RTT suddenly jumps up a lot, you might go through a brief period where you decide a bunch of packets are dropped when in reality they're just a bit more delayed. This would occur until the dropped packet cutoff time is adjusted to take the new RTT into account. I'm also not sure how this would work with network jitter - if there was high jitter then I guess more packets would be considered dropped.

But if you go with option b), it's still [i]possible[/i] that you could receive an ACK for packet A! So the issue described above wouldn't occur, and you'd have the ACK information available sooner than if you waited for packet B's ACK. But in order to respond in this case, you'd need to keep some information for packet A around; i.e. you'll need to store "message M is pending in both packet A and packet B"; for each guaranteed message M, you need to store a [i]list[/i] of packets it's been sent in, rather than just a single packet (the most recent one). You don't want this list to grow forever though, so you need to eventually "forget" about old packets, and deciding when to do this isn't exactly clear.

Right now what I'm doing is keeping track of each packet containing guaranteed messages until all guaranteed messages in that packet have been ACKed. So for example, message M has a reference to packets A and B, and packets A and B have references to message M. If B is successfully ACKed, I notice that B contains M, and so I remove M from both A and B; now I notice that A is empty, so I free A. But this just seems... "messy" and somewhat unbounded. If there are unexpected network issues and I end up resending message M several times, I'd suddenly be keeping track of a lot of old packets which probably didn't get through. And the worst case is if a (virtual) connection is dropped - if I have a 10 second timeout and a bunch of packets containing M never get through, I'd have to keep track of 10 seconds' worth of resent packets until the connection timeout is detected!

Does anyone have a suggestion on this issue? To summarize: when a packet is likely dropped, should I a) decide that it IS dropped, or b) still allow it to change its status to "delivered" if it's ACKed at a later time?
Even though you are using UDP, and packets aren't going to reliably arrive in order, you can do a simple check comparing packet IDs.

Say, for instance, that you just received packet #10 and the next one you receive is #12. Either #11 was dropped, or it is just a little late. If the next packet is #13, the chances that #11 was dropped increase dramatically, and they will continue to do so with each successive packet received that isn't #11. Basically, you would be using an "out-of-order tolerance" of, say, 3-5 packets, instead of a time-based packet-loss criterion that relies on the ACKing setup that's in place.
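A rough Python sketch of that out-of-order-tolerance idea (hypothetical names, simplified to non-wrapping sequence numbers): track the set of sequence numbers that were skipped over, and declare a suspect dropped once it trails the newest received packet by more than the tolerance.

```python
def missing_considered_dropped(suspects, newest_seq, tolerance=4):
    """suspects: set of sequence numbers that were skipped but not yet
    received. Any suspect more than `tolerance` packets behind the newest
    received packet is declared dropped; the rest stay suspects."""
    dropped = {s for s in suspects if newest_seq - s > tolerance}
    return dropped, suspects - dropped
```

With the example above: after receiving #13 with #11 still missing, #11 is only 2 behind and stays a suspect; once #16 arrives, #11 is 5 behind and gets declared dropped.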
I use a much simpler mechanism.

Each packet has a sequence number. This is just a byte that increments and wraps over.
If I receive a new packet in the range [last received+1 , last received + 127] inclusive then I treat that as a new packet, and treat every packet between the previously last received and new last received as dropped.
If I receive a packet in the range [last received - 128, last received] inclusive, then I treat it as a late/duplicate packet, and simply drop it on the floor.
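That half-range split can be written as a single modular comparison. A small Python sketch (hypothetical names; one-byte sequence numbers as described):

```python
SEQ_MOD = 256  # sequence numbers are one byte and wrap around

def is_newer(incoming, last_received):
    """True when `incoming` lies in [last+1, last+127] modulo 256,
    i.e. it is ahead of `last_received` by at most half the sequence
    space; anything in the other half-range is late or a duplicate."""
    return 0 < (incoming - last_received) % SEQ_MOD < 128

def on_packet(incoming, last_received):
    """Apply the two rules above: accept a newer packet and report every
    skipped sequence number as dropped; discard late/duplicate packets."""
    if is_newer(incoming, last_received):
        gap = (incoming - last_received) % SEQ_MOD
        lost = [(last_received + i) % SEQ_MOD for i in range(1, gap)]
        return incoming, lost       # new last_received, dropped in between
    return last_received, None      # late/duplicate: drop it on the floor
```

Note the comparison works across the wrap: packet #3 arriving after #255 is correctly treated as newer, with #0, #1, #2 marked dropped.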

On the other end, I ack all packets I receive using a simple bit packing scheme. If I send some number of packets (say, 60) without getting a single ack back, I treat the remote end as dead, and shut down. I also always send packets at a fixed rate -- say, 5 per second when there's nothing to send (other than acks,) and 20 per second while the game is playing.
Thanks for the replies. From your posts, it sounds like UDP "isn't as bad" as I had expected. I was under the impression that packets are out of order and delayed fairly regularly. But hplus0603, if you're simply dropping packets any time they come late and it hasn't been an issue, then that probably doesn't happen too often. I know the answer is probably dependent on a lot of things, but about how often do issues such as out of order/dropped packets occur with UDP under a regular network load?
[quote name='Gumgo' timestamp='1355359508' post='5010027']
Thanks for the replies. From your posts, it sounds like UDP "isn't as bad" as I had expected. I was under the impression that packets are out of order and delayed fairly regularly. But hplus0603, if you're simply dropping packets any time they come late and it hasn't been an issue, then that probably doesn't happen too often. I know the answer is probably dependent on a lot of things, but about how often do issues such as out of order/dropped packets occur with UDP under a regular network load?
[/quote]

At the bottom line, it all comes down to good and bad connections. Like UDP, TCP is based on packets going from A to B, which means that if the packets don't get there, both UDP and TCP will run poorly. The only difference is that TCP tells you if packets don't arrive at all.

I did a small, very unscientific test from my laptop on the [s]shitty[/s] less than well implemented wireless at my university in Denmark to my old virtual server in Germany. I sent 100 packets, each with four bytes. From me to my server, all packets arrived perfectly in order every time (I think I tested 5-8 times), while on the way back there were always one or two that came in out of order. None of the packets were lost in any of the tests. While this is (as said) not scientific at all, it still gives a good indication that on a reasonably good connection UDP works just fine :)

I would recommend hplus0603's way of doing it :)
I also got some help on roughly the same topic in this thread: http://www.gamedev.net/topic/624187-mixing-reliable-and-unreliable-messages/. Maybe you can pick up some useful tips from it as well.
Some circuits let you send a billion UDP packets without a single loss. Others will lose up to 20% of your packets.

The point is: with UDP, you want to quickly and proactively declare failure when there is no success. Late data is as bad as no data at all. If that weren't the case, you would be using TCP.

The upper level of the UDP system will have to deal with re-transmitting reliable messages where needed, and doing whatever it does for non-reliable (typically, "last state") messages like user input. And, yes, this may mean that a user on a very poor (typically wireless, like mobile or microwave link) connection will get more server corrections than a user with a nice wired connection. Life is not fair :-)
Last time I implemented a reliability layer, I included in each outgoing packet the "largest" sequence number received and a bit pattern representing Acks and NAcks for the X preceding packets (this redundantly Acks/NAcks incoming packets, since these outgoing packets could get dropped as well).

Just like in other posters' schemes, as soon as a NAck was received, the packet was considered dropped and was resent.

Time-outs were only used to declare connections broken.


It should be noted that this approach is intended for low-bandwidth data and quick response times. If you were making a scheme for downloading large amounts of data, you would want to take a different approach that is more careful not to re-send prematurely.
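A minimal Python sketch of that ack/NAck bitfield (hypothetical names; a 32-packet window and non-wrapping sequence numbers for brevity): each outgoing packet carries the largest sequence number received plus one bit per preceding packet, and the sender reads NAcks straight out of the bitmask.

```python
def build_ack_field(received, newest, window=32):
    """Pack acks for the `window` packets preceding `newest` into a
    bitmask: bit i set means packet (newest - 1 - i) arrived."""
    bits = 0
    for i in range(window):
        if (newest - 1 - i) in received:
            bits |= 1 << i
    return newest, bits

def read_ack_field(newest, bits, window=32):
    """On the sending side, split the bitfield into Acks and NAcks.
    A NAck means: re-schedule that packet's reliable messages."""
    acked, nacked = {newest}, []
    for i in range(window):
        seq = newest - 1 - i
        if seq < 0:
            break
        if bits & (1 << i):
            acked.add(seq)
        else:
            nacked.append(seq)
    return acked, nacked
```

Because the field rides on every outgoing packet, each ack is sent redundantly many times, which is what makes "a NAck means resend now" safe even when some of the ack-carrying packets are themselves lost.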
[quote] I included in each outgoing packet the "largest" sequence number received and a bit pattern representing Acks and NAcks for the X preceding packets [/quote]

Ditto!

[quote]the packet was considered dropped and was resent[/quote]

A single UDP packet generally contains a number of messages, some of which are reliable, and some of which are not. The ideal implementation will just re-schedule the reliable messages that were dropped for the next outgoing packet.
[quote name='hplus0603' timestamp='1355436303' post='5010353']A single UDP packet generally contains a number of messages, some of which are reliable, and some of which are not. The ideal implementation will just re-schedule the reliable messages that were dropped for the next outgoing packet.[/quote]

In this case we would need some further reliability infrastructure at the "message" level. That is, we would need to buffer out-of-order messages until dropped messages arrive, and we would need to reject message duplicates in the instance that the original "dropped" packet eventually makes its way across the wire.
[quote name='VildNinja' timestamp='1355360969' post='5010033']
I did a small, very unscientific test from my laptop on the [s]shitty[/s] less than well implemented wireless at my university in Denmark to my old virtual server in Germany. I sent 100 packets, each with four bytes. From me to my server, all packets arrived perfectly in order every time (I think I tested 5-8 times), while on the way back there were always one or two that came in out of order. None of the packets were lost in any of the tests. While this is (as said) not scientific at all, it still gives a good indication that on a reasonably good connection UDP works just fine :)[/quote]
Anecdotally, in our networks course in university, our professor had us write a UDP-based file transfer utility, and test it over a variety of networks. Even streaming between cable connections on opposite coasts, I had a hard time achieving 2% packet loss, unless I was overloading the local wifi/router.

TL;DR - by and large, packet loss is a lot less of a problem than it once was.
[quote name='VReality' timestamp='1355437355' post='5010359']In this case we would need some further reliability infrastructure at the "message" level. That is, we would need to buffer out-of-order messages until dropped messages arrive, and we would need to reject message duplicates in the instance that the original "dropped" packet eventually makes its way across the wire.
[/quote]

Yes, absolutely!

This is why there are typically several classes of delivery guarantee across different logical channels over a physical UDP connection:

- maybe delivered, maybe duplicated, maybe out of order -- implementation is to just deliver as soon as a packet comes in (this is least useful)
- maybe delivered, not duplicated, not out of order -- delivered assuming the incoming packet is later than the previous incoming packet (this is most useful)
- guaranteed delivery, not duplicated, maybe out of order -- each guaranteed message is delivered when the packet shows up, but de-duplicated if there's re-sends (lost acks)
- guaranteed delivery, not duplicated, not out of order -- this is similar to TCP, and requires the most buffering and state management

Only the last reliability type requires buffering actual messages on the receiving end; the others can make delivery decisions based only on sequence numbers (and, for the case of reliable, per-channel sequence numbers.)
Also, with reliable channels, you typically keep the reliability information on a per-message-channel basis, rather than a per-packet-stream basis.
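For the second class above (maybe delivered, not duplicated, not out of order), the receive-side logic is just a per-channel sequence comparison. A tiny Python sketch with hypothetical names (non-wrapping sequence numbers for brevity):

```python
class Channel:
    """One logical channel over the UDP connection. A message is
    delivered only if its channel sequence number is newer than the
    last one delivered; late and duplicate messages are discarded."""
    def __init__(self):
        self.last_delivered = -1

    def deliver(self, seq, message, sink):
        if seq > self.last_delivered:
            self.last_delivered = seq
            sink.append(message)   # hand the message to the game
            return True
        return False               # duplicate or out of order: drop
```

As the post says, no message buffering is needed for this class: a single integer per channel makes all the delivery decisions.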
Thanks, this information has been very helpful! I ended up going with hplus0603's method and just dropping packets which come out of order, and resending the moment I detect the missing ACK - greatly simplifies things.

On the topic of reliability/ordering of messages, I had a thought today. This would probably have way too much overhead/be impractical, but have any games implemented a dependency graph for message ordering, rather than simply an ordered stream? An approach might work somewhat as follows:
- Each message would have an ID (probably 2 bytes, maybe with some upper bits being used as flags)
- Each message would be sent with a list of IDs it depends on (this would be the overhead-y part!)
- When messages are received, their dependency graph is evaluated, and they are only executed after all dependencies have been executed
- When the sender determines that all messages that a message is dependent on have been processed, the dependencies are removed to reduce dependency list length. To do this, examine incoming ACKs and figure out which messages must have been executed.

Example of why the last point is important: suppose you spawn an enemy and then much later it gets killed. The "enemy killed" message should be dependent on the "enemy spawned" message, but in order for that to work the receiver needs to keep track of the "enemy spawned" message for a long time! But if you can confirm using ACKs that the "enemy spawned" message was already executed, you can just forget about it. And if you hang onto dependencies for a long time, you'll have issues when the 2-byte message IDs wrap around.

This is just the more general extension of having multiple message channels each with independent ordering. Of course, the cost is probably extremely high. Anyone done something like this before?
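For what it's worth, the receive side of that idea can be sketched in a few lines of Python (a hypothetical design, purely illustrative, not from any shipped game): buffer each incoming message with its dependency set, and execute a message only once all of its dependencies have executed.

```python
class DependencyDelivery:
    """Receive-side sketch of dependency-ordered message delivery:
    each message carries the IDs of the messages it depends on."""
    def __init__(self):
        self.executed = set()   # IDs of messages already executed
        self.pending = {}       # msg_id -> (dependency set, payload)

    def receive(self, msg_id, deps, payload, run):
        if msg_id in self.executed:
            return              # duplicate delivery: ignore
        self.pending[msg_id] = (set(deps), payload)
        self._drain(run)

    def _drain(self, run):
        # Repeatedly execute any pending message whose dependencies
        # have all executed, until no further progress is possible.
        progressed = True
        while progressed:
            progressed = False
            for mid, (deps, payload) in list(self.pending.items()):
                if deps <= self.executed:
                    run(payload)
                    self.executed.add(mid)
                    del self.pending[mid]
                    progressed = True
```

So in the enemy example, an "enemy killed" message arriving before "enemy spawned" sits in `pending` until the spawn executes. The hard part, as noted above, is pruning `executed` and the dependency lists so neither grows without bound.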

That's an interesting idea, and I have not seen it done before. I wonder how different in practice it would be compared to a per-channel ordering dependency.

Implement it and ship a large game and then let us know how it turns out :-)
