
Different methods of ACKing packets?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.


#1 fholm   Members   -  Reputation: 262


Posted 03 August 2013 - 11:37 AM

I spent some time thinking about different ways of ACKing packets and have come up with three possible solutions, each with its own pros and cons. I'm looking for some feedback and recommendations on picking an ACKing strategy.

 

First, let's lay out some ground rules. Each packet is prefixed by a 16-bit sequence number which wraps around from 65535 to 0. Each peer has a local "window" which represents all the packets currently in transit from itself to a remote which have not been ACKed or NACKed. Each peer also has a local append-only list of received sequence numbers from their remote. Packets that are "late" are dropped by the networking layer.
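Because the sequence numbers wrap at 65535, ordering checks can't use a plain `>` comparison. A minimal sketch of a wraparound-aware comparison (the helper name and the half-window threshold of 32768 are my own, not from the post):

```cpp
#include <cassert>
#include <cstdint>

// Returns true if sequence number a is "newer" than b, accounting for
// wraparound at 65535 -> 0. A forward gap of at most half the sequence
// space (32768) is treated as newer; anything larger is assumed to have
// wrapped the other way.
bool seq_newer(uint16_t a, uint16_t b)
{
    return ((a > b) && (a - b <= 32768)) ||
           ((a < b) && (b - a >  32768));
}
```

This is the usual trick for deciding whether an arriving packet is "late" relative to the window: 0 counts as newer than 65535, but 65535 does not count as newer than 0.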

 

Option A: 

The first option is the most obvious one: when you receive a packet, you look at its sequence number and send a response back to the remote peer which says "I received this sequence number". Lost packets are inferred by the receiver by looking at the distance between the previously received sequence number and the current one. NACKs are sent as the same type of message: "I did not receive this sequence number".

 

Pros: 

  • Allows arbitrarily large send windows 
  • In the optimal case (zero or very little packet loss) it uses the least bandwidth

Cons:

  • If an ACK message gets lost it needs to be resent, and the side that sent the original packet (whose ACK got lost) needs to be able to tell the remote that "an ACK/NACK got lost for this sequence number".
  • During heavy packet loss it can take a long time to get an ACK/NACK for a packet.
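A per-packet ACK/NACK emitter along the lines of Option A might look like this sketch (all names are hypothetical illustrations, not the poster's code): on each arrival, the receiver ACKs the new sequence number and NACKs everything skipped since the previous one.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One outgoing ACK/NACK message: "I did (not) receive this sequence number".
struct AckMessage { uint16_t seq; bool received; };

// Called for each arriving packet; lastSeq is the previously received
// sequence number, seq is the one that just arrived. Emits one NACK per
// sequence number in the gap, then an ACK for seq. uint16_t arithmetic
// makes the loop wraparound-safe.
std::vector<AckMessage> on_packet(uint16_t& lastSeq, uint16_t seq)
{
    std::vector<AckMessage> out;
    for (uint16_t s = static_cast<uint16_t>(lastSeq + 1); s != seq; ++s)
        out.push_back({s, false});   // NACK the gap
    out.push_back({seq, true});      // ACK what we actually got
    lastSeq = seq;
    return out;
}
```

Note this sketch does not cover the con above: if one of these messages is itself lost, some extra "I never heard back about sequence X" mechanism is still needed.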

Option B:

The second option is similar to Option A, but instead of sending each ACK/NACK individually, the receiving end keeps a history of all received/lost sequence numbers up to the size of the send window. In the header of every packet that goes out from a peer to its remote, that peer's entire receive history is written: first the "last received" sequence number as a full 16 bits, then one bit per packet in the receive window, 0 or 1 depending on whether that packet was lost or received.

 

Pros:

  • Easy to implement
  • No need to send messages saying "I have not gotten an ACK for this sequence number" as the entire receive history of the peer is in the packet header for every packet sent.
  • Very robust, no risk of "loss" spirals which cause several messages to go back and forth just to get an ACK/NACK for a packet, etc.

Cons:

  • Restricts the size of the in-transit packet window, as it uses 1 bit for each packet in the window plus 16 bits for the latest sequence number; for a 128-packet window that is 128 + 16 = 144 bits in the header of every packet. Could still work for fast-paced games which are not built to handle a lot of packet loss (say FPSes, etc.)
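As a sketch of the Option B header, here is one possible encoding (the exact byte order and bit order are my own choices, not from the post): 16 bits of "last received" sequence number followed by one bit per window entry, oldest first.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Encode an Option B-style ack header: 2 bytes of last-received sequence
// number (big-endian), then one bit per packet in the receive window
// (1 = received, 0 = lost), packed MSB-first with zero padding at the end.
std::vector<uint8_t> encode_ack_header(uint16_t lastSeq,
                                       const std::vector<bool>& window)
{
    std::vector<uint8_t> out;
    out.push_back(static_cast<uint8_t>(lastSeq >> 8));
    out.push_back(static_cast<uint8_t>(lastSeq & 0xFF));

    uint8_t cur = 0;
    int bits = 0;
    for (bool received : window) {
        cur = static_cast<uint8_t>((cur << 1) | (received ? 1 : 0));
        if (++bits == 8) { out.push_back(cur); cur = 0; bits = 0; }
    }
    if (bits > 0)                               // pad the final partial byte
        out.push_back(static_cast<uint8_t>(cur << (8 - bits)));
    return out;
}
```

For a 128-packet window this produces 18 bytes, matching the 144-bit figure in the con above.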

Option C:

The third option is a combination of A and B: in each packet header you write the same type of structure as in Option B, a sequence number followed by a bitmask of N bits representing lost/received packets. But instead of sending the entire receive history up to the size of the send window as in Option B, you send only the most recent part of it (say, the last 16 packets or something).

 

Pros:

  • Allows arbitrarily large send windows 
  • Has some redundancy on ACK/NACKs compared to option A, makes it a bit more durable

Cons:

  • More complicated to implement than B, as it still needs to be able to handle the "I have not received an ACK/NACK for this packet" case under heavy packet loss.
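The sender-side logic for Option C could be sketched like this (names and the three-state classification are my own): given the latest "last received + 16-bit mask" header from the remote, each in-flight packet is either ACKed, NACKed, or has fallen outside the window entirely, which is exactly the case that needs the extra handling mentioned above.

```cpp
#include <cassert>
#include <cstdint>

enum class AckState { Acked, Nacked, Unknown };

// Classify one of our sent sequence numbers against an Option C-style
// header: lastRecvSeq is the remote's most recently received sequence
// number, and bit (i) of mask16 says whether lastRecvSeq - 1 - i arrived.
// Anything older than the 16-packet mask (or newer than lastRecvSeq)
// is Unknown and must be resolved some other way.
AckState classify(uint16_t sentSeq, uint16_t lastRecvSeq, uint16_t mask16)
{
    if (sentSeq == lastRecvSeq)
        return AckState::Acked;
    uint16_t age = static_cast<uint16_t>(lastRecvSeq - sentSeq); // wraps safely
    if (age == 0 || age > 16)
        return AckState::Unknown;   // fell outside the mask window
    return ((mask16 >> (age - 1)) & 1) ? AckState::Acked : AckState::Nacked;
}
```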

 

Conclusion:
I am basically looking for input on what makes a good ACK/NACK strategy: what you guys have used in the past, what you know works, etc.
 

 




#2 PeterStock   Members   -  Reputation: 375


Posted 03 August 2013 - 01:51 PM

Assuming you're using UDP (otherwise you wouldn't need to ack anything), packets might arrive out of order. If you're doing Quake 3 style networking, the server sends *everything* since the last received ack (which will likely re-send data already sent), and the client just sends an ack for the latest sequence number, ignoring all earlier ones. This trades using more bandwidth (re-sending data) for lower latency and simplicity.

 

You can also piggyback the ack on packets where you need to actually send data anyway (like the client sending control inputs), to avoid sending multiple packets.

 

I might have misunderstood you, but it sounds like you might be wanting to make a TCP-like system for ensuring reliable (in-order?) delivery over UDP. If this is the case, it might be easier/better to just use TCP? If not, and you want Quake 3 style network code, there's an excellent description here:

 

http://fabiensanglard.net/quake3/The%20Quake3%20Networking%20Mode.html

 

http://fabiensanglard.net/quake3/network.php



#3 AllEightUp   Moderators   -  Reputation: 4165


Posted 04 August 2013 - 06:28 PM

ACKing for a game-oriented UDP protocol should be exceptionally simple, and there should be no need to ever use a NACK, as you call it. What could be causing you problems is that you are writing code too much like TCP instead of around game functionality, and from your description that sounds fairly likely. The primary difference between TCP and UDP here should be making your UDP system multi-channel. Where TCP only sends a single reliable, ordered stream of data, a game can break its data into a number of variations such as the following:

 

Critical: Data which absolutely MUST arrive as soon as possible and in proper order. As PeterStock says, you can generally just pack this into every packet until you know it has been received. Given the nature of this style of data ("fire button pressed", "jump", etc.), you won't have very much of it, and even under high packet loss you may average 5-10 bytes per packet while such urgent data is active.

 

Unreliable, latest only: Movement data can use this. We don't care if packets arrive out of order; only use the most recently received data. If it is lost, so what? There is another update on the way already.

 

Reliable, ordered, but not critical: Chat messages, for instance. Who cares if they show up on the other side even 1000 ms late? Who's going to notice?

 

So, why does this make the ack system simple? The first two types of data don't require explicit receipt tracking. In fact, the second one doesn't need to be tracked at all, as you just keep sending updates at a steady rate (unless you try to get fancy with error-predictive bandwidth reduction). The first just needs to know that a packet with the data in it arrived, not which one or when. Only the third channel needs actual per-packet tracking, so the way I did the ack previously, and probably will again, is a simple little item:

 

struct Ack
{
  uint16_t   SequenceId;        // Most recently received series of packets start.
  uint16_t   PrePostFlag : 1;   // 1 means pre, 0 means post.  More later.
  uint16_t   Mask : 15;         // Bits represent packets +- SequenceId.
};

 

Used in conjunction with congestion control, this is more than adequate to give you all the information you need. Keep a circular bit buffer matched to the send rate of your packets, such that one bit matches each packet. In the chat channel, keep a sequence id and the number of bytes packed into each UDP packet. As bits are marked, you can free up the data; if the start sequence id of the circular buffer moves beyond a stored sent item that was never marked, just resend it. Nothing fancy, no worry about NACKs, etc., just go ahead and pack it into the next packet. If it is even 5 seconds late, so what, who cares about timing in chat? Obviously, in the unusual case where the packets you keep trying to pack it into keep getting lost, you might start sending it like urgent data to make it get through before, say, 8 seconds.
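That bookkeeping could be sketched roughly as follows (my interpretation, not the poster's code: PrePostFlag handling is omitted, and the 15 mask bits are assumed to cover the sequence numbers immediately before SequenceId):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// AckHeader mirrors the Ack struct above, minus the PrePostFlag.
struct AckHeader
{
    uint16_t SequenceId; // most recently received sequence number
    uint16_t Mask;       // bit i set => SequenceId - 1 - i was received
};

struct SentStore
{
    std::map<uint16_t, int> pending; // seq -> bytes of chat data in that packet

    // Free up everything the incoming ack covers.
    void on_ack(const AckHeader& ack)
    {
        pending.erase(ack.SequenceId);
        for (int i = 0; i < 15; ++i)
            if ((ack.Mask >> i) & 1)
                pending.erase(static_cast<uint16_t>(ack.SequenceId - 1 - i));
    }

    // A stored item older than the ack window was never marked: resend it
    // by packing its data into the next outgoing packet.
    bool needs_resend(uint16_t seq, const AckHeader& latest) const
    {
        uint16_t age = static_cast<uint16_t>(latest.SequenceId - seq);
        return pending.count(seq) != 0 && age > 15;
    }
};
```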

 

This all assumes a pull based packet dispatch system.  I.e. you don't simply send data into a buffer, you pull the data when building the packet.  So a packet may be built with the following pseudo code:

 

BuildPacket()
  For each channel, or until packet >= averageSize
    If channel is critical
      pack all of its data into the packet
    Else if a complete message will fit
      take it from the channel (movement etc.)
    Else
      take as much data as possible from the remaining channels

 

Obviously a real system would be more complicated, but the idea is to pull the data when creating the packet instead of pushing data into buffers. Of course it is backwards from standard TCP work, but that's another benefit of UDP: not everything is packed into a single buffer to eventually get sent; the most recent data is packed into the packet so it is as close to latest as possible.

 

Hope this verbose walk through helps out.  I know you only asked about the ack but the questions seem to indicate that you are recreating TCP instead of leveraging the advantages of UDP.

     



#4 VReality   Members   -  Reputation: 436


Posted 04 August 2013 - 08:20 PM

Quoting fholm: "I am basically looking for input on what is a good ACK/NACK strategy, what you guys have used in the past, what you know works, etc."

Last time I wrote a reliability layer, it was essentially your option C - but without an explicit missing-nack notification.

 

Let's say a packet gets lost on the way from Alice to Bob.  If Bob ever receives another packet, he'll send around 16 nacks for the missing packet before it's ready to fall off his ack/nack mask. 

 

That means at least sixteen consecutive packets would have to be dropped/delayed on the way back from Bob to Alice in order for Alice to then get a packet with an ack/nack history that didn't cover the packets waiting for acks in her sent-cache.

 

In the context of high performance, quick response networking, it seems reasonable to consider that a broken connection.



#5 hplus0603   Moderators   -  Reputation: 5104


Posted 08 August 2013 - 11:07 AM

Yes, I've had good experience with option C, too. Although I used 8-bit sequence numbers. A typical send window is only about 8-16 packets anyway, so 256 sequence numbers are plenty good enough. Also, I sent "last sequence number received" plus a bitmask of the previous 8 sequence numbers received or not. Anything before the "edge" of that "ack window" is considered lost if not previously acked.

 

Also, the system was set up to send packets at a fixed rate, and the received/acked sequence numbers were part of the framing of each message sent. Thus, when receiving a packet, no "ack message" is generated; instead, each time an outgoing packet is generated, the "ack info" is included in that packet's headers. All in all, each packet uses three bytes for framing at this level:

 

1: The sequence number of this packet going out

2: The last sequence number seen of a packet coming in

3: A bitmask of sequence numbers seen before that sequence number
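Those three framing bytes could be packed as in this sketch (field names are my own; since the post uses 8-bit sequence numbers, each field fits in a single byte):

```cpp
#include <cassert>
#include <cstdint>

// The three-byte per-packet framing described above.
struct Framing
{
    uint8_t outgoingSeq;  // 1: sequence number of this packet going out
    uint8_t lastSeenSeq;  // 2: last sequence number seen coming in
    uint8_t ackBits;      // 3: bit i set => lastSeenSeq - 1 - i was also seen
};

void pack(const Framing& f, uint8_t out[3])
{
    out[0] = f.outgoingSeq;
    out[1] = f.lastSeenSeq;
    out[2] = f.ackBits;
}

Framing unpack(const uint8_t in[3])
{
    return Framing{in[0], in[1], in[2]};
}
```

With an 8-bit mask, anything older than lastSeenSeq - 8 is past the "edge" of the ack window and, as described above, is considered lost if not previously acked.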

 

Could you use 16-bit sequence numbers, and enough bitmask to cover the entire send window? Yes, these days, a few bytes won't matter very much in a single header like this. But if you make that same trade-off for each and every message within a full packet, you'll end up with packets that are 2x or 3x the size they need to be, and at that point, it starts to matter :-)


enum Bool { True, False, FileNotFound };



