
How do I deal with packet loss?


#1 Ussyless   Members   -  Reputation: 142

Posted 29 September 2012 - 09:27 PM

Greetings. I've written a flowchart of a, uh, I guess "network model" that I've come up with, and I want to hear some feedback and get answers on a few things.
[Flowchart image]




OK, first things first: I believe this model would be better suited to an RTS game than what I actually have planned, which is a platformer-type game, and that is why I'm posting it here.
A few questions I have:
* How can I transform this model to better suit a game over UDP that requires decent response times?
* How can I account for packet loss?
The best ways I can think of are:

1. Find the next known-good packet after the lost one, then replay the actions up to the current time. This has its pros and cons: it can deal with complex actions (for example, throwing a rock: you could bind the rock to a packet, and if the packet is known to be lost, the rock disappears), but on the other hand it would make the game more jittery.

2. Send the player's position with each packet, and interpolate from the current position to the new one if it's off.


Anyway, I'd be glad to hear other people's thoughts on this.
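A rough sketch of the second idea above (send the position with each packet and ease toward it rather than snapping); all names here are made up for illustration:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by factor t in [0, 1]."""
    return a + (b - a) * t

class RemotePlayer:
    def __init__(self, x, y):
        self.display_x, self.display_y = x, y   # what we actually draw
        self.target_x, self.target_y = x, y     # last received authoritative position

    def on_packet(self, x, y):
        # A lost packet simply means we keep easing toward the last
        # position we did receive; no retransmission is needed.
        self.target_x, self.target_y = x, y

    def update(self, correction_rate=0.2):
        # Called once per frame: close 20% of the remaining gap each frame.
        self.display_x = lerp(self.display_x, self.target_x, correction_rate)
        self.display_y = lerp(self.display_y, self.target_y, correction_rate)
```

With a small correction rate the drawn position converges on the authoritative one over a few frames instead of teleporting, which hides isolated losses at the cost of a little lag.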


#2 KnolanCross   Members   -  Reputation: 1334

Posted 01 October 2012 - 06:17 PM

My best advice is: use TCP.

Seriously, our networks are fast enough to handle it. If you are reading an old article saying "USE ONLY UDP!!!", please notice that networks have improved a lot in the last few years, and also, AFAIK, Windows no longer uses that stupid TcpAckFrequency default of acknowledging only every second packet (from Windows 7 on).

If you use UDP you will run into one of these three situations:
1) You need confirmation on every single packet. In this case use TCP; it will be basically the same thing.
2) You need confirmation on a few packets. If you use UDP here, you will either have to implement confirmation in every packet or treat half of your packets as needing confirmation and the other half as not. This ends up in lists upon lists of packets you are waiting for, packets you ask to have retransmitted (and you will have to check whether you still need the retransmission), checks for packets that arrive but are no longer needed, reordering, and so on. In other words, it is not easy to implement. You could probably code it, but not quickly, so you would be wasting time you could spend coding your game.
3) You need no confirmation. In this case it is worth using UDP, since you never need to ask for retransmission (i.e. you can interpolate over lost packets easily).
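To give a feel for what case 2 involves, here is a minimal sketch of the bookkeeping for a reliable layer over UDP: unacknowledged packets sit in a pending table and are retransmitted after a timeout. The `send_fn` callback and all names are hypothetical.

```python
import time

class ReliableSender:
    """Tracks packets that need confirmation; unacked ones are retransmitted."""
    def __init__(self, send_fn, resend_after=0.2):
        self.send_fn = send_fn          # e.g. a wrapper around sock.sendto
        self.resend_after = resend_after
        self.next_seq = 0
        self.pending = {}               # seq -> (payload, last_send_time)

    def send_reliable(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = (payload, time.monotonic())
        self.send_fn(seq, payload)
        return seq

    def on_ack(self, seq):
        self.pending.pop(seq, None)     # ignore duplicate or unknown acks

    def tick(self):
        # Call regularly: retransmit anything still unacknowledged.
        now = time.monotonic()
        for seq, (payload, sent_at) in list(self.pending.items()):
            if now - sent_at > self.resend_after:
                self.pending[seq] = (payload, now)
                self.send_fn(seq, payload)
```

Even this toy version ignores connection setup, ack piggybacking, duplicate suppression on the receiver, and congestion control, which is the point being made above: you end up rebuilding a fair chunk of TCP.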

Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).


#3 0BZEN   Crossbones+   -  Reputation: 2021

Posted 02 October 2012 - 06:25 AM

Don't mix and match UDP / TCP.

I'm more of an advocate of UDP: it gives more flexibility, lower latency, and more control over the flow.

*how can i transform this model to better suit a game over UDP that requires decent response times
Client prediction, corrected by server updates. That requires the client to be able to 'roll back' its state when it is corrected by the server.

*how can i account for packet loss?
Either queue your inputs in a reliable buffer, or duplicate them across several packets so a broken stream can be reconstructed.

Look up the Gaffer On Games networking and physics articles.
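The "duplicate them into several packets" idea above can be sketched like this: every outgoing packet carries the last few inputs, so a single dropped packet loses nothing as long as any later packet arrives. Class and method names are invented for the example.

```python
from collections import deque

class RedundantInputStream:
    """Sender side: each packet payload repeats the last few (tick, input) pairs."""
    def __init__(self, redundancy=3):
        self.history = deque(maxlen=redundancy)
        self.tick = 0

    def push(self, player_input):
        self.history.append((self.tick, player_input))
        self.tick += 1
        return list(self.history)   # this tick's packet payload

class InputReceiver:
    """Receiver side: applies only inputs it has not seen yet."""
    def __init__(self):
        self.last_applied = -1

    def on_packet(self, inputs):
        applied = []
        for tick, player_input in inputs:
            if tick > self.last_applied:    # skip duplicates from redundancy
                applied.append(player_input)
                self.last_applied = tick
        return applied
```

With redundancy 3, the stream survives up to two consecutive lost packets at the cost of slightly larger packets, with no retransmission round-trip.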

Everything is better with Metal.


#4 hplus0603   Moderators   -  Reputation: 5519

Posted 02 October 2012 - 04:56 PM

our networks are fast enough to handle it.


The problem with TCP has nothing to do with the "speed of the network." The problem with TCP is that the receiving end will withhold newer information from the application, while it waits for dropped, older data to re-transmit, so that it can present all the data in order. For many applications (including many games,) timeliness is more important than order and completeness, and TCP is just the wrong solution for that.
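To illustrate the contrast with TCP's in-order delivery: a UDP receiver that only cares about the freshest state can simply discard anything older than what it already has, so a lost or late datagram never blocks newer data. A minimal sketch (names are made up):

```python
class LatestStateReceiver:
    """Keeps only the newest snapshot; stale or out-of-order datagrams
    are discarded instead of holding up newer data (the opposite of
    TCP's withhold-until-in-order behaviour)."""
    def __init__(self):
        self.latest_seq = -1
        self.state = None

    def on_datagram(self, seq, state):
        if seq <= self.latest_seq:
            return False          # arrived after something newer: drop it
        self.latest_seq = seq
        self.state = state
        return True
```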

enum Bool { True, False, FileNotFound };

#5 0BZEN   Crossbones+   -  Reputation: 2021

Posted 02 October 2012 - 05:12 PM

Instead of downrating me, provide a cogent argument.

Everything is better with Metal.


#6 KnolanCross   Members   -  Reputation: 1334

Posted 02 October 2012 - 05:35 PM

our networks are fast enough to handle it.


The problem with TCP has nothing to do with the "speed of the network." The problem with TCP is that the receiving end will withhold newer information from the application, while it waits for dropped, older data to re-transmit, so that it can present all the data in order. For many applications (including many games,) timeliness is more important than order and completeness, and TCP is just the wrong solution for that.


I meant reliable, not fast; my bad on that part.

My point is: for a non-FPS game, it is highly likely that TCP will be fast enough, and I don't think the gain from using UDP is worth the time of implementing it.

Edit:
One important thing I noticed: we may be going off-topic here. My advice is: use TCP. If anyone wants a TCP vs UDP discussion of my arguments, please PM me; I will gladly defend my point or change my mind.

On topic:
*how can i transform this model to better suit a game over UDP that requires decent response times
*how can i account for packet loss?

First, I would create an id for each message type, then I would create two categories: messages that can be interpolated and messages that cannot. This way you can process the message data you have received without waiting for the retransmission of other messages.
The program should keep one sequence number per message id (for instance, message id 0 is "start moving" while id 1 is "stop moving"; each has its own sequence number).

To find lost packets I would keep a list for each message id, order the messages by sequence number, and act accordingly (either ask for retransmission or interpolate the results).

Things may be a little trickier for messages sent to the server, as a few lost packets may make it seem like the client is trying to cheat, so you need to implement prediction and rollback capability. For messages such as "start moving", "stop moving", "jump" and so on, you need to either make them messages that require confirmation, or keep sending a message telling the server that you are still moving, not moving, etc., so that if the server loses some packets it can resync with the client.
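The per-message-id sequence scheme above can be sketched as a small tracker: one expected counter per message id, and a gap in the sequence for a given id reveals lost packets of that type. Names are illustrative.

```python
class ChannelTracker:
    """One sequence counter per message id; a gap in a given id's
    sequence numbers reveals lost packets of that message type."""
    def __init__(self):
        self.expected = {}   # message_id -> next expected sequence number

    def on_message(self, message_id, seq):
        expected = self.expected.get(message_id, 0)
        missing = list(range(expected, seq))   # the gap, if any
        if seq >= expected:
            self.expected[message_id] = seq + 1
        return missing   # caller asks for retransmission or interpolates
```

Because each message type has its own sequence, a lost "start moving" packet does not stall processing of position updates that arrived fine.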

Edited by KnolanCross, 02 October 2012 - 06:08 PM.

Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).


#7 evillive2   Members   -  Reputation: 694

Posted 21 October 2012 - 08:31 PM

For dealing with packet loss, take a look at how VoIP protocols handle RTP and packet loss. In some cases a conversation can sustain up to 15% packet loss before audio quality really begins to sound diminished. This depends on whether the loss comes in clumps or is relatively evenly spread out. In the scenarios where UDP is preferable, individual packets are generally unimportant and can be discarded with very little impact; issues really only arise when you lose a number of packets in a row, at which point the decision is how much loss is acceptable before you can no longer cover it up and have to "start over".

That being said, "dealing" with packet loss is more or less an application decision: how much loss can you sustain (without doing anything) before it becomes a problem? A jitter buffer can be used to "smooth" the transition between the last received packet and the first one received after a loss, at the expense of some added latency. A standard jitter buffer for VoIP is 150-200 ms, with a standard packet rate of 50 pps at 20 ms per packet (yes, some vary, but the common G.711/G.729 codecs use this).

I guess I didn't really answer anything, but I thought your question was interesting, as I deal with VoIP and various packet loss/latency issues all day at work. The solution may just be: figure out a rate of tolerable loss, and decide how your client/server should react to it, which will be application-specific.
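A toy version of the jitter buffer described above: packets are held for a short window and played back in sequence at a steady rate, and any packet still missing when its slot comes up is treated as lost and concealed. All names are invented for the sketch.

```python
import heapq

class JitterBuffer:
    """Holds packets briefly so they can be replayed in order at a steady
    rate; a packet missing when its slot arrives is concealed (returned
    as None), trading a little latency for smoothness."""
    def __init__(self, depth=3):
        self.depth = depth       # e.g. 3 packets at 20 ms each -> 60 ms buffer
        self.heap = []           # (seq, payload), ordered by seq
        self.next_seq = 0

    def push(self, seq, payload):
        if seq >= self.next_seq:          # drop packets that arrive too late
            heapq.heappush(self.heap, (seq, payload))

    def pop(self):
        """Called once per playback interval after the buffer has filled."""
        if self.heap and self.heap[0][0] == self.next_seq:
            _, payload = heapq.heappop(self.heap)
            self.next_seq += 1
            return payload
        self.next_seq += 1                # slot missed: conceal the loss
        return None
```

A game would fill the `None` slots with interpolation or by repeating the last state, just as a VoIP codec conceals a lost audio frame.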
Evillive2



