UDP - Packet loss, client and server on same computer

5 comments, last by hplus0603 12 years ago
Hi,

my multiplayer game ran quite well until I increased the view range for the forests quite a bit.
When testing this, I got incredibly high packet loss.

Usually this would not confuse me, since I'm using UDP and packet loss is to be expected.
The problem is that

  1. The loss is quite high: about 17% of packets are lost (without the increased view range it was at most 0.8%, even over the internet!)
  2. I'm running server and client on the same machine; how can this loss happen?!

I thought the client's receive buffer might not be big enough, but changing the size from 8 kB to 50 kB doesn't have any effect on the loss...
So I can't think of a way a UDP packet could get lost here, let alone that many.

I'm programming in Java, using the standard UDP sockets: http://docs.oracle.com/javase/1.4.2/docs/api/java/net/DatagramSocket.html
My initialization code (client) looks like this:
// CPORT is the port I'm using
socket = new DatagramSocket(CPORT);
// set how long the receive call blocks (1 ms)
socket.setSoTimeout(1);
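(For completeness: I assume the relevant knob for the buffer size is the socket's receive buffer. setReceiveBufferSize is only a hint to the OS, which may clamp the value, so I also print what was actually granted.)

// request a 50 kB receive buffer; the OS may grant less, so read the real value back
socket.setReceiveBufferSize(50 * 1024);
System.out.println("granted SO_RCVBUF: " + socket.getReceiveBufferSize());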


So, do you have any idea how those UDP packets can get lost?
thx in advance
"So, do you have any idea how those UDP packets can get lost?"

Wrong question.

UDP packets are unreliable. Some paths lose more and some lose less, but packets may get lost regardless of the path they take.

They might get lost on the sending side, when the application tries to send too much and the kernel discards them. They might get lost inside the kernel, when an anti-virus suddenly decides to preempt some thread and the kernel does something funny.

They could get lost in a packet burst, such as the application sending more than 64 kB of data in less than one time slice allocated to the receiver.

This, for example: 'socket.setSoTimeout(1);'

The application waits for 1 millisecond, nothing arrives, so the application continues with its loop. But on the very next CPU cycle the socket receives a 64 kB burst, sent locally and therefore at roughly DRAM speed, so the whole burst arrives in about 0.064 ms. During that time the receiver isn't clearing the buffer, so with a 50 kB buffer about 14 kB of data gets discarded.

One way is to make the reading loop short enough that such events don't occur; see the receiver-thread sketch below. The buffer should ideally be large enough to hold all data that can arrive during at least one time slice, ~20 ms. Even that is far from a guarantee, since the receiver thread could sleep for longer than that. And if data is sent locally at DRAM speed (~1 GB/s), the buffer would need to be around 20 megabytes, which simply isn't practical.
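As a sketch of the "short reading loop" idea (the class and names below are made up, not taken from your code): a dedicated thread blocks in receive() and copies every datagram into a queue, so the OS buffer gets drained as fast as packets arrive, independent of how often the game loop runs. This assumes the socket was created without the 1 ms SO_TIMEOUT, so receive() is allowed to block.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ReceiverThread extends Thread {
    private final DatagramSocket socket;
    private final BlockingQueue<byte[]> inbox = new LinkedBlockingQueue<byte[]>();

    public ReceiverThread(DatagramSocket socket) {
        this.socket = socket;
        setDaemon(true);                                   // don't keep the JVM alive
    }

    @Override
    public void run() {
        byte[] buf = new byte[65507];                      // maximum UDP payload
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        try {
            while (!isInterrupted()) {
                packet.setLength(buf.length);              // reset before every receive
                socket.receive(packet);                    // blocking, no 1 ms polling
                byte[] copy = new byte[packet.getLength()];
                System.arraycopy(buf, 0, copy, 0, packet.getLength());
                inbox.offer(copy);                         // the game loop polls this queue
            }
        } catch (Exception e) {
            // socket closed or real I/O error: the thread simply ends
        }
    }

    public BlockingQueue<byte[]> getInbox() {
        return inbox;
    }
}

The game loop then calls getInbox().poll() as often as it likes; packets pile up in the (practically unbounded) Java-side queue instead of the small kernel buffer.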

But it doesn't matter.

UDP loses packets. You either compensate for that or you don't. Any run where they all arrive, or arrive in order, or similar, is just luck.
ok, first of all: thx for the reply
second: you're not an optimist, right? :P

I of course know that UDP is unreliable and maybe my question really was wrong.
The thing is, I thought such a big jump in packet loss would have some very clear reason; something like a beginner's mistake.
Overall, I decided to try to regulate the flow of data coming from the server and figure out a transmission rate that suits my needs.

"second: you're not an optimist, right? :P"

Programming in general means you must review code in terms of everything that might go wrong. If something can potentially go wrong then with enough use it eventually WILL go wrong. Network code is even more susceptible to this since communications systems are inherently unstable.

Robust code will act as though everything might fail and will take an appropriate response in all possible failure cases.

Part of that includes checking return values for error, possibly propagating the error codes if you cannot handle it, and catching exceptions where appropriate.

Edit: you said you're in Java, so ignore the C-centric error-code part; in Java that means catching the appropriate exceptions instead.
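In Java terms that boils down to something like this around every receive (a sketch; socket and packet are your existing objects, handlePacket is a stand-in for whatever your game does with the data):

try {
    socket.receive(packet);
    handlePacket(packet);                  // stand-in for your own processing
} catch (SocketTimeoutException e) {
    // expected with setSoTimeout(1): nothing arrived within the timeout, keep looping
} catch (IOException e) {
    // a genuine failure: log it and decide whether to retry, reconnect, or give up
    e.printStackTrace();
}

(SocketTimeoutException lives in java.net, IOException in java.io; the specific catch has to come before the general one.)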
"second: you're not an optimist, right? :P"

UDP can lose packets.

Now it's up to you to decide what to do about it.

The current code doesn't do anything about it; it is simply dumbfounded. That is one way to solve a problem.

Alternatives involve recovering missing packets. There are different strategies, each with strengths and weaknesses.

In all of the above cases there will come a moment of truth: does the method used deliver enough packets for the rest of the application to work?

If yes, the approach is sound. If no, another approach must be chosen.

The current approach, considering that this high packet loss is a problem, is not sound. The solution is to incorporate a recovery scheme. Perhaps make packets bigger by adding redundant information from previous (and later) data. Perhaps attempt to resend the data. Or layer a protocol on top of it that conveys more information about which packets got lost and how to compensate for that.
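One concrete sketch of the redundancy idea (everything below, names included, is invented for illustration): give each datagram a sequence number and append a copy of the previous payload, so the receiver can reconstruct a single lost packet from the one that follows it.

import java.nio.ByteBuffer;

public class RedundantEncoder {
    private int seq = 0;
    private byte[] previous = new byte[0];

    // layout: 4-byte sequence number, 4-byte current-payload length,
    // current payload, copy of the previous payload
    public byte[] encode(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(8 + payload.length + previous.length);
        buf.putInt(seq++);               // lets the receiver detect gaps
        buf.putInt(payload.length);      // where the current payload ends
        buf.put(payload);                // this packet's data
        buf.put(previous);               // last packet's data, again
        previous = payload;
        return buf.array();
    }
}

On the receiving side, a sequence number that jumps by exactly one means the trailing bytes of the packet that did arrive are the payload that went missing; a gap of two or more is real loss, which is where resending or a higher-level protocol comes in.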

"The thing is, I thought such a big jump in packet loss would have some very clear reason; something like a beginner's mistake."

Possible. But UDP still won't provide any guarantees. So even if the implementation were perfect (proven correct via some math stuff), unicorns might go on strike over 1% fairy bonuses and suddenly no packets would arrive anymore.

The basic premise of assuming that UDP will deliver a message is flawed. So regardless of how good or bad the code is, it's a gamble with no guaranteed returns.
Though I will say you can get UDP to do some awesome stuff if you use a TCP control stream between the endpoints and UDP as the data stream (kind of like the ol' FTP setup, but smarter). I've seen UDP data accelerators push data through horrid connections (high latency, bit errors, etc.) that a TCP data stream alone would have had a heart attack over. But they usually have a clever prediction/learning algorithm behind them, so they are not simple.
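A bare-bones version of that split (host, port, and message format below are invented) is just two sockets on the client: a TCP connection for small, reliable control messages such as loss reports, and a DatagramSocket for the bulk data.

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.DatagramSocket;
import java.net.Socket;

public class HybridClient {
    public static void main(String[] args) throws IOException {
        Socket control = new Socket("game.example.com", 6000);   // reliable control channel
        DatagramSocket data = new DatagramSocket();               // unreliable data channel
        DataOutputStream ctrl = new DataOutputStream(control.getOutputStream());
        ctrl.writeUTF("LOSS 17.0");   // e.g. report measured loss so the server can throttle
        // ... game state keeps flowing over 'data'; the send rate adapts to the reports ...
    }
}

The clever part in the real accelerators is what they do with those reports, not the socket setup itself.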
If you get 17% loss, then something else is wrong, unless you're on the worst wireless network connection possible.
Perhaps you're sending too much data for the link?
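A crude way to find out is to cap what you send and see whether the loss goes away; a sketch (the class name and numbers are arbitrary):

public class SendBudget {
    private final int bytesPerSecond;
    private long windowStart = System.currentTimeMillis();
    private int sentThisWindow = 0;

    public SendBudget(int bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
    }

    // returns true if the packet still fits into this second's budget
    public boolean trySpend(int packetSize) {
        long now = System.currentTimeMillis();
        if (now - windowStart >= 1000) {       // start a new one-second window
            windowStart = now;
            sentThisWindow = 0;
        }
        if (sentThisWindow + packetSize > bytesPerSecond) {
            return false;                      // over budget: queue or drop for now
        }
        sentThisWindow += packetSize;
        return true;
    }
}

If the loss disappears once you cap the rate at, say, new SendBudget(100 * 1024), the problem is the volume of data, not the network.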
enum Bool { True, False, FileNotFound };
