KODOnline

Interesting packet loss situation


Hello... I am writing (and at points have been near completion of) a client/server engine capable of handling massive numbers of clients. I have adequate client-to-server communication on movement, etc., and have benchmarked the speed of the server to the point of being comfortable that bandwidth will be the first thing to slow it down, regardless of content size.

However, I am running into a very interesting packet error. As long as the server updates the clients at a somewhat slow interval (every 200ms) and the client sends packets at a similar speed, there seem to be no problems. But randomly throughout the programming of the server, I have noticed packet loss errors. They seem to occur only when the server sends more than one packet to the client at any kind of speedy interval. I.e., if I speed up the client updating to 100ms, I notice "message" packets not showing up. In battle, for example, it will only inform you of some of the hits, and the health of the player will seem to jump down as it finally catches up with the latest update packet that it DID receive.

I have worked around this so far by minimizing the size and number of packets, as well as trying to normalize the number of packets sent per "loop" to one, but that is starting to prove insufficient, and the error unavoidable as the game progresses. In all my debugging, I have not been able to discern the source of this. The loss happens at a seemingly random interval, even while the updates run at a constant speed. Perhaps I have a misunderstanding of the nature of my sockets and packets, but as it is, I'm lost.

To help you help me: I have written the entire thing in C# 2005, using .NET 2.0 (asynchronous sockets) and a totally original packet system. I don't want to make this post 30,000 words, so if you need more information, code posted, etc., I would be more than happy to oblige.

But as it stands, this error is a complete enigma to me and could cause the project to fall apart altogether. Any help would be greatly appreciated. Thanks. Steve

Quote:
Original post by Antheus
Did you try increasing incoming and outgoing socket buffer size?


I have browsed, yes... but to clarify, do you mean a property of the "Socket" class? Or in my personal packets? Of course I will check both again now, but that may help.

Again, this is only when there are multiple packets. The size so far does not seem to affect this, only the number of packets received in any given time period. If more than one packet arrives, does it all go to the same Socket buffer? I was under the impression the buffer would be per packet...
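For reference, the buffer Antheus means is the OS-level socket buffer, not anything in the application's packet code. In .NET that is the Socket.ReceiveBufferSize / Socket.SendBufferSize properties; the sketch below shows the same idea via the underlying BSD-socket options (Python rather than the thread's C#, purely for illustration):

```python
import socket

# Create a UDP socket and enlarge its kernel-level buffers.
# In .NET 2.0 the equivalent would be setting Socket.ReceiveBufferSize
# and Socket.SendBufferSize on the Socket instance.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Ask the OS for larger buffers (bytes). The OS may round or cap these values.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request 1 MB receive
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)  # request 1 MB send

# Read back what the OS actually granted.
rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(rcv, snd)
s.close()
```

Note that this is one buffer per socket: every packet that arrives before the application reads is queued in that single buffer, which is why its size matters when packets arrive in bursts.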

Okay, just for the hell of it I set every socket, client and server side, to have a 64k buffer. No help. Not really sure where to go from here... If anyone has run into anything like this, or is rather familiar with .NET asynchronous sockets, I'd be more than happy to share some code. Maybe it's just something simple I'm missing, or maybe I need a re-write altogether. I hope not.

Set both to 1-4Mb.

I had the issue of dropped UDP packets when I was testing, and hundreds of clients all sent their data at the same time (give or take a millisecond), spiking the network buffer before I could read it.

I'm guessing that, assuming there's no problem in your networking code, the send buffer could suffer from similar problems.

Also, how big are your packets? Larger than 530 bytes?

Quote:
Original post by Antheus
Set both to 1-4Mb. [...] Also, how big are your packets? Larger than 530 bytes?


Well I tell you what... after setting everything to 2 MB (I had it set to 64k originally), it seems to work. And quickly. But to the best of my knowledge, my biggest packet is only 4000 bytes, with the default size being 256. It all still seems a bit fishy to me. Perhaps it has to do with the Socket.Stream enumerator duplicating packets? I even had an issue early on of random clients randomly getting more than one copy of a packet. Interesting. I'll keep you posted, but thanks for your help so far.

And one last thing while I have you here... :P

I'm a pretty strong programmer, but fairly new to network programming, and definitely have not experienced mass clients connected to an engine I've written myself. What kind of interval/size packet should I be shooting for (both client and server side) to ensure no loss, and no lag at high numbers?

Quote:
Well I tell you what... after setting it to 2mb for anything, it seems to work. [...] I even had an issue early on of random clients randomly getting more than one copy of a packet.


The problem is in spikes. A 100 Mbit card moves roughly 10 MB/sec, i.e. about 10 kB per millisecond.

Now let's say you process data every 15 ms. During that time you can receive up to 150 kB. With UDP, if you don't process it, the extra data goes poof. Even if your average data rate stays well below the buffer size, clients will, from time to time, synchronize their sends (by sheer probability).

This is much easier to reproduce on a LAN, but it happens over the internet as well.
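The arithmetic above can be made concrete (the figures are the rough ones from the post, not measurements):

```python
# Rough worst-case burst math: a 100 Mbit NIC delivers roughly 10 MB/sec,
# which is about 10 kB per millisecond of wall-clock time.
link_bytes_per_ms = 10 * 1000  # ~10 kB/ms on a 100 Mbit link

# If the server only drains the socket every 15 ms, a full-rate burst
# can queue this many bytes in the receive buffer before the next read:
poll_interval_ms = 15
worst_case_burst = link_bytes_per_ms * poll_interval_ms
print(worst_case_burst)  # 150000 bytes, i.e. ~150 kB

# A 64 kB receive buffer cannot absorb that burst, so with UDP the
# overflow is silently dropped -- the "poof" described above.
print(worst_case_burst > 64 * 1024)  # True
```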

Quote:
What kind of interval/size packet should I be shooting for (both client and server side) to ensure no loss, and no lag at high numbers?


A 100 Mbit card: roughly 10 MB/sec, about 10 kB/ms.

For each client, do the same math with their connection. 3 kB/sec (3 bytes/ms) should be the maximum for typical dial-up class users. Work out the limits for your parameters from there. The number of packets for internet-based games should be somewhere in the 1-5/second range, not higher, for latency reasons.

Of course, it's always possible to stretch things either way; these aren't hard numbers.

Quote:
I even had an issue early on of random clients randomly getting more that one copy of a packet.


With UDP you can get duplicate packets, corrupt packets, packets out-of-order, or not get any at all.

Quote:
Original post by Antheus
The number of packets for internet based games should be somewhere in the 1-5/second range, not higher, due to latency reasons. [...] With UDP you can get duplicate packets, corrupt packets, packets out-of-order, or not get any at all.


Fun! :) For testing I went to 20 packets a second and it ran smoothly, but I guess I will go back to the original 200ms. The only issue is that anything happening on the server more often than that won't be shown. I guess I need to keep all actions to 200ms or more!

Thanks again for all your help.

Quote:
Original post by KODOnline
[...] If more than one packet arrives, does it all go to the same Socket buffer? I was under the impression the buffer would be per packet...


I'm guessing you are using TCP? You haven't said yet. TCP is a stream, so when you send two packets on the server end, they could arrive in a single call to recv on the client end. You need to account for this in your recv buffer code and packet structure (passing the size of the packet as part of a header is a good way to handle this). To me it sounds like this is your problem: when you decrease the delay between packets, you increase the chance that the server or client TCP stack coalesces the data you've sent into a single frame to cross the interweb.
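The length-prefix header idea can be sketched like this (in Python rather than the thread's C#, and the function names are made up for illustration; the same logic ports directly to a .NET receive callback):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack(">I", len(payload)) + payload

def extract_messages(buffer: bytearray):
    """Pull every complete length-prefixed message out of a receive buffer.

    TCP is a stream: one recv() may return half a message, or several
    messages glued together. So we accumulate bytes and only consume a
    message once its full declared length has arrived.
    """
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + length:
            break  # the rest of this message hasn't arrived yet
        messages.append(bytes(buffer[4:4 + length]))
        del buffer[:4 + length]  # consume header + payload
    return messages

# Two sends coalesced into a single arrival -- framing still separates them,
# which is exactly the "missing message packets" failure mode described above.
buf = bytearray(frame(b"hit for 12") + frame(b"hit for 7"))
print(extract_messages(buf))  # [b'hit for 12', b'hit for 7']
```

If only part of a message has arrived, `extract_messages` simply leaves the bytes in the buffer and returns nothing; the next recv appends more bytes and parsing resumes.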

With TCP, if your recv buffers get full, data transmission is paused; that is part of the TCP spec, and it resumes when the buffers are emptied. You should avoid this, though, as it stalls the pipe and causes extra latency, since control packets need to be sent to restart the flow.

For UDP, a single call to recv will get one packet, provided you did not go over the maximum size of a UDP packet. If you are using UDP and want to send large amounts of data, you need to develop your own protocol to break up the data and re-assemble it. You also need to handle the possibility that a packet will get lost. TCP handles this for you.

--Zims
