
How can I optimize a Linux server for the lowest-latency game server?



Hey,

I'm running a game server for my mobile game. It runs as a Java 8 application and uses Kryonet. It uses both UDP and TCP quite often: UDP for the frequent update packets and TCP for shooting and a few other events. TCP and UDP use different ports, not the same one.

 

I'm currently running 3 game server application instances, each on different ports of course.

 

The Linux server runs on a VPS, stock Ubuntu 14 with a few packages installed. Some players are complaining that they have high ping (> 100 ms), even though they can play other online games fine. I am aware that location plays a big role; this question is not really about that.

 

What could I do? Is my OS choice bad?

Edited by Gintas Z.


I don't know any Linux-specific tricks, but I can say for certain that your OS is not the problem.

 

It is very difficult to come up with any suggestions, other than enabling TCP_NODELAY.
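
For reference, on a plain java.net socket that is just one call. A minimal sketch (the host and port are placeholders, and Kryonet may already expose this setting for you):

    import java.io.IOException;
    import java.net.Socket;

    public class NoDelayExample {
        public static void main(String[] args) throws IOException {
            // Placeholder host/port, for illustration only.
            try (Socket socket = new Socket("game.example.com", 54555)) {
                // Disable Nagle's algorithm so small writes go out immediately
                // instead of being buffered and coalesced into larger segments.
                socket.setTcpNoDelay(true);
                System.out.println("TCP_NODELAY enabled: " + socket.getTcpNoDelay());
            }
        }
    }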

 

Your problem might be related to your code running on a VPS. Remember that most VPSes do not give you a dedicated core (unless you pay for that), so you have no guarantee of constant access to the processor. CPU time might come in small bursts rather than a constant average, and this could lead to slightly higher ping, as the server is simply "paused" for a few milliseconds once in a while.

 

Have you tested how your game runs on a dedicated server? I think the optimization needed is either in your code or in your host.

 

Also, I just remembered that there are multiple different versions of Java on Linux. Try to google which is the fastest and make sure the one you are using does not pause everything when collecting garbage. (I stopped doing Java when Oracle tried to make me install the Ask toolbar, so my Java knowledge might be a bit outdated.)
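
If GC pauses are a suspect, turning on GC logging is the quickest way to rule them in or out, and Java 8's G1 collector lets you ask for a pause target. A sketch of the launch flags (these are standard HotSpot options, but the pause target and the jar name are only examples):

    # Illustrative Java 8 launch line; gameserver.jar and the pause target are examples.
    java -XX:+UseG1GC -XX:MaxGCPauseMillis=20 \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
         -jar gameserver.jar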

Edited by VildNinja


How frequently are you sending packets over UDP and TCP? I've done tests where sending too many UDP packets, even over a wired network, caused many of them to be dropped.

 

Since it's a mobile game, I'll assume your users are connecting over wireless. Perhaps many packets are being dropped for some reason, so TCP has to resend data or the UDP packets just aren't reaching their destination?


How frequently are you sending packets over UDP and TCP? I've done tests where sending too many UDP packets, even over a wired network, caused many of them to be dropped.

 

The server sends UDP update packets 20 times a second. The player sends UDP update packets 30 times a second. The client sends a TCP event on every shot, then the server sends a TCP shot event to every client.
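
For what it's worth, a fixed-rate loop like that is typically driven by a scheduler. A minimal sketch (broadcastUpdate() is a placeholder, not the actual server code):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch of a 20 Hz broadcast loop; broadcastUpdate() is a placeholder
    // for whatever serializes the world state and sends the UDP packets.
    public class UpdateLoop {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(UpdateLoop::broadcastUpdate, 0, 50, TimeUnit.MILLISECONDS);
        }

        private static void broadcastUpdate() {
            // build and send one UDP update packet per connected client
        }
    }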

 

 

Many successful action games are hosted on Linux and it works fine, so I wouldn't think so.

 

By OS choice I meant Ubuntu. Would CentOS or any other distro be better?

 

 

The port number is like a street number. TCP and UDP are different streets. Whether the two sockets live on "123, TCP Street" and "123, UDP Street" or "1234, TCP Street" and "1235, UDP Street" doesn't matter. Even if they happen to have the same port number, they will not get confused in any way.

 
I'm not sure how Kryonet works in the back end, but I thought this would help avoid TCP congestion. If it is using two socket connections, then that might be true.
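
As a sanity check on the street-number analogy above, a quick plain-Java sketch (the port number is arbitrary) showing that a TCP and a UDP socket can share the same port number without conflict:

    import java.net.DatagramSocket;
    import java.net.ServerSocket;

    public class SamePortDemo {
        public static void main(String[] args) throws Exception {
            int port = 54555; // arbitrary example port
            // TCP and UDP ports live in separate namespaces, so both binds succeed.
            try (ServerSocket tcp = new ServerSocket(port);
                 DatagramSocket udp = new DatagramSocket(port)) {
                System.out.println("TCP bound on " + tcp.getLocalPort());
                System.out.println("UDP bound on " + udp.getLocalPort());
            }
        }
    }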

 

 

VPS virtualization is not a low-latency technology. Running Linux on bare-metal hardware would likely improve the worst-case jitter, and worst-case jitter turns into worst-case latency.

 
I am considering this option.
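
One rough way to see whether the VPS is doing this is to sleep for a fixed interval inside the JVM and log how far the wake-ups overshoot. A sketch (the iteration count and interval are arbitrary):

    // Rough scheduling-jitter probe: request a 1 ms sleep repeatedly and report
    // the worst overshoot. Large spikes suggest the VM is being held off the CPU.
    public class JitterProbe {
        public static void main(String[] args) throws InterruptedException {
            long worstOvershootNanos = 0;
            for (int i = 0; i < 10_000; i++) {
                long start = System.nanoTime();
                Thread.sleep(1); // ask for a 1 ms sleep
                long overshoot = System.nanoTime() - start - 1_000_000L;
                worstOvershootNanos = Math.max(worstOvershootNanos, overshoot);
            }
            System.out.println("Worst wake-up overshoot: "
                    + (worstOvershootNanos / 1_000_000.0) + " ms");
        }
    }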
 

It is very difficult to come up with any suggestions, other than enabling TCP_NODELAY.

 
This option seems to be enabled in Kryonet by default.
 

Also, I just remembered that there are multiple different versions of Java on Linux. Try to google which is the fastest and make sure the one you are using does not pause everything when collecting garbage. (I stopped doing Java when Oracle tried to make me install the Ask toolbar, so my Java knowledge might be a bit outdated.)

 

Which version is it? I cannot seem to find anything useful.


This part was worrying me too. I was using TCP so that I don't lose packets, because shot events are important.

Are you suggesting sending 5 identical UDP packets after each shot event? That sounds like quite a nice and cheap hack; I really like it. If some of them get lost, the others should arrive, and I save massive amounts of time by not having to implement packet-loss detection and retransmission. What are the potential cons of this method? I understand it will increase network usage (players could be playing on 4G/LTE), but that does not matter much at the moment. Also, was 5 just a random number? The packet loss rate should be < 10%, so 2 should work fine, right?

 

 

EDIT:

Damn, I was a little bit wrong. The player does not send a TCP event on each shot, I forgot... I have an isShooting flag in the UDP packet that the player sends 30 times a second. However, the server still sends a TCP event to all players after each shot and each kill, so the same should still apply, right?

Edited by Gintas Z.


Are you suggesting sending 5 identical UDP packets after each shot event?


I'm saying you include the same game-level event ("user U fired a shot in direction X,Y at time T") in the next 5 UDP datagrams you send. Presumably, each datagram contains many events, and the exact set of events included in each datagram will be different for each.
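
In code, that could look something like the sketch below; the Object events, the repeat count, and the datagram assembly are placeholders rather than Kryonet's API:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // Sketch: piggyback each important game event onto the next few outgoing
    // UDP datagrams so a single lost datagram does not lose the event.
    class RedundantEventQueue {
        private static final int REPEAT_COUNT = 5; // include each event in the next 5 datagrams

        private static final class Pending {
            final Object event;
            int remaining = REPEAT_COUNT;
            Pending(Object event) { this.event = event; }
        }

        private final List<Pending> pending = new ArrayList<>();

        // Call when a game-level event happens, e.g. "player U fired at time T".
        void enqueue(Object event) {
            pending.add(new Pending(event));
        }

        // Call once per outgoing datagram: returns the events to include in it
        // and drops any event that has now been sent REPEAT_COUNT times.
        List<Object> eventsForNextDatagram() {
            List<Object> batch = new ArrayList<>();
            for (Iterator<Pending> it = pending.iterator(); it.hasNext(); ) {
                Pending p = it.next();
                batch.add(p.event);
                if (--p.remaining <= 0) {
                    it.remove();
                }
            }
            return batch;
        }
    }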


Also, I just remembered that there are multiple different versions of Java on Linux. Try to google which is the fastest and make sure the one you are using does not pause everything when collecting garbage. (I stopped doing Java when Oracle tried to make me install the Ask toolbar, so my Java knowledge might be a bit outdated.)

Yup, it is. Also download the JDK instead of the JRE, no toolbar.

 

You've got one open-source, GPL-licensed codebase for both the standard libraries and the VM: OpenJDK, with the HotSpot VM inside. That's the one Oracle, Red Hat, Google (backend, not Android), etc. put their code into.

 

Oracle grabs it, compiles it, bundles a few of their own tools, and releases it as the Oracle JDK. They distribute binaries for Windows, Linux, your mom's toaster, etc.

 

Now in Linux land, repo maintainers grab OpenJDK's sources too, compile them, and provide them as an OpenJDK package in whatever package management system they use (.deb, .rpm, etc.).

 

In short, they're the same VM, same libs, same performance. The only case where it gets tricky is if you're running on ARM (there was no JIT for ARM last time I checked, so you get the "Zero" VM on Linux, which is a plain interpreter). Also, Google will be grabbing the "standard libraries" part of OpenJDK and pairing it with their own VM in future Android versions, for example.

 

 

I'd be more worried about using Java, because the Garbage Collector may cause unpredictable jitter (which, again, turns into unpredictable latency.)

 

You'd need to do something horribly wrong to get many 100 ms pauses from GC alone. Minecraftian levels of wrong. Then again, if the pauses are client-side, then you're on either Dalvik or ART, and that's a different issue.

 

It sounds to me like the OP needs to measure exactly what's going on: whether the pauses are client-side, network-related, or whether the server really takes that much time sending packets.
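
For the server-side half of that, even something as crude as timing each tick of the update loop and logging the outliers would show whether the server itself is stalling. A sketch (tick() and the threshold are placeholders):

    // Minimal stall detector: time each tick of the update/send loop and log
    // any tick that takes suspiciously long. tick() is a placeholder.
    public class TickTimer {
        private static final long WARN_THRESHOLD_MS = 20; // example threshold

        public static void main(String[] args) throws InterruptedException {
            while (true) {
                long start = System.nanoTime();
                tick(); // placeholder for "update world state and send packets"
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                if (elapsedMs > WARN_THRESHOLD_MS) {
                    System.out.println("Slow tick: " + elapsedMs + " ms");
                }
                Thread.sleep(50); // roughly 20 ticks per second
            }
        }

        private static void tick() {
            // real game logic and packet sending go here
        }
    }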


In short, they're the same VM, same libs, same performance. The only case where it gets tricky is if you're running on ARM (there was no JIT for ARM last time I checked, so you get the "Zero" VM on Linux, which is a plain interpreter). Also, Google will be grabbing the "standard libraries" part of OpenJDK and pairing it with their own VM in future Android versions, for example.

 

So as I see it, if there is no difference in performance, it's not really worth changing, since the server is not running on ARM.

 

I'm saying you include the same game-level event ("user U fired a shot in direction X,Y at time T") in the next 5 UDP datagrams you send. Presumably, each datagram contains many events, and the exact set of events included in each datagram will be different for each.

 

 

Hmm, this kind of breaks things a little bit. Could I just send 2-3 repeated UDP packets instead of your solution? That would actually be the simplest to implement without breaking too much stuff.

Edited by Gintas Z.
