
#5291048 How can I optimize Linux server for lowest latency game server?

Posted by hplus0603 on 10 May 2016 - 05:38 PM

Are you suggesting to send 5 same UDP packets after each shot event?

I'm saying you include the same game-level event ("user U fired a shot in direction X,Y at time T") in the next 5 UDP datagrams you send. Presumably, each datagram contains many events, and the exact set of events included in each datagram will be different for each.
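A minimal sketch of that scheme, assuming a hypothetical EventQueue that the send loop pulls from (all names here are made up for illustration): each posted event stays in the queue for the next five outgoing datagrams, then ages out.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Each game-level event is included in the next N outgoing datagrams,
// so losing any single datagram does not lose the event.
struct Event {
    std::string payload;   // e.g. "user U fired in direction X,Y at time T"
    int sendsRemaining;    // how many more datagrams should carry it
};

class EventQueue {
public:
    void post(const std::string &payload) {
        pending_.push_back({payload, 5});   // include in the next 5 datagrams
    }
    // Collect the events to include in the next outgoing datagram.
    std::vector<std::string> nextDatagramEvents() {
        std::vector<std::string> out;
        for (auto &e : pending_) {
            out.push_back(e.payload);
            --e.sendsRemaining;
        }
        // Drop events that have now been sent enough times.
        pending_.erase(
            std::remove_if(pending_.begin(), pending_.end(),
                           [](const Event &e) { return e.sendsRemaining <= 0; }),
            pending_.end());
        return out;
    }
private:
    std::vector<Event> pending_;
};
```

The receiver then de-duplicates by event ID or timestamp; seeing the same shot five times is harmless, while missing it entirely is not.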

#5290993 Browser strategy game server architecture (with pics)

Posted by hplus0603 on 10 May 2016 - 11:13 AM

Most games probably look like this:


Especially for turn-based, asynchronous multiplayer games, there just isn't that much to do, and you go very far on a single monolith.

Once the simulation cost of the game becomes higher, you'll start scaling out the application servers, yet keeping a single database.

Keeping a separate "login server" function (and database) from the "world server" and databases is reserved for the very largest MMOs, where there is both significant per-world instance state, as well as a very large user base.

Authentication tokens are pretty typical; this is akin to certain kinds of session cookies used for web services. An excerpt from Game Programming Gems about this is here: http://www.mindcontrol.org/~hplus/authentication.html

#5290977 How can I optimize Linux server for lowest latency game server?

Posted by hplus0603 on 10 May 2016 - 09:38 AM

Client sends TCP event on every shot, then server sends TCP shot event to every client.

Aren't shots some of the most timing-sensitive data? Why do you send them over TCP? If TCP sees loss of even a single packet, you have a guaranteed stall in delivery time.
It would likely be much less laggy to just send multiple "I shot at time step X" messages inside each of the next five UDP packets you send, for each shot fired.
If you drop five UDP packets in a row, then a bunch of other things will also be obviously wrong, so it's probably OK to lose the shots at that point.

#5290691 How can I optimize Linux server for lowest latency game server?

Posted by hplus0603 on 08 May 2016 - 01:10 PM

So, first:

TCP and UDP uses different ports, not the same one

The port number is like a street number. TCP and UDP are different streets. Whether the two sockets live on "123, TCP Street" and "123, UDP Street" or "1234, TCP Street" and "1235, UDP Street" doesn't matter. Even if they happen to have the same port number, they will not get confused in any way.
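You can demonstrate this with plain POSIX sockets: bind a TCP socket, ask the OS which port it got, then bind a UDP socket to the very same port number. This is a sketch, not production code (no error reporting beyond a bool):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

// Returns true if a TCP socket and a UDP socket can be bound to the
// same port number on localhost at the same time (they can, because
// TCP and UDP port spaces are independent).
bool bindTcpAndUdpSamePort(uint16_t &portOut) {
    int tcp = socket(AF_INET, SOCK_STREAM, 0);
    int udp = socket(AF_INET, SOCK_DGRAM, 0);
    if (tcp < 0 || udp < 0) return false;

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                          // let the OS pick a free TCP port
    if (bind(tcp, (sockaddr *)&addr, sizeof(addr)) != 0) return false;

    socklen_t len = sizeof(addr);
    getsockname(tcp, (sockaddr *)&addr, &len);  // discover the chosen port
    portOut = ntohs(addr.sin_port);

    // Bind the UDP socket to the very same port number: this succeeds.
    int rc = bind(udp, (sockaddr *)&addr, sizeof(addr));
    close(tcp);
    close(udp);
    return rc == 0;
}
```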


Linux server runs on VPS

VPS virtualization is not a low-latency technology. Running Linux on bare-metal hardware would likely improve the worst-case jitter, and worst-case jitter turns into worst-case latency.

Is my OS choice bad?

Many successful action games are hosted on Linux and it works fine, so I wouldn't think so. I'd be more worried about using Java, because the Garbage Collector may cause unpredictable jitter (which, again, turns into unpredictable latency.)

Here is a simple C++ program you can build and run in a terminal on your Linux server, to measure scheduling jitter. (Run it under load, of course)

#include <iostream>
#include <iomanip>
#include <algorithm>
#include <cstdlib>

#include <unistd.h>
#include <time.h>

double now() {
    timespec sp = { 0 };
    // Try CLOCK_MONOTONIC if you're not on modern Linux
    clock_gettime(CLOCK_MONOTONIC_RAW, &sp);
    return sp.tv_sec + (double)sp.tv_nsec * 1e-9;
}

int main() {
    double avg = 0;
    double num = 0;
    double max = 0;
    double min = 1e12;
    double first = now();
    for (int i = 0; i != 1000; ++i) {
        // sleep a random 1.0 .. 26.5 milliseconds, and measure how much
        // longer than requested the sleep actually takes
        double ts = (rand() & 255) * 0.0001 + 0.001;
        double start = now();
        usleep((useconds_t)(ts * 1e6));
        double end = now();
        double duration = end - start - ts;
        num += 1;
        avg += (duration - avg) / num;
        max = std::max(duration, max);
        min = std::min(duration, min);
    }
    double last = now();
    std::cout << "total measurement interval: " << (last - first) * 1000 << " milliseconds" << std::endl;
    std::cout << "measurement latency: " << min * 1000 << " milliseconds" << std::endl;
    std::cout << "average above measurement: " << (avg - min) * 1000 << " milliseconds" << std::endl;
    std::cout << "worst case (this is what matters): " << (max - min) * 1000 << " milliseconds" << std::endl;
    return 0;
}
Build it with:
g++ -o jitter jitter.cpp -O3 -Wall -Werror

#5290558 Collisions between players in multiplayer racing game

Posted by hplus0603 on 07 May 2016 - 11:47 AM

That's correct. Handling player/player collisions is one of the hardest things to do in networked game design.
The higher the latency, the more you have to be able to "hide" the bad effects of client mis-predictions.
If you're using a web browser and websockets over TCP, you're almost guaranteed to have a higher latency than a native, UDP-based game would, too, making the problem more noticeable.

#5289897 sending and receiving struct using eNet and boost.serialization

Posted by hplus0603 on 03 May 2016 - 10:58 AM

I think you really need to step up your debugging here and dive in.
First: What is the text/message of the exception?
Second: Set a breakpoint at the beginning of your closure. Figure out what it's doing when it's serializing, and why it decides to throw.
Third: I have no idea what "g" is because you're not showing that. Is it a reference to some local variable in the outer scope? If so, has that outer function already returned?

#5289780 sending and receiving struct using eNet and boost.serialization

Posted by hplus0603 on 02 May 2016 - 03:19 PM

You can serialize like that, with two changes:

1) You need to write the length of the array first

2) calling strlen() for each comparison in that loop will be unnecessarily inefficient
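To illustrate point 1) in plain C++ (boost::serialization does the equivalent for std::string automatically), here is a sketch of length-first serialization for a character array. Names are made up for illustration, and endianness/bounds handling is omitted:

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Write the length of the character data first, then the bytes, so the
// receiver knows exactly how much to read back.
void writeString(std::vector<uint8_t> &buf, const char *s) {
    uint32_t len = (uint32_t)std::strlen(s);   // call strlen() once, not per byte
    buf.insert(buf.end(), (const uint8_t *)&len, (const uint8_t *)&len + sizeof(len));
    buf.insert(buf.end(), (const uint8_t *)s, (const uint8_t *)s + len);
}

// Read back: length first, then exactly that many bytes.
std::string readString(const std::vector<uint8_t> &buf, size_t &offset) {
    uint32_t len = 0;
    std::memcpy(&len, buf.data() + offset, sizeof(len));
    offset += sizeof(len);
    std::string out((const char *)buf.data() + offset, len);
    offset += len;
    return out;
}
```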

#5289742 sending and receiving struct using eNet and boost.serialization

Posted by hplus0603 on 02 May 2016 - 08:54 AM

Compile your code with -O0 and -ggdb, and then set a breakpoint before the crashing line, and step through it.

std::string is safe, even though it uses pointers internally, because boost::serialization has a specialization for std::string (and most of the other STL containers.)

Did you look at the binary object and array support functions I recommended? Do you understand how they work and why they're needed? If not, you probably need to read up a bit on how C++ and pointers work in general before you'll be successful doing networking and other systems programming.

#5289595 router programming

Posted by hplus0603 on 01 May 2016 - 11:39 AM

You can open each network interface with a raw packet socket (socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL))), which will give you raw network interface data.
You can also get frames through a plain file descriptor using TUN/TAP devices (open("/dev/net/tun") and attach a tap interface to each physical one).
Make sure that Linux does not care about those interfaces -- from Linux's point of view, those interfaces should be closed/down/unconfigured!
Because you will now be doing the job of ARP and ICMP and IP and UDP and TCP and the "route" command and the DHCP client and all the rest.
There are also various intermediate solutions. You might want to check out the results of "linux user space networking."

#5289594 sending and receiving struct using eNet and boost.serialization

Posted by hplus0603 on 01 May 2016 - 11:34 AM

why boost.serialization doesn't serialize pointers?

Because the value of the pointer makes no sense outside your process, and boost::serialization can't know how many elements to serialize if it were to serialize the pointed-at data.

To do that, you have to tell it what to do. Look at:
boost::serialization::binary_object(void * t, size_t size);

template <class T> boost::serialization::make_array(T * t, std::size_t size);

#5289439 sending and receiving struct using eNet and boost.serialization

Posted by hplus0603 on 30 April 2016 - 11:33 AM

You can't serialize raw pointers. How would Boost know how long each pointed-to array is?
Use std::string if you want to use boost::serialization for strings.

#5289374 Does server's location affect players' ping?

Posted by hplus0603 on 29 April 2016 - 10:04 PM

Satellite "broadband" is still in use in some parts of the US, with 1600 ms downstream ping, and 56kbps modem upstream and a few hundred ms ping (depending on packet size.)
It is becoming less common, but the US is a large country and some areas are quite sparsely populated. Satellite internet is better than no internet in those locations ...

#5289310 Does server's location affect players' ping?

Posted by hplus0603 on 29 April 2016 - 02:06 PM

To answer the original question:

Yes, if you're hosting a highly latency-sensitive game worldwide, you should prepare for at least five server zones. In typical order of introduction for a US-based product:
- North America. (Pros put one on each coast.)
- Europe
- East Asia
- Australia
- South America

If you're based somewhere else, you'll typically use the same list, except put your own home zone at the top of the list :-)

For example, Brazil is a highly online market that currently doesn't monetize very well, but a "hit" in that country can be very profitable.
Their data center infrastructure is not as mature, though. You might want to look at places like Uruguay or Argentina if you decide to do something for that continent.

East Asia could mean any of several countries, such as South Korea, Japan, the Philippines, or even India, depending on your specific requirements and market contacts.
Don't attempt to go into China unless you have a native partner that you trust, and that is preferably well connected in the government.

#5289308 Does server's location affect players' ping?

Posted by hplus0603 on 29 April 2016 - 02:00 PM

the speed of light is roughly 300.000 km/s

The speed of electricity in copper is actually more like 150,000 km/s.
(Plus the latency of routers, which is often not insignificant.)

A lot of current backbone infrastructure does in fact use lasers, because it uses fiber optics.
The speed of lasers in fiber optics is more like 200,000 km/s.

This concludes our approximate-trivia-Friday event :-)

#5289156 Network Library that supporst RPCs

Posted by hplus0603 on 28 April 2016 - 04:45 PM

you have to build up your whole message in a temporary buffer first, then measure its length immediately before sending

You want to do that anyway, because you don't want to call send() multiple times for a single packet; system calls have a cost.
Just leave a few bytes empty at the beginning of your buffer, and after you know the size, pre-fill that at the head of the buffer, then call send() on the entire thing.
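A sketch of that trick, assuming a 2-byte length header in host byte order (a real protocol would pick a byte order explicitly): reserve the header space up front, append the body, then patch the length in just before the single send() call.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Build one wire frame: [2-byte length][body]. The header bytes are
// reserved first and filled in once the final size is known.
std::vector<uint8_t> frameMessage(const uint8_t *body, uint16_t bodyLen) {
    std::vector<uint8_t> buf(2);                  // placeholder header
    buf.insert(buf.end(), body, body + bodyLen);  // append the payload
    uint16_t len = (uint16_t)(buf.size() - 2);    // length of the body only
    std::memcpy(buf.data(), &len, 2);             // patch the header in place
    return buf;  // then: send(sock, buf.data(), buf.size(), 0) -- one call
}
```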

I still have to work with a buffer

Yes; any TCP receiver has to call recv() into the end of whatever buffer it keeps around, and then try to decode as many whole packets as possible out of that buffer, and then move whatever is left to the beginning of the buffer.
(Cyclic buffers are sometimes convenient for this.)
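The receive side of that loop looks roughly like this sketch, with recv() replaced by an "incoming bytes" call so it runs standalone; it assumes the 2-byte length header from above, stored little-endian (host order on x86):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Append incoming bytes to a buffer, peel off every complete
// length-prefixed packet, and keep the trailing partial packet
// around for the next call.
struct Receiver {
    std::vector<uint8_t> buf;
    std::vector<std::vector<uint8_t>> packets;

    // In real code, 'data' would be what recv() just wrote into the
    // end of 'buf'.
    void onBytes(const uint8_t *data, size_t n) {
        buf.insert(buf.end(), data, data + n);
        size_t offset = 0;
        while (buf.size() - offset >= 2) {
            uint16_t len = 0;
            std::memcpy(&len, buf.data() + offset, 2);
            if (buf.size() - offset - 2 < len) break;   // incomplete packet
            packets.emplace_back(buf.begin() + offset + 2,
                                 buf.begin() + offset + 2 + len);
            offset += 2 + len;
        }
        // Move whatever is left to the beginning of the buffer.
        buf.erase(buf.begin(), buf.begin() + offset);
    }
};
```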

how do I know which bytes belong to which parameters?

Decode each argument in order. You presumably know the type of each argument.
The simplest method: If the type is an int, it's 4 bytes. If the type is a double, it's 8 bytes. If the type's a string, then make the rule that strings are less than 256 bytes long, and send a length byte, followed by that many bytes.
When you have decoded the correct number of arguments, you know that the next part of your message follows.
If you support variable number of arguments, first encode the number of arguments to decode, using another byte.

Less simple methods will decode integers as some kind of var-int, floats using some kind of quantization (based on what the particular value is,) strings using a dictionary that gets built up over the connection, etc.