Understanding a grid algorithm for a 2D multiplayer game that distributes workload across CPU cores


I've come across a multiplayer game framework, https://github.com/jondubois/iogrid, which features an algorithm to efficiently handle the movement and collisions of many players on a huge map. It divides the map into smaller cells and creates child processes for those cells (because JavaScript is single-threaded), then distributes the game workload across those processes. Each cell is handled by a child process, but all of them share the same game context. This way, on a 4-core or 8-core CPU, I can use all the cores by creating new worker processes, and it should reduce the burden on the master process. In C++ or Java we might create new threads instead, but since JavaScript code runs on a single thread, creating processes serves a similar purpose.
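To make sure I understand the idea, here is a rough sketch of how I picture it in Node.js. This is not iogrid's actual code; the cell size, grid dimensions, worker script name and message format are just my own placeholders.

const { fork } = require('child_process');

const CELL_SIZE = 1000;                 // world units per cell (made up)
const GRID_COLS = 4, GRID_ROWS = 2;     // 8 cells -> 8 worker processes

// One child process per cell, each running the same cell-simulation script.
const workers = [];
for (let i = 0; i < GRID_COLS * GRID_ROWS; i++) {
  workers.push(fork('./cell-worker.js', [String(i)]));
}

// Route a player update to whichever worker owns the cell under (x, y).
function cellIndex(x, y) {
  const col = Math.min(GRID_COLS - 1, Math.max(0, Math.floor(x / CELL_SIZE)));
  const row = Math.min(GRID_ROWS - 1, Math.max(0, Math.floor(y / CELL_SIZE)));
  return row * GRID_COLS + col;
}

function routePlayerUpdate(player) {
  workers[cellIndex(player.x, player.y)].send({ type: 'playerUpdate', player });
}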

However, I am not familiar with the algorithm. I am not sure whether this is a standard or better way to handle the multiplayer game scenario, but from the way it is described it should be more efficient than running the game server on a single core of a modern computer. If that is the case, I guess the game development community must have an algorithm that does the same thing. What is the name of this algorithm, and how can I find articles/materials about it? At the moment I find it hard to understand, particularly how to deal with overlapping cells and avoid duplicate calculations when players frequently move between two adjacent cells.

I hope this is not too programming-language specific, because we all need to deal with multi-core CPUs and distribute workload to improve game performance at some point.


The "sharing of context" for JavaScript engines may be a significant performance bottleneck, because any mutations need to be somehow shared with the other processes, preferrably in a way that is thread safe and free from race conditions and fair. In general, multi-threading, javascript, and mutable shared state don't all go together.

Splitting workload across multiple cores (and, by extension, multiple physical hosts) is what all games do to scale their server capacity. Exactly how this works depends on the gameplay. For vast virtual worlds and MMOs where the draw is many players together, the goal is to make player simulation cheap, entity simulation very cheap, and the "physics" needed on the server as simple as possible. Additionally, entities that are "on the border" of some simulation area are "mirrored" to the neighboring server, so that interactions can still happen. Fast-paced, physical first-person shooters will instead split the world into lots of small processes with 2-50 players, each of which "fits" in a single process, and you can't "see" the people in the other processes. For social games, the shared state is stored in a database and seldom mutates -- check out state from the database, work on it, check it back in. This scales almost indefinitely, but doesn't allow very frequent or low-latency updates.
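As a rough illustration of the "mirrored on the border" idea above, a minimal sketch might look like this; the margin value, the sendToNeighbor callback and the entity/bounds shapes are assumptions for illustration, not any particular engine's API.

const MIRROR_MARGIN = 50;   // how close to a cell edge before mirroring (assumed)

// Each tick, copy any entity close to a cell edge to the neighboring
// process so interactions across the border can still be resolved.
function mirrorBorderEntities(entities, cellBounds, sendToNeighbor) {
  for (const e of entities) {
    if (e.x - cellBounds.left   < MIRROR_MARGIN) sendToNeighbor('west',  e);
    if (cellBounds.right - e.x  < MIRROR_MARGIN) sendToNeighbor('east',  e);
    if (e.y - cellBounds.top    < MIRROR_MARGIN) sendToNeighbor('north', e);
    if (cellBounds.bottom - e.y < MIRROR_MARGIN) sendToNeighbor('south', e);
  }
}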

Look at this from another angle: in general, JavaScript is a terrible language for servers. Not only is it type checked at run time (so you can't verify correctness ahead of time), but its type system is also extremely weak; you don't even get tag checks at runtime. Want to pass a Rect into a function that takes a Point, and assume that "x" and "y" do the "right thing"? Go right ahead! Want to pass a Player into a function that expects an NPC, and assume that the "name" and "chat" properties do the right thing? Works fine! Until you need to figure out who does what with which data, and you can't, because it's one big mess of interacting, untraceable data. The fact that you have to "do special things" just to take advantage of multiple cores is another weakness of the language -- it's much better to choose a language where correctly using machine resources is not special.
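A tiny illustration of the kind of silent mismatch described above (the function and object names here are made up):

// The function expects a point-like { x, y } object...
function distanceFromOrigin(point) {
  return Math.sqrt(point.x * point.x + point.y * point.y);
}

// ...but nothing stops a rectangle from flowing in, and it "works" silently.
const rect = { x: 10, y: 20, width: 100, height: 50 };
console.log(distanceFromOrigin(rect));   // runs fine, no warning, wrong meaning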

I have written high-performance multi-core systems in C++, Java, C#, Erlang, and Haskell. I've also jammed Python and JavaScript into multi-process systems, but they're much harder to do that with, because they are single-threaded by nature. The overhead is bigger, and the risk of bugs is higher. If I had to do it again, I'd probably choose C++ or Erlang first, and Haskell or C# second.

 

enum Bool { True, False, FileNotFound };
1 hour ago, hplus0603 said:

In general, multi-threading, JavaScript, and mutable shared state don't all go together.

Thanks. Although you don't directly explain the algorithm, I get what you mean and I totally agree with you. When I started writing my game server I just wanted to see how far JavaScript could go on the server side. Curiosity is driving me forward, and there are a lot of tools and libraries available to keep me going. At first I was very happy that I could code a game server in JavaScript, but it is becoming harder and harder as I try to improve the server's performance. Maybe I could rewrite my game in a typed language with better support for multi-threading in a much shorter time. I am not very familiar with multi-threaded programming, though. The other language I have used before is Java. Maybe I should pick up Erlang, or Go, and then rewrite my game server?

If your goal is high-performance multi-core game servers, then yes, any of C++, Erlang, Java, or Go will probably be better than JavaScript, assuming your general approach is sound (i.e., no inefficient searches across all objects each time you need a property, no n-squared collision/distance look-ups per entity each step, and so forth).
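As a concrete, hypothetical example of avoiding the n-squared look-ups mentioned above, a simple spatial-hash broad phase might look like the sketch below; the cell size and names are illustrative, and a real version would also pair each cell with its neighboring cells so objects straddling a cell border are not missed.

const CELL = 64;   // broad-phase cell size in world units (illustrative)

// Bucket entities by the cell containing their position, once per tick.
function buildBuckets(entities) {
  const buckets = new Map();
  for (const e of entities) {
    const key = Math.floor(e.x / CELL) + ':' + Math.floor(e.y / CELL);
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(e);
  }
  return buckets;
}

// Only pair up entities that share a cell, instead of testing all n*n pairs.
function candidatePairs(buckets) {
  const pairs = [];
  for (const cell of buckets.values()) {
    for (let i = 0; i < cell.length; i++) {
      for (let j = i + 1; j < cell.length; j++) {
        pairs.push([cell[i], cell[j]]);
      }
    }
  }
  return pairs;   // run the expensive narrow-phase test on these only
}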

 

enum Bool { True, False, FileNotFound };

On the flip side, if you're working on a small project that doesn't need multiple compute cores or threading, something that doesn't particularly tax a single processor, and if you're already comfortable with JavaScript, then the language can work fine.

It is quite common to read about people working on beginner projects describing how they are struggling to make their system extensible, trying to include tools like Redis just in case their project becomes successful. Their program might suddenly take off and require millions of transactions per second, or become popular enough to need enormous bandwidth or memory. So they spend their time on complex systems designed for enormous businesses rather than the small solution that would meet the project's needs.

Are you certain that processing your map is really a problem? Have you already used the traditional tools of quadtrees for spatial partitioning and broad-phase/narrow-phase collision processing? Are you sure they won't work in your case? They worked for a lot of 3D games even back when computers were running at 600 MHz, 300 MHz, or even slower, so are you really sure they'll overwhelm your 4 GHz processor?
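For reference, a minimal sketch of the quadtree idea mentioned above: insert point entities, then query a rectangle around each player to get narrow-phase candidates. The capacity, bounds shape and method names are illustrative choices, not any particular library's API.

class Quadtree {
  constructor(bounds, capacity = 8) {
    this.bounds = bounds;        // { x, y, w, h }
    this.capacity = capacity;    // max items before subdividing
    this.items = [];
    this.children = null;
  }

  static containsPoint(b, p) {
    return p.x >= b.x && p.x < b.x + b.w && p.y >= b.y && p.y < b.y + b.h;
  }

  static intersects(a, b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
  }

  subdivide() {
    const { x, y, w, h } = this.bounds;
    const hw = w / 2, hh = h / 2;
    this.children = [
      new Quadtree({ x,         y,         w: hw, h: hh }, this.capacity),
      new Quadtree({ x: x + hw, y,         w: hw, h: hh }, this.capacity),
      new Quadtree({ x,         y: y + hh, w: hw, h: hh }, this.capacity),
      new Quadtree({ x: x + hw, y: y + hh, w: hw, h: hh }, this.capacity),
    ];
  }

  insert(entity) {                          // entity has { x, y, ... }
    if (!Quadtree.containsPoint(this.bounds, entity)) return false;
    if (!this.children && this.items.length < this.capacity) {
      this.items.push(entity);
      return true;
    }
    if (!this.children) this.subdivide();
    for (const child of this.children) {
      if (child.insert(entity)) return true;
    }
    this.items.push(entity);                // edge case: keep it at this node
    return true;
  }

  query(range, found = []) {                // range is { x, y, w, h }
    if (!Quadtree.intersects(this.bounds, range)) return found;
    for (const e of this.items) {
      if (Quadtree.containsPoint(range, e)) found.push(e);
    }
    if (this.children) {
      for (const child of this.children) child.query(range, found);
    }
    return found;
  }
}

// Usage: rebuild the tree each tick, then query a small box around each
// player instead of testing it against every other entity on the map.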

Only you know your situation and project details, but I'd caution that planning for the abstract future is usually a bad idea.  Write code that addresses the problem before you, and don't underestimate the raw processing power of a single 4GHz processor.

9 hours ago, frob said:

Are you certain that processing your map is really a problem?

My game runs extremely smoothly on my local dual-core computer, but on a remote 4-core VPS, after running for 20 hours it is slow as hell and CPU usage is at 100%. I can't even log in to play the next day. It is hard to diagnose because working on a remote server is very inconvenient.

Sounds like you have a bug!

Presumably you can diagnose it sooner on a local machine with less RAM and slower CPU. Try running it on a Raspberry Pi, that ought to make it hit the limit sooner.

 

enum Bool { True, False, FileNotFound };

The "after 20 hours" sounds like a bug to me as well. That's typical of a resource leak. If it runs fine at first, that's evidence that it can run just fine in the general case.

29 minutes ago, frob said:

The "after 20 hours" sounds like a bug to me as well. That's typical of a resource leak. If it runs fine at first, that's evidence that it can run just fine in the general case.

Yes, it is using more than 90% CPU most of the time. Sometimes 126%. I don't know how CPU usage can be 126%, though. Memory usage never changes. However, because the V8 engine is built on top of C++, profiling my game's code often comes down to how deeply I understand how the V8 engine works. This is getting a bit off-topic, but I think I am standing at a fork in the road: whether to continue development in JavaScript or rewrite the backend in a more concurrency-friendly language.

4 hours ago, hplus0603 said:

Sounds like you have a bug!

Presumably you can diagnose it sooner on a local machine with less RAM and slower CPU. Try running it on a Raspberry Pi, that ought to make it hit the limit sooner.

 

I wish I had that many devices at hand. Not to mention I would have to deal with a new set of problems installing things on a Raspberry Pi.

10 hours ago, caymanbruce said:

Sometimes 126%. I don't know how CPU usage can be 126%, though.

In the Windows Task Manager, it means your CPU has extensions that allow it to automatically overclock when there is processing work to do and it won't risk the hardware.

In *nix process tools like ps or top, it means you are doing more than one CPU's worth of work, which is rather common in parallel processing.

 

