WombatTurkey

Instance based game with multiple nodejs instances


Hello again!

 

I have your average nginx load balancer with 4 nodejs instances behind it, and one Redis server to share certain memory.

 

My game is an instance game, similar to Diablo 2. Players can create "games" if you will and other players can join them.

 

The game data is stored locally on the node of the player who created the game (I was going to store the game data in Redis, but I have the following dilemma):

 

Say a game is created by a user on node instance #1, and a user on node instance #2 wants to join it. This is where I get confused. I want the players to communicate with each other locally, but they can't because they are not on the same instance. How exactly can I work around this?

 

These are some solutions I found:

 

1) Use ZeroMQ or Nanomsg to send the data locally between processes 

 

2) Use the native Node.js IPC mechanism to send the data locally between processes

 

Now, this worries me more, because I have a feeling I am doing it wrong. When a player moves around the map, he needs to send data to the other 5 players in that game (a simple POS packet). That data now has to be sent through ZeroMQ (pub/sub) or through local sockets (IPC). Isn't this going to be a lot of overhead? Are there other ways to scale an instance-based game with multiple Node instances? Or maybe this is the right way, and I'm underestimating how fast local sockets (or ZeroMQ) are?
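For a sense of the payload involved, here is a minimal sketch of a fixed-size POS packet using Node's built-in Buffer. The field layout (16-bit player id plus two 32-bit floats) is an assumption for illustration, not something from this thread:

```javascript
// Encode a position update into a 10-byte buffer (assumed layout).
function encodePos(playerId, x, y) {
  const buf = Buffer.alloc(10);      // 2 + 4 + 4 bytes
  buf.writeUInt16BE(playerId, 0);    // player id
  buf.writeFloatBE(x, 2);            // x coordinate
  buf.writeFloatBE(y, 6);            // y coordinate
  return buf;
}

// Decode the same layout back into an object.
function decodePos(buf) {
  return {
    playerId: buf.readUInt16BE(0),
    x: buf.readFloatBE(2),
    y: buf.readFloatBE(6),
  };
}
```

At roughly 10 bytes per update, even thousands of updates per second is a trivial amount of data over a local socket; the per-message overhead (syscalls, framing) usually matters more than the payload itself.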

Edited by WombatTurkey


There are many solutions to the "how do I talk to the right thing" problem.

Easiest is to declare that users connect to some random node. Then there's a registry of instances. Instances run on some random node. When a user talks to an instance, you create a connection from the user's process to the game instance process by looking up the game instance in the registry.

This is a pretty well researched area. You mention node-IPC (which I'm not a fan of) and ZeroMQ (which I'm a bit more a fan of) but there are tons of other connection methods, including bare TCP sockets, Thrift, etc.

Technologies outside the Node ecosystem may solve this even more elegantly, such as Erlang which has the entire concept built-in.

The nice part here is that, while you can start out by saying "game instances and user connections are served by the same kinds of processes," you actually have two separate kinds of service that could be split apart if you need to, assuming that you actually stick to the clean separation, rather than hard-coding things like "the game instance is always local to a process of at least one player."

 

The next question is "what do you use as the registry of the mapping from game-instance to process-address?"

Common distributed solutions include etcd and zookeeper or even just plain DNS SRV records (using a private network.) You can also use a central solution like a simple database.


Too smart for me hplus, some stuff went over my head on that one! :P

 

I just don't understand this part: "Instances run on some random node. When a user talks to an instance, you create a connection from the user's process to the game instance process"

 

I can store the game's instance id in Redis for that user, and look it up before the player joins? Then I can connect the user to that instance? Is that kind of what you're saying?

 

The problem then is that players will be re-creating games and joining them 24/7. What if a player is sending another player a message while they are reconnecting or connecting to that instance? To be brutally honest, it seems like Node.js is not right for this type of game, which is a bummer because I've written over 100k LOC in our game server so far. I just never really thought about scaling until now, and at this point it would take at least half a year to convert. I don't have the drive anymore. I've looked at Elixir...


Hm. Someone on Reddit told me that opening two connections would fix the "player sending a message while reconnecting" problem: basically, a "central server" for notifications/chat, and then connect to the proper node instance in the background for the core game server. Not sure about this, but it sounds ideal? I could use Redis to store which instance each game is on (as hplus said, a simple database could do).


So, I'm assuming that you already have a load balancer, which spreads incoming user connections to some number of nodes (hosts) that run processes which "deal with connected users" (call these "user processes").

 

Also I assume that those same processes can also talk to each other, presumably on some other port than the main incoming-user-connections port. (And presumably firewalled off!)

 

Also I assume that you manage many user connections in a single process, because that saves on overhead per-process.

 

So far, there exists:

- incoming connections from users

- going to some number of user-serving processes

- a mapping between "user" and "user-serving process"

 

Now, a user wants to create some game instance. I propose that the simplest way to do that is to create a second kind of process, a "game instance process."

I assume that each "game instance process" can manage more than one game instance at the same time -- again, because that's typically how you build Node services.

You would have some function that "selects one game instance node/process, and creates a new game instance on it." It would also register that instance in some database.

You then return that game instance ID to the creating player, and the creating player's user-process would make a connection between that player, and the game instance on the game-instance-process server.

 

Now, when a second user wants to join the same game instance, the user-server-process that manages that user would find the game-instance-process for the game-instance-id, and connect that second user to that process.

In-game chat would go through the game-instances.

If you want to support 'disconnect/reconnect' then you would have another database of "user id" to "game instance currently in," and when a user connects, the user-connection process would look this up, and if it's not empty, immediately (re-)connect the user to the game instance.
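The create/join/reconnect bookkeeping described above can be sketched with in-memory Maps standing in for the real registry (Redis, etcd, ZooKeeper). All names and addresses here are hypothetical:

```javascript
// Hypothetical game-instance processes already running somewhere.
const gameProcesses = ['10.0.0.1:4000', '10.0.0.2:4000'];
const gameToProcess = new Map();  // game-instance-id -> process address
const userToGame    = new Map();  // user-id -> game-instance-id
let nextGameId = 1;

function createGame(creatorId) {
  const gameId = nextGameId++;
  // Pick a game-instance process (round-robin here; load-based in practice).
  const proc = gameProcesses[gameId % gameProcesses.length];
  gameToProcess.set(gameId, proc);
  userToGame.set(creatorId, gameId);
  return gameId;
}

function joinGame(userId, gameId) {
  const proc = gameToProcess.get(gameId);
  if (!proc) throw new Error('no such game instance');
  userToGame.set(userId, gameId);
  return proc; // the user-process connects the player here
}

// Disconnect/reconnect: look up where the user already was.
function reconnect(userId) {
  const gameId = userToGame.get(userId);
  return gameId === undefined ? null : joinGame(userId, gameId);
}
```

In a real deployment every `set`/`get` above becomes a registry read or write, and the returned address is where the user-process opens (or reuses) a connection.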

 

If you now want to add arbitrary user-to-user chat, then you need a separate database of "user-id" to "user-server-process."

When user B wants to send a message to user C, user B's user-server-instance will look up user C, and send a message to that user-server-instance.

 

The main problem with keeping this data in Redis is that, if a process crashes, Redis doesn't clean up after you.

And if you set an expiry time on this data, then you have to keep refreshing the data while the user is connected. Let's say you expire data after 5 minutes -- this means you have to refresh the data every 4 minutes or so, which adds a not insignificant additional write load on your Redis instance.
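The expire-and-refresh trade-off can be modeled with explicit timestamps, using a plain Map standing in for Redis (names are hypothetical; TTL numbers match the example above):

```javascript
const TTL_MS = 5 * 60 * 1000;     // entries expire after 5 minutes
const REFRESH_MS = 4 * 60 * 1000; // so each process must re-write every ~4 minutes

const store = new Map(); // key -> { value, expiresAt }

// Register (or refresh) a key; 'now' is passed in so expiry is testable.
function register(key, value, now) {
  store.set(key, { value, expiresAt: now + TTL_MS });
}

// Look up a key, treating expired entries as absent.
function lookup(key, now) {
  const entry = store.get(key);
  return entry && entry.expiresAt > now ? entry.value : null;
}
```

The cost hplus describes is the timer each process needs to call `register` again every `REFRESH_MS` for every key it owns; an ephemeral ZooKeeper key makes that refresh (and the crash-cleanup) the registry's job instead.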

This is why I prefer something like Zookeeper, which can create "ephemeral" keys, which go away if the connection to Zookeeper that created the key goes away.

But, either way can work.

 

Now, it turns out that, in most systems like these, each "user-server-process" will have to talk to each "game-instance-process" on average, and if you do cross-system chat, each user-server-process will also talk to each other user-server-process, as well as everything talking to the central database. This will scale as N-squared in number-of-processes. Luckily, because you can typically do thousands of users per process, and N=100 processes still keeps N-squared at a reasonable size, you should be able to do 100,000 online players without too much trouble, and if you make sure to optimize the implementation of the various bits, you can probably do 10,000 users per process and N=1000 processes, to support games that are the largest in the world :-)
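The N-squared growth is easy to sanity-check with a one-liner:

```javascript
// Distinct process-to-process TCP links in a full mesh of n processes.
function meshConnections(n) {
  return n * (n - 1) / 2;
}
```

With N=100 processes that is 4,950 links cluster-wide, or 99 per process, which is entirely manageable; at N=1000 it is ~500k links, which is why very large clusters start partitioning who talks to whom.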

 

Node has the drawback that you can only run a single thread per process. This means that you'll need to run multiple processes (and thus multiple server-instances) per physical host, to best use available cores. This is generally accomplished by mapping each server-instance to a separate port. Thus, the look-up table to find a particular server-instance needs to return both a host (internal IP) and a port number. Similarly, the load balancer for incoming user connections will be configured with multiple back-end processes to load balance to, re-writing the publicly exposed port to whatever the internal port number is for each of the instances ("reverse" or "destination" NAT if your LB is a router; just an internal TCP stream if your LB is something like HAProxy.)

 

One of the best features of Erlang/OTP is that almost all of the features I talk about above (except for the load balancer) are built-into the software already!

You make sure to configure the different Erlang nodes appropriately with their roles, and find each target server using the built-in Erlang process discovery/registry functions, and you'll do great!

With Node, you have to build a bunch of this yourself (as you already discovered.)

 

[Image: typical-game-server-cluster.jpg]


If there were a gold-like system on this site, I would have given you some! This is starting to make sense now, thank you so much for explaining everything! I'm learning a lot, and this information is invaluable.

 

*I just wrote about 600 words, then deleted it because I read your post wrong.* I didn't realize you were actually adding a second physical game process that is linked between the user processes. That was my reading comprehension at fault.

 

 

 

Now, a user wants to create some game instance. I propose that the simplest way to do that is to create a second kind of process, a "game instance process."

I assume that each "game instance process" can manage more than one game instance at the same time -- again, because that's typically how you build Node services.

You would have some function that "selects one game instance node/process, and creates a new game instance on it." It would also register that instance in some database.

You then return that game instance ID to the creating player, and the creating player's user-process would make a connection between that player, and the game instance on the game-instance-process server.

 

BOOM! This is the solution! I'm just not sure how to do this part: "You then return that game instance ID to the creating player, and the creating player's user-process would make a connection between that player, and the game instance on the game-instance-process server."

 

When you say a connection between that player, you're not talking about a P2P system, right? You're talking about the node instances connecting to each other? As an example, I edited the graph:

 

[Image: HLlGINe.png]

 

I'm just not sure how Turkey could connect to Muffin's instance? What do you mean by "between that player"? A reconnect? If it's a reconnect, why have the separate game processes in the first place? I'm a bit lost, as you can probably tell, but I'm just trying to grasp all of this.

I'm really starting to think twice about Node and just moving to Elixir before my head explodes :lol:


I didn't realize you were actually adding a second physical game process

 

So, technically, you can actually have the same physical process both play the role of "user-instance-runner" and the role of "game-instance-runner," and that process would be registered twice in the registry (once for each role.) Personally, I find it easier to keep them apart, because that makes each process do less, and thus be easier to debug.

 

Regarding your questions -- the "connection between players" for gameplay purposes should happen in the game process. Because Muffins and Turkey are both connected to Game Instance 47, they will both see what happens in game instance 47. For example, if Muffins says "I am frosted" then that could be an event emitted by that game instance, and each connected player receives the event "Muffins says 'I am frosted'"

 

The user/user connection only needs to happen for some messaging channel that is not game-specific, such as a cross-game "whisper" system or whatnot.

 

In practice, you will have each of the user-instance-processes listen to two ports: one port for incoming connections for users, and one port for incoming connections for other user-instance-process servers.

Each game-instance-process will listen on one port, for incoming user-instance-connections to the game-instance-process.

You only really need one TCP connection between process A and process B, and you can funnel the actual intended target of a message ("message for user X" or "input to game Y") as part of the packet being sent along the connection.
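That funneling can be done with a small framing layer like this sketch. Newline-delimited JSON is an assumed wire format here (a length-prefixed binary frame would be the higher-performance choice), and all names are hypothetical:

```javascript
// Wrap a payload with its intended target, e.g. { user: 'X' } or { game: 47 }.
function envelope(target, payload) {
  return JSON.stringify({ target, payload }) + '\n';
}

// Returns a chunk handler that accumulates partial TCP chunks and invokes
// onMessage once per complete newline-terminated message.
function makeParser(onMessage) {
  let pending = '';
  return (chunk) => {
    pending += chunk;
    let idx;
    while ((idx = pending.indexOf('\n')) !== -1) {
      onMessage(JSON.parse(pending.slice(0, idx)));
      pending = pending.slice(idx + 1);
    }
  };
}
```

The receiving process reads `target` off each message and routes it to the right user socket or game instance, so one A-to-B connection carries everything between those two processes.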

 

The kinds of registry information you will need include:

 

user-server-process-id -> host-and-port

game-server-process-id -> host-and-port

game-instance-id -> game-server-process-id

user-id -> game-instance-id

user-id -> user-server-process-id (if you support whisper)

 

So, in the case where user A wants to join "the game of user B" (as opposed to "game 47") then the look-up would be:

user-id(B) -> game-instance-id(47)

game-instance-id(47) -> game-server-process-id(1234)

game-server-process-id(1234) -> host-and-port(10.1.2.3:4567)
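Chained together, that look-up is just a few registry reads. The Maps below stand in for whatever registry backend you pick, pre-seeded with the example values above:

```javascript
// Registry contents from the worked example; a real backend would be
// Redis/etcd/ZooKeeper rather than in-process Maps.
const userToGame    = new Map([['B', 47]]);
const gameToProcess = new Map([[47, 1234]]);
const processToAddr = new Map([[1234, '10.1.2.3:4567']]);

// user-id -> game-instance-id -> game-server-process-id -> host-and-port
function findGameAddress(userId) {
  const gameId = userToGame.get(userId);
  const procId = gameToProcess.get(gameId);
  return processToAddr.get(procId) ?? null;
}
```

A `null` result means user B is not currently in any game, which is also the signal the reconnect logic uses.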

 

Typically, you will have a registry of connections already made in each server process, so if someone wants to connect to a server that you're already talking to, you just forward their messages over that connection.

In Node.js, the way to implement "raw" TCP sockets is to use net.createServer() and then server.listen(), and for each incoming connection you get a socket, which you can bind data events to.

To connect to "raw" TCP sockets you use net.Socket(), socket.connect(), and binding data events.


Aww, that's funny you mentioned net.createServer(); I was just testing it earlier for process communication, with raw Unix sockets on some random VPS node. The sad part is, I'm only getting around 9k msgs per second, which seems pretty low, and I'm scared that would cause latency issues in the long run. But then again, the VPS I'm testing on is around $3, lol. (Using this benchmark with setImmediate.)

 

 

Because Muffins and Turkey are both connected to Game Instance 47, they will both see what happens in game instance 47. For example, if Muffins says "I am frosted" then that could be an event emitted by that game instance, and each connected player receives the event "Muffins says 'I am frosted'"

 

I totally forgot they are both linked to that game instance... That's what was confusing me. Again, you deserve more credit for what you do here. This kind of information is invaluable, and I really appreciate it.

 

Edit: I have bookmarked this thread as well so I can look back and double check stuff.
