Redis Scaling with Monster AOE

Started by
4 comments, last by WombatTurkey 8 years, 5 months ago

I am using Node.js and plan to use the Redis pub/sub system to scale (just in case). As you know from games like Path of Exile, Diablo 3 and Devilian, when you pull 20, 30 or even half the map's worth of monsters and use your skills to kill them all, you can imagine the amount of data that needs to be sent over the pipe.

My dilemma is this:

Since my game is instance-based, the monsters are not all loaded at once when a player creates a game. That was a bad design choice I made earlier, and I got help from this community: I learned that mobs need to be loaded dynamically based on the player's position in the map. I have that part working great now. (It saves memory as well: if a player creates a game and just stands in town, there is no need to keep the other 155 mobs in memory.) I think this is the right way to do it?
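For what it's worth, the dynamic-loading approach described above can be sketched in a few lines. Everything here (the `SPAWN_RADIUS` value, the spawn-table and mob shapes) is illustrative, not from the actual game:

```javascript
// Illustrative sketch of position-based mob activation (interest management).
// SPAWN_RADIUS and the spawn-table shape are assumptions, not from the thread.
const SPAWN_RADIUS = 30;

function dist2(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y;
  return dx * dx + dy * dy;
}

// mobSpawns: static spawn table loaded with the map.
// active: Map of mobId -> live mob instance, mutated in place.
function updateActiveMobs(player, mobSpawns, active) {
  const r2 = SPAWN_RADIUS * SPAWN_RADIUS;
  for (const spawn of mobSpawns) {
    const inRange = dist2(player, spawn) <= r2;
    if (inRange && !active.has(spawn.id)) {
      // player walked into range: instantiate the mob now, not at map load
      active.set(spawn.id, { id: spawn.id, hp: spawn.hp, x: spawn.x, y: spawn.y });
    } else if (!inRange && active.has(spawn.id)) {
      active.delete(spawn.id); // player left: free the memory again
    }
  }
  return active;
}
```

Run per player on movement (or on a timer); only the spawn points near the player ever become live objects.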

My second dilemma is this:

When a player uses those AOE skills that hit multiple monsters, with the game being server-side, the data across the pipe will be kind of intense. All the damage numbers need to be sent to the 4 to 6 players in the player's range (it could just be an array of 40 different numbers), but that's still a lot of data going over the wire. Factor in multiple open games and a high volume of online players, and I'm pretty sure Node.js will need to be scaled. If you go to play.treasurearena.com and check out their websocket frames, that's basically how I am going to do it in the end.
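To make the "array of 40 numbers" concrete, here's a hedged sketch of batching one AOE hit into a single websocket frame. The message shape (`t`, `s`, `c`, `h`) is my assumption, not how PoE, D3, or Treasure Arena actually encode it:

```javascript
// Hedged sketch: batch one AOE cast into a single frame instead of one
// message per damage number. The { t, s, c, h } shape is an assumption.
function encodeAoeHit(skillId, casterId, hits) {
  // hits: flat [mobId, damage] pairs keep the JSON small and easy to parse
  return JSON.stringify({ t: "aoe", s: skillId, c: casterId, h: hits });
}

// 40 monsters hit by one cast -> one frame of a few hundred bytes,
// broadcast once to each of the 4-6 players in range.
const hits = Array.from({ length: 40 }, (_, i) => [i, 100 + i]);
const frame = encodeAoeHit(7, 1, hits);
```

One frame per cast per recipient scales much better than one frame per damage number.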

This post is kind of all over the place and I'm not even sure what category it belongs to. I'm just curious: for games like Diablo 3 or Path of Exile that rely on heavy AOE skills, is an in-memory data structure store like Redis used?


From what I can tell with 3 minutes of reading, Redis Publish/Subscribe is a means to let DB clients listen for change sets published on certain channels. This is a feature you might use in your game server's implementation to keep on top of changes to the world which are relevant to (i.e. need sending to) connected clients; it is not something you should ever expose to the game client.

Independent of your server database, your server needs to send information to your clients such as:

* Mob X of description D is Relevant (includes all info like type, skin, position, AI state, current target) - indicates a mob has entered the client's data frustum

* Mob X is Irrelevant - indicates a mob is gone, whether it moved out of range, the client moved out of range, its lootable body has dissolved, etc

* Mob X Updated to D - new information about the mob. Generally, include all the high-priority fields like position, AI state, current target... and then generally include all fields that have changed in the past couple seconds, regardless of whether they've changed since the last Mob-Update packet sent to this particular client. Keeping precise monitor-change lists can get really expensive.

* And chat-channel messages, response messages (as in responses to queries, such as "what's on Lootable Mob X I can take?"), cutscene or dialog or conversation messages, etc.

So, you need to figure out how to encode each message, how to gather that information from the server data and relay it to clients, and how to fill in the gaps at the client's end so it looks like a smooth, cohesive experience.
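The three mob messages above could be encoded along these lines. The short type tags (`"mob+"`, `"mob-"`, `"mob~"`) and field names are assumptions, just to show one possible wire shape:

```javascript
// One possible wire shape for the three mob messages; type tags and
// field names are illustrative assumptions.
const msgRelevant = (mob) =>
  JSON.stringify({ t: "mob+", id: mob.id, kind: mob.kind,
                   x: mob.x, y: mob.y, ai: mob.ai, tgt: mob.tgt });

const msgIrrelevant = (id) =>
  JSON.stringify({ t: "mob-", id });

// High-priority fields always included; `extra` carries the other fields
// that changed in the past couple of seconds.
const msgUpdate = (mob, extra = {}) =>
  JSON.stringify({ t: "mob~", id: mob.id,
                   x: mob.x, y: mob.y, ai: mob.ai, tgt: mob.tgt, ...extra });
```

The client dispatches on the type tag and fills in interpolation between updates.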

Generally, individual mobs never actually hit the disk server-side; each server keeps the mobs within its frustum in memory only. So, yes, they'd be created, updated, monitored, live, and die in memory structures, but not in a clusterable database like Redis. If the server is shut down, all clients are thrown out anyway; you can re-spawn the mobs when the server comes back up. Obviously unique mobs which never respawn, especially ones which need to be in only one place at a time, are an exception.

RIP GameDev.net: launched 2 unusably-broken forum engines in as many years, and now has ceased operating as a forum at all, happy to remain naught but an advertising platform with an attached social media presence, headed by a staff who by their own admission have no idea what their userbase wants or expects. Here's to the good times; shame they exist in the past.

Your first dilemma is on point. Dynamically loading mobs as you reach a specific partition of the world map seems like the best way to do things; as you stated yourself, it makes no sense to load the entire map when you are just sitting in town. AKA not really a dilemma for you anymore =).

As for number two, I am not familiar with the technology you are using, but I would first ask you the reason for calculating your hits etc. server-side. My initial reaction would be to do your calculations client-side and send a subset of those events (represented by a much smaller, more compact structure) to the server to delegate to the other connected clients. I have no idea how D3 or PoE work, but logic is logic.


Yeah, my apologies for not explaining how I use Redis. I do exactly what you explained so far, but I'm actually using Redis as both a memory cache and pub/sub. That's probably not a good idea though? Should I use Memcached for the in-memory storage instead and just leave Redis for pub/sub?

The problem, I think, is Node.js's heap limit of about 1.5 GB per V8 isolate, and I don't want to hit that limit. So I need a global memory cache.

Edit: Thanks for your response by the way! I guess I need to just use a global in-memory cache storage and stop worrying!
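If mob state does move out of the Node heap into a shared cache, mob records map naturally onto Redis hashes. The sketch below uses a tiny in-memory stub with the same `hSet`/`hGetAll` shape as the node-redis client so it runs standalone; the `mob:<id>` key scheme is an assumption:

```javascript
// Sketch of keeping mob state outside the Node heap. CacheStub mimics the
// two hash commands used (hSet/hGetAll, as in the node-redis client) so the
// example runs without a server; the "mob:<id>" key scheme is an assumption.
class CacheStub {
  constructor() { this.store = new Map(); }
  async hSet(key, field, value) {
    if (!this.store.has(key)) this.store.set(key, {});
    this.store.get(key)[field] = String(value); // Redis stores strings
  }
  async hGetAll(key) { return this.store.get(key) || {}; }
}

// Persist the fields any process might need to read back.
async function saveMob(cache, mob) {
  await cache.hSet(`mob:${mob.id}`, "hp", mob.hp);
  await cache.hSet(`mob:${mob.id}`, "x", mob.x);
  await cache.hSet(`mob:${mob.id}`, "y", mob.y);
}
```

Swapping the stub for a real Redis client keeps the calling code unchanged, and the per-isolate heap limit stops mattering for world state.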


Yeah, definitely all server-side. When a player equips a weapon or whatnot, that's when the server does the heavy calculation of stat/item modifiers and updates the player's damage.

Looking more into Redis pub/sub... I think I follow you. You're not using it for persistence, you're using it for inter-process communication between Node.js processes. That sounds reasonable, so long as you don't mind the lower per-node scalability that using Node as your server runtime is going to offer compared to a native-compiled runtime (C or C++, for example).

With your memory limit and Node's single-threaded model in mind... I'd personally set up several separate daemons. One class of daemons would be client I/O groups. You spawn one or more NodeJS processes whose job is to accept connections from up to N clients each, parse client commands, and feed those commands out as Redis publishes. These listen for published information relevant to their player and serialize it out to the client. The second class of daemons would be processor queues. You spawn a bunch of these and delegate to them any incoming input events (from client I/O daemons) and world-change events (from other processor daemons). These would classify and validate each event, take appropriate action against the game world datamodel, and then publish any new events that need responding to for those changes.

The separation of these layers makes you initially strongly dependent on serialize/deserialize, and on Redis performance for dealing with the event transfer, but having those three layers separated (client I/O, event transfer, processing) means you can bugfix, update, and even wholesale replace each of the three layers separately. You can start re-implementing your processor daemons in a faster language without even touching the client or I/O layer. They could all live on separate servers without any code modification. If serialization becomes a big problem, you can replace the event transfer mechanism with your own shared memory arrangement to move data directly from I/O processes to processing processes.



I imagine a native C++ websocket server would have to be scaled as well, anyway, right? So Node.js has its benefits, but drawbacks too, because of the memory limit and the obvious (single-threaded) performance. I know it seems like premature-optimization babble since my game is not even released, but I think scalability is very important to discuss, and I appreciate your daemon-group idea. I'm definitely thinking that would be a great way to utilize Node.js for scaling and help free up the event loop. I really do appreciate the insight.
