Small scale distributed architecture questions


Hi,
I am trying to find a good (server) architecture and some implementation hints for the indie multiplayer game I'm designing.

I previously worked on a small-scale restoration project for an old MMORPG. The intent was not to have thousands of players connect, but to provide a manageable number of players with a consistent view of the world. With this goal in mind, and because I didn't have much practical experience designing such systems, it became a monolithic server that clients connected to directly. It turned out, though, that the biggest bottleneck was the simulation of thousands of NPCs (the number of active regions growing as players spread out).
For my current/own project (a space-sim kind of game) I expect similar requirements: a small number of players (10-100 max), but large numbers of NPCs (thousands, moving along paths most of the time, but observing their environment), a basic physics system (mostly queries, no complex simulation), and both indirect (chat, trading) and direct/spatial (some form of combat) interactions between players and NPCs. Apart from that, I need to be able to make changes to the game design without too many cascading steps (some of the interactions are not yet completely mapped out).
So, trying to learn from the previous project, I now want to find a better architecture for the systems involved in handling this.

Having mostly worked with Unity before, my basic way of thinking is in loops (or synchronous events), but the more I read about different server designs, the more I see asynchronous messaging between distributed systems as a way to tackle such problems.
I found valuable resources on 'Engines of Delight', but most of them seem to target high player/connection counts or just a bigger scope in general, so some of their parts 'feel' unnecessary for my goal (a frontend server, for example).
But splitting the systems into interconnected services, and the game world into separately simulated zones, seems like a way to go (though I have no experience with that yet, unfortunately).

So my questions are:
If I intend to separate the game systems like this, is it reasonable (amount of work vs. benefits, 'secure enough', iteration-friendly)?
Authserver (login, version check) - public
Gameserver (zone management/character transfer, message relay, persistence(DB saving/loading)) - public
Zoneservers[] (physics, gameplay, ai, chat? etc) - internal
And if so, the questions I'd have are: where does the character data (players/NPCs) live (gameserver, zoneserver)? Do I send relevant data back and forth between them (each holding a sliced copy covering only its own concerns), or does each own a synced full copy for easier cross-system communication / zone transfer?
Or are there alternative or even better solutions for my requirements?


You could probably run "authserver" and "gameserver" on the same application server, talking to the same back-end database.

Running each "area" of the game on a separate "zoneserver" is typical, and easy to manage; exactly how hand-off happens between "gameserver" and "zoneserver" is up to you.

When the "zoneserver" needs services from the "gameserver," you might want an "serviceserver" that is similar to gameserver in code structure, but only accepts requests from the zoneservers, to avoid players faking requests. De-coupling zoneservers from the database by jumping through a serviceserver is a pretty useful pattern. And, assuming you can keep authorization straight, keeping all of authserver, gameserver, and serviceserver in the same application server is totally doable.

The other thing you'll probably want to do is have a separate chat server. Global chat, trades, guilds, and anything that might span multiple zone servers runs better on its own hardware optimized for the task. Zone servers and chat servers run as persistent processes that "stay up" and keep persistent state. Auth/game/service servers probably use the web-service model of stateless request/response, and any state that's needed is kept in the database (or perhaps in network-attached RAM, like memcached or Redis or whatnot.)
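As a rough illustration of the "span multiple zone servers" part, here's a sketch of global chat over Redis pub/sub, assuming the redis-py package; the host, channel name, and broadcast helper are hypothetical:

```python
import redis

r = redis.Redis(host="chat-redis", port=6379)  # hypothetical chat backend host

def send_global_chat(sender: str, text: str) -> None:
    # Any zoneserver can publish; the chat server fans it out to clients.
    r.publish("chat:global", f"{sender}: {text}")

def chat_server_loop() -> None:
    pubsub = r.pubsub()
    pubsub.subscribe("chat:global")
    for message in pubsub.listen():            # blocks; runs as a persistent process
        if message["type"] == "message":
            broadcast_to_connected_clients(message["data"])

def broadcast_to_connected_clients(data: bytes) -> None:
    print(data.decode())  # stand-in for pushing to each connected player
```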

Regarding where data lives: Persistent data lives in a database. If some change happens that must live on (player trade, etc.), then that should be done as a transaction to the database. Typically, if it comes from the game, the zone server will fire an asynchronous request to the serviceserver, which in turn will apply service logic rules, update the database as appropriate, and then respond back to the zoneserver. If two players trade, this model is extra important, because you don't want to create a situation where one player "crashes" and the other player loses a thing without getting the other, or somesuch.
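For illustration, a trade as a single database transaction might look like the following sketch, assuming Python's sqlite3 and a hypothetical items table with id and owner columns; either both ownership changes apply or neither does:

```python
import sqlite3

def apply_trade(db: sqlite3.Connection, item_a: int, player_a: int,
                item_b: int, player_b: int) -> bool:
    try:
        with db:  # one transaction: commits on success, rolls back on any error
            for item, old_owner, new_owner in ((item_a, player_a, player_b),
                                               (item_b, player_b, player_a)):
                cur = db.execute(
                    "UPDATE items SET owner = ? WHERE id = ? AND owner = ?",
                    (new_owner, item, old_owner))
                if cur.rowcount != 1:  # item missing or not owned: abort everything
                    raise sqlite3.IntegrityError("trade precondition failed")
        return True
    except sqlite3.Error:
        return False  # the zoneserver is told the trade failed; nobody lost anything
```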

Zoneserver is going to have most of the "character/player" data checked out from the database, typically as mostly read-only. Ephemeral state, like "where am I in this zone" might never get written back to the database -- you want to avoid writing to the database too much! (This is why most games log you in at the last save point -- quit-and-save happens seldom and can save your location; moving around happens all the time and shouldn't generate database load.)

It's totally OK to talk HTTPS with JSON to the service/game/auth servers, as long as the API you use is asynchronous -- start the request, and then keep doing whatever the game/zone server is doing. Once the response is back, either fire an event to whoever started the request, or just have the request starter poll for result every so often.
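A minimal sketch of that start-then-poll pattern, using Python's asyncio; fetch_json is a stand-in for the actual HTTPS/JSON call:

```python
import asyncio

async def fetch_json(url: str) -> dict:
    await asyncio.sleep(0.05)   # stands in for the HTTPS round-trip
    return {"ok": True, "url": url}

async def zone_loop():
    # Start the request, but don't block the simulation on it.
    pending = asyncio.create_task(fetch_json("https://gameserver/api/save"))
    while not pending.done():
        await asyncio.sleep(0.016)        # one simulated frame of zone work
        # ... keep running physics/AI here while the request is in flight ...
    print("response:", pending.result())  # or fire an event to the requester

asyncio.run(zone_loop())
```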

 

enum Bool { True, False, FileNotFound };

That already answers a lot of questions, thanks.
About the persistence part:

3 hours ago, hplus0603 said:

If some change happens that must live on (player trade, etc.), then that should be done as a transaction to the database.

My naïve understanding of persistence until now was that the gameserver (or serviceserver) runs a recurring timer that just writes accumulated changes to the database. But your description sounds like I should handle all corresponding changes (even moving items in an inventory) as transactions with the database, where I only apply a change to the active, simulated data if the transaction succeeded (and not the other way round: updating the DB state after it registered as a change in the simulation)?
My guess is that this would create longer response times and increase the load on the database. But I haven't tested this, so I may just be overreacting here.
When you say 'ephemeral state', does that mean the player data may indeed be split between the servers, and a zoneserver only holds the part it needs for its simulation (e.g., the inventory items are kept on the gameserver, since trading etc. is handled by one of its services)?
A trading request from one player to another, if they can only trade within a certain distance of each other, would mean the gameserver/trading service must hold some positional data too. Or should it just query the zoneservers in that case? How should I go about this shared data?
(I'm thinking that such a request should not need to pass through the gameserver just to become a request from the zoneserver back to the gameserver/its services; in my imagination it should somehow be handled directly by the former, to cut some internal round-trip time.)
 

Different events in the game have different levels of importance.

Some are minor, some are major. If a player crashes while out in the MurderFields and loses the loot from a few monsters, that isn't too bad. If they crash after completing a major quest and picking up the Epic Loot of Awesomeness, that is bad. If they crash in the middle of trading goods with someone, where the transaction is halfway finished, that is very bad.

For unimportant events you can serialize at a convenient time. A rolling automatic save for all the players out in the MurderFields will be enough for those general problems.

For important events you need to ensure that a save is triggered immediately once they happen. That can mean a forced save when players pick up a rare or epic item, a forced save when you enter town, and it absolutely means a forced save as part of a trade.
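A sketch of those two policies side by side, assuming Python and a player object with a dirty flag; the event names and save_player stub are illustrative:

```python
import time

ROLLING_SAVE_INTERVAL = 60.0                  # seconds between background passes
FORCED_SAVE_EVENTS = {"epic_loot", "enter_town", "trade_complete"}

def save_player(player) -> None:
    print(f"persisting {player.name}")        # stand-in for the real DB write

def on_game_event(player, event: str) -> None:
    player.dirty = True
    if event in FORCED_SAVE_EVENTS:           # important: persist right now
        save_player(player)
        player.dirty = False

def rolling_save_pass(players, last_save: float) -> float:
    now = time.monotonic()
    if now - last_save >= ROLLING_SAVE_INTERVAL:
        for p in players:
            if p.dirty:                       # minor state waits for this pass
                save_player(p)
                p.dirty = False
        return now
    return last_save
```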

As for the multi-database situation, there are solutions. Database transactions follow a set of rules known by the letters "ACID": atomic, consistent, isolated, and durable. When a transaction must involve multiple databases, there are protocols to ensure all the databases are either updated or aborted together. A common implementation of such a protocol is OpenXA, or just XA. In cases where it is important for the player to be notified, once the player sends the command they need to wait until the server reports that the transaction was committed across the entire system. These usually take a few milliseconds to complete, enough that you can pop up a message box and fill it with a spinner while there is no data, or with text like "trade completed" or "trade failed" based on the result.
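This is not OpenXA itself, just a toy Python sketch of the two-phase idea such protocols implement: every database must vote "prepared" before any of them commits, and one "no" aborts them all. The Participant class is a stand-in for a real database connection:

```python
class Participant:
    """Stand-in for one database taking part in the distributed transaction."""
    def __init__(self, can_commit: bool = True):
        self.can_commit = can_commit
    def prepare(self) -> bool:   # phase 1: write ahead, hold locks, vote
        return self.can_commit
    def commit(self) -> None:    # phase 2a: make the prepared change durable
        pass
    def rollback(self) -> None:  # phase 2b: undo any prepared work
        pass

def two_phase_commit(participants) -> bool:
    if all(p.prepare() for p in participants):  # everyone must vote yes
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()                            # any "no" aborts all of them
    return False
```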

Yes, I think some delay here and there will most likely not be a problem for this type of game.
I'm not sure I understand what you mean by multi-database, though. If this targeted the split-player-data question: I was under the impression that even for this distributed design I'd still only need one database. What I was wondering was whether, on login, I should load the player data from the DB in parts for each server, i.e. the zoneserver receives the realtime-relevant part (transform, skills, etc.), whereas the world/gameserver gets the non-time-critical parts (inventory, friends list, etc.). It feels like this could be a setup for headaches, especially if some things have both realtime and non-realtime components (like droppable items with spells). What would be a common way to go about this: hold everything on a zoneserver and supply the other services with the relevant data only at the time of a request? (This sounds like more load on a zoneserver and more round trips for messages.)

the gameserver (or serviceserver) runs a recurring timer that just writes accumulated changes to the database, but your description sounds like I should handle all corresponding changes (even moving items in an inventory) as transactions

Only for important changes, as frob also said. Re-organizing inventory doesn't change ownership of objects, so that's not so important IMO. Your game may be different.

Regarding only checkpointing now and then: that's also good, except consider the situation where you and I trade -- I get 100,000 gold coins from you, you get a flaming sword of fire from me. Then you teleport to another city which lives on another game server, so your inventory checkpoints because you leave the server. Then the game server I'm on crashes before it has checkpointed me. Now, your inventory says you have a flaming sword of fire and are 100,000 gold poorer; meanwhile, my inventory says I don't have the money, but I also still have the flaming sword of fire. Item duplication bug! And, in fact, in the early days of MMOs, players would figure out how to crash a server on demand, to create bugs just like this.

I was under the impression that even for this distributed design I'd still only need one database

Typically, it's a good idea to have one database per "character domain." Wherever a character can go, all other characters it can meet would be on the same database. This makes trades, friending, blocking, and so forth, easy and consistent.

A single database will run out of headroom (scalability, performance, whatever you want to call it) at some point. There's only so big a machine you can physically buy (or rent). The fewer requests you make to the database per player, the higher it will scale, but at some point it will stop. At that point, you have to either physically shard your characters (character X lives in universe A, and can never meet character Y in universe B) and spin up a database per universe, or horizontally shard across multiple databases and build some kind of transaction monitor / escrow work queue to solve the data consistency problems.
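For illustration, the routing side of those two options might look like this Python sketch; the database names and hashing choice are made up:

```python
import hashlib

UNIVERSE_DBS = {"A": "db-universe-a", "B": "db-universe-b"}   # physical shards
HORIZONTAL_DBS = ["db-shard-0", "db-shard-1", "db-shard-2"]   # horizontal shards

def physical_shard(universe: str) -> str:
    # Character X lives in universe A forever; no cross-universe consistency needed.
    return UNIVERSE_DBS[universe]

def horizontal_shard(character_id: int) -> str:
    # Same world, characters spread across databases; cross-shard operations
    # (trades, friending) now need a transaction monitor / escrow queue.
    digest = hashlib.sha256(str(character_id).encode()).digest()
    return HORIZONTAL_DBS[int.from_bytes(digest[:4], "big") % len(HORIZONTAL_DBS)]
```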

Even something as simple as "when I friend you and you accept, our friend lists are updated together" will skew out of order once you have multiple databases, and you'll need to have some mechanism to either make sure that that doesn't happen, or detect and heal the problem after the fact. But, the good news is, 99.9% of all games don't get so big as to not fit on a single database, so you probably don't need to worry about this. And, if you end up being lucky, you'll be rolling in money, and have an easy time hiring people who can fix that for you :-)

 

enum Bool { True, False, FileNotFound };

That sounds plausible.
And for the player-data splitting: is it a bad idea, or too dependent on specifics? Is there a common route I can take, or terminology I can look up to learn more?

If you want to store different players on different databases, then the word you're looking for is "horizontal sharding."

 

enum Bool { True, False, FileNotFound };

What I meant was the runtime location of player-related data. Is it common to have it reside partially in multiple locations? If the services are physically or logically separated, how do they access the same resource: does each load (or get assigned) its part of the dataset, maybe just passed along at the point of a request? Or am I approaching the problem from the wrong side? It seems to me this may be vital to my understanding of 'distributed' (realtime) computing.

Those are implementation details that differ for each game.

For small games made by individuals, usually a single machine running a single database is more than sufficient. You are unlikely to need thousands of transactions per second; you probably won't even reach hundreds, or dozens. One server with the tables you need to restore the player when they join the game is enough.
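As a concrete (hypothetical) example of "one server with the tables you need," a minimal sqlite3 schema might look like this; the columns are purely illustrative:

```python
import sqlite3

db = sqlite3.connect("game.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS players (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL,
    zone TEXT NOT NULL,                  -- last save point, not live position
    gold INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE IF NOT EXISTS items (
    id    INTEGER PRIMARY KEY,
    owner INTEGER REFERENCES players(id),
    kind  TEXT NOT NULL
);
""")
db.commit()
```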

Of course that is not universally true. Some games by individuals become incredibly popular and require more powerful services, but that is rare. And on the flip side, people write bad code or complex code that can waste the cycles of the most powerful machines; in that situation even a small service can 'require' many machines.

For games made by large companies, it is common to have multiple services which get coordinated.  Authorization and player accounts are usually one set of database services.  Accounting and billing are often a second database service. The game's player information is often a third.  The active running of games from moment to moment is usually kept in memory on the server, synchronizing with those other services as needed, often with forced saves done as described above, and rolling saves for everything else.  

