
System architecture for critique:


Comments and opinions on the following system architecture would be welcome. The system should go open source within the next few months (once I'm no longer embarrassed by it). The target game is relatively small scale (circa 600-700 concurrent users per server).

There is a central database of users, passwords, and character-server pairs. There are an unknown number of game server microclusters, each hosting the database for its instance of the gameworld locally. The client never communicates with this database directly; only the servers are aware of where it is.

Network transport is typically by means of a reliable UDP layer, exposed partially to allow state updates to override previous state updates. Traffic management is by means of a relevance graph, which is similar to a zoning mechanism crossed with a dPVS viewset. The relevance graph maintains links describing how an effect of a given radius will travel to other relevance-aware objects. Discussion of this is beyond the scope of this post, but in short, I don't like square zones.

A microcluster consists of a master server, a (game) database server, and potentially a number of slave servers. These are separate processes linked at startup of the microcluster and may exist on the same machine. Communication between them is currently encrypted with a cluster key, but this may be relaxed.

A client connects to a master server directly (there is no centralised login server). Diffie-Hellman keys are exchanged to agree on an encryption key for the login process. TEA is used for encryption, with a 128-bit key (if you read my earlier post on it, this is why the D-H key was implemented at 128 bits). The client sends a hashed username and password to the master server. The master server queries the (still hashed) username and password against the DB, in order to identify the account the client is using, or to determine that a new account must be set up. At no point does a game server become aware of the plaintext username or password.
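For anyone unfamiliar with TEA: it is a tiny 64-bit block cipher with a 128-bit key, which is why the D-H exchange above targets 128 bits. A minimal sketch of the standard textbook routine (this is the public reference algorithm, not the poster's actual code, and it says nothing about the mode of operation the server would wrap around it):

```cpp
#include <cstdint>

// Standard TEA: 64-bit block, 128-bit key, 32 rounds.
void tea_encrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9u;          // derived from the golden ratio
    for (int i = 0; i < 32; ++i) {
        sum += delta;
        v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0; v[1] = v1;
}

void tea_decrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1];
    const uint32_t delta = 0x9E3779B9u;
    uint32_t sum = delta * 32;                   // 0xC6EF3720 after 32 rounds
    for (int i = 0; i < 32; ++i) {
        v1 -= ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
        v0 -= ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        sum -= delta;
    }
    v[0] = v0; v[1] = v1;
}
```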
If the account has characters on the microcluster, one can be selected; otherwise one can be created. This process will have measures in place to prevent automation. With a character selected, a game session key is issued by the server and is used for encryption from then on.

A slave server within the microcluster can be assigned to serve a certain point in the hierarchy of the world - typically this will be a high-traffic area. Slave servers can be assigned and de-assigned at run time, but the administrator setting them up will need to know the internal microcluster key if internal encryption is being used. In any situation where a third party could attempt to insert a slave server into the microcluster, an internal key should be used. Slaves may have their own slaves.

A slave server has its own internet-visible port, which the client is informed of when the logged-in character moves to that area. This port is opened only when a client is in the area, as reported by the slave's master, and the slave is informed by the master of the client's session key. The master is then no longer responsible for direct communication with the client. Throughput to other areas of the hierarchy is achieved by an internal linking port (bound to a LAN address); thus, a slave server must be on the same LAN as the master server and database server.

Because of the relevance graph, traffic between a master and a slave will typically only include effects that are broadcast in nature (system messages, group chat) or that occur near the edge of the slaved area (both inside and outside it). A slave is therefore best used for something like a busy dungeon, where there is little traffic at the border.

EDIT: Forgot to mention - slaves are made aware of each other by the master server, so if another slave controls a neighbouring area and an effect crosses over, the two slaves will communicate directly, saving the message buffer in the master from taking a damn good thrashing.
Which is what you might have thought was happening before reading this edit. *blush* That pretty much covers it... quite a long read, but I hope some people will take the time and rip this design apart. I'd rather it happen now than when I'm charging someone to use it. :-) If there are any questions, just post. Thanks
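The slave-to-slave shortcut in the edit above can be sketched as a routing decision: if an effect's radius touches a neighbouring slave's region, send to that slave directly rather than via the master. This toy version flattens regions to 1-D intervals and invents all the names; the real relevance graph is obviously richer:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch only: regions as 1-D intervals for illustration.
struct Region { float lo, hi; };

struct Neighbour {            // filled in by the master at assignment time
    std::string address;
    Region      region;
};

struct Effect { float centre, radius; };

// Returns indices of neighbour slaves the effect spills into. Anything the
// effect touches outside every known neighbour would still go via the master.
std::vector<std::size_t> spilloverTargets(const Effect& e,
                                          const std::vector<Neighbour>& links) {
    std::vector<std::size_t> out;
    for (std::size_t i = 0; i < links.size(); ++i) {
        if (e.centre + e.radius >= links[i].region.lo &&
            e.centre - e.radius <= links[i].region.hi)
            out.push_back(i);   // send directly to this slave, not the master
    }
    return out;
}
```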

Quote:
At no point does a game server become aware of plaintext username or password.


Why is this important?

Question on implementation: Is there a challenge involved in the hash, and if so, who issues it and where does it come from? (If there is no challenge, then the hashed username is as good as the plaintext username.)

It's also wrong. :-( The central DB issues the hash, via the gameserver. What I basically have is a large number of login / combined game servers which synchronise with a central DB.

I should really explain what I'm trying to achieve better.

Problem Field:

I have a variable number of discrete game instances.

I have a variable number of individually and centrally trackable users.

Users can have any number of free characters on any game instances.

Users can also subscribe any of their characters, which gives the character benefits, including connection privilege over an unsubscribed character. This must be centrally tracked. Unsubscribed characters need not be centrally tracked, since they are not billable and are likely to exist in much larger numbers.

Game instances may consist of one or several machines, each running either a game server process or a database server process. Machines forming a cluster in this manner will be on a private network. Neither machine nor network can be guaranteed secure, and exclusive access to machine or network is not guaranteed.

A large number of simultaneous users may occur. It may be taken that they cannot all connect to the same machine at once.


Potential Solutions:

A master server process governs each game instance. A single game database exists for each instance.

A central database of users, passwords, and subscribed character-server pairs exists.

A user connects to and is authenticated (DH) by a master server, and provides their username. This is passed on to the central database by regular batch. The gameserver issues hashes for all supplied user names, receives hashed passwords from the users and from the central database and compares them. If there are subscribed characters for that user on this master server, then they are available in addition to any unsubscribed characters for that user. Otherwise a character can be created.
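Since the challenge question came up: the point of having the server issue a fresh nonce is that the client's response is only valid for that one login attempt, so capturing it is useless for replay. A sketch of that flow, with invented names and FNV-1a standing in for a real cryptographic hash (do not use FNV for this in production):

```cpp
#include <cstdint>
#include <string>

// FNV-1a is only a stand-in for a proper cryptographic hash here;
// everything below is an illustrative sketch, not the actual protocol.
uint64_t hash64(const std::string& s) {
    uint64_t h = 1469598103934665603ull;        // FNV offset basis
    for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }
    return h;
}

// Client proves knowledge of the password without sending its bare hash:
// it binds the stored hash to the server's one-time nonce.
uint64_t clientResponse(uint64_t nonce, const std::string& password) {
    return hash64(std::to_string(nonce) + std::to_string(hash64(password)));
}

// Server stores only H(password) and verifies against the same construction.
// Replaying an old response fails because the nonce changes every attempt.
bool serverVerify(uint64_t nonce, uint64_t storedPasswordHash, uint64_t response) {
    return response ==
           hash64(std::to_string(nonce) + std::to_string(storedPasswordHash));
}
```

Without the nonce, the hashed password itself becomes the credential, which is the "hash is as good as plaintext" problem raised above.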


Conclusion:

I'm starting to think that just one big central login server issuing session tickets would be more sensible security, but the number of characters to store would quickly become problematic (and require more hardware than is affordable) given that they are free to create.

The number of characters STORED is not really a problem for hardware use -- disk is cheap, and it's unlikely you'll store even close to 40 GB worth of character data :-)

Number of queries is what matters (and the speed of executing those queries). Design the central server to use few queries, and fewer updates. Cache periodically updated character data in the front-end of the server, and only commit when there's time (through some idle queue), or commit when the player levels up, dies or exits. Simple, less-than-$1000 hardware running with mirrored SATA disks (a good trade-off between cheap and protected) ought to handle a large number of users/characters if you're careful to keep the database traffic to an absolute minimum.
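The cache-and-commit-lazily idea above can be sketched as a write-behind cache with a dirty flag per character: updates only touch memory, and the DB is hit either on a significant event (level-up, death, logout) or one record at a time from an idle queue. All names here are invented, and a `std::vector` stands in for the actual SQL layer:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch of front-end character caching with deferred commits.
struct Character {
    std::string name;
    int level = 1;
    bool dirty = false;
};

class CharacterCache {
public:
    void update(const std::string& name, int level) {
        Character& c = cache_[name];
        c.name = name; c.level = level;
        c.dirty = true;                       // changed in memory, not yet written
    }
    // Called on level-up, death, or logout: force an immediate commit.
    void commitNow(const std::string& name, std::vector<Character>& db) {
        auto it = cache_.find(name);
        if (it != cache_.end() && it->second.dirty) {
            db.push_back(it->second);         // stands in for an SQL UPDATE
            it->second.dirty = false;
        }
    }
    // Called when the server is idle: drain one dirty record per tick.
    void commitOneIdle(std::vector<Character>& db) {
        for (auto& kv : cache_)
            if (kv.second.dirty) {
                db.push_back(kv.second);
                kv.second.dirty = false;
                return;
            }
    }
private:
    std::unordered_map<std::string, Character> cache_;
};
```

Note that many in-memory updates collapse into a single database write, which is exactly the query-rate saving being argued for.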

It'd be interesting to see the actual number of active (online) users that you think you need to design for, and some assumptions you have about database transaction rates. With those numbers, it'd be much easier to see whether you need something more complicated or not.

The system is designed to be scalable in terms of user base and as independent from game design as possible. It's a 3D graphical MMOG codebase, aimed at SMALL independent games to ease production. Low-budget servers would be used initially on such projects, so the system has to scale down to run on a single reasonable-spec PC. No assumptions are made as to the genre of the end product.

Assumed requirements for the system are:

a) A large number of encrypted concurrent connections (200+)

b) A server simulation layer (game layer) to be independent from network

c) A relevance layer linking the server simulation (physical) and transport layers, designed to cut irrelevant communication.

d) A scalable authentication system with minimal administrative work in adding new servers.

e) A local database server for persistence, with abstracted persistence functions accessible through the game entity layer. MySQL supported. Another local database for asset storage and management.

f) Assets loadable from file or local database by server.

g) Some remote asset management functionality - checking asset manifests and files on a client to ensure they are up-to-date and have not been tampered with.

h) Simple 3d collision detection and physics on the server, independent of any rendering API and hardware.

i) ANSI C++ throughout the server code. Should compile on anything with minimal tweaking. Currently FC3, 2.4 kernel.
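Requirement (b) combined with the "state updates override previous state updates" transport behaviour mentioned at the start of the thread suggests a send queue keyed by entity, where a newer queued state simply replaces an older unsent one, so a slow link never builds a backlog of stale positions. A sketch under those assumptions (names invented, the real layer is richer):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Sketch of override semantics for unreliable-but-latest state channels.
struct StateUpdate {
    uint32_t entityId = 0;
    uint32_t sequence = 0;      // monotonically increasing per entity
    std::vector<uint8_t> payload;
};

class SendQueue {
public:
    void push(const StateUpdate& u) {
        auto it = pending_.find(u.entityId);
        if (it == pending_.end() || u.sequence > it->second.sequence)
            pending_[u.entityId] = u;   // newer state overrides queued state
        // older or duplicate sequence numbers are dropped silently
    }
    // Everything handed to the socket in one go; at most one update per entity.
    std::vector<StateUpdate> flush() {
        std::vector<StateUpdate> out;
        for (auto& kv : pending_) out.push_back(kv.second);
        pending_.clear();
        return out;
    }
private:
    std::map<uint32_t, StateUpdate> pending_;
};
```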

That's the basic codebase, which like I said, I intend to release once it's working and not the hideous jumbled mess of half-finished classes it is at the moment. ;-)
As a point of interest the initial game I have in mind to (showcase && test) || destroy it is aimed as follows:

a) Run on a single reasonable linux box, <$1000 pcm lease, 100Mbit balanced.

b) Cope with around 600-700 concurrent users, ideally around 1000 SUBSCRIBED users per box (assuming they are not all online at the same time - if that happens, there's trouble).

c) Running an RPG - (game)database used to store character details, skills, inventories, appearances. Some cached in-game whilst relevant.

d) ALL assets are held in a local (same-machine) database, and can therefore be edited live (provided they are not in use).

e) Complex simulation of many entities. This is a test of the relevance layer and a LOT of fallback techniques in behavioural simulation.

f) Relevance layer used to showcase visibility limiting (crude portal engine) on the server.

g) Asset synchronisation - asset manifests are checked and may be marked as invalid. Individual asset files may be checked by means of CRC. Assets will be updated by means of a binary patch downloaded from outside the client.

If the server is idle, or bandwidth is not being fully used, it may be feasible to incorporate an asset delivery system into the server itself - trickling down required assets a little at a time using spare bandwidth and server time. Does anyone know if this has already been attempted in this manner, or whether it would be a better idea to have a simple FTP thread that can be started/stopped/paused by the client, with an anonymous FTP server delivering the assets?
