ONRPG Server Architecture


Hey there guys, this is my first post on the GameDev.net forums, and I figured I would ask here since a good number of the folks here can offer valuable insight on the topic. I'll jump straight into the question: my team and I are starting a small hobby ONRPG, just to fool around with online technology and try to come up with some cool little features in our own little sandbox. By no means is this one of those "How do I make an MMO?" / "How do I C++?" kind of threads; we just have a few design questions and some compare/contrasts.

So what is the issue? Well, we decided we want a seamless world. Nothing huge, but we want players to live in one persistent world, so we have terrain paging and the ability to load and unload parts of the world on the fly to provide that seamless feel. The issue is the online part. Having googled and talked about this for countless hours, both by myself and in our team meetings, we have discussed various approaches to the problem. Basically, the issue for us comes down to this: how do we synchronize the state of every player in the world if the world is not divided into zones?

Our solution is to pass the player's position vector as 12 extra bytes along with the information/inputs sent to the server for every command, so that a PositionalServer (or some master server) always has a local copy of the user's position in RAM. No DB queries would be needed, and no matter what portion of the world a player is in, the PositionalServer has access to all players currently logged in and can simply replicate the command the user made to the adjacent players. The con of this approach is the extra 12 bytes of overhead every time the player presses a button or performs an action. The positive is that there would be nearly zero stress on the DB as far as state synchronization is concerned when replicating player events in an "Area of Interest" fashion, since the server itself knows the players' locations. What is your guys' opinion on this?

Is there some industry standard for this kind of player replication that I have missed reading about, i.e. how would you handle sending information about players to others? Is there some easier way? Would the 12 extra bytes of bandwidth be devastating in a scenario with 10, 100, 1000, or 10000 players? Thank you!
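For reference, the 12 bytes in question are three 32-bit floats. A minimal sketch of what such a command packet might look like; the struct and field names here are purely illustrative, not from any particular engine:

```cpp
#include <cstdint>

// Hypothetical command packet: the 12-byte position is three 32-bit floats
// appended to whatever input/command data the client already sends.
#pragma pack(push, 1)
struct CommandPacket {
    uint16_t commandId;   // whatever action the player performed
    float    x, y, z;     // the 12 bytes of position being discussed
};
#pragma pack(pop)

static_assert(sizeof(float) * 3 == 12, "three floats are the 12 bytes in question");
```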

First, have you studied other systems that are well documented, such as Quake III or Source networking? The fact that you're talking about movement location in the same sentence as a database indicates to me that you're perhaps not familiar with the model where simulation is all in-RAM, and databases are only occasionally checkpointed, or used specifically for transactional actions like player trade.

Second, how worried are you about hacking the client to allow speed hacks, teleport hacks, etc? To avoid those hacks, you have to run a simulation on the server that makes sure that the movement that the client performs is legal according to game rules. If you just accept and forward the positions sent from the client, then a hacked client (or totally re-engineered script pretending to be a client) will be able to send any position value that the player chooses.
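As a rough illustration of what "legal according to game rules" can mean for movement, a server might clamp each client-reported move against the maximum distance the player could have covered since the last update. A sketch under those assumptions (names and numbers are made up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Reject (or clamp) a client-reported position if it implies moving faster
// than the game rules allow. maxSpeed and dtSeconds come from the server's
// own simulation, never from the client.
Vec3 validateMove(const Vec3& last, const Vec3& reported,
                  float maxSpeed, float dtSeconds)
{
    const float dx = reported.x - last.x;
    const float dy = reported.y - last.y;
    const float dz = reported.z - last.z;
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    const float maxDist = maxSpeed * dtSeconds;

    if (dist <= maxDist)
        return reported;                      // move is legal, accept it

    // Speed hack (or lag spike): clamp the move to the legal distance.
    const float scale = maxDist / dist;
    return { last.x + dx * scale, last.y + dy * scale, last.z + dz * scale };
}
```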

Third, how do you indicate movement to other players? If you stand right next to me, and start moving, how do I (my client) hear about this? If you worry about sending 12 bytes for position, then what are you currently doing that sends less?

So, it sounds to me as if there are at least two things you need to solve here:
1) Game rules enforcement on the server, which may include movement rules. This is typically solved using "ghost copies" or "replica objects."
2) Bandwidth management for each player -- if you are across the world and move, my client shouldn't need to hear about that, but if you're right next to me, I should. This is typically solved through "interest management."

Each of those areas is bigger than a quick forum answer can cover, but they are written up in published literature and in some articles on the web (although somewhat scattered -- there's no "bible of all things networked games.")
enum Bool { True, False, FileNotFound };

Thanks for the quick response hplus; unfortunately I have not yet studied Quake III or Source networking. I suppose that is the first thing I shall do before proceeding any further, since I am not familiar with the model where simulation is all in-RAM. I am not necessarily worried about "just" 12 bytes; my question was in fact whether or not this can be considered a decent way to approach the issue at all.

As far as your first point goes, hacking is of concern to me and I will definitely take your advice on ghost copies and/or replica objects and study those; right now I am trying to learn more about your second point, interest management :). So clearly the database is out of the question, and this "in-RAM" approach basically calls for the server determining the set of clients to replicate information to, based on all of the positions it currently holds in RAM? Thank you!

A typical interest management system will have a copy of all the object state (and receive updates from the simulation system.) Clients will then connect to this system. The system would determine the "interesting set" of objects to each client, based on that client's viewpoint, and send updates for the objects of interest.

The naive implementation of this is N-squared, so it will not scale to very large sets of users. If the system allows all users to gather in a single small place (such as a town square, auction house, etc,) then the solution is actually "everybody sees everything," which is the worst case.

For a more spread-out case, various spatial indices are used to speed up the operation of "for each connected user, find the objects that that user should see." Quad trees, octrees, hash grids, and other in-RAM spatial indexing data structures are often used. You then need to figure out what to do as the viewpoint moves, and/or as the viewed objects move and go in/out of interest. A naive "in/out" limit will cause degenerate behavior where objects milling around at the edge of visibility generate excessive traffic, because the work of bringing an object "into view" (sending all of its current state) is typically a lot more than the work of updating an already-visible object (sending a position delta.) Typical systems will use some level of hysteresis -- a closer "must see" radius, and a further-out "must lose" radius. Additionally, adaptive systems will adjust those radii (or other eligibility criteria for visibility) based on how many objects are already in view, to make sure not to make too many objects visible at any one time.
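A minimal sketch of that hysteresis idea (radii and names are arbitrary): an object enters a client's interest set inside the inner "must see" radius and only leaves once it passes the outer "must lose" radius, so objects milling around at the edge don't flap in and out of view.

```cpp
#include <unordered_set>
#include <cmath>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct InterestSet {
    float mustSeeRadius  = 50.0f;    // objects closer than this become visible
    float mustLoseRadius = 70.0f;    // objects farther than this stop being visible
    std::unordered_set<int> visible; // object ids currently replicated to this client

    // Returns true if the object should be replicated to the client this tick.
    bool update(int objectId, const Vec3& viewpoint, const Vec3& objectPos) {
        const float d = distance(viewpoint, objectPos);
        const bool wasVisible = visible.count(objectId) != 0;

        if (!wasVisible && d < mustSeeRadius) {
            visible.insert(objectId);   // bring into view: send full state
        } else if (wasVisible && d > mustLoseRadius) {
            visible.erase(objectId);    // drop from view
        }
        return visible.count(objectId) != 0;
    }
};
```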
enum Bool { True, False, FileNotFound };

Our solution to this problem is, pass the vector of the player as a 12 byte packet along with the information/inputs sent to the server for every command, therefore, a PositionalServer or some master server will have a local copy of the User's position at all times in RAM.


No, no no no. :)

Each player is naturally in only a single chunk at a time, and each chunk is handled by some particular server. Players can only see a certain distance, so they only see objects within some radius. Take that radius, take all the chunks around a player within that radius, and broadcast to the player only the state changes that happen within those chunks.
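One way to picture that: map a position to chunk coordinates and collect the square of chunks covering the view radius; those are the only chunks whose changes get broadcast to that player. A sketch with made-up sizes:

```cpp
#include <vector>
#include <cmath>

struct ChunkCoord { int cx, cz; };

constexpr float kChunkSize  = 64.0f;   // world units per chunk edge (arbitrary)
constexpr float kViewRadius = 96.0f;   // how far a player can see (arbitrary)

ChunkCoord chunkOf(float x, float z) {
    return { static_cast<int>(std::floor(x / kChunkSize)),
             static_cast<int>(std::floor(z / kChunkSize)) };
}

// A conservative square of chunks covering the view radius around (x, z).
// State changes are only broadcast to players whose square includes the chunk.
std::vector<ChunkCoord> chunksInView(float x, float z) {
    const int span = static_cast<int>(std::ceil(kViewRadius / kChunkSize));
    const ChunkCoord center = chunkOf(x, z);
    std::vector<ChunkCoord> result;
    for (int dz = -span; dz <= span; ++dz)
        for (int dx = -span; dx <= span; ++dx)
            result.push_back({ center.cx + dx, center.cz + dz });
    return result;
}
```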

If a player moves to a chunk handled by a different server, transfer ownership of the player. This is hidden from the player, as he connects to a frontend server that forwards input requests to the appropriate server (the chunk server owning the avatar, the login server, the chat server, etc.).
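A frontend might implement that with nothing more than a map from player to the backend that currently owns that player's avatar; when ownership transfers, only that map entry changes, so the client never notices. A hypothetical sketch (names invented):

```cpp
#include <unordered_map>

// Hypothetical frontend routing table. The client keeps one connection to the
// frontend; the frontend forwards each message to whichever backend currently
// owns that player's avatar (or to the chat/login service, etc.).
struct Frontend {
    std::unordered_map<int, int> ownerOf;   // playerId -> backend server id

    int routeGameplayMessage(int playerId) const {
        auto it = ownerOf.find(playerId);
        return it != ownerOf.end() ? it->second : -1;   // -1: not in world yet
    }

    // Called when a player crosses into a chunk owned by another server.
    void transferOwnership(int playerId, int newBackendId) {
        ownerOf[playerId] = newBackendId;   // invisible to the client
    }
};
```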

Each server needs to track the objects in the chunks it controls, as well as be in some way aware of all the objects in chunks on different servers that are "near." If two chunks are on separate continents, there's nothing to sync. If the chunks are next to each other, each server has a read-only synchronized copy of the objects on the other chunks nearby. This way gameplay also works much like a player: gameplay code can query any object's state so long as that object is within some reasonable radius, but it can only modify the local objects within its purview. Any modifications to other objects must be sent as messages to their authoritative servers.
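In code, that split often boils down to each object carrying an "authoritative here?" flag: local gameplay may read anything nearby, but a write to a remote object turns into a message to its owning server. A rough sketch, all names invented:

```cpp
#include <string>
#include <iostream>

struct GameObject {
    int   id = 0;
    bool  authoritativeHere = false;  // true: this server owns it; false: read-only copy
    float x = 0, y = 0, z = 0;
    int   owningServer = 0;           // where modification requests must be sent
};

// Placeholder for whatever inter-server messaging you use.
void sendToServer(int serverId, const std::string& message) {
    std::cout << "-> server " << serverId << ": " << message << "\n";
}

// Gameplay code can read any nearby object, but may only modify objects this
// server owns; everything else becomes a request to the authoritative server.
void applyDamage(GameObject& target, int amount) {
    if (target.authoritativeHere) {
        // mutate local, authoritative state directly
        std::cout << "object " << target.id << " takes " << amount << " damage\n";
    } else {
        sendToServer(target.owningServer,
                     "damage object " + std::to_string(target.id) +
                     " by " + std::to_string(amount));
    }
}
```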

There is no central bottleneck responsible for synchronizing. The gameplay servers handling the chunks work in a peer network protected behind your frontend servers and firewall. The client connects to a single frontend server, and that's all it communicates with.

Disney had a great GDC talk a few years ago on this sort of architecture for one of their MMOs. I can't for the life of me find it now but maybe you'll have more luck; sorry I can't supply a link.

This is all also very similar to how you'd handle multi-threaded game logic updates in a chunked/seamless world in a local-only (single player) game.

Sean Middleditch – Game Systems Engineer – Join my team!

hplus: Wow, thank you for that answer, I will definitely read more into those in-RAM spatial indexing data structures and algorithms associated with bringing objects into and out of view. I also saw an article about that hysteresis you mention in HeroEngine's Spatial Awareness system documentation, it makes a ton of sense. Thank you!

Sean: Well, I am thrilled to know our solution is incorrect; back to the drawing board haha. So from what I take of your explanation, there is no "central" server for processing that incoming data, but rather a frontend process which simply transfers ownership of the user to a "chunk server." While the user moves around, what exactly determines the transfer of the avatar to the adjacent chunk? Is it the frontend server being aware that the client's new position is part of the new chunk, or?

This method is what we actually had considered at first, but the problem we could not solve was how the player would receive information about players in adjacent chunks. Your answer practically handles it: if your chunks are small enough (i.e. the size of the spatial awareness radius you desire), you can have the objects of the 8 or so chunks around the current chunk in a read-only fashion and query their various states. I am still a little confused on the semantics of this, though. Based on your response, I am unsure how, if a player in an adjacent zone moves, to update their position on the local client when the chunk only updates the read-only state of the object. Sorry for playing 21 questions, I just want to get as much info as I humanly can! Thank you so much for your responses guys!!

Update:

Disney MMO Link:

http://www.gdcvault.com/play/1013848/MMO-101-Building-Disney-s

That was an interesting read, with some very nice points. Was I reading something wrong, or did it seem like for their games (utilizing Panda3D), because of SmartFoxServer's capability to do extensions in Java, they were serializing entire objects in some instances? Some of their logic diagrams are plenty useful though.

So now I suppose, if I did split my world into several smaller chunks, the interest management in a zone with a rather low number of players would simply be asking the current and adjacent chunks for all players within a certain radius, or?

there is no "central" server for processing that incoming data


That totally depends on your particular game style. For example, in FPS games like Battlefield, Unreal, Planetside, or APB, there is one server per "island" or "city instance" or "level" or whatever. You don't need some way of handing off between servers if your game design has some "travel" or "teleport" mechanism to go between levels. The client simply connects to the right server instance for the level instance the player joins.

For large, roaming worlds with seamless loading, like World of Warcraft, Second Life, There.com, Asheron's Call, etc, there still has to be hand-off between servers, but it has to be seamless, and the servers have to know about each other -- I may be standing "on" the area for one server, and interacting "to" something in the area of another server. It may be possible to design the game such that no such interactions can happen; if so, the client needs to connect to multiple servers, but the servers don't need to know about each other. To support true cross-server interaction, that's where ghost replicas come into play.

It's possible to have the client connect to a single gateway server, and have that gateway forward the appropriate information to different game servers. Or you can have the client connect directly to the game servers. It's really up to you, depending on what you measure your bottlenecks and design challenges to be.
enum Bool { True, False, FileNotFound };

Wow I see, so yes, my game will definitely need hand-offs between servers in order for players to travel seamlessly throughout the world. I tried googling "ghost replica"; what exactly does that term imply?

From what I take of both of your explanations, these chunks or portions of the world each run as separate processes spread across several servers. Since each chunk replicates information about the players in it, as far as 'spatial awareness' is concerned I should have a system that lets the process my player is in query information about players in adjacent chunks, so that it does not seem like we are standing at a seam with no information about players 3 feet in front of me that happen to be in the adjacent chunk. Seems interesting enough. Would a viable approach for this be:

An individual process (chunk or node or whatever you may call it) has a socket open to the adjacent processes and synchronizes read-only objects like Sean had said?

A "ghost replica" is an object whose "master" lives on another server, but which stands in for the object on the local server. Typically, servers will be allocated to serve geographic regions; say 1 square kilometer per server. The servers would then have a "border area" where they can see objects from neighboring servers -- say, a 200 meter area outside the main 1x1 km (so total of 1.4x1.4 km.) The main server would send updates about those objects to the neighbor servers, just like they send updates to clients. Interactions between those "ghost" objects and the objects hosted on the actual server would be resolved by the server that "owns" the "source" object of the interaction.

Once objects move across the border, the "ghost" version will be promoted to a "real" or "authoritative" version on the target server, and the old version will be demoted to a "ghost" on the source server. This hand-off can be quite tricky to make seamless.
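The hand-off itself is often expressed as a role change rather than a copy: when the object's position leaves the source server's own region, the target promotes its ghost to authoritative and the source demotes its copy. A deliberately simplified sketch; a real system would make this a handshake (transfer state, acknowledge, then switch), not a single call:

```cpp
enum class Role { Authoritative, Ghost };

struct ReplicatedObject {
    int   id;
    Role  role;
    float x, z;   // last known position, used to decide when to hand off
};

// Conceptual hand-off when an object crosses the border into the target
// server's region: the target's ghost is promoted, the source keeps a
// read-only copy.
void handOff(ReplicatedObject& onSource, ReplicatedObject& onTarget) {
    onTarget.role = Role::Authoritative;
    onSource.role = Role::Ghost;
}
```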

If you are aiming for less than a 5x5 km area and fewer than 1000 simultaneous online players, I would recommend you just build your world as one big process instead; it is much, much easier to get right. (And still pretty hard :-)
enum Bool { True, False, FileNotFound };

Wow, that actually makes a ton of sense. So I guess the difficulty of all of this is that each server should know its neighbors and should update the ghost replicas of the objects it owns on adjacent servers. I guess I will just have to start programming and fix the problems as they come. I do not particularly aim for more than 1000 players concurrently online, but it would eventually be nice.

Then the client should recognize it is on the boundary of a zone and should update its ghost replica on the pertinent adjacent servers.

I guess my first step now is to write the master server, almost as if my game world is just 5x5 km like you said, and then expand upon that server and re-engineer it once we learn more about the issues with that simple model. I still have some difficulty imagining how exactly those individual servers will know how and when to replicate information properly to adjacent servers, but I will learn by trial by fire haha. It's interesting, then, that between two servers there will be nearly 400 m of overlap where objects are constantly updated on both servers. What about the corner case where four servers' areas join at one point, so the 200 m boundaries overlap in a small area? Then the owning server is responsible for updating the object on all the other three servers, right?

Thanks!

