Is this a viable architecture for an MMOG?

Quote:Original post by Steadtler
What happens if a lot of your players decide to go into the same cell at the same time? If your cells are not very small, that will happen sooner rather than later. If your cells *are* very small, it won't be performant because you will have too many cells. It's a classic problem, I think. I see two solutions: having cells of varying sizes (hard to manage!) or having copies of cells on multiple servers (RAID-like, less efficient).



That could certainly be a problem if the cell has a lot of information attached, or if a group of adjacent cells has a lot of information taken together.
The varying-size cell solution sounds really painful to implement!
Maybe instead of passing cells to other zones in overload conditions, the overloaded cells should be shared with other servers ...
I have not really thought about this deeply enough, but it should be managed in some way...



Quote:Original post by Steadtler
Since this information is small, fixed in size, and does not change quickly, you can keep a copy of this information on every server.

Hope I am of help.



Yes, I think that's the most reasonable/efficient way of managing that.

Thanks a lot Steadtler.



Keeping a single cell on multiple servers means you have to do a network message for same-cell object interaction; that's not great for performance.

We implemented variable-shape cells for There; each "cell" is an arbitrary set of nodes in a modified quadtree (modified to map an entire earth-size sphere, not just a square area). We can re-balance the cells "live" (in real time), but we usually don't need to, because after an initial settling pass, the load is fairly repetitive over time.
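
As a rough illustration only (these names are invented, not the actual There code), a cell-as-a-set-of-quadtree-nodes might look like this in C++:

#include <set>
#include <cstdint>

typedef uint64_t NodeId;       // e.g. a path-encoded quadtree node id

struct Cell {
    int serverId;              // which server currently owns this cell
    std::set<NodeId> nodes;    // an arbitrary set of quadtree nodes
    double load;               // measured load (players, messages/sec)
};

// Live re-balance: hand the hottest node of an overloaded cell to a
// lighter cell (possibly on another server). Rarely needed once the
// load has settled, as described above.
void rebalance(Cell& hot, Cell& cool, NodeId hottest) {
    hot.nodes.erase(hottest);
    cool.nodes.insert(hottest);
}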
enum Bool { True, False, FileNotFound };
True visibility calculations aren't required for a client - besides, you need to send traffic from non-visible cells (for example a conversation behind the player).

Your only concern on the server end is what is 'relevant' to the client - typically this is a simple radius affair. You can use spatial partitioning methods to aid in culling for certain event types - but it's a lot less headache to design your world with an effective PVS system, defining for each segment either a procedurally generated PVS or a manually reduced set for certain layouts.
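
As a minimal sketch of that 'simple radius affair' (the types here are invented for illustration):

#include <cmath>

struct Pos { float x, y; };

// Relevance test: is the entity within the client's interest radius?
// Comparing squared distances avoids the sqrt on a hot path.
bool isRelevant(const Pos& client, const Pos& entity, float radius) {
    float dx = entity.x - client.x;
    float dy = entity.y - client.y;
    return dx * dx + dy * dy <= radius * radius;
}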

You CANNOT allow the client under any circumstance to dictate to the server what information it should receive. This is asking for trouble. Say you have this layout:

a1 a2 a3 a4 a5 a6
b1 b2 b3 b4 b5 b6
c1 c2 c3 c4 c5 c6
d1 d2 d3 d4 d5 d6
e1 e2 e3 e4 e5 e6
f1 f2 f3 f4 f5 f6

A client has a visual range of 1 cell radius and is in b5. It SHOULD receive a4-6, b4-6, and c4-6.
Your client could request f1, d2, a1, and anything else it feels like.
You'd still need to perform the checks on the server anyway, so you might as well keep the decision there.

With a regular cell-based map, your checks are really trivial - you simply take the cells at the appropriate coordinates. It gets a bit trickier when you have an irregular, hierarchical structure like I do - which is why I have all that bizarre neighbourhood stuff going on.
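
For the regular grid case, the trivial server-side check could look like this (a sketch, assuming a simple row/column grid like the a1-f6 layout above):

#include <vector>
#include <utility>
#include <algorithm>

// All cells within 'radius' of (row, col), clamped to the map edges.
// For the 6x6 layout above, a client in b5 (row 1, col 4) with radius 1
// gets exactly a4-a6, b4-b6 and c4-c6 - decided by the server, never
// requested by the client.
std::vector<std::pair<int, int> > relevantCells(int row, int col,
                                                int radius,
                                                int rows, int cols) {
    std::vector<std::pair<int, int> > out;
    for (int r = std::max(0, row - radius);
         r <= std::min(rows - 1, row + radius); ++r)
        for (int c = std::max(0, col - radius);
             c <= std::min(cols - 1, col + radius); ++c)
            out.push_back(std::make_pair(r, c));
    return out;
}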

If you want to transfer cells from server domain to server domain (don't use the term zone for the area a server handles if it can be discontinuous) you will need a means of distributing that information to all other servers in the cluster - a master server is the simplest way of doing this and handling messages that overflow zone boundaries.

Steadtler is right to encourage you to think about multi-server scaling - sure, you can start your game on one server, and possibly get the content together to support, say, 300 users (still quite a big game!). When those 300 are paying you subs and their mates are trying to join, you'll want to be able to expand the game with minimal down-time. If you have to rewrite your entire codebase to do so, you run the risk of losing customers. When you're paying a monthly fee for colocated hosting, losing customers sucks.

Winterdyne Solutions Ltd is recruiting - this thread for details!
Quote:Original post by hplus0603
Keeping a single cell on multiple servers means you have to do a network message for same-cell object interaction; that's not great for performance.

We implemented variable-shape cells for There; each "cell" is an arbitrary set of nodes in a modified quadtree (modified to map an entire earth-size sphere, not just a square area). We can re-balance the cells "live" (in real time), but we usually don't need to, because after an initial settling pass, the load is fairly repetitive over time.



So your cells are groups of quadtree nodes, and you split a cell when needed, right? It's almost like Steadtler's varying-size cells.
Then a solution when a cell is overloaded would be to split it in two and send one half to another server.
I need to think through the pros and cons of that.

thanks hplus0603.

Quote:Original post by _winterdyne_
True visibility calculations aren't...



Thanks for all the info winterdyne.

But, for example, positions and states of players that are in radius range but not visible shouldn't be sent by the server, because they're not relevant, right?
Maybe I can have two sets of data, one radius dependent and the other view dependent? Or would that be too much work for little performance improvement?

I have not said I don't need scalability; in fact, it's a must.

Because your cells are in a grid topology, you can coarsely identify which should be visible based on the frustum. Within these, if you wish, you can send specific 'make visible / make invisible' messages based on fine frustum checks. But in order to quickly load assets on your client, you should send the descriptors (defining what assets should be loaded for what entity) BEFORE you actually need them - I'd use a larger-than-visual-range radius-based method for this, as your client can turn around quite rapidly.
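
A sketch of that two-radius idea (the names and structure are illustrative, not prescriptive):

struct InterestRadii {
    float visibleRadius;     // the actual visual range
    float descriptorRadius;  // larger, so assets load before they're needed

    // Stream the asset descriptor early, inside the wider radius.
    bool shouldSendDescriptor(float distSq) const {
        return distSq <= descriptorRadius * descriptorRadius;
    }
    // Only flag the entity visible inside the tighter radius.
    bool shouldMakeVisible(float distSq) const {
        return distSq <= visibleRadius * visibleRadius;
    }
};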

Given today's hardware, your main bottleneck is going to be the network, especially on a single server / single bandwidth allocation setup (unless of course you're running some insane physics simulation on a lot of entities). It's therefore in your interest to cull as much data from being sent as is possible.
You will of course need to sort data - split chat into different channels, for example, and allow clients to subscribe to one or more. Perhaps some clients specify 'ignore shouts' - that filtering should be done at the server end, not at the client.

Your cell based topology can help here - implement a channel for each major event type for each cell - and subscribe or unsubscribe clients to potentially relevant cells as they move around.
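
A hedged sketch of such per-cell, per-event-type channels (ChannelKey, EventType and so on are made-up names):

#include <map>
#include <set>
#include <utility>

enum EventType { CHAT_SAY, CHAT_SHOUT, MOVEMENT, COMBAT };
typedef int ClientId;
typedef std::pair<int, EventType> ChannelKey;  // (cell id, event type)

struct ChannelTable {
    std::map<ChannelKey, std::set<ClientId> > subs;

    void subscribe(int cell, EventType t, ClientId c) {
        subs[std::make_pair(cell, t)].insert(c);
    }
    void unsubscribe(int cell, EventType t, ClientId c) {
        subs[std::make_pair(cell, t)].erase(c);
    }
    // A client that 'ignores shouts' simply never subscribes to the
    // CHAT_SHOUT channels of its cells, so filtering stays server-side.
};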

Winterdyne Solutions Ltd is recruiting - this thread for details!
Quote:you can coarsely identify which should be visible based on the frustum


The user can turn around much faster than the network can keep up. I would strongly recommend against any kind of aggressive view-frustum cull in the server-side filter, because it will lead to object popping when the user quickly turns around.
enum Bool { True, False, FileNotFound };
Hence me advocating the radial method.
Winterdyne Solutions Ltd is recruiting - this thread for details!
I think that the radius method is a great idea. As far as which server handles which cell, here is what I wrote in another post.

Quote:
Why not just have one master machine and then a bunch of slave machines? When you try to connect, you connect to the master first to find the slave with the least users. The slaves are the machines that actually do all your packet forwarding. On each slave machine you have a list of zones, and in these lists of zones you have the people who are currently in them. So let's say you're in zone A6: you send a chat message to the slave, it looks at list A6, sends the message to everyone in A6, and also sends it out to the other slave machines, which in turn send the message to everyone in their A6 lists. If you get too many people on one slave, you can have users transfer to a less busy slave to better balance the load. Any problems with this scenario? Blizzard's Battle.net servers work in the same manner, except they balance the load of chat channels, not game zones.


Expanding on that, the backend network traffic should be easily manageable. Gigabit networking comes standard on all boards now, so that shouldn't be an issue.
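
A rough sketch of that master/slave fan-out (all types are invented; 'send' stands in for the actual socket write):

#include <string>
#include <vector>

struct Slave {
    std::vector<std::vector<int> > zoneClients;  // client ids per zone
    std::vector<Slave*> peers;                   // the other slaves

    void send(int client, const std::string& msg) { /* socket write */ }

    void deliverLocal(int zone, const std::string& msg) {
        for (size_t i = 0; i < zoneClients[zone].size(); ++i)
            send(zoneClients[zone][i], msg);
    }

    // Chat in zone A6: deliver to the local A6 list, then forward one
    // copy to each peer slave, which delivers to its own A6 list.
    void onChat(int zone, const std::string& msg) {
        deliverLocal(zone, msg);
        for (size_t i = 0; i < peers.size(); ++i)
            peers[i]->deliverLocal(zone, msg);
    }
};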
Quote:Original post by I_Smell_Tuna
I think that the radius method is a great idea. ...



I have been told not to share a cell among multiple servers, but with a gigabit backend maybe it's not a problem any more.
I'm trying to implement all this stuff; for the moment I will not share cells between servers, but maybe I will try it in the future.
Two types of data will reside in the cells: radial and view dependent.
Radial data will be sent to the client whenever the player is within the data's radial zone.
And for view-dependent data, frustum culling will be done; I will use a bigger frustum than the one reported by the client, to avoid popping. Players, NPCs and such will be view-dependent data.
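
A small sketch of that widened-frustum test (a 2D cone check for brevity; the margin value and vector types are illustrative):

#include <cmath>

struct Vec2 { float x, y; };

// Is 'target' inside a cone around the client's (unit) forward vector,
// with the client's FOV widened by 'marginRad' to absorb quick turns?
bool inWideFrustum(Vec2 pos, Vec2 forward, Vec2 target,
                   float clientFovRad, float marginRad) {
    float dx = target.x - pos.x;
    float dy = target.y - pos.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) return true;                       // same spot: visible
    float cosAngle = (dx * forward.x + dy * forward.y) / len;
    float halfAngle = clientFovRad * 0.5f + marginRad;  // the widened FOV
    return cosAngle >= std::cos(halfAngle);
}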

This topic is closed to new replies.
