
Kylotan


Topics I've Started

MMOs and modern scaling techniques

10 June 2014 - 07:26 AM

(NB. I am using MMO in the traditional sense of the term, ie. a shared persistent world running in real-time, not in the modern broader sense, where games like Farmville or DOTA may have a 'massive' number of concurrent players but there is little or no data that is shared AND persistent AND updating in real-time.)

 

In recent discussions with web and app developers, one thing has become quite clear to me - the way they tend to approach scalability these days is somewhat different to how game developers do it. They are generally using a purer form of horizontal scaling - fire up a bunch of processes, each mostly isolated, communicating occasionally via message passing or via a database. This plays nicely with new technologies such as Amazon EC2, and is capable of handling 'web-scale' amounts of traffic - eg. clients numbering in the tens or hundreds of thousands - without problems. And because the processes only communicate asynchronously, you might start up 8 separate processes on an 8-core server to make best use of the hardware.
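To make that concrete, here's a toy sketch of that share-nothing worker model. Everything in it is invented for illustration - threads in one process stand in for separate processes, and a hand-rolled queue stands in for a real message broker:

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    // Toy thread-safe queue, standing in for an external message broker.
    class MessageQueue
    {
    public:
        void Push(std::string msg)
        {
            { std::lock_guard<std::mutex> lock(m_Mutex); m_Queue.push(std::move(msg)); }
            m_Cond.notify_one();
        }
        std::string Pop()
        {
            std::unique_lock<std::mutex> lock(m_Mutex);
            m_Cond.wait(lock, [this] { return !m_Queue.empty(); });
            std::string msg = m_Queue.front();
            m_Queue.pop();
            return msg;
        }
    private:
        std::queue<std::string> m_Queue;
        std::mutex m_Mutex;
        std::condition_variable m_Cond;
    };

    int main()
    {
        MessageQueue queue;
        std::vector<std::thread> workers;
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 8;

        // One worker per core. Each handles a request in isolation; any
        // shared state would live in an external database, not in RAM here.
        for (unsigned i = 0; i < cores; ++i)
        {
            workers.push_back(std::thread([&queue, i] {
                for (;;)
                {
                    std::string msg = queue.Pop();
                    if (msg == "quit")
                        break;
                    std::cout << "worker " << i << " handled " << msg << "\n";
                }
            }));
        }

        for (int j = 0; j < 20; ++j) queue.Push("request");
        for (unsigned i = 0; i < cores; ++i) queue.Push("quit");
        for (std::size_t i = 0; i < workers.size(); ++i) workers[i].join();
    }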

 

In my experience of MMO development, this is not how it works. There is a lot of horizontal scaling, but instead of firing up servers on demand, we pre-allocate them and tend to divide them geographically - both in terms of real-world location, so as to be closer to players, and in terms of in-game location, so that characters that are co-located also share the same game process. This seems to require more effort on the game developer's part and also imposes several extra limitations, such as making it harder to play with friends located overseas on different shards, requiring each game server to have different configuration and data, etc. Then there is the idea of 'instancing' a zone, which could be thought of as another geographical partition, except in an invisible 4th dimension (and that is how I have implemented it in the past).

 

MMOs do have a second trick up their sleeve: it's common to farm out certain tasks to various heterogeneous servers. A typical web app might just have many instances of the front-end server and one database (possibly with some cache servers in between), but in my experience MMOs will often have specific servers for handling authentication, chat and communications, accounts and transactions, etc. It's almost like extreme refactoring: if a piece of functionality can run asynchronously from the gameplay, it can be siphoned off into a new server, with messaging to and from the game server set up accordingly.

 

But in general, MMO game servers are limited in their capacity, so that you can typically only get 500-1500 players in one place. You can change the definition of 'place' by adding instancing and shards, you can make the world seem to hold more characters by seamlessly linking servers together at the boundaries, and you can increase concurrency a bit more by farming out tasks to special servers.

 

So I wonder; are we doing it wrong? And more specifically, can we move to a system of homogeneous server nodes, created on demand, communicating via message passing, to achieve a larger single-shard world?

 

Partly, the current MMO server architecture seems to be born out of habit. What started off as servers designed to accommodate a small number of people grew and grew until we have what we see today - but the underlying assumption is that a game server should (in most cases) be able to take a request from a client, process it atomically and synchronously, and alter the game state instantly, often replying at the same time. We keep all game information in RAM because that is the only way we can effectively handle the request synchronously. And we keep all co-located entities in the same RAM because that's the only way we can easily handle multiple-entity transactions (eg. trading gold for items). But does this need to be the case?
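For illustration, here's roughly what that traditional model looks like in code - a single game thread, both entities in local RAM, and the whole trade as one atomic, synchronous function (all the names here are invented):

    #include <algorithm>
    #include <vector>

    struct Item { int id; };

    struct Character
    {
        int gold = 0;
        std::vector<Item> items;
    };

    // Runs on the single game thread, so nothing can observe a half-finished
    // trade: no locks, no messages, no distributed coordination.
    bool TradeGoldForItem(Character& buyer, Character& seller, int itemId, int price)
    {
        if (buyer.gold < price)
            return false;
        auto it = std::find_if(seller.items.begin(), seller.items.end(),
            [itemId](const Item& item) { return item.id == itemId; });
        if (it == seller.items.end())
            return false;
        buyer.gold -= price;
        seller.gold += price;
        buyer.items.push_back(*it);
        seller.items.erase(it);
        return true; // state is changed instantly and the reply can go out now
    }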

 

My guess is that the main reason we can't move to a more distributed architecture comes partly down to latency but mostly down to complexity. If characters exist across an arbitrary number of servers, any action involving multiple characters is going to require passing messages to those other processes and getting all the responses back before proceeding. This turns behaviour that used to be a single function into either a coroutine (awkward in C++) or some sort of callback chain, also requiring error-detection (eg. if one entity no longer exists by the time the messages get processed) and synchronisation (eg. if one entity is no longer in a valid state for the behaviour once all the data is collected). This seems somewhat intractable to me - if what used to be a simple piece of functionality is now 3 or 4 times as complex, you're unlikely to get the game finished. And will the latency be too high? For many actions, I expect not, but for others, I fear it would.
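To show what I mean, here's a sketch of that same trade once buyer and seller can live on different nodes. Everything here is invented for illustration, and the stand-in Node::Send just runs its callback immediately - a real version would go over the network and would also need timeouts and crash recovery at every step:

    #include <functional>
    #include <iostream>
    #include <string>

    typedef std::function<void(bool ok)> Callback;

    struct Node
    {
        // Stand-in for asynchronously messaging whichever process owns an entity.
        void Send(const std::string& entity, const std::string& request, Callback done)
        {
            std::cout << "-> " << entity << ": " << request << "\n";
            if (done) done(true); // pretend the remote side succeeded
        }
    };

    void StartTrade(Node& node, std::string buyer, std::string seller, int itemId, int price)
    {
        // Step 1: escrow the buyer's gold on whichever node owns the buyer.
        node.Send(buyer, "reserve " + std::to_string(price) + " gold", [=, &node](bool ok) {
            if (!ok) return; // buyer couldn't afford it - trade never started
            // Step 2: escrow the item. The seller may have logged off or moved
            // to another node since step 1 began - that has to be handled.
            node.Send(seller, "reserve item " + std::to_string(itemId), [=, &node](bool ok) {
                if (!ok) {
                    // Roll back step 1 - yet another message.
                    node.Send(buyer, "release gold", Callback());
                    return;
                }
                // Step 3: commit both sides.
                node.Send(buyer, "commit", Callback());
                node.Send(seller, "commit", Callback());
            });
        });
    }

    int main()
    {
        Node node;
        StartTrade(node, "buyer", "seller", 42, 100);
    }

Even in this stripped-down form, what used to be one function is now three chained steps plus a rollback path, and none of the real failure modes are handled yet.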

 

But am I wrong? Outside of games people are writing large and complex applications using message queues and asynchronous behaviour. My suspicion is that they can do this because they don't have a large amount of shared state (eg. world and character data). But maybe it's because they know ways to accomplish these tasks that somehow the game development community has either not become aware of or simply not been able to implement yet.

 

Obviously there have been attempts to mix the two ideas, by running many homogeneous servers but attempting to co-locate all relevant data on demand so that the actual work can be done in the traditional way, by operating atomically on entities in RAM. On paper this looks like a great solution; the only problem is that it doesn't seem to work in practice (eg. Project Darkstar and various offshoots). Sending the entities across the network so that they can be operated on is like trying to send the mountain to Mohammed rather than him going to the mountain (ie. sending the message to the entity). What you gain in programming simplicity you lose in serialisation costs and network latency. A weaker version of this would be automatic geographical load balancing, I suppose.

 

So, I'd like to hear any thoughts on this. Can we make online games more amenable to an async message-passing approach? Or are there fundamental limitations at play?


Representing data-driven concepts alongside instances of those concepts

10 July 2013 - 08:02 AM

In many games, it's typical to load in several 'concepts' or 'definitions' from the data - for example, you might load in vehicle types, character classes, item types, etc. This data might be passed to some sort of factory which creates one of several related classes each time. And then in-game, you create instances of these concepts - vehicles, individual characters, individual items, etc. These will reference the definition class to get access to various pieces of data.
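To be concrete, this is the sort of arrangement I mean (all names invented):

    #include <string>
    #include <vector>

    // The 'concept': loaded once from data, shared and immutable at runtime.
    struct VehicleType
    {
        std::string name;
        float maxSpeed;
        int cost;
    };

    // The in-game instance: cheap, mutable, points back at its definition.
    struct Vehicle
    {
        const VehicleType* type; // shared definition data
        float currentSpeed;      // per-instance state
        float x, y;
    };

    Vehicle SpawnVehicle(const VehicleType& type)
    {
        Vehicle v;
        v.type = &type;
        v.currentSpeed = 0.0f;
        v.x = v.y = 0.0f;
        return v;
    }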

 

But where it seems to get tricky is when the instances are used in some sort of algorithm, and need their own set of state data, which might vary depending on the concept being referenced. If you can have fully-generic concepts, or fully-generic instances, it's not an issue. But often you don't, and the specifics of the instance may depend on the specifics of the concept.

 

I can think of several ways to approach this in C++, but none of them are fully satisfactory.

  • If there's one generic 'instance' shared across all related concepts, it needs to accommodate all possible state, which is awkward to maintain. (eg. the Vehicle object might need current_gear for cars, landing_gear_down for planes, rudder_position for boats, etc etc.)
  • If there's a generic data store used as the state data - eg. a std::map of key/value objects - then it will work for any case... if you don't mind all the error-checking in the definition to ensure important keys exist, and to ensure the values are the right type, etc.
  • If there's a separate instance class for every concept class, it's error-prone. You have to be very sure to create them properly, and then several parts of the instance class need to perform casts to the assumed type. (A factory method on the definition can at least centralise the creation step - see the sketch after this list.)
  • If the definition class is re-used as the instance class - e.g. using the Prototype pattern - then you have one C++ class essentially handling 2 responsibilities. The part of the code dealing only with definitions has several state variables it doesn't need to touch, and the part dealing only with instances has several definition variables it shouldn't touch. Plus it wastes memory to duplicate the definition in that way.
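For what it's worth, here's a sketch of how a factory method can take some of the sting out of the third option: the definition/instance pairing lives in exactly one place, and the instance holds a correctly-typed reference to its own definition - though code that only has a generic VehicleInstance pointer still can't reach car-specific state without a cast (names invented):

    #include <memory>

    class VehicleInstance
    {
    public:
        virtual ~VehicleInstance() {}
        virtual void Update(float dt) = 0;
    };

    class VehicleDefinition
    {
    public:
        virtual ~VehicleDefinition() {}
        // Each definition subclass creates its matching instance type.
        virtual std::unique_ptr<VehicleInstance> CreateInstance() const = 0;
    };

    class CarDefinition; // defined below

    class CarInstance : public VehicleInstance
    {
    public:
        explicit CarInstance(const CarDefinition& def) : m_Def(def), m_CurrentGear(1) {}
        void Update(float dt) { /* can use m_CurrentGear and m_Def, no casts */ }
    private:
        const CarDefinition& m_Def;  // correctly typed, no cast needed
        int m_CurrentGear;           // car-specific state lives here only
    };

    class CarDefinition : public VehicleDefinition
    {
    public:
        std::unique_ptr<VehicleInstance> CreateInstance() const
        {
            // The one and only place that pairs CarDefinition with CarInstance.
            return std::unique_ptr<VehicleInstance>(new CarInstance(*this));
        }
    };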

This does seem like a problem lots of intermediate-level developers will face. How are people handling issues like this?

 

(Edit: there's a good article on gameprogrammingpatterns.com about this, but it basically just agrees that behaviour becomes more difficult: http://gameprogrammingpatterns.com/type-object.html#it%27s-harder-to-define-behavior-for-each-type)


Communicating through an interface but needing implementation detail

02 July 2013 - 08:56 AM

I have an interface (defined as an abstract base class) that looks like this:

    class AbstractInterface
    {
    public:
        virtual bool IsRelatedTo(const AbstractInterface& other) const = 0;
    };

And I have an implementation of this (constructors etc omitted):

    class ConcreteThing : public AbstractInterface
    {
    public:
        bool IsRelatedTo(const AbstractInterface& other) const
        {
            return m_ImplObject.has_relationship_to(other.m_ImplObject);
        }

    private:
        ImplementationObject m_ImplObject;
    };

The AbstractInterface forms an interface in Project A, and the ConcreteThing lives in Project B as an implementation of that interface. This is so that code in Project A can access data from Project B without having a direct dependency on it - Project B just has to implement the correct interface.

Obviously the line in the body of the IsRelatedTo function cannot compile - that instance of ConcreteThing has an m_ImplObject member, but it can't assume that all AbstractInterfaces do, including the `other` argument.

In my system, I *can* actually assume that all implementations of AbstractInterface are instances of ConcreteThing (or subclasses thereof), but I'd prefer not to be casting the object to the concrete type in order to get at the private member, or encoding that assumption in a way that will crash without a diagnostic later if this assumption ceases to hold true.

I cannot modify ImplementationObject, but I can modify AbstractInterface and ConcreteThing. I also cannot use the standard RTTI mechanism for checking a type prior to casting, or use dynamic_cast for a similar purpose.

I have a feeling that I might be able to overload `IsRelatedTo` with a ConcreteThing argument, but I'm not sure how to call it via the base IsRelatedTo(AbstractInterface) method. It wouldn't get called automatically as it's not a strict reimplementation of that method.

 

Similarly, someone mentioned using the Visitor pattern to do double dispatch, but it's not clear how I would do that, or if it's even possible.

Is there a pattern for doing what I want here, allowing me to implement the `IsRelatedTo` function via `ImplementationObject::has_relationship_to(ImplementationObject)`, without risky casts?
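For reference, here's roughly what the overload-based double dispatch would look like, with a stand-in ImplementationObject so the sketch is self-contained. The catch, which may rule it out here: AbstractInterface in Project A now has to forward-declare ConcreteThing, which gives back some of the decoupling the interface was meant to provide:

    class ConcreteThing; // Project A must now know this type exists

    // Stand-in for the real, unmodifiable implementation class.
    struct ImplementationObject
    {
        bool has_relationship_to(const ImplementationObject&) const { return true; }
    };

    class AbstractInterface
    {
    public:
        virtual ~AbstractInterface() {}
        virtual bool IsRelatedTo(const AbstractInterface& other) const = 0;
        // Second dispatch step: by now the argument's concrete type is known.
        virtual bool IsRelatedToConcrete(const ConcreteThing& other) const = 0;
    };

    class ConcreteThing : public AbstractInterface
    {
    public:
        bool IsRelatedTo(const AbstractInterface& other) const
        {
            // First dispatch: 'other' calls us back with *this, statically typed.
            return other.IsRelatedToConcrete(*this);
        }

        bool IsRelatedToConcrete(const ConcreteThing& other) const
        {
            // 'other' here is the original caller, so keep the operand order.
            return other.m_ImplObject.has_relationship_to(m_ImplObject);
        }

    private:
        ImplementationObject m_ImplObject;
    };

No casts and no RTTI - but any new implementation of AbstractInterface has to be expressible in terms of the concrete overloads, which is really the same assumption as before, just enforced by the compiler.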

 


Where are all the good GUI libraries?

03 April 2013 - 11:28 AM

This is as much of a rant as a question, for which I apologise.

 

Basically, I want to make small games that are quite GUI-heavy. Think XCOM, or old-school RPGs. And I want to use higher-level languages, such as Python or C#, because life's too short to be writing in C++ if you don't really need to. Unfortunately, what I seem to be finding is that the game libraries and frameworks for any language other than C++ either have no GUI support or what they do have is shockingly bad.

 

My usual development environment of choice these days is Unity. It has a built-in GUI system, but this is pretty awful to use (unless you're a fan of immediate mode GUIs), lacks a lot of the really useful widgets, is very awkward to style, and renders really slowly.

 

Unity developers usually therefore resort to 3rd party libraries, but these too are awful in their own different ways. Take NGUI for example: if you want to be able to scroll one panel inside another, it needs to employ a separate shader and you need to place invisible barriers in the interface to stop the player from accidentally clicking one of the objects outside the clipped window. Ridiculous.

 

Another language I would like to use is Python. But pretty much the only modern game engine for Python is pyglet (or cocos2d, which is based on pyglet), and that doesn't seem to have any decent GUI library at all. kytten exists, and while it has a decent selection of widgets, construction of dialogs is very 'fire-and-forget' and it's incredibly awkward to modify the GUI later. You end up needing to create the UI in reverse order so that you can hold references to the controls in the middle, in case you need to edit their values.

 

Yet when you look at C++, there seem to be a lot of decent GUI libraries available: CEGUI, GWEN, SFGUI, libRocket, Awesomium, etc. It's obviously not impossible to write decent, usable, flexible GUI libraries - it's just that apparently nobody is bothering when it comes to the other languages.

 

Is it any wonder that so many indie games are simple puzzle platformers, when we have 101 different choices for getting sprites onto the screen, and virtually no good options for getting text and dialogues on screen? Am I missing something? Are there some great options out there that I've overlooked? Or is this as big a problem as I think it is?


Can we undo votes?

02 April 2013 - 03:43 PM

I clicked the downvote instead of the upvote (on this topic's first post) and I can't find any way of undoing it. Why can't we change our votes?

