mmo server architecture [now with new design idea]

30 comments, last by rmtew 14 years, 10 months ago
Well, it's up to you, but I wouldn't suggest going down that path.

Designing an "engine" vs. a "game" is hard enough even for folks experienced in a particular area. It's really hard to develop things when you don't actually know the use cases.

When you're learning the problem area as well, it's even tougher.

I've almost always found that libraries/engines/whatever that are developed for a specific case and then expanded on end up much more usable than ones written in a vacuum.
Quote:Original post by kyoryu
To take advantage of multicore systems, you can also use multiple processes per machine.


*slams his head into the wall* Geez. Why didn't I think of that before :(

Now it's pretty late to start scaling up for a multi-process server (and, by extension, one that could be spread across multiple computers), which I opted not to do because for some bizarre reason I thought I would need multiple computers for that...

Ah well. Features to be added later I suppose then ;-)

Boy, do I feel stupid. Thank god for these forums. You never know when you'll spot your own mistake just by reading threads ;-)
Quote:To take advantage of multicore systems, you can also use multiple processes per machine.


They can't all accept connections on the same socket, though. Thus, you'll either have to have one networking input process, which then uses shared memory or pipes to funnel data to worker processes, or you'll have to have some service virtualization, such as multiple interfaces with different IP, and one IP per server process, or an up-front service arbiter that can give you a different port for load balancing reasons.
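One of the approaches above, the up-front service arbiter that hands each client a different port for load balancing, can be sketched in a few lines. This is a hypothetical sketch (the class and port numbers are invented for illustration); a real arbiter would weigh the current load of each worker process rather than cycling blindly.

```python
import itertools

class PortArbiter:
    """Hands each new client the address of one of several worker
    processes, each listening on its own port (round-robin)."""

    def __init__(self, host, ports):
        self._host = host
        self._cycle = itertools.cycle(ports)

    def assign(self):
        # Return the (host, port) the next client should connect to.
        return (self._host, next(self._cycle))

# Three worker processes on one machine, each bound to its own port:
arbiter = PortArbiter("10.0.0.5", [4001, 4002, 4003])
print(arbiter.assign())  # → ('10.0.0.5', 4001)
```

The client performs one short exchange with the arbiter, then connects directly to the worker it was assigned; the workers never contend for a single listening socket.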
enum Bool { True, False, FileNotFound };
Or, you can tell the client which machine/port to connect to when they need to switch servers.

But yes, that is the issue with that, and the solutions you proposed are viable ones.

They're probably necessary no matter what, unless you plan on one process per machine.
So I've done more research, watched a few video presentations, and generally feel I have a good idea of how big commercial MMO servers work. From what I've seen, few if any implement a system like the one I've designed, not because it won't work but because you just don't need that much complexity.

I've changed my design a bit in light of this and have built up a roadmap to the "end" design. I feel this will be easier to implement, will get something working faster, and will let me see where the load is going at each stage; perhaps I'm overlooking some process that takes too much time and needs to be extracted to its own server type. You can view the roadmap here.

From what I've looked at, the thing that has impressed me most so far is Eve Online. I knew it was a single world (no sharding) and that it has had more peak concurrent users on a single server instance than any other MMO, which is impressive by itself, but what I didn't know is that it runs on (Stackless) Python. This made me rethink whether I really want to use C++, as it can be error prone, tedious, and slow for writing code that clearly expresses what you are doing (a big statement, but I would say a lot of people agree that Python/C#/others are easier to develop in than C++). Threading can also be a hassle, and fibers/tasklets/coroutines, or whatever you want to call them, look very interesting and like a good solution. Servers are relatively cheap in the scheme of things, and even though Python is slow, throwing more servers at the problem obviously does work.

The other thing I've noticed, which I didn't expect, is the use of TCP/IP. Eve Online, World of Warcraft, and several other big MMOs use TCP/IP exclusively. IMHO this would make things simpler and mean I don't have to rely on ENet or another library that I might run into problems with. Not to mention I don't plan on an FPS MMO, so real-time movement is less important.

As you guys have discussed, running multiple servers on the same computer means you have to use different ports. I don't see this as an issue, since players connect through the gateway and it's an internal detail.

I'm going to look more into "stackless" languages and see which one I like best. I may just go the Python route. Either way I think I have a solid route for development and expansion, so feel free to criticize :) .
"stackless" is a misnomer IMO -- there's still a stack involved. The only thing with "stackless" python is that it allocates its own stacks, fiber style, rather than using the C stack. That, in itself, is not much of a benefit IMO.

Co-routines and fibers are interesting from many aspects, but they do not let you scale across multiple cores on a single CPU. You need real, pre-emptive threads to do that. It is possible to combine fibers with threads, if that's what floats your boat. I think the more important design take-away is that of sending data along with processing to the processing hub, and then executing a queue of tasks.

The main design choice for lean server-side utilization is to make the game not based on continual server-side simulation. In games like WoW, and Eve especially, the physics does not need to run server side. Instead, the server just acts as a router for player information, and an arbiter of seldom-executed game rules. If you can accept a one-second latency for any player commands, the server only needs to process player commands once a second at most, which will use a lot less CPU than a server that runs physics for the entire world at 30 times a second or more.
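The once-a-second command processing described above amounts to a tick loop that drains a command queue instead of simulating continuously. Here is a minimal sketch under those assumptions (the function names and the dict-based world state are invented for illustration, not from any of the games mentioned):

```python
from collections import deque

TICK_SECONDS = 1.0  # accept up to one second of command latency

command_queue = deque()          # filled by the networking layer
world_state = {"players": {}}

def apply_command(state, cmd):
    # Hypothetical rule arbitration: just record the player's action.
    player, action = cmd
    state["players"].setdefault(player, []).append(action)

def run_tick(state, queue):
    """Drain every command queued since the last tick and apply it once.
    No continual physics simulation runs between ticks."""
    while queue:
        apply_command(state, queue.popleft())

# One tick's worth of traffic:
command_queue.extend([("alice", "move_north"), ("bob", "attack")])
run_tick(world_state, command_queue)
print(world_state["players"]["alice"])  # → ['move_north']
```

In production the loop would sleep until the next tick boundary (`TICK_SECONDS`) and broadcast the resulting state deltas to clients; between ticks the server is essentially idle, which is the CPU saving being described.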
enum Bool { True, False, FileNotFound };
Quote:Original post by hplus0603"stackless" is a misnomer IMO -- there's still a stack involved. The only thing with "stackless" python is that it allocates its own stacks, fiber style, rather than using the C stack. That, in itself, is not much of a benefit IMO.
Stackless does not allocate stacks, fiber style. When a switch is done from one microthread to another, it copies the area of stack that has been used from the current microthread to a buffer, then copies back onto the stack the corresponding buffer from the next microthread. As I understand fibers, this is much more lightweight than they are. Last I looked at Windows fibers, they allocated a stack, like proper threads. Coincidentally, Windows fibers were one of the solutions that were considered by CCP back in 2000.

The name Stackless is a holdover from its original implementation, not an advertisement of a benefit. Ditching the stacklessness, and the continuations it made possible, had no downsides except for those who desire continuations.
Quote:Original post by hplus0603Co-routines and fibers are interesting from many aspects, but they do not let you scale across multiple cores on a single CPU. You need real, pre-emptive threads to do that. It is possible to combine fibers with threads, if that's what floats your boat.
Yes. In my experience, the benefits of coroutines are as a tool suitable for building systems that allow programmers to write synchronous code. In microthreads for instance. This was the functionality CCP was looking for when it chose Stackless Python.

Of course, if the coroutines are not what Lua Coco calls "true C coroutines" (which is what Stackless offers), then you lose a lot of the benefits that coroutines give, such as the ability to write straightforward, boilerplate-free code on top of them. The "generator coroutines" that were added to Python are an example of one of these more limited forms of coroutines.
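The limitation of Python's generator coroutines can be shown with plain CPython: a generator can suspend itself with `yield`, but only from its own top-level frame, so a helper function it calls cannot suspend on its behalf. A minimal cooperative scheduler over them (invented names, illustrative only) looks like this:

```python
def ticker(name, count):
    # A generator-based "coroutine": it suspends itself with yield,
    # but only from this frame -- the limited form described above.
    for i in range(count):
        yield f"{name}:{i}"

def round_robin(*gens):
    """Minimal cooperative scheduler over generator coroutines."""
    pending = list(gens)
    out = []
    while pending:
        g = pending.pop(0)
        try:
            out.append(next(g))     # resume until the next yield
            pending.append(g)       # still alive: reschedule it
        except StopIteration:
            pass                    # finished: drop it
    return out

print(round_robin(ticker("a", 2), ticker("b", 2)))
# → ['a:0', 'b:0', 'a:1', 'b:1']
```

With true C coroutines (Stackless tasklets, Lua Coco), the equivalent of `yield` can occur arbitrarily deep in the call stack, which is what makes boilerplate-free synchronous-looking code possible.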

Coroutines can be used as a basis for a system that scales across multiple cores. Anyone who desired to do this would need to develop their own custom system. There is no implementation that I know of available for reuse.

Second Life has a custom system jerry rigged on top of .NET for this very purpose. You can read a high level description of it in this blog post. I think the video of the presentation Linden Labs gave at Lang.NET in 2006 goes into more detail. It can be found on this page by searching for "second life".

On a related note, one of the core features of Stackless Python is that you can serialize (using the Python pickle feature) a microthread, send it across the wire and deserialize it on the other end. Given that the Python pickle format is platform independent, you can use this to take running code on a machine with a little-endian architecture and restore it on another machine with a big-endian architecture. This can be used for the same purpose, but the burden (or opportunity) is on the developer to take this and build on it.
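Pickling a live microthread requires Stackless itself, but the platform independence being relied on here is a property of the ordinary pickle format and can be shown with plain CPython (the dict below is an invented stand-in for per-entity state migrating between server nodes):

```python
import pickle

# Stand-in for state travelling between server machines.
state = {"entity_id": 42, "pos": (128.5, -64.0), "inventory": ["sword"]}

# The pickle byte format is defined independently of word size and
# endianness, so these bytes could be produced on a little-endian
# machine and loaded unchanged on a big-endian one.
blob = pickle.dumps(state, protocol=2)
restored = pickle.loads(blob)
print(restored == state)  # → True
```

Stackless extends this mechanism to tasklets, so the "state" that travels can include a suspended call stack, not just data.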
Quote:Original post by rmtew
Quote:Original post by hplus0603"stackless" is a misnomer IMO -- there's still a stack involved. The only thing with "stackless" python is that it allocates its own stacks, fiber style, rather than using the C stack. That, in itself, is not much of a benefit IMO.
Stackless does not allocate stacks, fiber style. When a switch is done from one microthread to another, it copies the area of stack that has been used from the current microthread to a buffer, then copies back onto the stack the corresponding buffer from the next microthread. As I understand fibers, this is much more lightweight than they are.


Not if there is a lot of data to copy. But considering the Python VM has a relatively high constant-factor overhead, it's probably less sensitive to such implementation details.

See Game Programming Gems II, 3.3 for an article on such implementation. It's still stack-based, it merely includes a bit more juggling.
Perhaps what I said implied that I was going to use Stackless/coroutines instead of threading; that's not exactly what I meant. My interest in Stackless is based more on what it can do for code readability and on being able to process many things at once within the same thread.

My basic idea is still to have a listener thread with shared message in/out stacks between worker threads however coroutines come into play for processing multiple items inside a thread while an operation is blocked.
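The listener-plus-workers shape described above can be sketched with the standard library. This is a toy version under stated assumptions: the message strings and the `("handled", ...)` reply format are invented, and in the real design coroutines inside the worker loop would let other items make progress while one item blocks on I/O.

```python
import queue
import threading

inbox = queue.Queue()    # messages pushed by the listener thread
outbox = queue.Queue()   # replies for the listener to send back

def worker():
    # Worker thread: pull a message, process it, push a reply.
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            break
        outbox.put(("handled", msg))

t = threading.Thread(target=worker)
t.start()

# The listener thread would do this as packets arrive:
inbox.put("login:alice")
inbox.put(None)                  # tell the worker to exit
t.join()

reply = outbox.get()
print(reply)  # → ('handled', 'login:alice')
```

`queue.Queue` is internally locked, so the listener and any number of workers can share the in/out queues without extra synchronization.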

From what I've looked at, Erlang seems like the best fit, however I feel I would be hard pressed to find developers who would want to learn it, and eventually I will want some help. Stackless Python does look good, but according to some presentations on Eve Online the thing that made it viable was StacklessIO, which was written in C++; I really don't know what StacklessIO entails and I'm unsure if I wish to go down that route. Mono recently included Mono.Tasklets (it has actually existed since ~2006 but wasn't part of Mono), which is pretty much a hack for coroutines as it has to save off the whole stack; while it may run a few orders of magnitude faster than Stackless Python, the memory requirements are almost double, which makes it less viable IMHO. What I'm currently looking into is Boo. It has support for coroutines as well, however I'm unsure how it implements them and whether it's a hack or not. The main thing that interests me about Boo is that you can add your own syntax, like Lisp macros, and it's supposed to be as fast as C# (most other dynamic CLI languages seem to take a performance hit).

Anyways I'm still researching what I want to use, for now I can be working on protocol design as that will be useful no matter what language/platform I choose.
Quote:Original post by SeoushiStackless python does look good but according to some presentations on Eve Online the thing that made it viable was StacklessIO which was written in C++, I really don't know what all StacklessIO entails and I'm unsure if I wish to go down that route.


The problem with using piecemeal information like this is that it can lead to the wrong interpretation. Note that Stackless IO is a recent development, only put into production in the last year or so. Compared to the past networking extensions EVE has used, it is just an upgrade to add higher performance. EVE worked without it, although with less scalable networking performance, for many years.

Beyond the performance upgrade, Stackless IO is a networking library that wraps asynchronous IO so that microthreads can use socket objects in a way that blocks only the current microthread, rather than the current OS thread as the standard Python sockets do. If you want programmers using a microthreading framework to be able to write more readable synchronous code, then this is the kind of thing you need to do anyway. Before there was Stackless IO, there were at least two other C++ libraries that did exactly the same thing.

You can get a Python-based library I have written which provides the same functionality as Stackless IO (but without the finely tuned multithreading performance). This library, Stackless socket, is available from here. It is example code that does not have full test coverage, but then again it has been deployed in released products. The goal of this module is to allow people to use sockets in microthreads exactly as they would without them; that is, code readability through a consistent API. An advantage of this is that you can monkeypatch this library in place of the standard socket module, and other modules that use the socket module will suddenly be compatible with your microthreading.
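The monkeypatching trick being described, installing a compatible replacement under the original module's name before anything imports it, can be sketched generically. The module name `netlib` and its `connect` function below are invented for illustration; they are not the real stacklesssocket API.

```python
import sys
import types

# Build a replacement module exposing the same interface callers expect.
patched = types.ModuleType("netlib")
patched.connect = lambda host: f"microthread-friendly connect to {host}"

# Install it under the original name *before* dependents import it.
# Every later `import netlib` now receives the patched module.
sys.modules["netlib"] = patched

import netlib
print(netlib.connect("example.com"))
# → 'microthread-friendly connect to example.com'
```

This works because Python's import system consults `sys.modules` first, so code written against the standard module never notices the substitution, which is exactly why a drop-in microthread-aware socket module needs a consistent API.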

If you really need the satisfaction of living the hype of Stackless IO, I believe the intention is to release it as open source when it is ready.

This topic is closed to new replies.
