Whatever happened with Spatial OS?


Spatial OS was supposed to revolutionize big-world MMOs. Huge numbers of players in the same world. Lots of hype in 2017-2018. Improbable claims to have raised US$500 million to build it.

Then, not much. Worlds Adrift and Mavericks went live and then went broke. Somnium Space has single-digit user counts. Nostos is supposed to come out this month, but it's apparently heavily sharded, with only a few players in each instance.

What went wrong? Does it not work? Is it too slow? Does it cost too much? Does it not scale? Is having to run on Google's servers unacceptable?

Any big-world technologies coming along that really scale? (At least as good as Second Life, 50,000 users in one world, but without the lag of SL.)


I haven't used it, but I've followed that business for a long time.

My guess is that they're running up against three main challenges:

1) Each game generally needs a carefully tuned networking model with intimate knowledge of that specific game.

2) Approaches that work for military-simulation-style networking don't work for real-time internet gaming.

3) Networking and work distribution are hard problems, but they're not AS HARD as building a fun game that people want to play.

I note that they are starting their own internal studios, which will presumably consume a fair chunk of the money they raised. My guess is that, if they're going to make it, they will make it as a games studio, on the strength of their games, not as a technology provider. There simply is no (big) market there currently. (Having tried to push into/create that market myself, twice, that's my current judgment.)

And, because they're building technology with something described as a "generic game networking protocol," chances are that their games studio won't be sufficiently well served by the protocol, and they will either fork/adapt it, or the game(s) will fail.

enum Bool { True, False, FileNotFound };

It can now be used to build MMOGs in Unreal and Unity.
Not sure what you mean: https://improbable.io/

There's a normal sequence to these kinds of efforts (these are not the first people to try this):

  • Making a design that can conceivably work
  • Getting some real games to try the technology
  • Success from the real games
  • Making it an actual business that can return 10x the invested money

Most hobbyist projects fail in step 1; most commercial ventures fail in step 3. From looking at the website, it looks like Improbable is currently in step 2. If they make it past step 3, I still have questions about how they'd manage step 4...

So far, the most-used game networking middleware is probably RakNet, which sustained a company of approximate size "one main person." And in the end, that person moved on to Facebook and the library is now open source.

Most game engines (Unreal, Lumberyard, Source, id Tech, Frostbite, ...) grow out of a successful initial game. I think the evolutionary pressures are similar, but even harsher, for lower-level components like multiplayer networking.

enum Bool { True, False, FileNotFound };

As I understand it, the basic Spatial OS idea works something like a modern multiprocessor cache. You have objects, which are accessible from any machine in the cluster. Local access is much faster than remote access, of course. So, when there are too many remote accesses to an object, the object is serialized, shipped over the network to where it is being accessed the most, and made live on a different machine. Objects that are only being read can have copies in multiple regions, but a change to any of the copies must invalidate all the read-only copies. (Yes, something like that is how shared memory multiprocessor caches work deep inside.)
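
To make that concrete, here is a minimal single-process sketch of that "migrate the object to where it's used, invalidate read copies on write" idea. Everything in it (ObjectDirectory, the node IDs, the migration threshold) is invented for illustration; this is the cache-coherence analogy in code form, not the SpatialOS API:

#include <cstdint>
#include <iostream>
#include <set>
#include <unordered_map>

using NodeId = int;
using ObjectId = std::uint64_t;

struct DirectoryEntry {
    NodeId owner = 0;                              // node holding the writable copy
    std::set<NodeId> readReplicas;                 // nodes holding read-only copies
    std::unordered_map<NodeId, int> remoteReads;   // remote-access counters per node
};

class ObjectDirectory {
public:
    // A read from a non-owner node is a remote access; once a node crosses the
    // threshold, ownership migrates there (in a real distributed system the
    // object would be serialized and shipped over the network at this point).
    void readFrom(ObjectId id, NodeId node) {
        DirectoryEntry& e = entries_[id];
        if (node == e.owner) return;               // local read: cheap, nothing to do
        e.readReplicas.insert(node);               // hand out a read-only copy
        if (++e.remoteReads[node] > kMigrateThreshold) {
            std::cout << "migrating object " << id << " from node " << e.owner
                      << " to node " << node << "\n";
            e.owner = node;
            e.remoteReads.clear();
        }
    }

    // A write invalidates every read-only copy, just like a cache coherence
    // protocol invalidating shared lines on a store.
    void writeFrom(ObjectId id, NodeId node) {
        readFrom(id, node);                        // writes count as accesses too
        entries_[id].readReplicas.clear();         // invalidate all replicas
    }

private:
    static constexpr int kMigrateThreshold = 3;
    std::unordered_map<ObjectId, DirectoryEntry> entries_;
};

int main() {
    ObjectDirectory dir;
    for (int i = 0; i < 5; ++i) dir.readFrom(/*id=*/42, /*node=*/2);  // node 2 hammers object 42
    dir.writeFrom(42, 2);  // node 2 now owns object 42; any read copies are invalidated
}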

For this to work, the game code probably has to be highly parallel, because you're going to block waiting for object moves over the network. So such blocks must not block the whole game loop, just one thread of it. Which requires highly parallel game and physics engines. Which, at last, do seem to exist.
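
As a toy illustration of that point, here is a sketch of a game loop that polls a pending remote-object fetch instead of blocking on it, using std::async as a stand-in for the network. The names and timings are made up; a real engine would use its own task system rather than std::future:

#include <chrono>
#include <future>
#include <iostream>
#include <optional>
#include <thread>

struct ObjectState { int health = 100; };

// Pretend this is a fetch of an object that currently lives on another machine.
std::future<ObjectState> fetchRemoteObject() {
    return std::async(std::launch::async, [] {
        std::this_thread::sleep_for(std::chrono::milliseconds(50)); // "network" delay
        return ObjectState{};
    });
}

int main() {
    std::optional<std::future<ObjectState>> pending = fetchRemoteObject();

    // The game loop keeps ticking; the entity waiting on a remote object simply
    // skips its update until the data arrives, instead of stalling everyone.
    for (int tick = 0; tick < 10; ++tick) {
        if (pending &&
            pending->wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            ObjectState obj = pending->get();
            std::cout << "tick " << tick << ": remote object arrived, health "
                      << obj.health << "\n";
            pending.reset();
        } else {
            std::cout << "tick " << tick << ": simulating local entities only\n";
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // ~60 Hz tick
    }
}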

This is a reasonable enough architecture, but does it really scale? They have demos with huge numbers of distant spaceships, but that's easy to do. As far as I can tell, Spatial OS maxed out with 1000 users on the ground in Mavericks in a one-time demo. They weren't all visible to each other, though. Could that approach handle Second Life, with 30,000 to 50,000 users in a shared world?

The Second Life approach, fixed-size regions that communicate only at the edges to move game objects and avatars, breaks down when too many users are in one region. The system only runs well with 20 or fewer users in a region, because each region server has only one main thread. It's often run with more users than that in a region, and it slows down badly. Second Life can't have crowds, which is a real lack; everybody wants crowds.
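
For reference, the fixed-region model is roughly this simple, which is also why it hits a wall: the world is statically cut into squares, each owned by one single-threaded simulator, and crossing an edge means a handoff. A tiny sketch, with the 256 m region size and the names chosen only for illustration:

#include <cmath>
#include <cstdio>

constexpr float kRegionSize = 256.0f;   // each region is a fixed square

struct RegionCoord { int rx, ry; };

// The world is statically partitioned: position -> owning region server.
RegionCoord regionFor(float x, float y) {
    return { static_cast<int>(std::floor(x / kRegionSize)),
             static_cast<int>(std::floor(y / kRegionSize)) };
}

int main() {
    RegionCoord before = regionFor(255.0f, 10.0f);
    RegionCoord after  = regionFor(257.0f, 10.0f);
    // Crossing x = 256 means the avatar is handed off to the neighboring
    // simulator; everything inside one region runs on that region's single
    // main thread, which is why a few dozen avatars bog it down.
    std::printf("before: region (%d,%d), after: region (%d,%d)\n",
                before.rx, before.ry, after.rx, after.ry);
}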

10,000,000 users on a huge plain waving glow sticks at a distant DJ - probably not necessary. Fortnite did that in shards, and in one world it would be a lousy user experience anyway. (Although, Burning Man in VR. Maybe someday...) 1000 users in a big club or a busy shopping area or assaulting a D-Day beach is a reasonable goal for now. Could Spatial OS do that? Has anybody pushed it hard enough to find out?

Finding out if this approach scales is valuable. If it works, it's going to be like physics engines - soon, everyone will be doing it.

Synchronous waiting for object state to interact is a deal killer. It simply cannot work in a real-time game. Sun Project Darkstar found this out the hard way, too. (That, plus a surprising number of other well-trodden land mines!)

What sharded MMO games and virtual worlds did was shard geographically, and then let objects near the border of the land area "mirror" their state over to the other server (unconditionally -- no request needed). Thus, every local object could interact with all objects it could see with zero latency. Note that the counterparty would be interacting with the copy of the object -- there are various ways of either lag-compensating that, or making sure the mirrors get the same "control inputs" as the original, so that deterministic simulation lets everyone see the same state at the same time.
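
A sketch of that border-mirroring idea, under the assumption of two adjacent region servers and a fixed mirror margin; all names and numbers here are illustrative, not from any particular engine:

#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

struct ObjectState { std::uint64_t id; float x, y; };

constexpr float kRegionWidth  = 256.0f;  // this server owns x in [0, 256)
constexpr float kMirrorMargin = 16.0f;   // objects this close to the edge get mirrored

// Stand-in for the unconditional "just push it every tick" network send.
void pushToNeighbor(const ObjectState& obj) {
    std::cout << "mirroring object " << obj.id << " at x=" << obj.x << "\n";
}

// Read-only copies received from the neighboring server; the network layer
// would deposit incoming mirrors here, and local simulation reads them like
// any other object, with no request/response round trip.
std::unordered_map<std::uint64_t, ObjectState> neighborMirrors;

void tick(std::vector<ObjectState>& localObjects) {
    for (const auto& obj : localObjects) {
        // Unconditional push: no negotiation, the neighbor always has a fresh copy.
        if (obj.x > kRegionWidth - kMirrorMargin) pushToNeighbor(obj);
    }
    // Local physics can now collide/interact against both localObjects and
    // neighborMirrors with zero added latency.
}

int main() {
    std::vector<ObjectState> objects = {{1, 250.0f, 10.0f}, {2, 100.0f, 10.0f}};
    tick(objects);   // only object 1 is near the edge, so only it is mirrored
}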

It turns out this isn't particularly important to users, so it's not really worth the effort, judging by how the market has reacted. You're better off focusing on jamming 200 to 20,000 users onto a single server by optimizing your physics, and then using some kind of travel or teleport or loading portal to move between areas. Users can't reasonably interact with more than a dozen other users at once anyway, and when your design gets too dense in users, you end up with the "everyone in a pile" problem, where the physics state of everybody depends on the state of everybody else -- which cannot be parallelized or solved with RPC; it's a natural n-squared problem.
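
The n-squared point is easy to see with a back-of-the-envelope count of potential contact pairs, n*(n-1)/2:

#include <cstdio>

int main() {
    for (long n : {12L, 200L, 1000L, 20000L}) {
        long pairs = n * (n - 1) / 2;   // potential pairwise interactions in one pile
        std::printf("%6ld bodies -> %12ld potential contact pairs\n", n, pairs);
    }
}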

enum Bool { True, False, FileNotFound };

https://github.com/puniverse/galaxy

Typed up a reply but lost the text. This link may be of interest.

drainedman said:

https://github.com/puniverse/galaxy

That's an interesting substrate. As the note says, it's a lot like a cache coherency system.

One question is, if you have lots of concurrent accesses to this data structure, do they wait asynchronously, or does one blocked request lock the whole thing up until that request is satisfied? If you're using something like this for a game, you're going to need a very parallel game engine. Not just as many threads as CPUs; far more than that.
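
To spell out what that question is getting at, here are the two contention models side by side: one global lock, where a single slow request stalls every caller, versus per-stripe locks, where only callers touching the same stripe wait. This is purely illustrative; it is not Galaxy's actual API:

#include <array>
#include <cstddef>
#include <mutex>
#include <unordered_map>

struct ObjectState { int value = 0; };

class GloballyLockedStore {           // one blocked request stalls everything
    std::mutex big_lock_;
    std::unordered_map<int, ObjectState> objects_;
public:
    void write(int id, int v) {
        std::lock_guard<std::mutex> g(big_lock_);   // every caller serializes here
        objects_[id].value = v;
    }
};

class StripedStore {                  // contention is limited to one stripe
    static constexpr std::size_t kStripes = 64;
    std::array<std::mutex, kStripes> locks_;
    std::array<std::unordered_map<int, ObjectState>, kStripes> shards_;
public:
    void write(int id, int v) {
        std::size_t s = static_cast<std::size_t>(id) % kStripes;
        std::lock_guard<std::mutex> g(locks_[s]);   // only this stripe waits
        shards_[s][id].value = v;
    }
};

int main() {
    GloballyLockedStore a; a.write(1, 42);
    StripedStore b;        b.write(1, 42);
}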

Sorry, I did answer all this, but my post got deleted and I was too tired to answer again.

I worked with SpatialOS, including a week on site. Despite the hype, the tech was almost unusable and did not perform well. That was several years ago, however.

The architecture, from what I remember, was clunky, with lots of memory copying, and I recall thinking it wouldn't scale well.

So instead, as a proof of concept for my company, I made my own single-shard server based on open-source in-memory data grids. It worked a hell of a lot better, scaled well on a LAN, and could handle huge physics loads. The link I posted is the closest thing to what I did.

However, this simply won't work over the internet. The delay and bandwidth issues would not be workable for any semi-real-time game. In fact, the issues and solutions being framed tend not to be the real hard ones. Making a single-shard server and handling load balancing, reads, writes, and so on isn't that hard, especially if you tightly design the game and data. But making it run as a general-purpose MMO platform that can be applied to any game without some massive compromises will never happen.

To answer your question: I believe there are no such issues reading data at high concurrency; I didn't run into any. With this kind of thing, I feel it's important to ask the right questions and not get dazzled by a solution that isn't really targeting the problem domain. (The technologies below are also guilty of this.)

Other technologies I have worked with include Cloudgine and HadeanOS. Also check out Dual Universe; it uses CAF. It will never work, though, because they misunderstand the problem of a global single-shard server.

Assume all the servers for adjacent regions are in the same data center with 1Gb/s or better links. I realize that this is not going to work across links with significant delay.

