pronpu

A New Approach to Databases and Data Processing — Simulating 10Ks of Spaceships on My Laptop


The actual code that's run on the GPU depends on both the shader bytecode and the command stream generated from the API.

 

Cool. I actually did not know that.

 

I also think there is a lineage difference. Web developers come out of HTML design and high-level application frameworks. They have never had a multi-year project die because of occasional one-second lag spikes. They generally wouldn't be able to explain how an L1 memory cache is different from an OS-level block buffer cache. They want to use whatever new shiny toy gets their site to look and work better for their test user -- and most of them don't really have that many concurrent users.

 

I'm afraid this is what many game developers believe, but it is patently false. True, most web companies (one company, one vote) match your perception, but in terms of general effort (one developer, one vote), engineers at Twitter, Amazon, Facebook and Google know quite well how to explain the role of the L1 cache, and are extremely knowledgeable about concurrency issues. They know the cost of a store fence and a read fence, the cost of a CAS and of a task switch. They tackle latency and scaling issues that those at Blizzard and CCP haven't even dreamed of. They know how to take advantage of garbage collectors and JITs, and can tell you exactly what machine code the JVM generates in which circumstances. They use read-write locks with striped counters that recognize contention and grow to accommodate it. They choose wait-free queues based on trade-offs between raw performance and garbage generation. They use fork-join, Clojure reducers and immutable data structures. They use parallel data structures. They use Erlang or Akka for actors. Believe me, they know how to reduce latencies to microseconds when they need to, and they need to more than you think. I've even seen someone experiment with eventual consistency at the CPU-core level vs. CASs. I don't know about all game companies, but I can assure you that Twitter and Google know how to get more out of each of their cores than Blizzard or CCP (having spoken to Blizzard and CCP engineers). 
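(To make the striped-counter point concrete: that is essentially what the JDK's java.util.concurrent.atomic.LongAdder does. Here's a rough, purely illustrative sketch contrasting it with a plain AtomicLong -- the class and variable names below are made up:)

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.LongAdder;

    // Illustrative only: LongAdder stripes increments across internal cells under
    // contention, while AtomicLong makes every thread CAS the same word.
    public class StripedCounterDemo {
        public static void main(String[] args) throws InterruptedException {
            LongAdder striped = new LongAdder();
            AtomicLong naive = new AtomicLong();

            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    striped.increment();          // stays cheap as contention grows
                    naive.incrementAndGet();      // serializes all threads on one cache line
                }
            };

            Thread[] threads = new Thread[8];
            for (int i = 0; i < threads.length; i++) (threads[i] = new Thread(work)).start();
            for (Thread t : threads) t.join();

            System.out.println(striped.sum() + " / " + naive.get()); // both print 8000000
        }
    }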

 

Game engines usually aren't just a client-side only thing.
For a single-player game, the client machine has to run both the game client and the game server.

 

Yes, but the server side, as well as the middleware mentioned by @hplus0603 above, is little more than a networking framework. It's analogous to Apache or Nginx in the web industry, or to RTI in defense. It doesn't help you with scheduling and concurrency. 

 

A proprietary (read: NIH) one, because the ones we looked at weren't suitable for games.

 

Of course. :) Nice architecture, though. I especially liked the delayed messages bit. But, correct me if I'm wrong, at the end of each cycle all threads are effectively stopped and only one core is used, right? Also, how do you ensure that each thread has a similar workload? That's fine if you have 8 cores, but what if you have 100? Stopping 100 threads and waiting for stragglers has a huge impact on scalability.

 

Let me tell you how SpaceBase works internally. Each R-tree node (a sub-region of space, dynamically resized according to the number of objects it contains) is an actor; fork-join is used for actor scheduling, and it employs work-stealing queues. Work-stealing queues ensure that no core ever waits for the others: every message produced by an actor goes into a thread-local queue, but when a thread runs out of messages, it steals one from another thread's queue, preferring a task (message) that is likely to generate a large number of subtasks (other messages), so that cross-thread stealing doesn't happen often. 
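(This isn't SpaceBase's actual code, but the work-stealing pattern maps quite directly onto the JDK's ForkJoinPool. A rough sketch -- ActorMessageTask is an invented name, and the "messages" are just dummy recursive tasks:)

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;

    // Illustrative sketch of work-stealing scheduling: each "message" is a task
    // that may spawn follow-up messages. Forked subtasks land on the submitting
    // worker's local deque; idle workers steal from other workers' deques.
    public class ActorMessageTask extends RecursiveAction {
        private final int depth;

        ActorMessageTask(int depth) { this.depth = depth; }

        @Override
        protected void compute() {
            if (depth == 0) return;                // leaf message: just handle it
            // Handling this message produces two follow-up messages (subtasks).
            ActorMessageTask left = new ActorMessageTask(depth - 1);
            ActorMessageTask right = new ActorMessageTask(depth - 1);
            left.fork();                           // goes on this thread's local queue
            right.compute();                       // process one locally, keep the core busy
            left.join();                           // may have been stolen by another worker
        }

        public static void main(String[] args) {
            ForkJoinPool.commonPool().invoke(new ActorMessageTask(16)); // ~65k leaf "messages"
        }
    }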

 

Determinism can be massively important!
A massive part of game development is spent debugging, and a deterministic sim makes reproducing complex cases very simple -- someone who has produced the error can just save their inputs-replay file. Anyone can then replay this file to see how the error case developed over time, breaking execution and inspecting state at the error itself, but also in the lead up to the error.

 

It appears that your actor architecture was designed with two core requirements (other than the main one of using multiple cores): developer habits and determinism, and it seems that you've done a great job at satisfying the two, at the cost of some performance and scaling, I guess. Here's how those maligned "web startups" do it in the performance-sensitive bits: you have an in-memory cyclical buffer per thread that records events along with a high-resolution timetag (obtaining the timetag is the expensive operation here, but recording is only turned on during debugging). When a bug occurs, the buffers from all threads are merged. 
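(A minimal, purely hypothetical sketch of such a trace ring -- not any particular company's code; TraceRing and all names below are invented, and it assumes the recording threads are paused before merging:)

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Each thread records (nanoTime, event) into its own ring buffer; when a bug
    // is detected, all rings are merged into a single timeline sorted by timetag.
    public class TraceRing {
        private static final int SIZE = 4096;                 // power of two, so & works as modulo
        private static final List<TraceRing> ALL = new CopyOnWriteArrayList<>();
        private static final ThreadLocal<TraceRing> LOCAL =
                ThreadLocal.withInitial(() -> { TraceRing r = new TraceRing(); ALL.add(r); return r; });

        private final long[] times = new long[SIZE];
        private final String[] events = new String[SIZE];
        private int next;

        public static void record(String event) {
            TraceRing r = LOCAL.get();
            int i = r.next++ & (SIZE - 1);                    // wrap around the ring
            r.times[i] = System.nanoTime();                   // the expensive high-resolution timetag
            r.events[i] = event;                              // no locks, no sharing on the hot path
        }

        // Call after pausing the recording threads: merge every ring into one timeline.
        public static List<String> mergedTimeline() {
            List<long[]> order = new ArrayList<>();           // {timetag, index into texts}
            List<String> texts = new ArrayList<>();
            for (TraceRing r : ALL) {
                int count = Math.min(r.next, SIZE);
                for (int i = 0; i < count; i++) {
                    order.add(new long[]{r.times[i], texts.size()});
                    texts.add(r.events[i]);
                }
            }
            order.sort(Comparator.comparingLong(a -> a[0]));
            List<String> out = new ArrayList<>();
            for (long[] o : order) out.add(o[0] + "ns " + texts.get((int) o[1]));
            return out;
        }
    }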

 

If your environment protects you from races, you usually don't even need that. All you need to do is replay the particular inputs a single task received.

 

breaking "frameworks" down into a large collection of simple, single-responsibility libraries, resulting in a much more flexible code-base.

 

I agree with that wholeheartedly! Unfortunately, when your library does scheduling, it inevitably becomes a "framework"... in most languages. Clojure, because it has abandoned the traditional OO model and implements a uniform data-access pattern, and because it's so beautifully (without sacrificing practicality) functional, has the advantageous property of making any framework a library. All objects are the same, you don't need to inherit anything, and mixins can be, well, mixed in if necessary. It also protects you from data races and can be extremely fast. If you don't want frameworks, use Clojure and they will all magically disappear. 

 

 

One popular game engine that I've used implemented a pretty neat stream processing system where: ...

 

Nice! Reminds me of this.

 

The 'project' was shut down, in terms of funding being withdrawn, but that doesn't mean so much for open-source. The people who worked on it had multi-node up and running, and the contention costs were so high that adding nodes actually reduced performance, hence it wasn't distributed in that state. It turns out that an optimistic locking model degrades significantly when distributed across the network and when the simulation's spatial coherence is not high enough to keep interacting entities colocated, with a resultant cascade effect when entities end up spending more and more of their time on the wire and not in memory.

 

Maybe an optimistic locking model degrades significantly when distributed across the network, and maybe not, but Darkstar (or RedDwarf, as it was later called) never tested that. It never reached that stage, and only did simple tests distributing its single-node code. Check your sources...

 

If you see online games as spatial simulations then they are a suitable nail for your spatial database to hammer. But that is one particular reductionist view of games. Much of the game has little or nothing to do with space and doesn't benefit from spatial coherence. Solving the spatial problem well is hard, which you know, and which is why you feel that what you've made has potential worth. But game developers realised that the spatial problem was hard about 10 years ago and designed around it, in a way that suits the rest of their game and which isn't limiting when faced with non-spatial queries.

 

... and that's a reductionist view of what I outlined in my original blog post. SpaceBase is spatial, but we're working on other products using the same approach which aren't.

You can do the same.

 

This may seem conservative when compared to the world of websites that make a loss year after year, but the market conditions for these businesses are very different.

 

Perhaps for good reason game devs are conservative, and that's why we're not trying to sell them our products any more... I posted here because I thought some might be interested in the approach. You never know, an independent game developer might decide to use our tech for a new game and enjoy the performance boost. It's free for small instances anyway.


Maybe an optimistic locking model degrades significantly when distributed across the network, and maybe not, but Darkstar (or RedDwarf, as it was later called) never tested that. It never reached that stage, and only did simple tests distributing its single-node code. Check your sources...

 

Here's a video of them showing multi-node operation at GDC Austin: http://player.vimeo.com/video/6644938

 

On this forum thread it was said, "the Sun team showed a working prototype of the multi-node functionality at GDC-Austin before Oracle disbanded the team.  [...] We are working on a simpler solution at Nphos that we are confident will allow us to scale horizontally for casual games, and *may* perform well enough for MMORPGs". The very phrase "may perform well enough for MMOs" gives an insight into how poorly the system was scaling before and how low their expectations were! The horizontal case for casual games is already well-provided for by other tech anyway, which is probably why Nphos doesn't seem to be around any more.

 

Perhaps for good reason game devs are conservative, and that's why we're not trying to sell them our products any more... I posted here because I thought some might be interested in the approach.

 

It's certainly interesting! I'm sorry if I've come across as argumentative in trying to show that there would be problems using this tech for MMOs, as I assumed that was why you were posting here. I'm quite interested in the load-balancing and spatial coherence aspects myself, as someone who's working on distributed seamless world tech for MMOs. I also have experience of working on a team which shipped an MMO and came to appreciate that while features that buy you scalability are great, ease of use for the programmers is probably more important in terms of getting the project finished and working. Ultimately players are far more tolerant of shards that are half the size than single worlds with bugs. :)


The very phrase "may perform well enough for MMOs" gives an insight into how poorly the system was scaling before and how low their expectations were! The horizontal case for casual games is already well-provided for by other tech anyway, which is probably why Nphos doesn't seem to be around any more.

 

Yes, that's referring to Nphos's simple solution. As for Darkstar -- there was a prototype, but the research was far from finished. At no point in the project, throughout its various incarnations, did they release even an alpha version of multi-node. They believed they would be able to achieve terrific scaling, but all efforts on multi-node were halted once Oracle defunded the project.

 

there would be problems using this tech for MMOs, as I assumed that was why you were posting here.

 

Yes, there would be problems, but also some great benefits to be gained. Nevertheless, we've given up all efforts to persuade game developers of anything. Turing and von Neumann themselves could not have persuaded a game developer. But the lucky developer who will try our stuff will get some great results, and we'll be all the more happy for him. That's not our business plan, though. If you give it a try you'll be happy that you did; if not -- well, I don't try everything, either.

 

ease of use for the programmers is probably more important in terms of getting the project finished and working.

 

Yes, but ease of use is different from sustaining habits. You're talking about habits, not ease, and, as we both said, game developers are conservative :) Many "web developers who don't know what a TLB is" were able to quickly wrap their heads around Clojure, Erlang, Go and Node, and mostly reaped the benefits of better productivity, even at the price of changing their habits. But to each his own.

 

Ultimately players are far more tolerant of shards that are half the size than single worlds with bugs.

 

Yes, because they only know what they're getting. I'd take that further: players are more tolerant of games that haven't progressed much in terms of features and abilities in something like a decade; this particularly applies to MMOs. Google and Facebook are able to process several orders of magnitude more in real time than they could a decade ago. MMOs operate at pretty much the same scale. 

 

If you'd care to hear my thesis about it, here it is, and it's completely different from yours about risk and investment. The reason for the conservatism is that AAA games, especially MMOs, are prohibitively expensive to produce, and the software development is a mere fraction of the cost. Almost all of it goes to content. For this reason, incumbents like Blizzard and CCP have little to fear from small startups with a better software architecture threatening their position. A startup could build an MMO architecture far superior to Blizzard's, but it doesn't matter, because they won't have the resources to produce as much content and compete with WoW or EVE. There are very few incumbents, all of them co-existing happily together. And that is why the big studios have little reason to constantly innovate when it comes to architecture. To push MMO studios to innovate you'd have to come up with a game that raises player expectations, and that would only happen if players liked it, which, in turn, would only happen if the content were rich. So it won't come from a startup. Without this expectation from their customers, innovation happens at a glacial pace. No one is in a hurry. 

 

This is completely different in industries where you don't have such high content production costs, and so even the incumbents are constantly fearful of newcomers; so they must innovate. Engineers there are constantly looking for new techniques and approaches that would give them an edge over the competition. 


At no point in the project, throughout its various incarnations, did they release even an alpha version of multi-node.

 

Sure, but 'not released' is not the same as 'not implemented'. They have had it up and running and demonstrated it working. But the developers have said that they needed to do more research to get it scaling upwards rather than downwards and the person close to the project that went on to form Nphos had rather low expectations based on his experiences so far.

 

You're talking about habits, not ease, and, as we both said, game developers are conservative :)

 

Hmm, I would say that writing a sequential script where one entity can find and access another entity directly is easier than factoring it out into a selection of queries and callbacks. Even if you had a language which let you write it in a more fluent way, eg. hiding the callbacks via coroutines, you've still added a layer of implicit concurrency that the developer now has to worry about. Yes, it's comparing apples with oranges, because your system scales out in a way the existing systems don't, but you don't get that for free unless you were already doing things the hard way. And you've pretty much admitted that there are key MMO features that just can't be done with your system, which makes it a hard thing to sell to a game designer.

 

MMOs operate at pretty much the same scale [as a decade ago]

 

Certainly true, for the most part. It's lamentable that the most popular MMO is one of the least technologically advanced when it comes to scalability. But it's hard to make an argument that there's a big demand to have everybody in one large game world. In fact there are some game design reasons why people do actually want separate servers, and indeed separate instances within servers.

 

The reason for the conservatism is that AAA games, especially MMOs, are prohibitively expensive to produce, and the software development is a mere fraction of the cost. Almost all of it goes to content.

 

Content is expensive but the software engineering side is never a mere fraction of the cost, unless you want to count between 2/5 and 3/5 as 'mere'. :) This may or may not change for MMOs after release when mostly it's about adding new content, but that new content also requires code support.

 

It is true that it is hard to disrupt a market that is full of such large incumbents. But the scalability of the world isn't likely to do it. If "one massive shared space" was a game-changer (no pun intended) then EVE would be far bigger than it is. Better technology only makes a difference where the user experience changes significantly as a result, and the majority of the paying customers in the MMO world seem to be asking to play with increasingly smaller and smaller groups of people in order to better control their play experience. Thus the lure of the seamless world has lost its lustre somewhat. It's easy to mistake games as being a symptom or a demonstration of technology, but really the tech is the tool here, not the product.


Better technology only makes a difference where the user experience changes significantly as a result, and the majority of the paying customers in the MMO world seem to be asking to play with increasingly smaller and smaller groups of people in order to better control their play experience. Thus the lure of the seamless world has lost its lustre somewhat. It's easy to mistake games as being a symptom or a demonstration of technology, but really the tech is the tool here, not the product.

 

I absolutely agree, yet expectation or demand is often a function of what's available. I think it was Henry Ford who said, "if I were to ask my customers what they wanted they'd say they want a faster horse". I don't think there was demand for video games before there were any, or for 3D shooters before Wolfenstein. New technology just opens new horizons for the more adventurous, imaginative early adopters. Then it needs a killer app for the late adopters to realize its potential.

 

I don't think EVE's success, or the extent of it, is simply a function of the architecture. I'm sure that if someone came up with a great game that employed a seamless world, a lot of others would follow. Still, that game would have to be great for many reasons; obviously, just having a seamless world won't be enough.


Yes, but the server side, as well as the middleware mentioned by @hplus0603 above, is little more than a networking framework. It's analogous to Apache or Nginx in the web industry, or to RTI in defense. It doesn't help you with scheduling and concurrency.

I gave you several examples of how they do include scheduling/concurrency frameworks... They usually provide a large number of spatial partitioning tools too.

Stuff like your join queries (for example, return all pairs of objects that intersect one another, or return all pairs of objects that are within a given distance from one another) exists in every engine and is highly optimized. In terms of spatial querying, you are competing against technology that most game devs already have. In terms of parallel processing, many game devs already have solutions, but yes, there are still many trying to transition, who may be potential customers.
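(For anyone unfamiliar with the term, a toy version of such a "join query" -- all pairs within distance d, bucketed on a uniform grid -- might look like the sketch below. The names are made up, and real engine broad phases are far more optimized than this:)

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy distance join: bucket 2D points into a grid with cell size d, then only
    // test pairs in the same or neighbouring cells, so most far-apart pairs are skipped.
    public class DistanceJoin {
        public static List<int[]> pairsWithin(double[][] pos, double d) {
            double d2 = d * d;
            Map<Long, List<Integer>> grid = new HashMap<>();
            for (int i = 0; i < pos.length; i++)
                grid.computeIfAbsent(cellOf(pos[i], d), k -> new ArrayList<>()).add(i);

            List<int[]> pairs = new ArrayList<>();
            for (int i = 0; i < pos.length; i++) {
                long cx = (long) Math.floor(pos[i][0] / d), cy = (long) Math.floor(pos[i][1] / d);
                for (long x = cx - 1; x <= cx + 1; x++)
                    for (long y = cy - 1; y <= cy + 1; y++) {
                        List<Integer> bucket = grid.get(key(x, y));
                        if (bucket == null) continue;
                        for (int j : bucket) {
                            if (j <= i) continue;             // report each pair only once
                            double dx = pos[i][0] - pos[j][0], dy = pos[i][1] - pos[j][1];
                            if (dx * dx + dy * dy <= d2) pairs.add(new int[]{i, j});
                        }
                    }
            }
            return pairs;
        }

        private static long cellOf(double[] p, double d) {
            return key((long) Math.floor(p[0] / d), (long) Math.floor(p[1] / d));
        }
        private static long key(long x, long y) { return (x << 32) ^ (y & 0xffffffffL); }
    }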

 

But, correct me if I'm wrong, at the end of each cycle all threads are effectively stopped and only one core is used, right? Also, how do you ensure that each thread has a similar workload? That's fine if you have 8 cores, but what if you have 100? Stopping 100 threads and waiting for stragglers has a huge impact on scalability.

Good catch. Yes, in that incarnation of the system, every thread had to complete a cycle before the next one could be scheduled, and no thread could start the next cycle until scheduling was complete. Side note: the system that replaced this one mitigated the problem by scheduling messages to be executed one extra cycle in the future, which allowed two different cycles to overlap slightly.

This wasn't as impactful as it seems, because the actor system is only a very small part of the game's frame. There were many other systems running on the "job" model that I mentioned, so whenever a thread was "waiting", instead of going to sleep, it would pop items from a shared job queue and make itself useful. As long as other systems, like physics, etc, were kicked off before starting the actor-processing for a frame, the stalls were filled with other useful work.

Load balancing wasn't much of an issue because the bulk of the workload was in the job system, but instead of splitting a cycle's actor queue into equal pieces per thread, you can split it into a larger number of pieces in a shared queue that idle threads can 'steal' from.
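(A rough sketch of that idea -- invented names, none of the real system's details: split the cycle's messages into many small chunks on a shared queue and let idle threads keep pulling chunks until none are left:)

    import java.util.List;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Instead of handing each of N threads exactly 1/N of the cycle's messages,
    // enqueue many small chunks; faster threads simply take more chunks.
    public class ChunkedCycle {
        public static void runCycle(List<Runnable> messages, int threadCount, int chunkSize)
                throws InterruptedException {
            ConcurrentLinkedQueue<List<Runnable>> chunks = new ConcurrentLinkedQueue<>();
            for (int i = 0; i < messages.size(); i += chunkSize)
                chunks.add(messages.subList(i, Math.min(i + chunkSize, messages.size())));

            Thread[] workers = new Thread[threadCount];
            for (int t = 0; t < threadCount; t++) {
                workers[t] = new Thread(() -> {
                    List<Runnable> chunk;
                    while ((chunk = chunks.poll()) != null)    // idle threads grab the next chunk
                        for (Runnable msg : chunk) msg.run();  // process every message in the chunk
                });
                workers[t].start();
            }
            for (Thread w : workers) w.join();                 // cycle barrier: wait for stragglers
        }
    }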

 

Nevertheless, we've given up all efforts to persuade game developers of anything. Turing and von Neumann themselves could not have persuaded a game developer.

You're being silly and ignorant again. I don't understand how you can be refuting stereotypes about X-devs while spreading stereotypes about Y-devs at the same time.

I could crack out the typical Java programmer stereotypes about being able to create a class that abstracts your own mother while being unable to even add two numbers without 18 indirect utility layers, but that's not at all helpful...

 

The ability to convince someone to use your product depends on who they are. I can tell you straight-up that console developers will be a very hard sell, simply because Java doesn't exist there! You could have the greatest middleware in the world, but it's no use if they can't run it. You can't blame headstrong developers with ingrained habits for this.

That leaves PC games... but only PC games that don't also include a console version, because they'll be the same code-base.

If a PC game isn't also on a console, then it's probably either an indie game, or an MMO.

 

Indies have more freedom to experiment with unproven technology and unconventional languages (Java never got a foothold in mainstream games... C# is only starting to make headway now... GC languages are hard to use effectively in RT environments, etc), so they might be a target, especially if you've got a free "starter" license. Hopefully one will become the next Minecraft and be able to afford a real license.

 

MMOs are a more obvious target, because they actually require large-scale server infrastructure, and have the freedom to explore traditional large-scale server-side technologies, so a requirement on the JRE is more welcome. However, the number of large-scale MMO developers in the world is very low, so the chance of netting customers is also pretty low.


I gave you several examples of how they do include scheduling/concurrency frameworks... They usually provide a large number of spatial partitioning tools too.

Stuff like your join queries (for example, return all pairs of objects that intersect one another, or return all pairs of objects that are within a given distance from one another) exists in every engine and is highly optimized. In terms of spatial querying, you are competing against technology that most game devs already have. In terms of parallel processing, many game devs already have solutions, but yes, there are still many trying to transition, who may be potential customers.

 

When I last looked into this, there wasn't any middleware that combined spatial queries and parallel processing in the way that SpaceBase attempts to do. Lots of tech can shuffle actors (or zones containing actors) from one node to another, whether automatically or to meet specific demands, and lots of tech can query all actors known to the current node that match some sort of spatial criteria. But I don't know any that can query actors across all nodes based on spatial criteria, and return a reference to actors that can be manipulated (and not just read). This transparent and efficient clustering was the holy grail that Project Darkstar failed to achieve. By comparison, SpaceBase may well manage this, but it apparently comes at a high price: increased code complexity for each task, and certain features made impossible within the framework. The latter point is especially pertinent because it's likely that those are the features that Darkstar supported but which caused its performance problems when they started testing multi-node capability.

 

If you're in a position to either completely rule out or seriously limit any operations that don't take advantage of spatial coherence, partitioning your data becomes much cheaper.


The ability to convince someone to use your product depends on who they are. I can tell you straight-up that console developers will be a very hard sell, simply because Java doesn't exist there! You could have the greatest middleware in the world, but it's no use if they can't run it. You can't blame headstrong developers with ingrained habits for this.

That leaves PC games... but only PC games that don't also include a console version, because they'll be the same code-base.

 

Not to mention:

  • Developers want to leverage their heavy past investment in their own existing code that may have no relationship to MMO tech (e.g. asset loading, audio, rendering, input, scripting languages, game-oriented mathematics).
  • They often want to share code between server and client, so that legacy code, probably in C++, will influence their choice of server language.
  • The more senior game developers may only have C++ experience (at least in terms of which languages they are considered expert in) and will be significantly less productive when having to learn new paradigms and new libraries.

I never saw a project that was started completely from scratch - everything was built on top of the company's previous code base. But that wasn't just 'habit' - it was the most effective way to get to where they needed to be, given the resources they had, with the least amount of risk. Obviously there are often more efficient ways to work, but being twice as efficient isn't enough if you have five times more work to do.


When I last looked into this, there wasn't any middleware that combined spatial queries and parallel processing in the way that SpaceBase attempts to do.

Oh yeah, I didn't mean to imply otherwise.

 

But within the realm of single-PC games (not distributed servers), these kinds of multi-node concerns don't exist (or, with the NUMA PS3, the nodes are so extremely small that data structures are never resident - everything is transient), and existing tech does a decent job of spatial queries. e.g. I can batch up a collection of spatial queries and pass them to PhysX, which will process them over many cores using smart partitioning techniques.


In terms of spatial querying, you are competing against technology that most game devs already have. In terms of parallel processing, many game devs already have solutions, but yes, there are still many trying to transition, who may be potential customers.

 

I don't think so. The whole approach, as outlined in the blog post, is that scalability comes from making parallelization and querying one and the same. If you say you have one or the other, or even both separately -- that has nothing to do with what we provide. We say that IF you have real-time DB queries AND the DB uses knowledge about the domain and the queries in order to parallelize your business logic, THEN you get significant scaling. We're not trying to solve the problem of querying spatial data. We're solving the problem of scaling by utilizing the database. 

 

The main point of our approach is this: we're not utilizing parallelism to speed up database queries. We're utilizing the database to parallelize (and speed up) your business logic.

 

 

You're being silly and ignorant again.

 

Yeah, I'm probably being silly, but I don't think I'm ignorant. :) After spending a lot of time talking to potential customers in various markets, some patterns emerge.

 

 

I can tell you straight-up that console developers will be a very hard sell, simply because Java doesn't exist there!

 

All our middleware is server-side only. That's where you find the scaling issues we're trying to solve.

 

However, the number of large-scale MMO developers in the world is very low, so the chance of netting customers is also pretty low.

 

True. That's another reason why we're not trying harder to convince game devs to buy our stuff. For indie developers our software is free anyhow (depending on the game world size).

