hplus0603

Moderators
  • Content count: 11895
  • Days Won: 2
  • Community Reputation: 11303 Excellent

hplus0603 last won the day on June 20 and had the most liked content!

About hplus0603

Personal Information
  • Location: Redwood City, CA
  1. Okay, so that was an easy win :-) Now, look up multi-resolution grids, or perhaps quad trees, and you may be able to only store the bits of the map you really need :-)
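To make that suggestion concrete: a minimal sketch of sparse map storage, where only cells that differ from a deterministic generator are kept in memory. The coordinate-keyed dict stands in for a real quadtree, and generate_cell is a hypothetical placeholder for actual worldgen.

```python
# Sparse map sketch: only modified cells are stored; everything else is
# re-derived on demand from a deterministic generator.

def generate_cell(x, y):
    # Hypothetical deterministic generator standing in for real worldgen.
    return (x * 31 + y * 17) % 4  # e.g. a terrain type

class SparseMap:
    def __init__(self):
        self.overrides = {}  # (x, y) -> cell data, only for edited cells

    def get(self, x, y):
        return self.overrides.get((x, y), generate_cell(x, y))

    def set(self, x, y, value):
        self.overrides[(x, y)] = value
```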
  2. Why do you need a copy of the level on the server, if each player is in their own personal world? Also, if you make a level that big, do you think a player will actually manage to walk over all of it? Or could you incrementally generate blocks of the level as the player gets closer? Let's assume your world is 2000x2000 cells, with 10 bytes of cell attribute data. (Let's also assume you use value structures, not reference/heap objects, else you'll have massive memory waste!) If you generate 50x50 blocks around where the player currently is, you only need to instantiate 150x150 cells (nine blocks of 50x50) at one time. When the player moves out of range of a block, you could even unload that block, and re-generate it on demand if needed. Or you could save it -- presumably players can't actually visit most parts of a map in a reasonable-length play session?
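A minimal sketch of that block-streaming scheme, assuming 50x50-cell blocks and a hypothetical deterministic generate_block (real code would persist modified blocks to disk instead of just dropping them):

```python
# Keep only the nine blocks around the player loaded; regenerate on demand.
BLOCK = 50

def generate_block(bx, by):
    # Hypothetical generator; must be deterministic so an unloaded block
    # can be re-created identically later (or persisted instead).
    return [[0] * BLOCK for _ in range(BLOCK)]

class BlockStreamer:
    def __init__(self):
        self.loaded = {}  # (bx, by) -> block data

    def update(self, player_x, player_y):
        pbx, pby = player_x // BLOCK, player_y // BLOCK
        wanted = {(pbx + dx, pby + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
        for key in wanted - self.loaded.keys():
            self.loaded[key] = generate_block(*key)
        for key in self.loaded.keys() - wanted:
            del self.loaded[key]  # or save to disk before dropping
```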
  3. It depends on the game. "Party games" can work without a central match-maker. An example of such a game might be "Space Team" (which I recommend if you have some friends over and your neighbors don't mind some shouting.) You typically implement these using plain UDP broadcast, or some library/service that is in turn implemented on top of UDP broadcast. That means they work as long as all players are connected to the same WiFi network. It is theoretically also possible for one of the phones to act as an access point, and the other phones to connect to it, but in practice, this either doesn't work, or performs really poorly.

     Real-time interactive games that don't rely on physical proximity will need a matchmaker server for matching up the players with each other. Once matched up, you have the choice of having one game instance "host" the game and the other players "joining" that hosted game ("client host",) OR each instance running its own game and "updating" the others about their actions ("peer-to-peer",) OR running the server centrally and having each player just send their updates to the server, which echoes the data to the other players. Peer-to-peer is the least robust, client hosts are nice for developers who have very little money, and central servers generally provide the most robust gaming experience with the least chance of cheating. But, in each of these cases, you at least need a server of some sort to match-make/introduce the players to each other. (Libraries or toolkits like Game Center or Play Services can do this for you in some cases.)

     Finally, asynchronous multiplayer games -- anything from a "chess by mail" game to a "backyard monsters" base builder to a "farmville" semi-social game -- need some server that stores the state of the game between player check-ins. Because the players aren't online at the same time, the client-hosted method and the peer-to-peer method don't work.
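As an illustration of the UDP broadcast mechanism those party games rely on, here is a minimal sketch; the port number and message format are made up:

```python
# LAN peer discovery over UDP broadcast: the host announces itself, and
# clients on the same WiFi network listen for the announcement.
import socket

PORT = 47777  # arbitrary port for this sketch

def announce(game_name):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(game_name.encode(), ("255.255.255.255", PORT))

def listen(timeout=5.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    s.settimeout(timeout)
    try:
        data, addr = s.recvfrom(1024)
        return addr[0], data.decode()  # host IP, advertised game name
    except socket.timeout:
        return None
```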
  4. One thing I'd like to see is more obvious call-out of "read" versus "not read" -- the blue filled/unfilled little icon on the left isn't quite punchy enough.
  5. You know what they say: "Good decisions come from experience. Experience comes from bad decisions." If you just want to write it as a learning exercise, feel free to use whatever ORM you want. It'll probably run fine at small scales. There are plenty of other things you also need to learn to build an MMO-like system, so perhaps this isn't the time when you learn to talk SQL directly, and you can save that for some other project. Good luck!
  6. The simple tools to "alter table add column" will lock the table for the duration of updating the data on disk, at least with MySQL. For heavily used tables, this is a non-starter, unless you're OK with hours of downtime. There are tools that alter the schema "online" by re-writing the data in the background; we have tables that are big enough that this work never completes, and finally declares failure after days of trying. So, adding columns on small tables (say, below ten million rows and below a gigabyte in size) is quite doable; once you get bigger than that, nothing stays simple anymore :-)

     Then the second question is: How much do you need to worry about this? 99.99% of games never get that big. Solving that problem too early means wasting time on a problem you don't need to solve yet -- time you could have used on more immediate problems, like "is the game fun?" and "can you actually get players?" (Almost) every successful system is full of shortcuts that will have to be dealt with at some time in the future. Knowing where a little bit of extra engineering will save you lots of time later, and ideally also make you faster while building that future, is the trick. Over-engineering is not the solution!
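For reference, the "online" tools generally automate a copy-and-swap pattern along these lines. Table and column names here are hypothetical, and real tools (pt-online-schema-change, for example) also install triggers to capture writes that land during the backfill:

```python
# The copy-and-swap schema change, sketched as a MySQL statement sequence.
steps = [
    "CREATE TABLE players_new LIKE players",
    "ALTER TABLE players_new ADD COLUMN last_login DATETIME",
    # Backfill in small id-range batches so no statement holds locks long:
    "INSERT INTO players_new SELECT *, NULL FROM players"
    "  WHERE id BETWEEN :lo AND :hi",
    # Atomic swap once the copy has caught up:
    "RENAME TABLE players TO players_old, players_new TO players",
]
```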
  7. You are not wrong :-) It's actually still quadratic, but quadratic in a smaller number (the number of entities divided by the number of servers, times the number of entity copies needed for cross-border resolution.) Similarly, a single locality query for "nearby" objects is not a constant cost, but actually has a cost that is linear in the number of neighbors. While there may be 300,000 queries for 10,000 entities at 30 Hz, each query may return more than one entity, and thus may cost more than "1" along a few cost metrics (storage, memory touched, entities to check against, etc.)

     MPI lets you send messages between processes, using non-lossy but also not-real-time-aware TCP RPC. This is a useful primitive to use when building distributed systems, but it doesn't really get at the real question, which is "how do you structure your game design to make best use of distributed servers, and avoid placing undue burden on the server system that you have?" It seems like SpatialOS implements a particular kind of trade-off and an API to support developing entities for that trade-off. Other systems do a similar thing, reaching different conclusions from different base assumptions. This is why "how can I compare these different platforms?" is such a hard question: it totally depends on the specifics of your game. Farmville works great on plain Amazon EC2 web server instances. Unreal Tournament, not as much.
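Back-of-envelope, with illustrative numbers (not from the post), showing why sharding shrinks the quadratic term without eliminating it:

```python
# Pairwise interaction checks, single server vs. sharded with ghosts.
N = 10_000          # entities
S = 10              # servers
ghost_factor = 1.3  # extra entity copies for cross-border resolution

single_server = N * N
sharded = S * (N / S * ghost_factor) ** 2
print(single_server, sharded, single_server / sharded)
# ~5.9x win with 10 servers -- still quadratic, just in a smaller number.
```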
  8. It's totally normal, if you consider a character as being an entity with a well-defined set of skills. If you believe that skills are separate entities unto themselves, and they come and go (rather than the character just raising/lowering/enabling/disabling them) then, yes, you need a separate table to join characters to skills. I would recommend that you do NOT go for flexibility right now, because making just your first game is going to be hard enough work -- you're not going to have lots of time to try many different things, at least not to the point where you will put them into the database. That being said, if you really believe this is an area of quick iteration, then just slam 'em all into a single JSON blob and call it good!

     Separately, I would advise NOT to use an ORM. Ever. For any application. Objects in RAM behave differently from rows in a database, and the impedance mismatch that happens when you try to join them ends up killing projects. And, what's most insidious is that this kind of death only happens towards the end of the project, when scale makes itself known. Projects that manage to pull out of this typically re-code their ORM integration in raw SQL, and exactly how painful that is depends on how heavily they relied on the ORM "magic" before then.

     http://blogs.tedneward.com/post/the-vietnam-of-computer-science/
     https://blog.codinghorror.com/object-relational-mapping-is-the-vietnam-of-computer-science/
     http://seldo.com/weblog/2011/08/11/orm_is_an_antipattern

     Programming with relations in memory is actually, in general, a better model than programming with "rich" objects, anyway -- this is the lesson from the "data oriented design" movement that has basically taken over high-performance gamedev in the last 15 years or so.
  9. There are three ways of doing this, each with benefits and draw-backs.

     The most straightforward way is to create a table, indexed by "character ID," which contains the skill level for each skill -- a column per skill. This has good performance, makes it easy to analyze, and is straightforward to marshal to/from internal data structures. The draw-back is that each game design change (new skill, removed skill, etc.) requires a schema change (add/remove/change columns.)

     The second is to create a multi-join table: character ID -> skill ID -> skill level. This makes it easy to add more skills without a schema change. It makes it slightly harder to marshal to/from game structures (you may or may not get all the skills, may get unknown skills, etc.) Indices get taller, performance gets slightly slower, etc. But it's quite flexible.

     The third is to create a table with "character ID -> JSON blob" and leave schema management to your code marshaling to/from your JSON data. This is super flexible, and has only one row per character, but loses all the other benefits of a database -- you can't easily query "what's the average archery skill level of all players" and such, at least not without a table scan and an expensive JSON parsing expression.

     Which one do you go with? Depends on your goals. But don't underestimate the simplicity of one-row-per-character, one-column-per-skill!
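The three layouts, sketched as SQLite DDL; table and column names are hypothetical:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
-- Option 1: one column per skill; a design change means a schema change.
CREATE TABLE skills_wide (
    character_id INTEGER PRIMARY KEY,
    archery INTEGER NOT NULL DEFAULT 0,
    alchemy INTEGER NOT NULL DEFAULT 0
);

-- Option 2: join table; flexible, taller indices, slightly slower.
CREATE TABLE skills_join (
    character_id INTEGER NOT NULL,
    skill_id INTEGER NOT NULL,
    level INTEGER NOT NULL,
    PRIMARY KEY (character_id, skill_id)
);

-- Option 3: JSON blob; maximally flexible, opaque to SQL queries.
CREATE TABLE skills_json (
    character_id INTEGER PRIMARY KEY,
    skills TEXT NOT NULL  -- e.g. '{"archery": 3, "alchemy": 1}'
);
""")

# Option 1 keeps analysis queries trivial; option 3 makes them painful:
print(db.execute("SELECT AVG(archery) FROM skills_wide").fetchone())
```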
  10. I know what you're talking about. I built a system that had many similar properties, including this programming model for entities. Our sales people used the same marketing claims. It turns out, there are things developers do "as a matter of course" that end up generating way too much RPC traffic to scale well. Developers need to know what the distribution decomposition is, if they want to get anywhere near (say, within an order of magnitude of) the theoretical maximum performance of the system. Naive developers, even using your carefully crafted API that attempts to make developing distributed objects easy by "hiding" RPC/messaging, WILL flood your system to the point where scalability is 1/100th of what it should be.

     Further, developers will assume that RPC or events are reliable AND bounded in time. As you know, you can't get both of those at the same time across a lossy network (the "two generals" problem.) If, in the context of "paradigm shift," you suggest that developers also need to train themselves to know about these things, then yes, once inside the paradigm, you live inside the paradigm. But that paradigm includes limitations that are imposed by your particular distribution model. That's an unavoidable outcome of distributed games, and it's what makes distributed games (and other distributed systems) an order of magnitude harder to work with than in-RAM single-player games; pretending that they're the same does nobody any favors. (Except possibly salespeople on commission who would prefer to close deals early over closing the right deals -- luckily, I've managed to avoid most of those in my life!)
  11. Yes, but this is also the case for any client-server game, and it depends on your internet connection at home; whether there's a single server or a swarm of workers on the other side can't improve that. I think you misunderstood me. I'm talking entirely about things that go on inside a virtualized, cloud-hosted data center. Because it uses virtualization for the machine hosts, you are subject to the requirements of the virtualization platform, and that often introduces significant (many milliseconds) latencies in scheduling, because the VM hosts are all optimized for batch throughput, not for low-latency real-time processing. For real-time simulation running close to full machine utilization, a physics step time that goes from 15.5 milliseconds to 17.5 milliseconds will make you miss your deadline. For real-time physics games, I much prefer bare metal for this reason.

     It's also interesting that you mention co-simulation across visibility borders and PhysX at the same time. PhysX is not deterministic, so any co-simulation across borders will diverge. With enough authoritative network state snapshots, you can mash that with brute force, of course.

     Regarding the "borders moving with load," that's something we looked at and implemented for There.com, but it ended up not being useful for real gameplay, because players tended to gather in the same kinds of gathering places. Meanwhile, the view distance across borders (i.e., how much you need to co-simulate) has to be determined by the "visibility range" of your objects. If your object is a missile cruiser with a range of 150 kilometers, you have to have an instance of the object on any server that touches this radius, so that it can do target acquisition. (Either you have an instance of the cruiser on each server within the radius, OR you bring a copy of each object within that radius to the cruiser's server -- if there are fewer cruisers than targets, you want the former, for hopefully obvious reasons.) If you have a soldier with a sniper rifle with a 2 kilometer scope, you have to be able to see each object within two kilometers, or the player will be sad.

     I'm pointing this out, not to cast shade on SpatialOS, but to show that any distributed server framework has to be used with a gameplay design that goes hand-in-hand with the networking/simulation capabilities, and each solution will bring with it specific limitations you have to accept as a game designer. Using words such as "invisible to the developer" or "without having to think about distribution" sounds great in marketing, but ends up not actually being helpful to the end developer. And, honestly, it is actually untrue for all but the most trivial kinds of games. I've found that the companies that end up doing the best in gaming are those that are clear about the pros and cons of their systems, and that do not make over-simplified promises in their marketing that they cannot actually deliver on (without tons of caveats).
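A sketch of the bookkeeping the visibility-range point implies: given a chessboard partition into square server regions, which servers need a copy of an entity? The region size (2 km per side) is an illustrative assumption:

```python
# Which grid regions does an entity's visibility radius overlap?
REGION = 2000.0  # meters per server region side (assumed)

def regions_touched(x, y, radius):
    min_rx, max_rx = int((x - radius) // REGION), int((x + radius) // REGION)
    min_ry, max_ry = int((y - radius) // REGION), int((y + radius) // REGION)
    return [(rx, ry) for rx in range(min_rx, max_rx + 1)
                     for ry in range(min_ry, max_ry + 1)]

# The 2 km sniper scope touches at most 3x3 regions here; the 150 km
# missile cruiser would touch ~150 regions per axis -- hence "bring the
# targets to the cruiser's server" instead of copying the cruiser.
print(len(regions_touched(500.0, 500.0, 2000.0)))  # -> 9
```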
  12. Cores talk to memory through a memory bus. That memory bus has some fixed latency for filling cache misses. Surely, it will be faster than an external network, but on the other hand, all of those users need packets generated to/from themselves, going into/out of the core, too. It's hard to get more than four-channel memory into a single socket, and price goes up by something like the square of the socket count (dual-socket much more expensive than single-socket; quad-socket much more expensive than dual-socket.) And, once you have CPUs in different sockets, the different CPUs should be thought of as different network nodes -- a cache miss filled in 400 cycles from a local RAM module may take 4000 cycles when filling from a remote CPU's NUMA node. (Usually the difference is less stark, but it's easily noticeable if you measure it.)

     So, let's assume there are four sockets, each with 50 GB/s memory throughput, split 25,000 users per socket, at 60 Hz. That gives 33 kilobytes per player per frame, and this is assuming that you use memory optimally. (Most cache-miss-bound algorithms would be happy to get past half the theoretical throughput.) Can you do all the processing you need to do for a single player for a single frame, touching only 33 kilobytes of RAM? (Physics, AI, rules, network input, network output, interest management, and so on all go into this budget.) It's quite possible, if you know what you're doing and carefully tune the code for the system that's running it, but it's no sure slam-dunk winner. Write your code in Java or Python or some other language that ends up chasing too many pointers, and you lose your edge very quickly.

     I just priced a PowerEdge with quad 16-core/32-thread Xeons, four 8-way 16 GB DIMMs per socket, and dual 10 Gb network interfaces; it's about $50k (plus tax, and more if you need storage disks and such.) You'd also want at least two, because if you have a single server and it dies, your game is as dead as the server. (You'd also need data center rack space/power/cooling and routers/switches/uplink and so on.) Still, $100k isn't that bad; the multiplayer networking systems have to compete with this offering and make it worthwhile, which limits their ability to charge upwards, which in turn means they can't solve problems that are too advanced or too fancy, and thus have to simplify their solutions for the wider masses of developers. This, coupled with the incredible importance of designing game and network behavior hand-in-hand, is probably one of the explanations why there isn't a plethora of successful game multiplayer hosting/middleware solutions out there, and why each of the ones that survive actually fills a different niche -- there simply isn't space for two different competitors to survive in the same niche.
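The budget arithmetic from that post, worked out under its stated assumptions:

```python
# Per-socket memory bandwidth divided across players and frames.
bandwidth = 50e9   # bytes/sec of memory throughput per socket (assumed)
players = 25_000   # players served per socket
hz = 60            # simulation rate

budget = bandwidth / (players * hz)
print(budget)      # ~33,333 bytes of RAM touched per player per frame
```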
  13. Also, a variety of multi-entity architectures have been tried. The most famous failure is probably Sun Darkstar, which ended up supporting fewer entities in cluster mode than in single-server mode :-) They used tuple spaces, which ends up being an instance of "shard by ID." The other main approach is "shard by geography."

     A "massively multiplayer" kind of server ends up, in the worst case, with every entity wanting to interact with every other entity. For example, everyone tries to pile into the same auction area or GM quest or whatever. (Or all the mages gather in one place and try to manabolt each other / all soldiers try to grenade each other / etc.) N-squared, as we know, puts an upper limit on the number of objects that can go into a single server. Designing your game to avoid this helps not just servers, but also gameplay. When there's a single auction area that EVERYBODY wants to be in, it's not actually a great auction experience (too spammy,) so there's something to be said for spreading the design out. (Same thing for instanced dungeons/quests, etc.)

     Anyway, once you need more than one server, you can allocate different servers to different parts of the world (using level files, or quad trees, or Voronoi diagrams, or some other spatial index.) To support people interacting across borders, you need to duplicate an entity across the border for as far as the "perception range" reaches. This, in turn, means that you really want the minimum size of the geographic areas to be larger than the perception range, so you don't need to duplicate a single entity across very many servers. If by default you have chessboard distribution, and the view range is two squares, you have to duplicate each entity across 9 servers all the time. That means you need 10 servers just to get up to the capacity range of a single non-sharded server! The draw-back, then, is that you have a maximum density per area, and a minimum area size per server, which means your world has to spread out somewhat evenly. Because the server/server communication is "local" (only neighbors,) you can easily scale this to as large an area as you want, as long as player density stays under the designated maximum. Many games have used methods similar to this (There.com, Asheron's Call, and several others.)

     The other option is to allocate by ID, or just randomly by load on entity instantiation. Each server simulates the entities allocated to it. You have to load the entire static world into each server, which may be expensive if your world is really large, but on modern servers, that's not a problem. Then, to interact across servers, each server broadcasts the state of its entities using something like UDP broadcast, and all other servers decode the packets and forward entities that would "interact with" entities in their own memory. This obviously lets you add servers in linear relation to the number of players; instead, you are limited by the speed at which servers can process incoming UDP broadcast updates to filter for interactions with their own entities, and by the available bandwidth on the network. 100 Gbps Ethernet starts looking really exciting if you want to run simulation at 60 Hz for hundreds of thousands of entities across a number of servers! (In reality, you might not even get there, depending on a number of factors -- Amdahl's Law ends up being a real opponent.)

     None of this is new. The military did it in the '80s on top of DIS, and then again in the late '90s / early '00s on top of HLA. It's just that their scale stops at how many airplanes and boats and tanks they own, and they also end up accepting that they have to buy one computer per ten simulated entities or whatever the salespeople come up with. There's only so many billion-dollar airplanes in the air at one time, anyway. For games, the challenge is much more about designing your game really tightly around the challenges and opportunities of whatever technology you choose, and then optimizing the constant factor such that you can run a real-size game world on reasonable hardware. (For more on single-hardware versus large-scale, see for example http://www.frankmcsherry.org/assets/COST.pdf )
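A rough estimate of why the broadcast approach chews through network bandwidth; the entity count and update size are illustrative assumptions, not figures from the post:

```python
# Every server must ingest every other server's entity state updates.
entities = 200_000      # simulated entities across the cluster (assumed)
hz = 60                 # simulation rate
bytes_per_update = 100  # position, velocity, state flags... (assumed)

total_bits = entities * hz * bytes_per_update * 8
print(total_bits / 1e9, "Gbps")  # ~9.6 Gbps of ingest per server --
# which is why 100 Gbps Ethernet starts looking exciting.
```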
  14. Every five years, some company with expertise from some "adjacent" technology area (finance, mil/sim, telecom, geospatial, etc.) believes that they can do games better! They will win with their superior tech! I have not seen a single one of those actually manage to get anything real and lasting into the marketplace. Not for lack of trying! Is Improbable different? Perhaps. They got a giant investment from SoftBank, which might mean they have something neat and new. Or it may mean they have good connections to investors who have different investment criteria.

     You mean, in the "cloud," where latencies cannot be managed, where noisy neighbors can flood your network, where two processes that communicate intimately end up being placed on different floors of a mile-long data center, and which charges 10x mark-ups on bulk network capacity? That "cloud"? Or is this some other "cloud" that actually works for non-trivial low-latency real-time use cases?

     In the end, though, most games just aren't well-funded and big enough to actually make a lot of sense for more business-focused companies. And those that are (OverGears of BattleDuty and such) end up distinguishing their games from others by integrating gameplay with networking with infrastructure really tightly, and at that point, a "one size fits all" solution looks more like "one size fits none."

     Don't get me wrong. Distributed simulation is a fun area to work in, and there are likely large gains to be had through innovative whole-stack approaches. History just shows that the over/under on any one particular entrant in the market is "not gonna make it."
  15. There are two main models for FPS games:

     - Clients "host" games, and matches are made on a matchmaker with NAT introduction
     - Servers are run by the game publisher, and all players connect to those game servers (Overwatch, Gears of War, etc.)

     For the actual gameplay, in Unreal Engine, you'll pretty much always use the Unreal networking code, because it's optimized for Unreal low-latency play. The role of a third party is then to do whatever you need around this, which may include just keeping account information, or may add guilds, commerce, leagues, etc. Separately, if you choose to pay for the servers as a publisher, then you also need some management of those real hardware servers, making them available to your sign-in/lobby, etc.

     Because the Unreal model is "one process runs the game, and all clients connect to that process" (be it client-hosted or centrally hosted,) the question of "keeping up" comes down entirely to your game process and the hardware you choose to run it on. The third-party services don't really affect that bit.