About snacktime

  1. You need to do more research and get to the point where you understand why some formats are more efficient. Formats like protobuf not sending field names is an obvious one. Varint compression is huge once you understand how it works (you can send X-bit integers using fewer than X bits). But the big picture is that it's a combination of a number of things that impact network usage and overall performance. Using techniques to simply not send data you don't need to send is just as important as optimizing the data format, if not more so. Structuring your code to take the best advantage of things like varint compression makes a huge difference.

For example, a trick I use is that I never send floats for stuff like position updates. I send integers using varint compression and decide on the highest decimal precision I actually need. I multiply/divide to convert floats to ints and vice versa at that precision. That results in huge savings for the type of data that makes up most of my network traffic. Currently the best general approach I know of is varint/MSB encoding combined with using integers to represent as much as you can. I've just found it to give the best results over the largest variety of use cases in multiplayer games.

And I also have to factor in integration with other frameworks I might be using. I might be using Akka or MS Orleans as my core server framework, and if they natively support protobuf, that means I don't have to take the GC hit to deserialize my format and then serialize again into theirs. And on the server, if you are working with message rates normal for, say, an MMO or FPS game, it's object creation and GC that eventually become your bottleneck.
What I always tell people who are relatively new at this: no, don't even think about creating your own format until you first have a solid understanding of how existing formats work and you have built at least one working game in the specific genre you are tackling. That's the best overall advice I can give.
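To make the varint and fixed-point ideas above concrete, here's a minimal Python sketch (the function names and the two-decimal precision are my own illustrative choices, not from any particular library): a float coordinate is quantized to an int, zigzag-mapped so small negative values also stay small, then varint-encoded seven bits at a time with the high bit as a continuation flag.

```python
PRECISION = 100  # two decimal places; pick the precision the game actually needs

def zigzag(n):
    # Map signed to unsigned so small negative ints also encode in few bytes.
    return (n << 1) ^ (n >> 63)

def varint_encode(n):
    # Encode an unsigned int 7 bits at a time; the high bit of each byte
    # says "more bytes follow" (protobuf-style varint/MSB encoding).
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def quantize(x):
    # Float coordinate -> int at fixed precision (what actually goes on the wire).
    return round(x * PRECISION)

def dequantize(q):
    # Int off the wire -> float coordinate.
    return q / PRECISION
```

With two decimals of precision, a coordinate like 12.34 becomes the int 1234 and `varint_encode(zigzag(quantize(12.34)))` is 2 bytes on the wire, versus 4 for a raw float32.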
  2. Is art programming?

    Both require being proficient in dealing with abstractions and being creative. Aesthetics is a major part of writing code, and the algorithms we use in programming are definitely creative works. I don't know if I would try to equate art to programming, but you can certainly create art with programming, and the ability of algorithms to match what a real person can do has gotten progressively better. It's nowhere near what a good artist can do, but it's moving in that direction. And if you narrow it to the more technical aspects, programming beats the human: you can't draw a mathematically perfect shape, but a computer can. But a computer cannot bring complex ideas to life like a human can. The best AI we have can still only reason about a small number of things at a time compared to the human brain. There are a lot of problems that we can solve in a second that would take a computer years given the current algorithms we know about. But both the speed of computers and the algorithms we use are improving.
  3. MMORPG networking Solutions

    It's been a while since I've mentioned it, but it's come a long way and I just released a new version: Game Machine. Game Machine is an open source server platform that excels at large virtual worlds with lots of players in the same area. Its focus is on solving a core set of hard problems, not just networking. It has built-in persistence that scales well, the best space optimization of any platform I'm aware of, and a well-defined structure for writing game logic that runs concurrently. Plus a bunch of built-in functionality such as multiple characters, groups/chat, area of interest, etc. The default client is for Unity; it's the biggest market, so that's where I put my focus for the client-side implementation. But it's fairly simple to integrate with the server, since the core protocol is protocol buffers and I moved things like reliable messaging to a higher level of abstraction. So the networking layer is very dumb (as it should be, IMO). I wrote a completely functional open world mass PvP combat MMO in under 3 months using it, as an eat-your-own-dogfood project. All of the server side code for that is also open source and ships with it, so there are a ton of examples for how to solve most of the things you might tackle in an MMO.
  4. Game content source repository?

    GitHub is coming out with large file support, and there is a commercial git-based platform that supports large files (can't remember the name; it's based on git-annex). So the future is looking brighter. Even though Perforce handles binary/large files really well, I just can't give up the GitHub workflow for code. So I've been using Amazon S3 for large binary files. Managing versions gets tricky when working in teams. Let me rephrase: it outright sucks. But it's been manageable.
  5. I have a sandbox PvP MMO where I'm trying to model some of the elements of Eve Online that I like. Overall the gameplay is a mix of Eve and, say, GW2. Much of the functionality is already playable and running on live servers; it's not just in the design phase.

One of the key tactical elements in Eve is stargates that act as chokepoints, and I've been trying to think of something similar to put in the game. Practically speaking, it's not really possible to have different parts of the map with only a couple of narrow access points; it's a huge open world that is procedurally generated and then hand-tweaked. Hand-crafting a world this size just isn't practical for an indie studio, so we had to take a compromise route.

I was thinking I could apply specific access points for trade via roads, which is the angle I'm going with now. I have NPC cities and player-run cities, and the general idea is that you have to transport your goods from your player-run city to an NPC city to be able to sell to players outside your guild. To transport your goods you have to stay on a road. If you are on a road you can carry a lot of weight; if you go off it, you become extremely encumbered. To make things more interesting, I was going to put a couple of watchtowers on the road between every player-run city and the NPC cities, and watchtowers are also player-controlled. So if you own the watchtowers, you effectively control the trade along that road (opposing factions will easily get killed if they don't bypass the watchtower, and you can't do that while carrying a lot of stuff). Anyone have other ideas on this?
  6. MMOs are kind of the perfect storm of hard software problems. Generally the ones you have to deal with in an MMO fall into the following areas.

    - Concurrency. Understanding how basic threading and mutexes work gets you maybe 5% of the way to where you need to be in order to do concurrency well. Go read up on fork/join and the LMAX Disruptor for starters, and read up on lock-free algorithms. That's the kind of material you need to know to do this well.

    - Persistence. Games are typically 50/50 read/write when it comes to the database. Most software is read-heavy, and most databases are optimized for read-heavy apps. You need to know things like implementing write-behind caches and various other caching mechanisms to handle this problem and do it with consistently low latency.

    - Networking. This is the thing most people associate with game servers, and it's also by far the simplest problem to solve, as it's mostly already solved. Not really worth going into.

    - Scalability. Games are stateful, and global state synchronization doesn't scale. Problem! It's the same reason you don't use transactions for everything in your database and only use them when needed: the cost of synchronizing state at scale is huge. There are known solutions to this; it's a solved problem, but information on it is harder to find. It also ties directly into concurrency.

As far as cloud services, it just depends on the type of game and the type of cloud. Realtime multiplayer games eat up a ton of bandwidth, more than you would think. Most cloud services are priced for web workloads, and that pricing just doesn't work for realtime multiplayer. You need to get down to around $0.02 per GB before it starts to make sense. Most cloud providers also overprovision. If you run your own VMs then virtualization can work great, but you won't get consistent performance out of most cloud virtualization.

Overall the cloud is overrated, especially if you have your own devops team. When you do the math on buying the hardware yourself and either colocating or even just outsourcing the management of your hardware, the cloud starts to look really expensive. Contrary to what Amazon and others want you to think, they have huge margins on this stuff. They count on cloud hosting being taken for granted and on people not actually doing the math. The sweet spot I found, which we used on almost a dozen games, was to use a company like SoftLayer that could provision real hardware to our specs and manage the network/hardware-level admin, while our small devops team handled provisioning and the like. It was cost effective and we kept good performance. FYI, the current trend is for more companies to use hybrids and more real hardware. People are tired of cloud providers overprovisioning, and are starting to actually do the math and see how consistently they are just flat getting ripped off. There is a reason cloud providers don't tell you how many VMs they run per core.
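To illustrate the write-behind cache mentioned under persistence, here's a rough Python sketch (the class and method names are mine; a real implementation also needs locking or actor ownership, batched DB writes, and failure handling): reads are served from memory, writes only mark keys dirty, and dirty keys are flushed to the backing store periodically instead of on every write.

```python
import time

class WriteBehindCache:
    """Sketch of a write-behind cache: reads hit memory, writes are
    buffered and flushed to the backing store in batches."""

    def __init__(self, store, flush_interval=1.0):
        self.store = store              # dict-like backing store (stands in for a DB)
        self.cache = {}                 # in-memory copy serving all reads
        self.dirty = set()              # keys written since the last flush
        self.flush_interval = flush_interval
        self.last_flush = time.monotonic()

    def get(self, key):
        if key not in self.cache and key in self.store:
            self.cache[key] = self.store[key]   # read-through on a miss
        return self.cache.get(key)

    def put(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)             # the DB sees this later, not now
        self.maybe_flush()

    def maybe_flush(self, force=False):
        if force or time.monotonic() - self.last_flush >= self.flush_interval:
            for key in self.dirty:
                self.store[key] = self.cache[key]   # one batched write pass
            self.dirty.clear()
            self.last_flush = time.monotonic()
```

The point is that a write-heavy workload turns into periodic batched writes, which is how you keep latency consistent when the database would otherwise be hit on every update.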
  7. I'm aware of most if not all the available physics libraries, but not having used most of them I'm not really sure which is best for what I need. The use case is specifically doing LOS checks to other players/mobs on the server side, which is Java. The client is Unity. Players and mobs are already tracked in a spatial grid.

Ideally I'd like the simplest library possible that just lets me import meshes into it and do raycasting against them, plus the ability to set up some type of masking so I can pick which meshes the ray will stop at when hit. I envision the whole process something like this:

    1. Initialize the physics engine with the static meshes and a bunch of player meshes. Player meshes all have base coordinates of something like 0,0,0.
    2. When I need to do a LOS check, I take as many player meshes as I need for the simulation and change their coordinates to match the players' actual world coordinates. Then I do my LOS checks, and then set the player meshes involved back to the default position.
    3. Rinse, repeat.

I've been looking at using one of the Java JNI wrappers for Bullet, or possibly the Java ODE port. I'd like to find something more specific to the task at hand, if such a thing exists.
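For flavor, here's a hedged Python sketch of the LOS test itself, using sphere proxies instead of real meshes (a stand-in for what a physics library's raycast would do; all names are my own): line of sight holds when the eye-to-target segment doesn't pass through any blocker.

```python
def segment_hits_sphere(a, b, center, radius):
    # Does the segment a->b pass within `radius` of `center`?
    ax, ay, az = a; bx, by, bz = b; cx, cy, cz = center
    abx, aby, abz = bx - ax, by - ay, bz - az
    acx, acy, acz = cx - ax, cy - ay, cz - az
    ab_len2 = abx*abx + aby*aby + abz*abz
    # Project the center onto the segment, clamped to its endpoints.
    t = 0.0 if ab_len2 == 0 else max(0.0, min(1.0, (acx*abx + acy*aby + acz*abz) / ab_len2))
    px, py, pz = ax + t*abx, ay + t*aby, az + t*abz
    dist2 = (cx - px)**2 + (cy - py)**2 + (cz - pz)**2
    return dist2 <= radius * radius

def has_line_of_sight(eye, target, blockers):
    # blockers: list of (center, radius) sphere proxies for occluders,
    # e.g. the candidates pulled from the spatial grid.
    return not any(segment_hits_sphere(eye, target, c, r) for c, r in blockers)
```

The masking idea from the post maps naturally onto which proxies you put in `blockers`; a real engine does the same thing against mesh triangles with collision filters.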
  8. Raycasting can't give you paths to things it can't see, so it's really only useful for local avoidance. You are going to need a combination of pathfinding and steering to make a MOBA/RTS game. There are actually surprisingly few good open source pathfinding libraries, and TONS of junk written by people learning how to write pathfinding code. My suggestion would be to just go with recastnavigation. Its pathfinding is great. The only part that's a bit dated is the crowd code: it works, but it's CPU intensive. You can use it for a MOBA game, but for an RTS game with a lot of objects it won't work, as it just uses too much CPU. My preference would actually be to just use an engine like Unity or UDK that already has what you need built in. Even though their pathfinding is loosely based on Recast, they provide crowd implementations that are far more performant and can handle much larger crowds.
  9. MMOs and modern scaling techniques

      I think we're crossing wires a bit here. Reliable messaging is a trivial problem to solve (TCP, or a layer over UDP), and thus it is easy to know that either (a) the request was processed correctly, or will be at some time in the very near future, or (b) the other process has terminated, and thus all bets are off. It's not clear why you need application-level re-transmission. But even that's assuming a multiple-server approach - in a single game server approach, this issue never arises at all - there is a variable in memory that contains the current quantity of gold and you just increment that variable with a 100% success rate. Multiple objects? No problem - just modify each of them before your routine is done. What you're saying is that you willingly forgo those simple guarantees in order to pursue a different approach, one which scales to higher throughput better. That's fine, but these are new problems, unique to that way of working, not intrinsic to the 'business logic' at all. With 2 objects co-located in one process you get atomicity, consistency, and isolation for free, and delegate durability to your DB as a high-latency background task.

So this is an interesting topic, actually. The trend is to move reliability back up to the layer that defined the need in the first place, instead of relying on a subsystem to provide it. Just because the network layer guarantees the packets arrive doesn't mean they get delivered to the business logic correctly, or processed correctly. If you think 'reliable' UDP or TCP makes your system reliable, you are lying to yourself.
  10. MMOs and modern scaling techniques

      Ok - but there is so much in game development that I can't imagine trying to code in this way. Say a player wants to buy an item from a store. The shared state model works like this:

        BuyItem(player, store, itemID):
            Check player has enough funds, abort if not
            Check itemID exists in store inventory, abort if not
            Deduct funds from player
            Add funds to store
            Add instance of itemID to player inventory
            Commit player funds and inventory to DB
            Notify player client and any observing clients of purchase

This is literally a 7-line function if you have decent routines already set up. Let's say you have to do it in a message-passing way, where the store and player are potentially in different processes. What I see - assuming you have coroutines or some other syntactic sugar to make this look reasonably sequential rather than callback hell - is something like this:

        BuyItem(player, store, itemID):
            Check player has enough funds, abort if not
            Ask store if it has itemID in store inventory. Wait for response.
            If store replied that it did not have the item in inventory:
                abort
            Check player STILL has enough funds, abort if not
            Deduct funds from player
            Tell store to add funds. Wait for response.
            If store replied that it did not have the item in inventory:
                add funds to player
                abort
            Add instance of itemID to player inventory
            Commit player funds to DB
            Notify player client and any observing clients of purchase

This is at least 30% longer (not counting any extra code for the store) and has to include various extra error-checks, which are going to make things error-prone. I suspect it gets even more complex when you try to trade items in both directions, because you need both sides to be able to place items in escrow before the exchange (whereas here, it was just the money). So... is there an easier or safer way I could have written this?
I wouldn't even attempt this in C++ - without coroutines it would be hard to maintain the state through the routine. I suppose some system that allows me to perform a rollback of an actor would simplify the error-handling, but there are still more potential error cases than if you had access to both sides of the trade and could perform it atomically. You talk about "using an actor with a FSM", but I can't imagine having to write an FSM for each of the states in the above interaction. Again, comparing that to a 7-line function, it's hard to justify in programmer time, even if it undoubtedly scales further. I appreciate something like Akka simplifies both the message-passing and the state machine aspects, so there is that - but it's still a fair bit of extra complexity, right? (Writing a message handler for each state, swapping the message handler each time, stashing messages for other states while you do so, etc.) Maybe you can generalise a bit - eg. make all your buying/selling/giving/stealing into one single 'trade' operation? Then at least you're not writing unique code in each case.

As for "writing the code so all messages are idempotent" - is that truly practical? I mean, beyond the trivial but worthless case of attaching a unique ID to every message and checking that the message hasn't already been executed, of course. For example, take the trading code above - if one actor wants to send 10 gold pieces to another, how do you handle that in an idempotent way? You can't send "add 10 gold" because that will give you 20 if the message arrives twice. You can't send "set gold to 50" because you didn't know the actor had 40 gold in the first place. Perhaps that is not the sort of operation you want to make idempotent, and instead have the persistent store treat it as a transaction. Fair enough, and the latency wouldn't matter if you only do this for things that don't occur hundreds of times per second and if your language makes it practical. (But maybe there aren't all that many such routines? The most common one is movement, and that is easily handled in an idempotent way, certainly.) Forgive my ignorance if there is a simple and well-known answer to this problem; it's been a while since I examined distributed systems on an academic level.

Handling something like a transaction is really not that different in a distributed system. All network applications deal with unreliable messaging; reliability and sequencing have to be added in somewhere. Modern approaches put them at the layer that defined the need in the first place, as opposed to putting them into a subsystem and relying on it for higher-level needs, which is just a leaky abstraction and an accident waiting to happen. If a client wants to send 10 gold to someone and sends a request to do that, the client has no way of knowing whether the request was processed correctly without an ack. But the ack can be lost, so the situation where you might have to resend the same request is present in all networked applications.

Blocking vs non-blocking is mostly an issue that comes up at scale. For things that don't happen 100,000 times per second, you don't need to ensure that nothing blocks. As for FSMs, I use them a lot, but mostly because I can do them in Ruby, and I find Ruby DSLs easy to read and maintain. That Akka state machine stuff I don't like and have never used; it seems a lot more awkward than it needs to be. Things like purchasing items are not in the hot path, so I can afford to use more expressive languages for stuff like that.
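For what it's worth, the unique-ID approach dismissed as "trivial but worthless" above is essentially how idempotent transfers are done in practice: the receiver keeps the ack keyed by request ID, so a resent "add 10 gold" replays the cached ack instead of applying twice. A minimal Python sketch (all names are my own, not from any framework mentioned here):

```python
class Wallet:
    """Idempotent message handling via per-request IDs: a duplicate of an
    already-processed transfer is answered with the original ack."""

    def __init__(self, gold=0):
        self.gold = gold
        self.processed = {}   # request_id -> cached ack

    def handle_transfer(self, request_id, amount):
        if request_id in self.processed:
            # Duplicate delivery (e.g. a resend after a lost ack):
            # do not apply again, just replay the ack.
            return self.processed[request_id]
        self.gold += amount
        ack = {"request_id": request_id, "ok": True, "gold": self.gold}
        self.processed[request_id] = ack
        return ack
```

So the sender can resend "add 10 gold, request r1" as many times as it needs to until an ack arrives, and the gold is only added once; in a real system the processed-ID table would be bounded or persisted alongside the state.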
  11. MMOs and modern scaling techniques

    Ok, so I'll chime in here. Distributed architectures are the right tool for the job. What I've seen is that the game industry is just not where most of the innovation in server-side architectures is happening, so it is a bit behind. That has been my observation from working in the industry and from looking around and talking to people. When it comes to performance, it's all about architecture, and the tools available on the server side have been moving fast in recent years. Traditionally, large multiplayer games weren't really great with architecture anyway, and for a variety of reasons they just kind of stuck with what they had. This is changing, but slowly. I think the game industry also had other bad influences, like trying to apply the lessons learned about client-side performance to the server.

The secret of current distributed systems is that almost all of them are based on message passing, usually using the actor model. No shared state, messages are immutable, and there is no reliability at the message or networking level. Threading models are also entirely different. For example, the platform I work a lot with, Akka, in simple terms passes actors between threads instead of locking each one to a specific thread. It can use amazingly small thread pools to achieve high levels of concurrency. What you get out of all that is a system that scales with very deterministic performance, plus a method to distribute almost any workload over a large number of servers.

Another thing to keep in mind is that absolute performance usually matters very little on the server side. This is counterintuitive for many game developers. For an average request to a server, the response-time difference between a server written in a slow vs a fast language is sub-1ms; usually it's in microseconds. And when you factor in network and disk IO latencies, it's white noise. That's why scaling with commodity hardware using productive languages is commonplace on the server side. The reason you don't see more productive languages used for highly concurrent work is not that they are not performant enough; it's that almost all of them still have a GIL (global interpreter lock) that limits them to basically running on a single CPU in a single process. My favorite model now for being productive is to use the JVM, with JVM languages such as JRuby or Clojure where possible, and drop down to Java only when I really need to.

As for some of the specific techniques used in distributed systems, consistent hashing is a common tool. You can use it to spread workloads over a cluster, and when a node goes down, messages just get hashed to another node and things move on. Handling things like transactions is not difficult; I do it fairly often. I use an actor with a FSM, and it handles the negotiation between the parties. You write the code so all messages are idempotent, and from there it's straightforward. Handling persistence is also fairly straightforward in a distributed system. I use Akka a lot, and I basically have a large distributed memory store based on actors in a distributed hash ring, backed by NoSQL databases with an optional write-behind cache in between. Because every unique key you store is mapped to a single actor, everything is serialized. For atomic updates I use an approach similar to a stored procedure. Note that I didn't say this was necessarily easy. There are very few off-the-shelf solutions for stuff like this; you can find the tools to build it all, but you have to wire it up yourself.

Having worked on large games before, my recent experience with distributed systems has been very positive. A lot of it comes down to how concurrency is handled; having good abstractions for that in the actor model makes so many things simpler. Not to say there are no challenges left. You hit walls with any system; I'm just hitting them much later now with almost everything.
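A minimal sketch of the consistent-hashing technique mentioned above, in Python (virtual nodes and MD5 are conventional choices here, not anything specific to Akka): each key maps to the first node clockwise on the ring, so removing a node only remaps the keys that node owned, and everything else stays put.

```python
import bisect
import hashlib

class HashRing:
    """Consistent-hash ring sketch: keys map to the first node clockwise;
    removing a node only remaps that node's keys to their successors."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []   # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                # Virtual nodes spread each physical node around the ring.
                self.ring.append((self._hash("%s#%d" % (node, i)), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, ""))
        return self.ring[idx % len(self.ring)][1]   # wrap past the top

    def remove(self, node):
        # Simulates a node going down: its vnodes vanish, its keys rehash.
        self.ring = [(h, n) for h, n in self.ring if n != node]
```

The property that matters for a cluster is visible directly: after `remove("c")`, every key that was not on "c" still maps to the same node it did before.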
  12. Peer to Peer Online Game

    There are all sorts of problems with using peer-to-peer for this kind of thing. A server that is aware of all the state can optimize all kinds of things that you cannot with P2P. For instance, what if you only want information on objects within a certain radius? A server that knows where everyone is can just send you what you want; P2P wouldn't be able to do that. Then you get into who you sync against and how, and all sorts of other problems that are fairly straightforward with a server but really hard with P2P. Simply comparing which method is more bandwidth-efficient in theory completely ignores the fact that one of the methods just doesn't require sending as much data to solve the same problem. The wall you always hit with games like this is client bandwidth (and, to some extent, latency; they go hand in hand). Every other problem is basically solved, but you hit the bandwidth/latency wall so fast that you never even get a chance to see how far you could push everything else. Using P2P just means you will hit that wall much faster. I'm not saying it wouldn't be fun to play around with, but I just don't see how you could overcome some of the obstacles there, or, even if you did, how it would be better than server-authoritative. It's one of those things that sounds cool but in practice just doesn't work well.
  13. Ya, documentation is next and it will be up ASAP, as I also have a business side to this that will offer a cloud-hosted version and additional monitoring/admin tools. I'll update this thread when there are significant enough changes to be of interest, such as a fully documented API, etc.
  14. This is the culmination of roughly a year's worth of work. In some ways it is still a WIP, mostly in getting good documentation in place for the server API, but the core is solid and it's completely usable. This started as a side project while I was still working as the lead server architect at a largish game studio and getting really frustrated with the complete lack of a good server platform for multiplayer games. In one way or another none of them met my criteria, and the ones we did test basically fell over at the scale we were working at (billions of requests per day).

Some of the main goals that Game Machine strives for:

    - Inherently scalable using modern architectures
    - Provide higher-level abstractions for hard problems like concurrency and persistence
    - Be a productive system to work with, and simple enough for indie developers to dive into
    - Open source

Currently there is one working client in C# that includes some examples that work in Unity. The entire API is based on messaging using protocol buffers, so integration is fairly straightforward. The getting started page is up to date and should get you a working server. Documentation for the server API is still sparse; I spent some time today to get at least something up, enough so a determined individual could figure it out:) I'm working hard on getting decent docs up ASAP. You can access it all on the github repo:

Cheers,

Chris Ochs
  15. Posting this here instead of the Unity forums simply because I really value some of the knowledgeable people on this forum. Note that the code below is specific to C# (Unity).

So I have a server that uses an actor model with message passing; it's actually a fully distributed system. Message reliability is at most once. Any sequencing and additional reliability are handled at the business (game) logic layer. On the client side I have a simple actor system which mirrors how the server works, so we have the same paradigm with the same rules end to end. For example, sending a message from client to server uses the same api as sending server to server or server to client. There are a few minor differences, but overall it's the same.

A bit more detail on the actual api, as that will be important. An actor has just one important method with the following signature:

[source]void OnReceive(object message);[/source]

This method accepts messages, and the user is then responsible for taking it from there and building whatever logic is required. The actor system which manages everything has a static method to find any actor in the system (they are registered on creation):

[source]static UntypedActor Find(string name);[/source]

Sending a message to an actor looks like this:

[source]ActorSystem.Find("MyActor").Tell(message);[/source]

Sending a message to a remote actor:

[source]ActorSystem.Find("/remote/MyServerActor").Tell(message);[/source]

Or just sending to the server without specifying an actor, in which case a default routing mechanism kicks in:

[source]ActorSystem.Find("/remote/default").Tell(message);[/source]

Now on to the actual question. Given that a lot of people who use this will have existing systems in place, what is the most appropriate interface to existing code? I'm not out to force everyone into embracing the actor model end to end (the context here is that this is a client for an open source server).
For incoming messages, my first idea is to provide a callback api. A message gets delivered to an actor; you can then do as much or as little logic as you want in the actor, then fire a callback to pass data on to another part of your system. For outgoing messages, I'm thinking about sticking with the paradigm of always keeping message creation in the actor. So if you are off in, say, a chat GUI and want to send a chat message to your group, you might make a call like so:

[source]ActorSystem.Find("ChatManager").Tell("you guys all suck I'm out","group");[/source]

The actor would take that message, create a message the server understands, and send it to the server. I was also thinking about wrapping this type of call in a static method on the actor. That would make it easier to see what calls are available, and easier to document in code.

It's worth noting that messages are protocol buffers, and I've designed the message structure to model an entity component system, so the process for an end developer to design new messages is fairly simple. Actors do deal with composing messages, but the calls to serialize are abstracted into another layer. At some later date I might provide actual entity and component classes to abstract it a bit more, but right now that's not a priority. Would love to get some feedback on the approaches outlined above.
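To show the shape of the callback idea, here is a sketch in Python rather than C# (the registry, the `ChatManager` internals, and the dict standing in for a protobuf are all hypothetical illustrations, not the actual client API beyond the Find/Tell/OnReceive names described above): the actor accepts a plain (text, channel) tuple, composes the wire message itself, and fires an optional callback so existing non-actor code never has to touch the messaging layer.

```python
class UntypedActor:
    def tell(self, message):
        # Synchronous delivery keeps the sketch simple; the real system
        # queues messages and delivers them asynchronously.
        self.on_receive(message)

    def on_receive(self, message):
        raise NotImplementedError

class ActorSystem:
    _registry = {}

    @classmethod
    def register(cls, name, actor):
        cls._registry[name] = actor   # actors are registered on creation

    @classmethod
    def find(cls, name):
        return cls._registry[name]

class ChatManager(UntypedActor):
    """Accepts plain (text, channel) tuples from GUI code, composes the
    server message itself, and fires an optional callback outward."""

    def __init__(self, send_to_server, on_sent=None):
        self.send_to_server = send_to_server  # e.g. serialize + socket write
        self.on_sent = on_sent                # hook back into non-actor code

    def on_receive(self, message):
        text, channel = message
        wire = {"type": "chat", "channel": channel, "text": text}  # stand-in for a protobuf
        self.send_to_server(wire)
        if self.on_sent:
            self.on_sent(wire)
```

Usage mirrors the `Find(...).Tell(...)` pattern: register a `ChatManager` once, then any GUI code can do `ActorSystem.find("ChatManager").tell(("heading out", "group"))` without knowing anything about protocol buffers or the server.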