
snacktime

Member Since 14 Apr 2013
Offline Last Active Dec 14 2014 04:41 AM

Posts I've Made

In Topic: Pathfinding in a 3D RTS or MOBA

02 August 2014 - 11:16 PM

Raycasting can't give you paths around things it can't see, so it's really only useful for local avoidance. You are going to need a combination of pathfinding and steering to make a MOBA/RTS game.
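To make that concrete, here's a minimal breadth-first search on a grid (an illustrative Python sketch — recastnavigation works on navmeshes, not grids): it finds a route around a wall that a raycast from the start could never see past.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a 4-connected grid.
    grid[y][x] == 1 means blocked. Returns a list of cells or None."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk backwards through came_from to rebuild the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (nx, ny) not in came_from):
                came_from[(nx, ny)] = current
                frontier.append((nx, ny))
    return None

# A wall blocks the straight line from (0, 1) to (2, 1);
# the search routes around it, which a raycast alone cannot do.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = find_path(grid, (0, 1), (2, 1))
```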

 

There are actually surprisingly few good open source pathfinding libraries, and tons of crap written by people learning how to write pathfinding code. My suggestion would be to just go with recastnavigation. Its pathfinding is great. The only part that's a bit dated is the crowd stuff. It works, but it's CPU intensive. You can use it for a MOBA game, but for an RTS game with a lot of objects it won't work, as it just uses too much CPU.

 

My preference would actually be to just use an engine like Unity or UDK that already has what you need built in.  Even though their pathfinding is loosely based on recast, they provide crowd implementations that are far more performant and can handle much larger crowds.


In Topic: MMOs and modern scaling techniques

11 June 2014 - 06:09 PM

 


If a client wants to send 10 gold to someone and sends a request to do that, the client has no way of knowing if the request was processed correctly without an ack. But the ack can be lost, so the situation where you might have to resend the same request is present in all networked applications.

 

I think we're crossing wires a bit here. Reliable messaging is a trivial problem to solve (TCP, or a layer over UDP), and thus it is easy to know that either (a) the request was processed correctly, or will be at some time in the very near future, or (b) the other process has terminated, and thus all bets are off. It's not clear why you need application-level re-transmission. But even that's assuming a multiple-server approach - in a single game server approach, this issue never arises at all - there is a variable in memory that contains the current quantity of gold and you just increment that variable with a 100% success rate. Multiple objects? No problem - just modify each of them before your routine is done.

 

What you're saying is that you willingly forgo those simple guarantees in order to pursue a different approach, one which scales to higher throughput better. That's fine, but these are new problems, unique to that way of working, not intrinsic to the 'business logic' at all. With 2 objects co-located in one process you get atomicity, consistency, and isolation for free, and delegate durability to your DB as a high-latency background task.

 

 

So this is an interesting topic actually.  The trend is to move reliability back up to the layer that defined the need in the first place, instead of relying on a subsystem to provide it.  

 

Just because the network layer guarantees the packets arrive doesn't mean they get delivered to the business logic correctly, or processed correctly. If you think 'reliable' UDP or TCP makes your system reliable, you are lying to yourself.

 

http://www.infoq.com/articles/no-reliable-messaging

http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.txt

http://doc.akka.io/docs/akka/2.3.3/general/message-delivery-reliability.html
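As a concrete illustration of the end-to-end argument (a Python sketch with made-up names, not any particular library's API): the sender retries until it sees an application-level ack for its own request, and the receiver dedupes by request ID so the retry is harmless even if the network already delivered the first copy.

```python
import uuid

class GoldService:
    """Receiver: applies each request at most once by tracking request IDs."""
    def __init__(self):
        self.balances = {}
        self.seen = set()

    def handle(self, request):
        # Application-level dedupe: a retried request is acknowledged
        # again, but its effect is applied only once.
        if request["id"] not in self.seen:
            self.seen.add(request["id"])
            self.balances[request["to"]] = (
                self.balances.get(request["to"], 0) + request["amount"])
        return {"ack": request["id"]}

def send_with_retry(service, request, lose_first_ack=False):
    """Sender: resends until it sees an ack matching its request ID."""
    for attempt in range(2):
        reply = service.handle(request)
        if lose_first_ack and attempt == 0:
            continue  # pretend the ack was dropped; we must resend
        if reply["ack"] == request["id"]:
            return reply

service = GoldService()
req = {"id": str(uuid.uuid4()), "to": "alice", "amount": 10}
send_with_retry(service, req, lose_first_ack=True)
# The request was delivered twice, but alice still has exactly 10 gold.
```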


In Topic: MMOs and modern scaling techniques

11 June 2014 - 04:26 PM

This is basically the discussion that led me to post here - two sides both basically saying "I've done it successfully, and you can't really do it the way that the other people do it". Obviously this can't be entirely true. :) I suspect there is more to it.

 

Let me ask some more concrete questions.

 

 

 


No shared state, messages are immutable, and there is no reliability at the message or networking level.

 

Ok - but there is so much in game development that I can't imagine trying to code in this way. Say a player wants to buy an item from a store. The shared state model works like this:

BuyItem(player, store, itemID):
    Check player has enough funds, abort if not
    Check itemID exists in store inventory, abort if not
    Deduct funds from player
    Add funds to store
    Add instance of itemID to player inventory
    Commit player funds and inventory to DB
    Notify player client and any observing clients of purchase

This is literally a 7-line function if you have decent routines already set up. Let's say you have to do it in a message-passing way, where the store and player are potentially in different processes. What I see - assuming you have coroutines or some other syntactic sugar to make this look reasonably sequential rather than callback hell - is something like this:

BuyItem(player, store, itemID):
    Check player has enough funds, abort if not
    Ask store if it has itemID in store inventory. Wait for response.
    If store replied that it did not have the item in inventory:
        abort.
    Check player STILL has enough funds, abort if not
    Deduct funds from player
    Tell store to add funds. Wait for response.
    If store replied that it did not have the item in inventory:
        add funds to player
        abort
    Add instance of itemID to player inventory
    Commit player funds to DB
    Notify player client and any observing clients of purchase

This is at least 30% longer (not counting any extra code for the store) and has to include various extra error-checks, which are going to make things error-prone. I suspect it gets even more complex when you try and trade items in both directions because you need both sides to be able to place items in escrow before the exchange (whereas here, it was just the money).
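For what it's worth, the message-passing flow above can be sketched with coroutines. This is a hypothetical Python/asyncio version (the Store/Player classes and the single "check and charge" message are all illustrative, not anyone's real API); it shrinks the error handling by collapsing the store's inventory check and the payment into one message, with the player deducting funds up front, escrow-style, and taking a refund on failure.

```python
import asyncio

class Store:
    """Store side: owns its own inventory and funds."""
    def __init__(self, inventory, prices):
        self.inventory = inventory
        self.prices = prices
        self.funds = 0

    async def buy(self, item_id, payment):
        # Check and charge in ONE message, so there is no window
        # between "has item?" and "take funds" for another buyer.
        price = self.prices.get(item_id)
        if item_id not in self.inventory or price is None or payment < price:
            return {"ok": False, "refund": payment}
        self.inventory.remove(item_id)
        self.funds += price
        return {"ok": True, "refund": payment - price}

class Player:
    def __init__(self, gold):
        self.gold = gold
        self.items = []

    async def buy_item(self, store, item_id, price):
        if self.gold < price:
            return False
        self.gold -= price              # deduct before sending, like escrow
        reply = await store.buy(item_id, price)
        self.gold += reply["refund"]    # refund on failure (or change)
        if reply["ok"]:
            self.items.append(item_id)
        return reply["ok"]

async def main():
    store = Store(inventory={"sword"}, prices={"sword": 30})
    player = Player(gold=50)
    ok = await player.buy_item(store, "sword", 30)
    return ok, player.gold, player.items, store.funds

ok, gold, items, funds = asyncio.run(main())
```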

 

So... is there an easier or safer way I could have written this? I wouldn't even attempt this in C++ - without coroutines it would be hard to maintain the state through the routine. I suppose some system that allows me to perform a rollback of an actor would simplify the error-handling but there are still more potential error cases than if you had access to both sides of the trade and could perform it atomically.

 

You talk about "using an actor with a FSM", but I can't imagine having to write an FSM for each of the states in the above interaction. Again, comparing that to a 7-line function, it's hard to justify in programmer time, even if it undoubtedly scales further. I appreciate something like Akka simplifies both the message-passing and the state machine aspects, so there is that - but it's still a fair bit of extra complexity, right? (Writing a message handler for each state, swapping the message handler each time, stashing messages for other states while you do so, etc.)

 

Maybe you can generalise a bit - eg. make all your buying/selling/giving/stealing into one single 'trade' operation? Then at least you're not writing unique code in each case.

 

As for "writing the code so all messages are idempotent" - is that truly practical? I mean, beyond the trivial but worthless case of attaching a unique ID to every message and checking that the message hasn't been already executed, of course. For example, take the trading code above - if one actor wants to send 10 gold pieces to another, how do you handle that in an idempotent way? You can't send "add 10 gold" because that will give you 20 if the message arrives twice. You can't send "set gold to 50" because you didn't know the actor had 40 gold in the first place.

 

Perhaps that is not the sort of operation you want to make idempotent, and instead have the persistent store treat it as a transaction. Fair enough, and the latency wouldn't matter if you only do this for things that don't occur hundreds of times per second and if your language makes it practical. (But maybe there aren't all that many such routines? The most common one is movement, and that is easily handled in an idempotent way, certainly.)
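The movement contrast is easy to show (illustrative Python): a "set position" message can be delivered twice with no harm, while an additive "add gold" message cannot, which is exactly why the latter needs dedupe or a transaction.

```python
class Actor:
    def __init__(self):
        self.gold = 40
        self.position = (0, 0)

    def add_gold(self, amount):
        # NOT idempotent: delivering this twice doubles the effect.
        self.gold += amount

    def set_position(self, x, y):
        # Naturally idempotent: delivering this twice is harmless.
        self.position = (x, y)

a = Actor()
a.set_position(5, 7)
a.set_position(5, 7)   # duplicate delivery, same final state
b = Actor()
b.add_gold(10)
b.add_gold(10)         # duplicate delivery, wrong final state (60, not 50)
```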

 

Forgive my ignorance if there is a simple and well-known answer to this problem; it's been a while since I examined distributed systems on an academic level.

 

Handling something like a transaction is really not that different in a distributed system. All network applications deal with unreliable messaging; reliability and sequencing have to be added in somewhere. Modern approaches put it at the layer that defined the need in the first place, as opposed to putting it into a subsystem and relying on it for higher-level needs, which is just a leaky abstraction and an accident waiting to happen.

 

If a client wants to send 10 gold to someone and sends a request to do that, the client has no way of knowing if the request was processed correctly without an ack.  But the ack can be lost, so the situation where you might have to resend the same request is present in all networked applications.

 

Blocking vs non-blocking is mostly an issue that comes up at scale. For things that don't happen 100,000 times per second, you don't need to ensure that stuff doesn't block.

 

As for FSMs, I use them a lot, but mostly because I can do them in Ruby, and I find Ruby DSLs easy to read and maintain. That Akka state machine stuff I don't like; never used it. It seems a lot more awkward than it needs to be. Things like purchasing items are not in the hot path, so I can afford to use more expressive languages for stuff like that.


In Topic: MMOs and modern scaling techniques

10 June 2014 - 10:04 PM

Ok so I'll chime in here.

 

Distributed architectures are the right tool for the job. What I've seen is that the game industry is just not where most of the innovation in server-side architectures is happening, so it's a bit behind. That has been my observation from working in the industry and just looking around and talking to people.

 

When it comes to performance, it's all about architecture. And the tools available on the server side have been moving fast in recent years. Traditionally, large multiplayer games weren't really great with architecture anyway, and for a variety of reasons they just kind of stuck with what they had. This is changing, but slowly.

 

I think the game industry also had other bad influences, like trying to apply lessons learned about client-side performance directly to the server.

 

The secret to the current distributed systems is that almost all of them are based on message passing, usually using the actor model. No shared state, messages are immutable, and there is no reliability at the message or networking level. Threading models are also entirely different. For example, the platform I work with a lot, Akka, in simple terms passes actors between threads instead of locking each one to a specific thread. It can use amazingly small thread pools to achieve high levels of concurrency.

 

What you get out of all that is a system that scales with very deterministic performance, and you have a method to distribute almost any workload over a large number of servers.

 

Another thing to keep in mind is that absolute performance usually matters very little on the server side. This is something that is just counterintuitive for many game developers. For an average request to a server, the response time difference between a server written in a slow vs a fast language is under 1 ms. Usually it's in microseconds. And when you factor in network and disk IO latencies, it's white noise. That's why scaling with commodity hardware using productive languages is commonplace on the server side. The reason you don't see more productive languages used for highly concurrent stuff is not that they aren't performant enough; it's that almost all of them still have a GIL (global interpreter lock) that limits them to basically running on a single CPU in a single process. My favorite model now for being productive is to use the JVM, but use JVM languages such as JRuby or Clojure when possible, and drop to Java only when I really need to.

 

For some of the specific techniques used in distributed systems, consistent hashing is a common tool. You can use it to spread workloads over a cluster, and when a node goes down, messages just get hashed to another node and things move on.
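A minimal consistent hash ring looks something like this (a Python sketch with virtual nodes; the hash function and node names are arbitrary). Only the keys that hashed to the failed node move; everyone else keeps their owner.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent hash ring with virtual nodes."""
    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self.ring = []           # sorted list of (hash, node)
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        # Each physical node gets `vnodes` points on the ring,
        # which evens out the key distribution.
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def node_for(self, key):
        # A key belongs to the first ring point at or after its hash,
        # wrapping around to the start of the ring.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner_before = ring.node_for("player:42")
ring.remove(owner_before)                 # simulate the node going down
owner_after = ring.node_for("player:42")  # key re-hashes to a survivor
```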

 

Handling things like transactions is not difficult; I do it fairly often. I use an actor with an FSM, and it handles the negotiation between the parties. You write the code so all messages are idempotent, and from there it's straightforward.
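A sketch of the shape this takes (illustrative Python, not Akka's FSM API): the coordinating actor is a small state machine, and because every message names the transition it wants, duplicates and stragglers arriving in the wrong state are simply dropped rather than applied twice.

```python
class TradeFSM:
    """Trade negotiation run by a coordinating actor.
    Messages are idempotent: each one only fires in a specific state,
    so a duplicate delivery is ignored instead of applied twice."""
    def __init__(self):
        self.state = "AWAITING_OFFERS"
        self.offers = {}

    def handle(self, msg):
        kind = msg["kind"]
        if kind == "offer" and self.state == "AWAITING_OFFERS":
            self.offers[msg["party"]] = msg["goods"]  # re-offer overwrites
            if len(self.offers) == 2:
                self.state = "ESCROWED"    # both sides' goods held in escrow
        elif kind == "confirm" and self.state == "ESCROWED":
            self.state = "COMMITTED"       # swap goods exactly once
        elif kind == "cancel" and self.state in ("AWAITING_OFFERS", "ESCROWED"):
            self.state = "ABORTED"         # return any escrowed goods
        # anything else (duplicates, late messages) is dropped
        return self.state

fsm = TradeFSM()
fsm.handle({"kind": "offer", "party": "alice", "goods": {"gold": 10}})
fsm.handle({"kind": "offer", "party": "alice", "goods": {"gold": 10}})  # duplicate
fsm.handle({"kind": "offer", "party": "bob", "goods": {"item": "sword"}})
fsm.handle({"kind": "confirm", "party": "alice"})
fsm.handle({"kind": "confirm", "party": "alice"})  # duplicate, no double swap
```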

 

Handling persistence in general is fairly straightforward on a distributed system. I use Akka a lot, and I basically have a large distributed memory store based on actors in a distributed hash ring, backed by NoSQL databases with an optional write-behind cache in between. Because every unique key you store is mapped to a single actor, everything is serialized. For atomic updates I use an approach that's similar to a stored procedure. Note that I didn't say this was necessarily easy. There are very few off-the-shelf solutions for stuff like this. You can find the tools to build it all, but you have to wire it up yourself.
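The write-behind part can be sketched like so (illustrative Python; a real setup would flush on a timer and handle store failures): writes hit memory immediately, and dirty keys are pushed to the backing store later, off the hot path.

```python
class WriteBehindCache:
    """Reads and writes hit memory immediately; dirty keys are flushed
    to the backing store in batches, off the request path."""
    def __init__(self, backing_store):
        self.backing = backing_store
        self.cache = {}
        self.dirty = set()

    def get(self, key):
        if key not in self.cache and key in self.backing:
            self.cache[key] = self.backing[key]   # read-through on miss
        return self.cache.get(key)

    def put(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)       # cheap: no database round trip here

    def flush(self):
        for key in self.dirty:    # e.g. run periodically in the background
            self.backing[key] = self.cache[key]
        self.dirty.clear()

db = {"player:1": {"gold": 40}}
cache = WriteBehindCache(db)
cache.put("player:1", {"gold": 50})
in_memory = cache.get("player:1")          # updated immediately
in_db_before = db["player:1"]              # still stale until flush
cache.flush()
in_db_after = db["player:1"]
```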

 

Having worked on large games before, my recent experiences with distributed systems have been very positive. A lot of it comes down to how concurrency is handled. Having good abstractions for that in the actor model makes so many things simpler. That's not to say there aren't any challenges left. You hit walls with any system; I'm just hitting them much later now with almost everything.


In Topic: Peer to Peer Online Game

09 June 2014 - 11:24 PM

There are all sorts of problems with using peer-to-peer for this kind of thing. A server that is aware of all the state can optimize all kinds of things that you cannot with P2P. For instance, what if you only want information on objects within a certain radius? A server that knows where everyone is can just send you what you want; P2P wouldn't be able to do that. And then you get into who you sync against and how, and all sorts of other problems that are fairly straightforward with a server but really hard with P2P.
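That radius query is trivial for a server precisely because it sees every position; a sketch (illustrative Python):

```python
import math

def visible_objects(world, observer, radius):
    """Server-side interest management: only send objects within
    `radius` of the observer. A peer can't do this filtering, because
    no single peer knows where everyone is."""
    ox, oy = world[observer]
    return sorted(
        name for name, (x, y) in world.items()
        if name != observer and math.hypot(x - ox, y - oy) <= radius
    )

world = {"me": (0, 0), "near": (3, 4), "far": (300, 400)}
updates = visible_objects(world, "me", radius=10)
```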

 

Simply comparing which method is more bandwidth-efficient in theory ignores the fact that one of the methods doesn't require sending as much data to solve the same problem in the first place.

 

The wall you always hit with games like this is client bandwidth (and to some extent latency; they go hand in hand). Every other problem is basically solved, but you hit the bandwidth/latency wall so fast that you never even get a chance to see how far you could push everything else. Using P2P just means you will hit that wall much sooner.

 

I'm not saying it wouldn't be fun to play around with, but I just don't see how you could ever overcome some of the obstacles there, or, even if you did, how it would be better than server-authoritative. It's one of those things that sounds cool but in practice just doesn't work well.

