Implementing AI elegantly

16 comments, last by IADaveMark 11 years, 8 months ago
For the 2D top-down game I'm writing (in Java), I've written a pathfinder using A* and simple AI around that (if the player is in line of sight, move towards the player; if close enough, attack). However, I'm looking to make more intricate AI, as well as different AI for each of my enemies. I have the logic for my AIs mapped out, but am unsure of how to implement anything too complicated elegantly. What is the best way of writing AI? Should I be reading Lua scripts, or hard-coding it in Java?
The August 2012 edition of Game Developer Magazine has a great article summing up the most common game AI architectures:

  • Ad-hoc rules
  • Finite State Machine (FSM)
  • Hierarchical FSM
  • Behavior Tree (BT)
  • Planner
  • Utility-based system
  • Artificial Neural Network

The article tries to stay objective and point out the pros and cons of each architecture, but I personally love utility-based systems, which are probably the cleanest and most flexible solution. They are also very robust, in the sense that they will do something sensible under unusual circumstances (at least when compared with the more scripted options of FSMs or BTs).

The idea behind utility-based systems is quite simple: Assign a score to each possible action in the current situation (its "utility") and pick the action with the highest score. The main problem people have with utility-based systems is that you cannot indicate rules like "in this situation, do this". Instead, you need to think of how to score every possible action, usually as the sum of multiple terms.
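To make the score-and-pick idea concrete, here is a minimal Java sketch. The `Situation`, `Action`, and the particular weighted terms are invented for illustration; they aren't from any specific engine or from the article.

```java
import java.util.*;
import java.util.function.ToDoubleFunction;

// Illustrative snapshot of the world as the agent sees it.
class Situation {
    final double distanceToPlayer;  // in world units
    final double healthFraction;    // 0.0 = dead, 1.0 = full health
    Situation(double d, double h) { distanceToPlayer = d; healthFraction = h; }
}

// An action paired with its utility function (a sum of weighted terms).
class Action {
    final String name;
    final ToDoubleFunction<Situation> utility;
    Action(String name, ToDoubleFunction<Situation> utility) {
        this.name = name; this.utility = utility;
    }
}

class UtilityAI {
    // Score every action for the current situation, pick the highest.
    static Action choose(List<Action> actions, Situation s) {
        return actions.stream()
            .max(Comparator.comparingDouble((Action a) -> a.utility.applyAsDouble(s)))
            .orElseThrow();
    }

    public static void main(String[] args) {
        List<Action> actions = List.of(
            // Attack is attractive when close and healthy.
            new Action("attack", s -> (1.0 - s.distanceToPlayer / 100.0) + s.healthFraction),
            // Fleeing dominates when badly wounded.
            new Action("flee",   s -> 2.0 * (1.0 - s.healthFraction)),
            // Patrol is a low constant fallback.
            new Action("patrol", s -> 0.3));
        System.out.println(choose(actions, new Situation(10, 0.9)).name); // prints attack
        System.out.println(choose(actions, new Situation(10, 0.2)).name); // prints flee
    }
}
```

Note there is no "in this situation, do this" rule anywhere: tuning behavior means reshaping the terms and weights, not adding branches.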

When we deployed a utility-based system at work (in a non-game context), I had to evangelize a bit and teach other team members how to express "rules" as terms in a utility function, how to assign weights to the terms and how to debug and fix situations where we weren't happy with the action selected by the system. Once people got used to the paradigm, we were able to tweak things very easily. The system has been making hundreds of thousands of decisions a day for several years now, and we are extremely satisfied with its behavior.
Any recommendations for a book on utility-based AI (or just a favored book on AI in general)?
Well, the author of that article in Game Developer Magazine is also the author of this book, which is the only one I am aware of that deals with utility-based game AI. He also hangs out in these forums and he rarely misses an opportunity to plug his book, but I just saved him the trouble. :) It's actually a decent book, so go get a copy.

This classic book on [non-game] AI is something that you should probably read at some point if you are interested in the field. A few chapters into the book, they describe the general solution to decision making: A rational agent always picks the action that maximizes the expected value of a utility function. That sentence is probably hard to understand in a vacuum, but it really is a great insight that can help you understand and design AI systems. A utility-based system is a very direct implementation of this idea, taking out the part about computing expected values (which is a really, really hard problem), and plugging in expert knowledge instead. This works out beautifully for games because the expert knowledge is the place where a designer can gain fine control of how an agent behaves.
I'd agree with Alvaro's advice. I've found utility very suited to managing relatively simple decisions (limited number of outputs) from arbitrary information (large number of inputs). There was a lecture at the AI Summit by Kevin Dill and he emphasized the modularity of input "criteria" above all. I asked him about modularity of decisions/output and he said something like: "Sure, for that you need a BT-style structure."

If you have large numbers of possible outputs, and want to express a large variety of special cases that can combine together, then behavior trees or hierarchical planners are my recommended option. Utility has been found not to scale up very well in these areas (e.g. performance), and games like The Sims 3 famously moved away from "utility everywhere" for these reasons.

Alex

Join us in Vienna for the nucl.ai Conference 2015, on July 20-22... Don't miss it!

I'm somewhat startled to hear you say that "Sims 3 famously moved away from 'utility everywhere'." I believe Richard would disagree. The only change I'm aware of was simply making it hierarchical to the local area and then "other places", because the potential actions in the whole world would have ended up in the 10s of thousands. Which, by the way, would be a complete pain in the ass to do with a BT or any other architecture anyway. The only other non-utility decision was a rule-based system for selecting interactions between two characters based on personality criteria. (Richard talks about this in his interview with you... however, he does point out that the entire rest of the decision architecture is a utility-based system.)

On-topic, I would find huge disagreement with Alex on the statement that utility doesn't scale well. That's simply not the case. The bottom line, of course, is that you still have to end up in "a state" at some point -- regardless of whether you do it through an FSM, BT, utility, planner, etc. You need something that is mapped to an animation, behavior, etc. The only difference is in how you get to that state (i.e. the decision). Sure, you can do hybrids -- which is all Kevin was meaning in his statement to you. However, it is not terribly taxing to do utility-based architectures with dozens or even hundreds of potential outputs (if your animators can keep up) and even dozens (or hundreds) of potential inputs. In fact, I'm doing work for a client right now that is entirely utility based where I am writing a system to add/edit what could, theoretically, be referred to as "infinite" numbers of utility-to-action mappings.

Even more on-topic, the GDMag article Alvaro referred to was mine. And no, it wasn't a utility love fest. Because, even though not necessarily for some of the reasons Alex alluded to above, utility isn't the best answer for everything. In fact, the entire premise of the article was that there is no best answer. It depends.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"


If you have large numbers of possible outputs, and want to express a large variety of special cases that can combine together, then behavior trees or hierarchical planners are my recommended option. Utility has been found not to scale up very well in these areas (e.g. performance), and games like The Sims 3 famously moved away from "utility everywhere" for these reasons.


So I'm writing a bit on this for the next AI Game Programming Wisdom, but to steal my own thunder...

Utility-based AI scales just fine - better than many AI approaches (e.g. FSMs or scripting), and its performance is generally linear in the number of options you're considering (i.e. very, very fast) - but even so, just like any other approach, it can become unmanageable when the decision space gets too large. Hierarchy is the standard tool that people use when this happens, to break up the decision making into manageable chunks. It is incredibly powerful, and has been applied to pretty much every architecture ever invented. So you can use a hierarchical planner, or a hierarchical FSM, or a hierarchical utility-based architecture.
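A minimal Java sketch of that chunking idea, with invented categories, actions, and scores: score the categories first, then score only the actions inside the winning category, so no single decision ever has to weigh the full option space.

```java
import java.util.*;

// Illustrative hierarchical utility: a top-level decision picks a
// category of behaviour, then a second decision picks within it.
class HierarchicalUtility {
    record Scored(String name, double score) {}

    // Generic "pick the highest scorer" used at every level of the hierarchy.
    static String best(List<Scored> options) {
        return options.stream()
            .max(Comparator.comparingDouble(Scored::score))
            .orElseThrow()
            .name();
    }

    public static void main(String[] args) {
        // Top level: categories of behaviour with their own utilities.
        Map<String, Double> categories = Map.of("combat", 0.8, "exploration", 0.3);
        // Each category owns its own, much smaller set of actions.
        Map<String, List<Scored>> actions = Map.of(
            "combat",      List.of(new Scored("attack", 0.7), new Scored("defend", 0.4)),
            "exploration", List.of(new Scored("patrol", 0.5), new Scored("search", 0.6)));

        String category = best(categories.entrySet().stream()
            .map(e -> new Scored(e.getKey(), e.getValue())).toList());
        String action = best(actions.get(category));
        System.out.println(category + " -> " + action); // prints combat -> attack
    }
}
```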

Behavior Trees are sort of interesting, because a BT is not really an architecture in the way that utility-based AIs or planners or FSMs are architectures. Rather, a BT is a hierarchical framework into which you can insert any architecture you like - and you can put a different architecture in each node on the tree. Traditional BTs used very simple architectures (e.g. random selectors, sequence selectors, or very, very simple rule-based selectors), but lots of people (including myself) have been doing work using other architectures, like utility-based selectors or planner selectors.
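As a rough illustration of that point, here is a toy Java sketch where each node of the tree can carry a different selection architecture: a classic priority `Selector` sits above a `UtilitySelector`. All class names and the scoring terms are invented for this example.

```java
import java.util.*;
import java.util.function.DoubleSupplier;

// A BT node just ticks and reports success/failure
// (the "running" state is omitted to keep the sketch short).
interface BTNode {
    boolean tick();
}

// Leaf action; records what ran so the demo can show the decision.
class Leaf implements BTNode {
    static final List<String> log = new ArrayList<>();
    final String name; final boolean result;
    Leaf(String name, boolean result) { this.name = name; this.result = result; }
    public boolean tick() { log.add(name); return result; }
}

// Classic priority selector: try children in order until one succeeds.
class Selector implements BTNode {
    final List<BTNode> children;
    Selector(BTNode... c) { children = List.of(c); }
    public boolean tick() {
        for (BTNode c : children) if (c.tick()) return true;
        return false;
    }
}

// Utility selector: score each child, tick only the highest scorer.
class UtilitySelector implements BTNode {
    final Map<BTNode, DoubleSupplier> scored = new LinkedHashMap<>();
    UtilitySelector add(BTNode child, DoubleSupplier score) {
        scored.put(child, score); return this;
    }
    public boolean tick() {
        BTNode best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<BTNode, DoubleSupplier> e : scored.entrySet()) {
            double s = e.getValue().getAsDouble();
            if (s > bestScore) { bestScore = s; best = e.getKey(); }
        }
        return best != null && best.tick();
    }
}

class BTDemo {
    public static void main(String[] args) {
        double[] health = {0.2}; // stand-in for a blackboard value
        BTNode root = new Selector(
            new UtilitySelector()
                .add(new Leaf("attack", true), () -> health[0])
                .add(new Leaf("flee",   true), () -> 1.0 - health[0]),
            new Leaf("idle", true));
        root.tick();
        System.out.println(Leaf.log); // prints [flee]
    }
}
```

The tree structure stays the same whichever selector sits at each node; swapping a rule-based selector for a utility one is a local change.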

Bottom line is, utility-based AI is awesome if you want an AI that is going to be flexible and responsive to the situation. It provides a very good balance between reactivity (the ability of the AI to respond to the situation in-game) and authorial control (the ability of the author to ensure that the AI does what he wants it to do). It's got a steeper learning curve than, say, an FSM or scripted AI, but no worse than a planner - and once you get used to working with it, it's completely natural (or at least, it is for me) and far more powerful. I use utility-based AI for nearly everything, unless what I'm doing is just incredibly straightforward (but sometimes things are straightforward, which is why I have a BT framework). Hierarchy is also an awesome tool, and if you get to the point that you have so many possibilities to consider that configuration is getting hard then you should definitely consider a move in that direction.

You can find more detail on these thoughts, as well as some detailed description of the architecture I use, in my recent I/ITSEC and SIW papers:

http://www.iitsec.org/about/PublicationsProceedings/Documents/11136_Paper.pdf

http://www.sisostds.org/conference/download.cfm?Phase_ID=2&FileName=12S-SIW-046.docx
P.S. Looking at Alex's quote again:

There was a lecture at the AI Summit by Kevin Dill and he emphasized the modularity of input "criteria" above all. I asked him about modularity of decisions/output and he said something like: "Sure, for that you need a BT-style structure."


I'm not sure what I would have meant by that. I think modularity of *everything* is the key to both rapid configuration and reusability. I've been pushing really hard in that direction - as you can see in the articles I posted above, and also in my upcoming I/ITSEC article on the topic (but that won't be out until December). That said, I don't know what connection I might have been thinking of between modularity and BTs.

As I said above, a BT is really just a hierarchical framework onto which you place your AI. Modularity and hierarchy are both useful, powerful tools to have in your box - but they're orthogonal. You can have modular systems that are not hierarchical, and hierarchical systems that are not modular.
When I did it, I made a small scripting engine which reads a .txt file containing a script in an assembly-type language. It gets a little complicated if the script gets too big, but at least it means your Java code doesn't get too messy.
Another way is to do it in Java. Imagine every object has a behaviour, so you have a class called Behaviour; then you can just add an instance of a behaviour to an entity. You could have child classes like RunAwayBehaviour extends BaseBehaviour and just add that to an entity. You could even cross both methods and have behaviours assigned to scripts: BaseBehaviour runaway = new BaseBehaviour("runAwayScript.txt");
entity.addBehaviour(runaway);
Doing this means you don't need multiple copies of the same script assigned to entities; instead you have public static behaviours that are assigned once and shared between entities in Java. Making a script parser isn't too hard if one line does one thing, like in an assembly language. I'm trying to find ways of making a high-level script compiler, but at the moment it seems like too much work.
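A minimal Java sketch of that Behaviour-class pattern, assuming class names along the lines the post suggests (the movement logic in RunAwayBehaviour is just a placeholder):

```java
import java.util.*;

// Base class for all behaviours attached to entities.
abstract class BaseBehaviour {
    abstract void update(Entity e);
}

// A concrete behaviour; real logic would read the game state.
class RunAwayBehaviour extends BaseBehaviour {
    void update(Entity e) { e.x -= 1; } // placeholder: step away from the threat
}

class Entity {
    int x;
    final List<BaseBehaviour> behaviours = new ArrayList<>();
    void addBehaviour(BaseBehaviour b) { behaviours.add(b); }
    // Run every attached behaviour once per game tick.
    void update() { for (BaseBehaviour b : behaviours) b.update(this); }
}

class BehaviourDemo {
    // A single shared behaviour instance, reused across entities as the post suggests.
    static final BaseBehaviour RUN_AWAY = new RunAwayBehaviour();

    public static void main(String[] args) {
        Entity goblin = new Entity();
        goblin.x = 5;
        goblin.addBehaviour(RUN_AWAY);
        goblin.update();
        System.out.println(goblin.x); // prints 4
    }
}
```

Because behaviours hold no per-entity state here, one instance can safely be attached to many entities; any per-entity data stays on the Entity itself.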

I'm not sure what I would have meant by that. [...] That said, I don't know what connection I might have been thinking of between modularity and BTs.


We were walking out of Moscone North and heading towards West on Monday evening after your lecture that afternoon. The conversation started something like this...

I pointed out that while you focused your lecture on modularity of inputs and setting up criteria, I just haven't been faced with this problem in the past. I haven't had unmanageable inputs since the information I need is often expensive; every additional input costs you. I asked you about modularity of actions / outputs, since it doesn't seem to be talked about anywhere near as often, and that's a problem I've faced significantly more often.

That's when you said the line I quoted, which for some reason is burned into my brain (it made sense :-) ): "Ah, for that you need something like a BT."


I can see how hierarchy would help make it manageable, but a full utility-based hierarchy (like MASA's DirectIA aka. behavioral network, or Spir.Ops' drive system) would potentially have a huge performance impact since you have to simulate the whole thing to get a decision. If you don't simulate the whole thing and "prune" space with Boolean conditions for example, then you're basically moving towards decision-tree style AI.

For this reason, I tend to take the approach of BT first, then sprinkle utility around where necessary. It's easier to work with, it's modular, and it's fast as hell -- O(log n). The alternative, having an elegant utility architecture that you need to hack for performance, hasn't been as appealing for me.

Alex

Join us in Vienna for the nucl.ai Conference 2015, on July 20-22... Don't miss it!

This topic is closed to new replies.
