
Implementing AI elegantly


17 replies to this topic

#1 GameGeezer   Members   -  Reputation: 753

Posted 30 July 2012 - 08:09 PM

For the 2D top-down game I'm writing (in Java) I've written a pathfinder using A* and simple AI around it (if the player is in line of sight, move towards the player; if close enough, attack). However, I'm looking to build more intricate AI, as well as different AI for each of my enemies. I have the logic for my AIs mapped out, but am unsure how to implement anything complicated elegantly. What is the best way of writing AI? Should I be reading Lua scripts, or hard-coding it in Java?

Edited by GameGeazer, 30 July 2012 - 08:13 PM.



#2 Álvaro   Crossbones+   -  Reputation: 13624

Posted 30 July 2012 - 10:18 PM

The August 2012 edition of Game Developer Magazine has a great article summing up the most common game AI architectures:
  • Ad-hoc rules
  • Finite State Machine (FSM)
  • Hierarchical FSM
  • Behavior Tree (BT)
  • Planner
  • Utility-based system
  • Artificial Neural Network
The article tries to stay objective and point out the pros and cons of each architecture, but I personally love utility-based systems, which are probably the cleanest and most flexible solution. They are also very robust, in the sense that they will do something sensible under unusual circumstances (at least when compared with the more scripted options of FSMs or BTs).

The idea behind utility-based systems is quite simple: Assign a score to each possible action in the current situation (its "utility") and pick the action with the highest score. The main problem people have with utility-based systems is that you cannot indicate rules like "in this situation, do this". Instead, you need to think of how to score every possible action, usually as the sum of multiple terms.
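For the OP's Java setting, that score-and-pick loop can be sketched in a few lines. Everything here is invented for illustration (the action names, the inputs, and the scoring terms are not from any engine); the point is just that each action's utility is a sum of situation-dependent terms and the agent takes the maximum:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UtilityDemo {
    // Score each candidate action from a couple of simple inputs; higher is better.
    // Each score is a sum of terms, which is the usual way to combine "rules".
    static Map<String, Double> scoreActions(double distToPlayer, double healthFrac) {
        Map<String, Double> scores = new LinkedHashMap<>();
        scores.put("attack", (distToPlayer < 2 ? 1.0 : 0.0) + healthFrac);   // only viable up close
        scores.put("chase",  1.0 / (1.0 + distToPlayer) + 0.5 * healthFrac); // better when near and healthy
        scores.put("flee",   1.0 - healthFrac);                              // dominates when badly hurt
        return scores;
    }

    // The whole "architecture": take the action with the highest utility.
    static String pick(Map<String, Double> scores) {
        return scores.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }

    public static void main(String[] args) {
        System.out.println(pick(scoreActions(1.0, 0.9))); // healthy, adjacent -> attack
        System.out.println(pick(scoreActions(8.0, 0.1))); // far away, nearly dead -> flee
    }
}
```

Tuning then becomes a matter of adjusting weights on the terms rather than adding branches.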

When we deployed a utility-based system at work (in a non-game context), I had to evangelize a bit and teach other team members how to express "rules" as terms in a utility function, how to assign weights to the terms and how to debug and fix situations where we weren't happy with the action selected by the system. Once people got used to the paradigm, we were able to tweak things very easily. The system has been making hundreds of thousands of decisions a day for several years now, and we are extremely satisfied with its behavior.

#3 GameGeezer   Members   -  Reputation: 753

Posted 30 July 2012 - 11:03 PM

Any recommendations for a book on utility-based AI (or just a favored book on AI in general)?

#4 Álvaro   Crossbones+   -  Reputation: 13624

Posted 31 July 2012 - 12:28 AM

Well, the author of that article in Game Developer Magazine is also the author of this book, which is the only one I am aware of that deals with utility-based game AI. He also hangs out in these forums and he rarely misses an opportunity to plug his book, but I just saved him the trouble. :) It's actually a decent book, so go get a copy.

This classic book on [non-game] AI is something that you should probably read at some point if you are interested in the field. A few chapters into the book, they describe the general solution to decision making: A rational agent always picks the action that maximizes the expected value of a utility function. That sentence is probably hard to understand in a vacuum, but it really is a great insight that can help you understand and design AI systems. A utility-based system is a very direct implementation of this idea, taking out the part about computing expected values (which is a really, really hard problem) and plugging in expert knowledge instead. This works out beautifully for games, because the expert knowledge is where a designer can gain fine control of how an agent behaves.

#5 alexjc   Members   -  Reputation: 450

Posted 31 July 2012 - 05:22 AM

I'd agree with Alvaro's advice. I've found utility very suited to managing relatively simple decisions (limited number of outputs) from arbitrary information (large number of inputs). There was a lecture at the AI Summit by Kevin Dill and he emphasized the modularity of input "criteria" above all. I asked him about modularity of decisions/output and he said something like: "Sure, for that you need a BT-style structure."

If you have large numbers of possible outputs, and want to express a large variety of special cases that can combine together, then behavior trees or hierarchical planners are my recommended option. Utility has been found not to scale up very well in these areas (e.g. performance), and games like the SIMS 3 famously moved away from "utility everywhere" for these reasons.

Alex

Edited by alexjc, 31 July 2012 - 05:24 AM.

Join us in Vienna for the Game/AI Conference 2014, on July 7-10... Don't miss it!


#6 IADaveMark   Moderators   -  Reputation: 2509

Posted 31 July 2012 - 09:12 AM

I'm somewhat startled to hear you say that "Sims 3 famously moved away from 'utility everywhere'." I believe Richard would disagree. The only changes I'm aware of were simply making it hierarchical to the local area and then "other places", because the potential actions in the whole world would have ended up in the tens of thousands. Which, by the way, would be a complete pain in the ass to do with a BT or any other architecture anyway. The only other non-utility decision was a rule-based system for selecting interactions between two characters based on personality criteria. (Richard talks about this in his interview with you... however, he does point out that the entire rest of the decision architecture is a utility-based system.)

On-topic, I would find huge disagreement with Alex on the statement that utility doesn't scale well. That's simply not the case. The bottom line, of course, is that you still have to end up in "a state" at some point -- regardless of whether you do it through an FSM, BT, utility, planner, etc. You need something that is mapped to an animation, behavior, etc. The only difference is in how you get to that state (i.e. the decision). Sure, you can do hybrids -- which is all Kevin was meaning in his statement to you. However, it is not terribly taxing to do utility-based architectures with dozens or even hundreds of potential outputs (if your animators can keep up) and even dozens (or hundreds) of potential inputs. In fact, I'm doing work for a client right now that is entirely utility based where I am writing a system to add/edit what could, theoretically, be referred to as "infinite" numbers of utility-to-action mappings.

Even more on-topic, the GDMag article Alvaro referred to was mine. And no, it wasn't a utility love fest. Because, even though not necessarily for some of the reasons Alex alluded to above, utility isn't the best answer for everything. In fact, the entire premise of the article was that there is no best answer. It depends.

Edited by IADaveMark, 31 July 2012 - 11:17 AM.
Derp grammar.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#7 Kevin Dill   Members   -  Reputation: 315

Posted 31 July 2012 - 09:52 AM

If you have large numbers of possible outputs, and want to express a large variety of special cases that can combine together, then behavior trees or hierarchical planners are my recommended option. Utility has been found not to scale up very well in these areas (e.g. performance), and games like the SIMS 3 famously moved away from "utility everywhere" for these reasons.


So I'm writing a bit on this for the next AI Game Programming Wisdom, but to steal my own thunder...

Utility-based AI scales just fine - better than many AI approaches (e.g. FSMs or scripting), and its performance is generally linear on the number of options you're considering (i.e. very, very fast) - but even so, just like any other approach, it can become unmanageable when the decision space gets too large. Hierarchy is the standard tool that people use when this happens, to break up the decision making into manageable chunks. It is incredibly powerful, and has been applied to pretty much every architecture ever invented. So you can use a hierarchical planner, or a hierarchical FSM, or a hierarchical utility-based architecture.

Behavior Trees are sort of interesting, because a BT is not really an architecture in the way that utility-based AIs or planners or FSMs are architectures. Rather, a BT is a hierarchical framework into which you can insert any architecture you like - and you can put a different architecture in each node on the tree. Traditional BTs used very simple architectures (e.g. random selectors, sequence selectors, or very, very simple rule-based selectors), but lots of people (including myself) have been doing work using other architectures, like utility-based selectors or planner selectors.
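That pluggability can be made concrete with a small sketch (all type names here are invented): a composite node delegates the choice of which child to run to an injected selector, so random, sequence, or utility selection are interchangeable policies over the same tree structure.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.function.ToDoubleFunction;

public class BtSketch {
    // A node is just "something that runs".
    public interface Node { boolean run(); }

    // The decision policy is a separate, swappable piece.
    public interface Selector { Node choose(List<Node> children); }

    // A composite whose child-selection policy is injected, not hard-coded.
    public static class Composite implements Node {
        private final Selector selector;
        private final List<Node> children;
        public Composite(Selector s, Node... kids) {
            this.selector = s;
            this.children = Arrays.asList(kids);
        }
        public boolean run() { return selector.choose(children).run(); }
    }

    // Two interchangeable policies for the same composite:
    public static Selector randomSelector(Random rng) {
        return kids -> kids.get(rng.nextInt(kids.size()));
    }
    public static Selector utilitySelector(ToDoubleFunction<Node> utility) {
        return kids -> kids.stream().max(Comparator.comparingDouble(utility)).get();
    }
}
```

A tree built this way can mix policies freely: a utility selector at one node, a plain random selector at its sibling.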

Bottom line is, utility-based AI is awesome if you want an AI that is going to be flexible and responsive to the situation. It provides a very good balance between reactivity (the ability of the AI to respond to the situation in-game) and authorial control (the ability of the author to ensure that the AI does what he wants it to do). It's got a steeper learning curve than, say, an FSM or scripted AI, but no worse than a planner - and once you get used to working with it, it's completely natural (or at least, it is for me) and far more powerful. I use utility-based AI for nearly everything, unless what I'm doing is just incredibly straightforward (but sometimes things are straightforward, which is why I have a BT framework). Hierarchy is also an awesome tool, and if you get to the point that you have so many possibilities to consider that configuration is getting hard, then you should definitely consider a move in that direction.

You can find more detail on these thoughts, as well as some detailed description of the architecture I use, in my recent I/ITSEC and SIW papers:

http://www.iitsec.org/about/PublicationsProceedings/Documents/11136_Paper.pdf

http://www.sisostds.org/conference/download.cfm?Phase_ID=2&FileName=12S-SIW-046.docx

#8 Kevin Dill   Members   -  Reputation: 315

Posted 31 July 2012 - 01:45 PM

P.S. Looking at Alex's quote again:

There was a lecture at the AI Summit by Kevin Dill and he emphasized the modularity of input "criteria" above all. I asked him about modularity of decisions/output and he said something like: "Sure, for that you need a BT-style structure."


I'm not sure what I would have meant by that. I think modularity of *everything* is the key to both rapid configuration and reusability. I've been pushing really hard in that direction - as you can see in the articles I posted above, and also in my upcoming I/ITSEC article on the topic (but that won't be out until December). That said, I don't know what connection I might have been thinking of between modularity and BTs.

As I said above, a BT is really just a hierarchical framework onto which you place your AI. Modularity and hierarchy are both useful, powerful tools to have in your box - but they're orthogonal. You can have modular systems that are not hierarchical, and hierarchical systems that are not modular.

#9 CryoGenesis   Members   -  Reputation: 496

Posted 01 August 2012 - 10:48 AM

When I did it, I made a small scripting engine which would read a .txt file containing a script in an assembly-type language. It gets a little complicated if the script gets too big, but at least it means your Java code doesn't get too messy.
Another way is to do it in Java. Every object has a behaviour, so have a class called Behaviour, and then add an instance of Behaviour to an entity. You could have child classes like RunAwayBehaviour extends BaseBehaviour, then just add that to an entity. You could even cross both methods and have behaviours assigned to scripts: BaseBehaviour runaway = new BaseBehaviour("runAwayScript.txt");
entity.addBehaviour(runaway);
Doing this means you don't need multiple copies of the same script to assign to entities; instead you have public static behaviours that are assigned and can be copied into the entities in Java. Making a script parser isn't too hard if one line does one thing, like in an assembly language. I'm trying to find ways of making a high-level script compiler, but at the moment it seems like too much work.
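A minimal self-contained sketch of that composition idea (the class and method names are illustrative, not from any particular codebase): each entity owns a list of Behaviour objects and updates them every tick.

```java
import java.util.ArrayList;
import java.util.List;

public class BehaviourSketch {
    // One pluggable unit of AI logic.
    public interface Behaviour { void update(Entity self); }

    // A toy behaviour: flee by moving left one unit per tick.
    public static class RunAwayBehaviour implements Behaviour {
        public void update(Entity self) { self.x -= 1; }
    }

    // An entity is just a bag of state plus its attached behaviours.
    public static class Entity {
        public int x;
        private final List<Behaviour> behaviours = new ArrayList<>();
        public void addBehaviour(Behaviour b) { behaviours.add(b); }
        public void update() { for (Behaviour b : behaviours) b.update(this); }
    }
}
```

Swapping an enemy's AI then means swapping which Behaviour instances it holds, with no change to the entity class itself.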

#10 alexjc   Members   -  Reputation: 450

Posted 02 August 2012 - 09:21 AM

I'm not sure what I would have meant by that. [...] That said, I don't know what connection I might have been thinking of between modularity and BTs.


We were walking out of Moscone North and heading towards West on Monday evening after your lecture that afternoon. The conversation started something like this...

I pointed out that while you focused your lecture on modularity of inputs and setting up criteria, I just haven't been faced with this problem in the past. I haven't had unmanageable inputs since the information I need is often expensive; every additional input costs you. I asked you about modularity of actions / outputs, since it doesn't seem to be talked about anywhere near as often, and that's a problem I've faced significantly more often.

That's when you said the line I quoted, which for some reason is burned in my brain (it made sense :-)): "Ah, for that you need something like a BT."


I can see how hierarchy would help make it manageable, but a full utility-based hierarchy (like MASA's DirectIA aka. behavioral network, or Spir.Ops' drive system) would potentially have a huge performance impact since you have to simulate the whole thing to get a decision. If you don't simulate the whole thing and "prune" space with Boolean conditions for example, then you're basically moving towards decision-tree style AI.

For this reason, I tend to take the approach of BT first, then sprinkle utility around where necessary. It's easier to work with, it's modular, and it's fast as hell -- O(log n). The alternative, having an elegant utility architecture that you need to hack for performance, hasn't been as appealing for me.

Alex



#11 Kevin Dill   Members   -  Reputation: 315

Posted 02 August 2012 - 10:41 AM

I'm not familiar with those systems, but it sounds like they may be doing something much more complex than what I mean when I say "utility-based AI." To me, a utility-based AI is one that, in essence, takes the following steps:
  • Enumerate the possible choices (I usually call them options).
  • For each option, run a heuristic function that calculates how appropriate that option is given the current situation. This heuristic function is typically constant time, so the cost of this step is O(n) on the number of options. What's more, this heuristic function can usually be made to be very fast, though obviously this depends on what you're doing.
  • Pick an option based on the heuristics. Personally, I use a combination of taking the best and using weight-based random, which is also O(n) on the number of options. If you want to know more, read the articles I linked - there's lots of detail there. :)
I can easily make a utility-based AI be hierarchical, reaping the benefits of a BT's O(log n) performance. To do this, I need a top-level reasoner that makes just the big decisions. For example, for a puppy game my doggy's top-level reasoner might decide whether he should eat, sleep, play, or pee. Once that top level reasoner has picked an option, such as pee, then I might have a mid-level reasoner that thinks about *how* to go to the bathroom. Should I pee on some nearby object, and if so then which one? Should I go scratch at the door until my owner lets me out? Should I pick up my leash in my mouth and carry it to my owner? Finally, I might have a low-level reasoner that handles the process of executing that plan. This is often just a sequence selector (if you're using a BT), or even an FSM, although it's not hard to set up either of those architectures within a utility-based reasoner. For the sequence selector, just make the utility of each option be a fixed value, with each step having lower value than the previous, and steps that have already executed having a value of 0. For an FSM, just ensure that the highest priority state is either the one currently executing or the one we should transition to, as appropriate.
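The sequence-selector-as-utility trick from the paragraph above can be sketched like this (the step names and values are invented): each step gets a fixed, descending score, and finished steps score zero, so "take the highest utility" walks the sequence in order.

```java
public class SequenceAsUtility {
    static final String[] STEPS = {"walkToDoor", "scratchAtDoor", "goOutside"};

    // Utility of step i, given how many steps have already executed:
    // fixed descending values, zero once the step is done.
    static double utility(int i, int stepsDone) {
        if (i < stepsDone) return 0.0;
        return STEPS.length - i;
    }

    // A plain utility reasoner: pick the step with the highest score,
    // or null when every step has run (all utilities are zero).
    static String next(int stepsDone) {
        int best = -1;
        double bestU = 0.0;
        for (int i = 0; i < STEPS.length; i++) {
            double u = utility(i, stepsDone);
            if (u > bestU) { bestU = u; best = i; }
        }
        return best < 0 ? null : STEPS[best];
    }
}
```

Nothing sequence-specific lives in the reasoner; the ordering is encoded entirely in the utility function, which is the point.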

So yes, if I'm having performance issues then hierarchy is one of the tools that I can pull out to address that - and it's fair to say that that would be "something like a BT" - but it would still also be utility-based. It would just be utility-based AI placed inside of a BT. Again, "BT" is just a fancy term for "hierarchical framework into which you can place any architecture that you want."

With all of that said, at least on the games I've worked on the AI hasn't been anywhere near the biggest optimization nightmare. The main reason that I go hierarchical is that it makes my job configuring the AI simpler, not that it makes the AI run faster.

#12 Kevin Dill   Members   -  Reputation: 315

Posted 02 August 2012 - 10:54 AM

I just re-read Alvaro's post - and he sums up my feelings on utility-based AI *extremely* well. Nicely said.

One thing I wanted to respond to:

The main problem people have with utility-based systems is that you cannot indicate rules like "in this situation, do this". Instead, you need to think of how to score every possible action, usually as the sum of multiple terms.


This is actually the reason I've started using the dual-utility approach I talk about in those articles I linked. I calculate two utility values: a priority and a weight. Options with higher priority will *always* be selected over options with lower priority if they are valid (i.e. if their weight is > 0). Among the highest priority options, I use weight-based random. So in essence I'm using the priority to divide my options up into categories, and then only selecting from among the most important category.

As an example I could have a bunch of options for reacting to hand grenades that all have a priority of 100, because reacting to hand grenades is really, really important - but those options would only be valid when there is a hand grenade to respond to. Then I could have normal combat options (e.g. shooting at the player(s)) with a priority around 10, give or take a point or two depending on the situation, and ambient options (e.g. getting a hamburger) with priorities around 0.
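A sketch of that dual-utility selection (the option names and numbers below are just the hypothetical grenade example restated as code, not anything from a shipped game): find the highest priority among valid options, then do weight-based random within that group only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class DualUtility {
    public static class Option {
        final String name; final int priority; final double weight;
        public Option(String n, int p, double w) { name = n; priority = p; weight = w; }
    }

    public static String pick(List<Option> options, Random rng) {
        // Highest priority among valid options (weight > 0).
        int top = Integer.MIN_VALUE;
        for (Option o : options) if (o.weight > 0 && o.priority > top) top = o.priority;

        // Collect that group and its total weight.
        List<Option> group = new ArrayList<>();
        double total = 0;
        for (Option o : options)
            if (o.weight > 0 && o.priority == top) { group.add(o); total += o.weight; }
        if (group.isEmpty()) return null; // nothing valid at all

        // Weight-based random within the group.
        double r = rng.nextDouble() * total;
        for (Option o : group) { r -= o.weight; if (r <= 0) return o.name; }
        return group.get(group.size() - 1).name;
    }
}
```

With a grenade option at priority 100 and weight > 0, it is always chosen; zero its weight and selection falls through to the priority-10 combat group.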

I can't take credit for inventing the approach - I stole it from Zoo Tycoon 2 (I'm not sure who originated it - maybe Nathan Sitkoff or Ralph Hebb?), but it's a pretty slick system.

#13 alexjc   Members   -  Reputation: 450

Posted 07 August 2012 - 04:47 AM

Among the highest priority options, I use weight-based random. So in essence I'm using the priority to divide my options up into categories, and then only selecting from among the most important category.


Sounds like a behavior tree with localized utility :-) I like this approach better. Scales well and it's easy to author -- as you said.

I think BT is more specific than just a "hierarchical framework" that you can plug stuff into, but that's worth a separate discussion.

Alex



#14 IADaveMark   Moderators   -  Reputation: 2509

Posted 07 August 2012 - 01:19 PM


Among the highest priority options, I use weight-based random. So in essence I'm using the priority to divide my options up into categories, and then only selecting from among the most important category.


Sounds like a behavior tree with localized utility :-) I like this approach better. Scales well and it's easy to author -- as you said.

Alex

While that could be duplicated as a behavior tree with utility-based selector nodes, that almost adds more complication than is necessary. In Kevin's design, all that would be necessary is the addition of a single integer value that represents the "priority" (or category, as he referred to it above). When processing the decisions, you check things in priority order, highest to lowest. If there are no priority 9s, for example, you look for 8s, etc. The difference between this setup and a BT is that those priority numbers (as with anything in a utility-based system) can be modified at runtime by stimuli in the game. So something might not always be a priority 9; sometimes it might be a 2 and would be checked after all the other higher priorities. The short version is that even the priorities (which are acting as a higher level in the hierarchy) are fluid and reactive to the game dynamics, rather than being "this is what we authored so this is what you get."

Anyway, we have wandered off the original topic... the point being there are many ways to do AI. You have to pick the one that suits your needs best. (Which was the point of my article in Game Developer last week.)

#15 wodinoneeye   Members   -  Reputation: 856

Posted 16 August 2012 - 07:58 AM

"The idea behind utility-based systems is quite simple: Assign a score to each possible action in"


The old problem is coming up with a unified metric to "simply assign a score" (compared to THAT problem, the rest is trivial).

Complex environments have too many edge cases to have simple evaluation functions. Of course, the complexity of the decision logic increases exponentially with the complexity of the object's potential behaviors and its environment. (Throw in handling uncertainty if you want this to be an order of magnitude harder, and temporally complex actions/results for another.)

Risk versus reward (including cost) analysis - how does the object's logic judge a VECTOR of boiled-down evaluation results, adjusted for current preferences/historic success memories/changeable goal priorities, to come up with a single value that it can competently compare against another, entirely different possibility being 'considered'/evaluated?

You can try to find generalizations in the evaluation logic, but those edge cases are legion.

Hand normalization (via cohesive judgement across the whole problem/solution space) -- as usual, the required human in the loop is the limitation.

You can probably comprehend and visualize a simple decision space well enough to tweak it into shape, but as it grows more complex it becomes a monster.

Edited by wodinoneeye, 16 August 2012 - 08:01 AM.

--------------------------------------------Ratings are Opinion, not Fact

#16 Álvaro   Crossbones+   -  Reputation: 13624

Posted 16 August 2012 - 11:51 AM

Complex environments have too many edge cases to have simple evaluation functions. Of course, the complexity of the decision logic increases exponentially with the complexity of the object's potential behaviors and its environment. (Throw in handling uncertainty if you want this to be an order of magnitude harder, and temporally complex actions/results for another.)

Risk versus reward (including cost) analysis - how does the object's logic judge a VECTOR of boiled-down evaluation results, adjusted for current preferences/historic success memories/changeable goal priorities, to come up with a single value that it can competently compare against another, entirely different possibility being 'considered'/evaluated?


I think you are making it sound much harder than it is. Remember that we are in a Game AI context. We are trying to create behavior that is compelling and that makes for good gameplay, that's all. In particular, I don't see any need to do careful risk-vs-reward analysis here (although it is possible).


You can try to find generalizations in the evaluation logic, but those edge cases are legion.

Hand normalization (via cohesive judgement across the whole problem/solution space) -- as usual, the required human in the loop is the limitation.

You can probably comprehend and visualize a simple decision space well enough to tweak it into shape, but as it grows more complex it becomes a monster.


I don't know if you have actual experience working with a utility-based system, but my experience is quite the opposite.
It is useful to have some scale in mind. For instance, you can make the scale be dollars and ask "how much would this agent be willing to pay to take this action?". It's not always easy to answer everything that way, but in some contexts it might help you think about it.

If you have a specific type of game in mind for which you don't think this approach could be manageable, perhaps you can describe it, to make sure we are talking about the same thing. It is also possible that this approach is just not appropriate for every situation, which wouldn't surprise me. But I think the ability to handle complexity is a strength of the approach, not a weakness.

#17 wodinoneeye   Members   -  Reputation: 856

Posted 16 August 2012 - 09:30 PM

Once you get past a certain level of complexity, 'being manageable' flies out the window with ALL the methodologies.

I didn't say don't use utility/priority-based logic, just that it is only the basic overall system (evaluating to one number to allow comparing/picking the 'best' solution), and FINDING that number (judging/evaluating) when you are dealing with a situation full of edge cases, exceptions and multitudes of relevant factors becomes THE significant difficulty.

---

Example: your avatar gets injured and suddenly is in a mode/state where getting repaired is of prime importance.

The action of getting a Med-Kit suddenly zooms up in priority, but doesn't necessarily override an easy opportunity to pick up some other useful item right next to the avatar. The 'repair' goal priority might best be a curve function, with the degree of damage (input) exponentially accelerating the priority (output) for attempting to achieve that goal. That is assuming 'being damaged' itself can be expressed as a simple value and isn't a vector of factors itself (broken limbs vs blood loss vs burn damage etc., depending on the complexity of your game mechanics).
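One possible shape for such a curve (the function and its steepness constant are invented for illustration): an exponential response curve, normalized so that priority runs from 0 when unhurt to 1 when destroyed, rising slowly for light damage and steeply near total damage.

```java
public class RepairPriority {
    // damageFrac in [0,1]: 0 = unhurt, 1 = destroyed.
    // steepness > 0 controls how sharply priority ramps up near full damage.
    static double repairPriority(double damageFrac, double steepness) {
        return (Math.exp(steepness * damageFrac) - 1.0)
             / (Math.exp(steepness)              - 1.0);
    }
}
```

With steepness 4, half damage yields a priority of only about 0.12, so repair stays low-key until the avatar is badly hurt, then quickly dominates.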

Similarly, a Med-Kit that can be pathed to over a shorter/easier movement distance versus another Med-Kit that is further/harder/riskier (and that evaluation might include the potential for attack from any/all opponents/hazards). And what of a big Med-Kit vs a small one -- which is better in relation to the risk/cost of obtaining it? The realistic decisions are not 'simple', and these ARE just fairly basic game mechanics (if your goal is to make the objects look 'smart' -- or rather NOT extremely 'dumb').

---

Players quite quickly figure out opponent behaviors that are simplistic, and can often easily use them to their 'too easy' advantage.

Seeing objects that ought to be 'intelligent' make stupid moves over and over won't impress them.

----

You can start by assigning simplistic numeric evaluation values to actions just to get going, then playtest to see where that simplicity falls down and start adding more complex evaluations. (Unfortunately, that happens almost immediately and in numerous cases.)

Now you find you have to add the 'meat' of the AI system, and you will notice that the 'utility-based system' is only a tiny part of the whole as you add tools like pathing and influence mapping and fuzzy logic to try to get those 'simple numbers'. Suddenly, because of the explosion of processing, you have to add culling of evaluated possibilities and time-scheduling of AI to fit resource requirements.

----

My real point is that for 'elegance' you might have one nice simple tool sitting there, but for the whole thing to work, the rest grows monstrous and ungainly (it's why AI is so difficult and why most game companies avoid it like the plague).

#18 IADaveMark   Moderators   -  Reputation: 2509

Posted 17 August 2012 - 07:10 AM

Someone needs to read my book and watch my GDC lectures where I actually use some of the things you mention as examples. *shrug*



