Implementing AI elegantly

16 comments, last by IADaveMark 11 years, 8 months ago
I'm not familiar with those systems, but it sounds like they may be doing something much more complex than what I mean when I say "utility-based AI." To me, a utility-based AI is one that, in essence, takes the following steps:

  1. Enumerate the possible choices (I usually call them options).
  2. For each option, run a heuristic function that calculates how appropriate that option is given the current situation. This heuristic function is typically constant time, so the cost of this step is O(n) on the number of options. What's more, this heuristic function can usually be made to be very fast, though obviously this depends on what you're doing.
  3. Pick an option based on the heuristics. Personally, I use a combination of taking the best and using weight-based random, which is also O(n) on the number of options. If you want to know more, read the articles I linked - there's lots of detail there. :) (There's a rough sketch of all three steps just below.)
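To make that concrete, here's a rough sketch of those three steps (illustrative Python only -- the option objects and their score() interface are made up for this post, not taken from the articles):

[code]
import random

def choose_option(options, context):
    # Steps 1-2: enumerate the options and score each one with its
    # heuristic function -- O(n) on the number of options.
    scored = [(opt, opt.score(context)) for opt in options]

    # Step 3: weight-based random among the valid (weight > 0) options,
    # so better-scoring options are proportionally more likely to win.
    valid = [(opt, w) for opt, w in scored if w > 0]
    if not valid:
        return None
    roll = random.uniform(0.0, sum(w for _, w in valid))
    for opt, w in valid:
        roll -= w
        if roll <= 0.0:
            return opt
    return valid[-1][0]  # guard against floating-point round-off
[/code]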

I can easily make a utility-based AI be hierarchical, reaping the benefits of a BT's O(log n) performance. To do this, I need a top-level reasoner that makes just the big decisions. For example, for a puppy game my doggy's top-level reasoner might decide whether he should eat, sleep, play, or pee.

Once that top-level reasoner has picked an option, such as pee, then I might have a mid-level reasoner that thinks about *how* to go to the bathroom. Should I pee on some nearby object, and if so then which one? Should I go scratch at the door until my owner lets me out? Should I pick up my leash in my mouth and carry it to my owner?

Finally, I might have a low-level reasoner that handles the process of executing that plan. This is often just a sequence selector (if you're using a BT), or even an FSM, although it's not hard to set up either of those architectures within a utility-based reasoner. For the sequence selector, just make the utility of each option be a fixed value, with each step having lower value than the previous, and steps that have already executed having a value of 0. For an FSM, just ensure that the highest priority state is either the one currently executing or the one we should transition to, as appropriate.
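For what it's worth, the sequence-selector trick can be sketched in a few lines (hypothetical code, just to show the shape of it):

[code]
def sequence_step_utility(step_index, steps_completed, num_steps):
    # Steps that have already executed get utility 0; the rest get
    # fixed, strictly decreasing values, so picking the highest-utility
    # option always yields the next step in order.
    if step_index < steps_completed:
        return 0.0
    return float(num_steps - step_index)

# e.g. num_steps=4, steps_completed=1 gives utilities [0, 3, 2, 1],
# so step 1 (the next unexecuted step) wins.
[/code]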

So yes, if I'm having performance issues then hierarchy is one of the tools that I can pull out to address that - and it's fair to say that that would be "something like a BT" - but it would still also be utility-based. It would just be utility-based AI placed inside of a BT. Again, "BT" is just a fancy term for "hierarchical framework into which you can place any architecture that you want."

With all of that said, at least on the games I've worked on, the AI hasn't been anywhere near the biggest optimization nightmare. The main reason that I go hierarchical is that it makes my job configuring the AI simpler, not that it makes the AI run faster.
I just re-read Alvaro's post - and he sums up my feelings on utility-based AI *extremely* well. Nicely said.

One thing I wanted to respond to:


[quote]
The main problem people have with utility-based systems is that you cannot indicate rules like "in this situation, do this". Instead, you need to think of how to score every possible action, usually as the sum of multiple terms.
[/quote]


This is actually the reason I've started using the dual-utility approach I talk about in those articles I linked. I calculate two utility values: a priority and a weight. Options with higher priority will *always* be selected over options with lower priority if they are valid (i.e. if their weight is > 0). Among the highest priority options, I use weight-based random. So in essence I'm using the priority to divide my options up into categories, and then only selecting from among the most important category.
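In code, the selection rule might look roughly like this (a sketch of the idea only -- I'm assuming each option has a hypothetical evaluate() that returns a (priority, weight) pair; see the linked articles for the real details):

[code]
import random

def dual_utility_select(options, context):
    # Each option reports (priority, weight); weight <= 0 means invalid.
    scored = [(opt,) + opt.evaluate(context) for opt in options]
    valid = [(opt, pri, w) for opt, pri, w in scored if w > 0]
    if not valid:
        return None

    # Only the highest-priority valid options are candidates...
    top = max(pri for _, pri, _ in valid)
    finalists = [(opt, w) for opt, pri, w in valid if pri == top]

    # ...and among those, weight-based random picks the winner.
    roll = random.uniform(0.0, sum(w for _, w in finalists))
    for opt, w in finalists:
        roll -= w
        if roll <= 0.0:
            return opt
    return finalists[-1][0]
[/code]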

As an example I could have a bunch of options for reacting to hand grenades that all have a priority of 100, because reacting to hand grenades is really, really important - but those options would only be valid when there is a hand grenade to respond to. Then I could have normal combat options (e.g. shooting at the player(s)) with a priority around 10, give or take a point or two depending on the situation, and ambient options (e.g. getting a hamburger) with priorities around 0.

I can't take credit for inventing the approach - I stole it from Zoo Tycoon 2 (I'm not sure who originated it - maybe Nathan Sitkoff or Ralph Hebb?), but it's a pretty slick system.

[quote name='Kevin Dill']
Among the highest priority options, I use weight-based random. So in essence I'm using the priority to divide my options up into categories, and then only selecting from among the most important category.
[/quote]


Sounds like a behavior tree with localized utility :-) I like this approach better. Scales well and it's easy to author -- as you said.

I think BT is more specific than just a "hierarchical framework" that you can plug stuff into, but that's worth a separate discussion.

Alex

Join us in Vienna for the nucl.ai Conference 2015, on July 20-22... Don't miss it!


[quote name='Kevin Dill' timestamp='1343926452' post='4965589']
Among the highest priority options, I use weight-based random. So in essence I'm using the priority to divide my options up into categories, and then only selecting from among the most important category.


Sounds like a behavior tree with localized utility :-) I like this approach better. Scales well and it's easy to author -- as you said.

Alex
[/quote]
While that could be duplicated as a behavior tree with utility-based selector nodes, that almost adds more complication than is necessary. In Kevin's design, all that would be necessary is the addition of a single integer value that represents the "priority" (or category, as he referred to it above). When processing the decisions, you check things in a certain priority first... highest to lowest. If there are no priority 9s, for example, you look for 8s, etc.

The difference between this setup and a BT is that those priority numbers (as with anything in a utility-based system) can be modified at runtime by stimuli in the game. So something might not always be a priority 9. Sometimes it might be a 2 and would be checked after all the other higher priorities. The short version is that even the priorities (which are acting as a higher level in the hierarchy) are fluid and reactive to the game dynamics rather than being "this is what we authored so this is what you get."
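As a hypothetical sketch of what I mean (all the names here are invented for illustration):

[code]
def take_cover_priority(agent, world):
    # The priority is recomputed from game stimuli every decision
    # cycle, so this option isn't pinned to one spot in the hierarchy.
    if world.grenade_near(agent.position):
        return 9  # checked before everything else
    if agent.in_combat:
        return 5  # behind grenade reactions, ahead of ambient options
    return 2      # only considered once higher priorities are empty
[/code]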

Anyway, we have wandered off the original topic... the point being there are many ways to do AI. You have to pick the one that suits your needs best. (Which was the point of my article in Game Developer last week.)

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

"The idea behind utility-based systems is quite simple: Assign a score to each possible action in"


The old problem is coming up with a unified metric to "simply assign a score" (compared to THAT problem, the rest is trivial).

Complex environments have too many edge cases to have simple evaluation functions. Of course, the complexity of the decision logic increases exponentially with the complexity of the object's potential behaviors and its environment. (Throw in handling uncertainty if you want this to be an order of magnitude harder, and temporally complex actions/results for another order of magnitude.)

Risk versus Reward (including Cost) analysis - how does the object's logic judge a VECTOR of boiled-down evaluation results, adjusted for current preferences/historic success memories/changeable goal priorities, to come up with a single value that it can competently compare to other, entirely different possibilities being 'considered'/evaluated?

You can try to find generalizations in the evaluation logic, but those edge cases are legion.

Hand normalization (via cohesive judgement across the whole problem/solution space) -- as usual, the required human in the loop is the limitation.

A simple decision space you can probably comprehend and visualize so as to tweak it into shape, but as it grows more complex it becomes a monster.
--------------------------------------------
Ratings are Opinion, not Fact

[quote]
Complex environments have too many edge cases to have simple evaluation functions. Of course, the complexity of the decision logic increases exponentially with the complexity of the object's potential behaviors and its environment. (Throw in handling uncertainty if you want this to be an order of magnitude harder, and temporally complex actions/results for another order of magnitude.)

Risk versus Reward (including Cost) analysis - how does the object's logic judge a VECTOR of boiled-down evaluation results, adjusted for current preferences/historic success memories/changeable goal priorities, to come up with a single value that it can competently compare to other, entirely different possibilities being 'considered'/evaluated?
[/quote]


I think you are making it sound much harder than it is. Remember that we are in a Game AI context. We are trying to create behavior that is compelling and that makes for good gameplay, that's all. In particular, I don't see any need to do careful risk-vs-reward analysis here (although it is possible).



[quote]
You can try to find generalizations in the evaluation logic, but those edge cases are legion.

Hand normalization (via cohesive judgement across the whole problem/solution space) -- as usual, the required human in the loop is the limitation.

A simple decision space you can probably comprehend and visualize so as to tweak it into shape, but as it grows more complex it becomes a monster.
[/quote]

I don't know if you have actual experience working with a utility-based system, but my experience is quite the opposite.

It is useful to have some scale in mind. For instance, you can make the scale be dollars and ask "how much would this agent be willing to pay to take this action?". It's not always easy to answer everything that way, but in some contexts it might help you think about it.
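For instance (a toy sketch with arbitrary numbers, just to show the common scale):

[code]
def eat_utility(hunger):      # hunger in [0, 1]
    # "How many dollars would this agent pay to eat right now?"
    return 20.0 * hunger

def sleep_utility(fatigue):   # fatigue in [0, 1]
    # Same dollar scale, so the two scores are directly comparable.
    return 15.0 * fatigue
[/code]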

If you have a specific type of game in mind for which you don't think this approach could be manageable, perhaps you can describe it, to make sure we are talking about the same thing. It is also possible that this approach is just not appropriate for every situation, which wouldn't surprise me. But I think the ability to handle complexity is a strength of the approach, not a weakness.

Once you get past a certain level of complexity, 'being manageable' flies out the window with ALL the methodologies.

I didn't say don't use utility/priority-based logic, just that it is only the basic overall system (evaluating to one number to allow comparing/picking the 'best' solution), and FINDING that number (judging/evaluating) becomes THE significant difficulty when you are dealing with a situation full of edge cases and exceptions and multitudes of relevant factors.

---

Example - your avatar gets injured and suddenly is in a mode/state where getting repaired is of prime importance.

The action of getting a Med-Kit suddenly zooms up as something of priority, but doesn't necessarily override an easy opportunity to pick up some other useful item right next to the avatar. The 'repair' goal priority might even best be a curve function, with the degree of being damaged (input) exponentially accelerating the priority (output) for attempting to achieve that goal. That is assuming 'being damaged' can itself be expressed as a simple value and isn't a vector of factors (broken limbs vs blood loss vs burn damage etc... depending on the complexity of your game mechanics).
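Such a curve is at least easy to write down (an illustrative sketch; the constants are arbitrary tuning values):

[code]
import math

def repair_priority(damage):
    # damage in [0, 1]; priority accelerates exponentially, so a badly
    # damaged avatar all but ignores competing goals.
    return 10.0 * (math.exp(3.0 * damage) - 1.0) / (math.exp(3.0) - 1.0)
[/code]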

Similarly, a Med-Kit that can be pathed to with a shorter/easier movement must be weighed against another Med-Kit that is further/harder/riskier (and that evaluation might include the potential for attack from any/all opponents/hazards). And what of a Big Med-Kit vs a Small one -- which one is better in relation to the risk/cost of obtaining it? The realistic decisions are not 'simple', and these ARE just for fairly basic game mechanics (if your goal is to try to make the objects look 'smart' -- or rather NOT extremely 'dumb').
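You can fold those factors into one scoring function, but every weight in it is a human judgement call -- which is exactly the hand-normalization problem (all names and constants below are placeholders):

[code]
def medkit_score(kit, agent, world):
    # Benefit scales with how badly the agent needs healing...
    need = 1.0 - agent.health            # 0 = unhurt, 1 = near death
    benefit = kit.heal_amount * need

    # ...minus the cost of getting there and the risk along the way.
    path = world.find_path(agent.position, kit.position)
    if path is None:
        return 0.0                       # unreachable
    travel_cost = 0.1 * path.length
    risk = sum(world.threat_at(p) for p in path.waypoints)

    return max(0.0, benefit - travel_cost - risk)
[/code]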

---

Players quite quickly figure out simplistic opponent behaviors and can often easily exploit them to their 'too easy' advantage.

Seeing objects that ought to be 'intelligent' make stupid moves over and over won't impress them.

----

You can start by assigning simplistic numeric evaluation numbers to actions just to get started, then playtest to see where that simplicity falls down, and then start adding more complex evaluations. (Unfortunately, it falls down almost immediately, and in numerous cases.)

Now you find you have to add the 'meat' of the AI system, and you will notice that the 'utility-based system' is only a tiny part of the whole as you add tools like pathing and influence mapping and fuzzy logic to try to get those 'simple numbers'. Suddenly, because of the explosion of processing, you now have to add culling of evaluated possibilities and time-scheduling of AI to fit resource requirements.

----

My real point is that for 'elegance' you might have one nice simple tool sitting there, but for the whole thing to work, the rest grows monstrous and ungainly (it's why AI is so difficult and most game companies avoid it like the plague).
--------------------------------------------
Ratings are Opinion, not Fact
Someone needs to read my book and watch my GDC lectures where I actually use some of the things you mention as examples. *shrug*

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

This topic is closed to new replies.
