A question about behavior trees


Hi everyone,

I've recently been reading about behavior trees for AI and how they work. There's lots of great stuff on here.

However, I'm stuck on a question.

Imagine I have an AI truck, that needs to drive to point A, load up some goods, then drive to point B and unload them. These are laid out appropriately as leaf nodes in a behavior tree.

What happens when an action takes a large amount of time? For instance, in each game loop, no other nodes will be tested whilst the "drive from point A to point B" node continues to execute. But what happens if the truck is attacked by bandits halfway through?

There may well be a whole new set of rules for handling this situation, but I'm always executing this "transport" node until it's complete.

My only thought is that rather than have a node that says "drive from point A to point B", it would become "move 5 feet in the direction of point B as long as there are no bandits around... and there's still fuel... and the driver is not tired". My concern is that's a lot of questions to ask, each game loop, for each AI entity.
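
Something like this is what I have in mind, as a rough C++ sketch (banditsNearby(), fuelRemaining, and driverTired are names I've just made up for the example):

```cpp
#include <cmath>

enum class Status { Success, Failure, Running };

struct Truck {
    float x = 0, y = 0;
    float fuelRemaining = 100.0f;
    bool  driverTired = false;
    bool  banditsNearby() const { return false; } // stubbed for the example
};

// One tick of the "move towards" leaf: re-check every condition,
// take one small step, and report Running so the tree ticks it again.
Status moveTowards(Truck& truck, float targetX, float targetY, float step) {
    if (truck.banditsNearby() || truck.fuelRemaining <= 0.0f || truck.driverTired)
        return Status::Failure;                    // bail out immediately

    float dx = targetX - truck.x;
    float dy = targetY - truck.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist <= step) {                            // close enough: finish
        truck.x = targetX;
        truck.y = targetY;
        return Status::Success;
    }
    truck.x += step * dx / dist;                   // one small step...
    truck.y += step * dy / dist;
    truck.fuelRemaining -= 0.1f;
    return Status::Running;                        // ...tick me again next loop
}
```

The idea being that returning Running instead of blocking lets the rest of the tree be considered every loop.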

Is this the right way to handle these long actions, by chopping them up so that no single node runs for long? Or is there a better way for this scenario?

Regards,

Luca


It's what makes AI such a difficulty. Depending on the complexity of the game (the different actions and objects and the ways they interact), the work needed to handle such situations so the AI seems 'smart' increases geometrically.

You might want to investigate an area of AI called 'planners', which break problems down into solutions in a hierarchical way (generalized 'solutions' get reused), along with methods of evaluating a situation to pick what the object should try to do.

The planner evaluates the appropriateness and best fit of a solution to a specified situation, then picks a 'solution', which can itself be an FSM or behavior tree. That tree is now simpler and smaller, made for that specific type of problem, so you don't have humongous behavior trees that need to handle all the different situational cases.

Part of behavior, as you noticed, is contingencies: what to do when some solution that is being carried out gets interrupted. The planner level is used for that part of the logic. It allows repeated reevaluation and choosing a solution for the current situation, then resuming the original goals once the current priority is done with. The specific solutions can then be in BT form (which by itself isn't as versatile at handling interruptions, retries, etc.).

You don't have to actually use a 'planner', but studying how planners break behavior problems down and handle them can give you ideas about what's needed to organize the logic for your AI.
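
To make the interrupt/resume part concrete, here is a toy sketch (the Behavior and Planner types are invented for illustration, not taken from any real planner):

```cpp
#include <stack>
#include <memory>

struct Behavior {                       // stands in for a small BT or FSM
    virtual ~Behavior() = default;
    virtual bool tick() = 0;            // returns true when finished
};

struct Planner {
    std::stack<std::unique_ptr<Behavior>> goals;

    // A contingency (bandits!) pushes on top; the original goal
    // stays parked underneath until the urgent one completes.
    void interruptWith(std::unique_ptr<Behavior> urgent) {
        goals.push(std::move(urgent));
    }

    void tick() {
        if (goals.empty()) return;
        if (goals.top()->tick())        // current priority done with?
            goals.pop();                // resume whatever was underneath
    }
};
```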

Whatever complexity the AI is meant to be capable of, one thing you will notice is that the hard part is not so much the solutions themselves (DO X, then DO Y, then DO Z, with retries handled for each DO), but HOW the evaluation is made of WHAT to do at any point in time: which thing has the priority. Coming up with a system to determine an importance/priority metric (basically a single number for each potential approach) IS the actual difficulty when there are so many factors to consider and so many potential goals.

Even deciding how often to reevaluate a situation and change course is a problem in itself. Planners reevaluate all options, and it can be prohibitive performance-wise to do that every cycle, so you have to ask: when has the environment changed ENOUGH to warrant stepping back and seeing if a better course of action is called for?
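
As a rough illustration of both points (the single-number metric, and only re-planning when things have shifted enough), something like this, where all the weights and the 0.2 threshold are made-up numbers:

```cpp
#include <vector>
#include <algorithm>
#include <cmath>

struct Situation {
    float threatLevel;  // 0..1
    float cargoValue;   // 0..1
    float fuel;         // 0..1
};

struct Option {
    const char* name;
    float score;        // the importance/priority metric: one number
};

// Boil every potential approach down to a single comparable number.
Option evaluate(const Situation& s) {
    std::vector<Option> options = {
        { "flee",      s.threatLevel * 1.5f },
        { "transport", s.cargoValue * (1.0f - s.threatLevel) },
        { "refuel",    (1.0f - s.fuel) * 0.8f },
    };
    return *std::max_element(options.begin(), options.end(),
        [](const Option& a, const Option& b) { return a.score < b.score; });
}

// Cheap per-cycle test; the full evaluate() only runs when this trips.
bool changedEnough(const Situation& now, const Situation& last) {
    return std::fabs(now.threatLevel - last.threatLevel) > 0.2f ||
           std::fabs(now.fuel - last.fuel) > 0.2f;
}
```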

--------------------------------------------
Ratings are Opinion, not Fact

You did hit on the correct solution in that the behavior is not "move to A" or "move to B" -- but rather "move in the direction of A/B". In fact, it is likely better to have a single node that is "move toward current long term destination" and have that destination be set elsewhere. Regardless, when the bandits attack, they trigger all the higher priority defense behaviors. When those are no longer valid (e.g. no more bandits), the highest remaining behavior is "move in the direction..." and the caravan resumes.

The actual switching of destinations would be a higher priority behavior than "move in the direction of", with "have I arrived?" as its condition. If you have arrived at the specified destination, you would then run whatever logic selects a new one (even if it is just flipping A/B). Now, since you are no longer at your specified destination, "move towards" is the highest ranking behavior and off you go.
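
Roughly, the arrangement looks like this; a minimal sketch assuming a simple priority selector, with all the names being placeholders rather than any specific BT library:

```cpp
#include <vector>
#include <functional>

struct PriorityEntry {
    std::function<bool()> relevant;   // "bandits nearby?", "arrived?", etc.
    std::function<void()> act;        // the behavior itself
};

// Tick children in priority order; run the first relevant one and stop.
void tickSelector(const std::vector<PriorityEntry>& entries) {
    for (const auto& e : entries) {
        if (e.relevant()) { e.act(); return; }
    }
}

// For the caravan, highest priority first:
//   1. defend            -- relevant while bandits are around
//   2. pick destination  -- relevant once we have arrived (flip A/B)
//   3. move towards      -- relevant whenever we are not at the destination
```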

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

