Goals and Hierarchical Planning


Hey guys, I've been working on a planning system for the enemies and NPCs in my RPG and was hoping for some thoughts, especially from anyone who has tried something like this before and can say how well it worked.

Objective: I wanted the non-player characters in the game to be able to map out and execute a plan, given the actions they can perform and the opportunities/obstacles around them.

Jump off: I started with a STRIPS-like system where the agent picked a target (some other entity, e.g. the player, an apple, a coin) and created a state of the world consisting of true/false values. Then, given a goal state similar in structure to the world state, the planner applied predefined 'actions' or 'moves' (each with conditions and effects) to the world state until it equaled the goal state, resulting in a series of moves, i.e. a plan.
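
To make that concrete, here is a minimal sketch of the kind of structures I mean; the class and method names are illustrative, not my actual code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Minimal STRIPS-style sketch: a world state is the set of facts currently
// true, and an action has preconditions it checks plus add/delete effects
// it applies.
class WorldState {
    private final Set<String> facts = new HashSet<>();

    WorldState(String... initialFacts) {
        facts.addAll(Arrays.asList(initialFacts));
    }

    boolean holds(String fact)          { return facts.contains(fact); }
    void add(String fact)               { facts.add(fact); }
    void remove(String fact)            { facts.remove(fact); }
    boolean satisfies(Set<String> goal) { return facts.containsAll(goal); }
}

class Action {
    final String name;
    final Set<String> preconditions;
    final Set<String> addEffects;
    final Set<String> deleteEffects;

    Action(String name, Set<String> pre, Set<String> add, Set<String> del) {
        this.name = name;
        this.preconditions = pre;
        this.addEffects = add;
        this.deleteEffects = del;
    }

    boolean applicable(WorldState state) {
        return preconditions.stream().allMatch(state::holds);
    }

    void apply(WorldState state) {
        deleteEffects.forEach(state::remove);
        addEffects.forEach(state::add);
    }
}
```

The planner itself is then just a search over sequences of these actions until the state satisfies the goal set.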

Progress: At this stage I have a working prototype, but it is by no means finished; in fact, each time I run it, it seems to get a little less robust.

Using the idea from the STRIPS system (but none of the code), I am now using a complex Noun Phrase and Verb Phrase pairing to describe the objects in the world and their values. The actions and goal states also consist of noun-phrase/verb-phrase statements. The noun phrase is either specific, holding a unique id or pointer to an actual entity in the game world, or a 'variable'. The variable (used only in actions, rules, and goal states) enables a sort of first-order logic, allowing conditions such as "If there is any entity that is (1) not me, (2) not on my team, and (3) still alive, then my goal is to kill that entity." The verb phrase, depending on which part of the statement it's in, can either declare a property, check a property, or alter a property.
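
A stripped-down sketch of what such a statement looks like; the names here are hypothetical stand-ins for my real (messier) classes:

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative only -- the class and field names are hypothetical.
// An entity in the world with a few properties the planner cares about.
class Entity {
    String id;
    String team;
    boolean alive;
}

// A noun phrase either points at one specific entity or stands in as a
// variable the planner may bind to any entity that passes the conditions.
class NounPhrase {
    final Entity specific;      // non-null when referring to a concrete entity
    final String variableName;  // non-null when used as a first-order variable

    private NounPhrase(Entity specific, String variableName) {
        this.specific = specific;
        this.variableName = variableName;
    }

    static NounPhrase of(Entity e)          { return new NounPhrase(e, null); }
    static NounPhrase variable(String name) { return new NounPhrase(null, name); }

    boolean isVariable() { return variableName != null; }
}

// A verb phrase checks a property when used in a condition and alters it
// when used in an effect; in the world state it simply declares the value.
class VerbPhrase {
    final Predicate<Entity> check;
    final Consumer<Entity> alter;

    VerbPhrase(Predicate<Entity> check, Consumer<Entity> alter) {
        this.check = check;
        this.alter = alter;
    }
}

// The "kill any living enemy" goal from above: a variable noun phrase plus
// a verb phrase whose check encodes (1) not me, (2) not my team, (3) alive.
class GoalExample {
    static VerbPhrase killTarget(Entity me) {
        return new VerbPhrase(
                e -> e != me && !e.team.equals(me.team) && e.alive,
                e -> e.alive = false); // effect once the plan succeeds
    }
}
```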

Hierarchical: Actions can be either compound, describing a condition and an effect but not specifically how the agent will perform it, or primitive, describing a condition and an effect along with an executable block that performs the action in-game. This way, the agent can quickly (usually within two or three moves) determine broadly whether it can accomplish its goal or not. Once it has a broad plan, it can break the plan down, treating the compound actions as sub-goals and solving those with more specific, primitive actions.
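
In code the split looks roughly like this (a simplified sketch reusing the WorldState type from the earlier snippet, not my actual hierarchy):

```java
import java.util.Set;

// A compound action only promises an effect; during refinement it is treated
// as a sub-goal and decomposed into primitive actions that carry the actual
// in-game behaviour.
abstract class PlannerAction {
    abstract boolean conditionMet(WorldState state);
    abstract void applyEffect(WorldState state);
}

// Example of a compound action: "get adjacent to the target". The planner
// knows what it achieves; a second planning pass works out how, by solving
// the sub-goal with primitive actions only.
abstract class CompoundAction extends PlannerAction {
    abstract Set<String> subGoal();
}

// A primitive action additionally knows how to execute in the game loop.
abstract class PrimitiveAction extends PlannerAction {
    abstract void execute();
}
```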

Biggest Foreseeable Problem: Processing power. Even with a heuristic, the system must essentially check every action against every entity for every goal. The cost can be reduced by quickly skipping actions and entities that show zero promise or are simply not applicable.
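
The early-out I have in mind is just a cheap relevance pass before the expensive matching, something like this (illustrative only, built on the simplified Action sketch above):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Discard actions whose add-effects cannot possibly contribute to the goal
// before doing the expensive condition matching against every entity.
class RelevanceFilter {
    static List<Action> relevantActions(List<Action> actions, Set<String> goal) {
        return actions.stream()
                .filter(a -> a.addEffects.stream().anyMatch(goal::contains))
                .collect(Collectors.toList());
    }
}
```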

I'll clean up my code and post links to it later if there's enough interest; it's all written in Java.


It sounds like you have the basic mechanism/structure for the AI planner.

Now the real difficulty is the decision metrics that 'pick' (evaluate) targets in your current situation.

That's where the irregularities of the game mechanics cause lots of problems: how much can you limit it to simple true/false combinatorics for simple decisions, versus requiring more complicated discrete-valued factors that eventually have to be reduced to a single 'best' judgement value?
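
One common way to reduce several such factors to a single judgement value is a weighted score along these lines (the factors and weights are placeholders, to be tuned for the actual game):

```java
// Collapse several discrete factors into one 'best' judgement value.
class TargetEvaluator {
    static double score(double distance, double threatLevel, double reward) {
        final double wDistance = -0.5;  // farther targets score lower
        final double wThreat   = -1.0;  // more dangerous targets score lower
        final double wReward   =  2.0;  // more valuable targets score higher
        return wDistance * distance + wThreat * threatLevel + wReward * reward;
    }
}
```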

Consider also that situations change, so think about how frequently you may have to re-evaluate the situation and possibly change an object's plan/actions (judging how often to re-evaluate, and what triggers it, is a whole separate judgement operation).

Usually this also involves a notion of inertia for a goal, since changing the actions/plan loses the effort already spent towards the current goal.

Similarly, in a dynamic environment, maintaining a flexible stance pays off better than strictly optimal planning.
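
A simple way to get that inertia is hysteresis: only switch goals when a rival scores better than the current goal by a margin tied to the effort already invested (sketch with made-up numbers):

```java
// Goal switching with hysteresis: a candidate goal must beat the current one
// by a margin proportional to the effort already sunk into the current plan.
class GoalArbiter {
    double currentGoalScore;
    double investedEffort;   // e.g. time or resources already spent on the plan

    boolean shouldSwitch(double candidateGoalScore) {
        final double switchMargin = 0.25 * investedEffort; // tuning knob
        return candidateGoalScore > currentGoalScore + switchMargin;
    }
}
```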


