STRIPS planning implementation?

20 comments, last by alexjc 16 years, 3 months ago
Has anyone seen any code that I can wade through that has a data structure and implementation of the STRIPS planning algorithm? I would like to see how it has been done before rather than trying to build it from scratch.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Was there a particular reason you wanted to use STRIPS? It's not a good planning language, particularly in dynamic domains.
Just wanted to play with it to see what made it tick. Over at Alex's site (AIGameDev.com) there is an article about the AI for F.E.A.R. Jeff Orkin says in a doc file that the planning algorithm that they used most resembled STRIPS. He goes on to describe it somewhat. It was interesting to ponder the possibilities.

Do you have suggestions that would be better? I have an interest in goal-based planning at the moment.


I don't know of any public implementations, though I suspect many universities have them, so that's somewhere you can look. I went on to re-create a F.E.A.R.-like planning AI last winter (for fun). The planner itself is pretty trivial; it's simple backtracking. Just re-use your favorite A* implementation.
How did you go about setting up the preconditions, etc.? What sort of data structure? Anything I can look at?


You could just iterate through every action, and then for each action evaluate its preconditions to determine whether they are all true. Anything beyond that is an optimization.

A potentially more efficient method that works in general would be to have a list of actions associated with each precondition. When a precondition changes from true to false or vice versa, update all of the associated actions. If the last precondition for an action becomes true, add it to the list of valid actions. If a precondition for a previously valid action becomes false, remove it from the list of valid actions. Some domain knowledge might also allow for some optimizations here.
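The bookkeeping described above can be sketched as follows. This is a minimal illustration in Python (the thread uses no particular language); the class and method names are my own, not from any engine:

```python
from collections import defaultdict

class ActionIndex:
    """Keeps a set of actions per precondition so that a world-state
    change only touches the actions that depend on it."""
    def __init__(self, actions):
        # actions: dict of action name -> set of precondition names
        self.unmet = {name: set(pre) for name, pre in actions.items()}
        self.by_condition = defaultdict(set)
        for name, pre in actions.items():
            for cond in pre:
                self.by_condition[cond].add(name)
        # actions whose preconditions are all true right now
        self.valid = {n for n, u in self.unmet.items() if not u}

    def set_condition(self, cond, value):
        # update only the actions associated with this precondition
        for name in self.by_condition[cond]:
            if value:
                self.unmet[name].discard(cond)
                if not self.unmet[name]:     # last precondition became true
                    self.valid.add(name)
            else:
                self.unmet[name].add(cond)
                self.valid.discard(name)     # a precondition became false

idx = ActionIndex({"PickupWeapon": {"AtObject"}, "Attack": {"HaveWeapon"}})
idx.set_condition("AtObject", True)          # only PickupWeapon is re-checked
```

After the update, `idx.valid` contains only the actions whose preconditions all hold, without ever scanning the full action list.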
The F.E.A.R. planner absolutely works, but I've found it overkill for many problems.

Basically, designers generally know what they want to see behaviorally. Most STRIPS-style AI planning work I've done has amounted to using the system to emulate what an HTN planner represents more explicitly.

You might want to look at simple HTN planning or Halo 3 style trees instead. Both seem like better ways to directly encode design requirements.
There isn't much to it; most of it is in two classes:

A "State" class contains a list of world variables as <name, value> pairs; a list of conditions to satisfy (in the same form), meaning that to be satisfied, the variable with the same name as the condition must have a certain value; and a pointer to the previous State.

Then I have an "Action" class that contains:

- a list of conditions the action might affect (for optimisation purposes: I'll only consider an action if it might affect one of the current conditions).

- a list of conditions that need to be solved to apply this action

- a rule that tells whether it's a good idea to choose this action (variable cost for the action; a small improvement).

- an operator that will modify the current variables and conditions to generate a new state.
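A minimal Python sketch of the two classes above, under my own reading of the description (the field and method names are mine, and the fixed `cost` stands in for the "rule"):

```python
class State:
    """World variables plus the conditions still to satisfy, both as
    <name, value> pairs, with a pointer to the previous State."""
    def __init__(self, variables, conditions, previous=None):
        self.variables = dict(variables)
        self.conditions = dict(conditions)
        self.previous = previous

    def satisfied(self):
        return all(self.variables.get(n) == v
                   for n, v in self.conditions.items())


class Action:
    def __init__(self, name, effects, requires, cost=1):
        self.name = name
        self.effects = dict(effects)      # conditions this action might affect
        self.requires = dict(requires)    # conditions needed to apply it
        self.cost = cost                  # fixed cost standing in for the "rule"

    def relevant(self, state):
        # only worth considering if it could satisfy a current condition
        return any(state.conditions.get(n) == v
                   for n, v in self.effects.items())

    def apply(self, state):
        # the "operator": apply the effects, drop the conditions they satisfy,
        # and adopt this action's requirements as the new conditions to solve
        variables = {**state.variables, **self.effects}
        conditions = {n: v for n, v in state.conditions.items()
                      if variables.get(n) != v}
        conditions.update(self.requires)
        return State(variables, conditions, previous=state)


# e.g. PickupWeapon affects HaveWeapon and requires AtObject
pickup = Action("PickupWeapon", {"HaveWeapon": True}, {"AtObject": True})
need_weapon = State({}, {"HaveWeapon": True})
```

Applying `pickup` to `need_weapon` satisfies the `HaveWeapon` condition and leaves `AtObject` as the new condition to solve, which is what drives the backtracking.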

I load the actions from XML (a bit of overkill; I wanted to try some stuff). It looks like this:

<ACTION Name = "PickupWeapon">
	<ARG Name = "Weapon" />
	<CHANGE>
		<CONDITION Name = "HaveWeapon" />
	</CHANGE>
	<REQUIRE>
		<CONDITION Name = "AtObject" >
			<ARG Name = "Weapon" />
		</CONDITION>
	</REQUIRE>
	<RULE>
		<PREDICATE Name = "Always" />
	</RULE>
</ACTION>


to define the "PickupWeapon(weapon)" Action.

To plan, I just gather the original state and apply A*, using the actions as the edges of the graph... Go ahead; you'll be surprised how easy a simple planner like that is. 90% of the work is linking the planner to the game: gathering the state variables, executing the plan itself, and so on.
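To show how little code "A* with actions as edges" takes, here is a self-contained toy in Python. It searches forward over sets of true facts; the two example actions and their costs are invented for illustration, not taken from the thread:

```python
import heapq
import itertools

# Toy STRIPS-style actions: (name, preconditions, add-effects, delete-effects, cost).
ACTIONS = [
    ("GotoWeapon",   frozenset(),             frozenset({"AtObject"}),   frozenset(),             2),
    ("PickupWeapon", frozenset({"AtObject"}), frozenset({"HaveWeapon"}), frozenset({"AtObject"}), 1),
]

def plan(start, goal, actions):
    """A* over world states (frozensets of true facts), using actions as
    the edges of the graph. Heuristic: number of unsatisfied goal facts,
    which is fine for this toy."""
    h = lambda s: len(goal - s)
    tick = itertools.count()              # tie-breaker so the heap never compares states
    frontier = [(h(start), 0, next(tick), start, [])]
    best = {}                             # cheapest cost seen per state
    while frontier:
        _, g, _, state, path = heapq.heappop(frontier)
        if goal <= state:
            return path
        if best.get(state, float("inf")) <= g:
            continue
        best[state] = g
        for name, pre, add, delete, cost in actions:
            if pre <= state:              # preconditions hold in this state
                nxt = (state - delete) | add
                heapq.heappush(frontier, (g + cost + h(nxt), g + cost,
                                          next(tick), nxt, path + [name]))
    return None                           # no plan exists
```

For example, `plan(frozenset(), frozenset({"HaveWeapon"}), ACTIONS)` returns `["GotoWeapon", "PickupWeapon"]`. As the post says, the search itself is the easy part; hooking it up to real game state is where the work goes.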
Good advice and info, peeps.

One thing that I am interested in is goal-based planning. That is, backward search from the goal rather than forward search from the current state.

The concepts I understand, but I'm just looking for different ways of implementing the data structure and code. I will be doing some more research on it here soon.

Any of you folks going to GDC?


A simple implementation of that is easy.

That definition of goal-based planning should be pretty easy to implement. Just encode your goal in a search-space state. Have a method to determine the distance between your initial state and the goal state. Do a search (either forward or backward) between the states, using the actions as operators.
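One concrete way to do the "distance between states" part, in Python: count the goal facts the current state doesn't yet satisfy. This metric is my choice for illustration; the post doesn't prescribe one.

```python
def goal_distance(state, goal):
    """Number of goal facts the state doesn't yet satisfy -- a simple
    heuristic for guiding the search in either direction."""
    return sum(1 for name, value in goal.items()
               if state.get(name) != value)

current = {"HaveWeapon": False, "AtCover": True}
goal    = {"HaveWeapon": True,  "AtCover": True}
# only HaveWeapon differs, so the distance is 1
```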

At least in GOAP, goals were special because they were prioritized outside of the planning system. Basically, the AI decided what to do via goals and then how to do it via the planner.

The search direction itself shouldn't matter very much if you use an admissible search algorithm, at least behaviorally; the size of the space you search may vary.

