IADaveMark

STRIPS planning implementation?


Just wanted to play with it to see what made it tick. Over at Alex's site (AIGameDev.com) there is an article about the AI for F.E.A.R. Jeff Orkin says in a doc file that the planning algorithm that they used most resembled STRIPS. He goes on to describe it somewhat. It was interesting to ponder the possibilities.

Do you have suggestions that would be better? I have an interest in goal-based planning at the moment.

---

I don't know of any public implementations, though I suspect many universities have them, so that's somewhere you can look. I went and re-created a F.E.A.R.-like planning AI last winter (for fun). The planner itself is pretty trivial; it's simple backtracking. Just re-use your favorite A* implementation.

---

You could just iterate through every action, and then for each action evaluate its preconditions to determine if they are all true or not. Anything beyond that is an optimization.

A potentially more efficient method that works in general would be to have a list of actions associated with each precondition. When a precondition changes from true to false or vice versa, update all of the associated actions. If the last precondition for an action becomes true, add it to the list of valid actions. If a precondition for a previously valid action becomes false, remove it from the list of valid actions. Some domain knowledge might also allow for some optimizations here.
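
In sketch form, that indexing scheme might look something like this (Python just for illustration; all the names here are made up):

from collections import defaultdict

class Action:
    def __init__(self, name, preconditions):
        self.name = name
        self.preconditions = set(preconditions)
        # assume every precondition starts out false
        self.unsatisfied = len(self.preconditions)

class ActionIndex:
    def __init__(self, actions):
        self.by_precondition = defaultdict(list)
        self.valid = {a for a in actions if a.unsatisfied == 0}
        for a in actions:
            for p in a.preconditions:
                self.by_precondition[p].append(a)

    def on_condition_changed(self, condition, now_true):
        # call this only on actual true<->false flips
        for a in self.by_precondition[condition]:
            a.unsatisfied += -1 if now_true else 1
            if a.unsatisfied == 0:
                self.valid.add(a)      # last precondition just became true
            else:
                self.valid.discard(a)  # a precondition is false again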

---

The F.E.A.R. planner absolutely works, but I've found it overkill for many problems.

Basically, designers generally know what they want to see behaviorally. Most STRIPS-style AI planning work I've done has basically been using the system to emulate what an HTN planner represents more explicitly.

You might want to look at simple HTN planning or Halo 3-style behavior trees instead. Both seem like better ways to directly encode design requirements.

---

There isn't much to it; most of it is in two classes:

A "State" class contains a list of world variables in the form of a <name, value> pair, a list of the conditions to satisfy (in the same form), meaning to be satisfied the variable with the same name as the precondition must have a certain value, and a pointer to the previous State.

Then I have an "Action" class that contains:

- a list of conditions the action might affect (for optimisation purposes: I'll only consider an action if it might affect one of the current conditions).

- a list of conditions that need to be solved to apply this action

- a rule that tells whether it's a good idea to choose this action (a variable cost for the action; a small improvement).

- an operator that will modify the current variables and conditions to generate a new state.
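
In rough Python-flavored sketch form (a minimal sketch of my reading of this design, not actual code; names are illustrative), the two classes look something like this:

class State:
    def __init__(self, variables, conditions, previous=None, via_action=None):
        self.variables = dict(variables)    # world variables: {name: value}
        self.conditions = dict(conditions)  # conditions to satisfy: {name: value}
        self.previous = previous            # pointer to the previous State
        self.via_action = via_action        # name of the action that got us here

    def satisfied(self):
        # satisfied when every condition's variable has the required value
        return all(self.variables.get(n) == v for n, v in self.conditions.items())

class Action:
    def __init__(self, name, affects, requires, cost_rule, operator):
        self.name = name
        self.affects = set(affects)     # condition names this action might affect
        self.requires = dict(requires)  # conditions needed to apply this action
        self.cost_rule = cost_rule      # State -> cost ("is this a good idea?")
        self.operator = operator        # State -> new State

    def relevant(self, state):
        # the optimisation above: skip actions that can't affect a current condition
        return bool(self.affects & set(state.conditions))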

I load the actions from XML (a bit of overkill, but I wanted to try some stuff); it looks like this:


<ACTION Name = "PickupWeapon">
<ARG Name = "Weapon" />
<CHANGE>
<CONDITION Name = "HaveWeapon" />
</CHANGE>
<REQUIRE>
<CONDITION Name = "AtObject" >
<ARG Name = "Weapon" />
</CONDITION>
</REQUIRE>
<RULE>
<PREDICATE Name = "Always" />
</RULE>
</ACTION>


to define the "PickupWeapon(weapon)" Action.

To plan, I just gather the original state and apply A*, using the actions as the edges of the graph... Go ahead; you'll be surprised how easy a simple planner like that is. 90% of the work is linking the planner to the game: gathering the state variables, executing the plan itself, and so on.
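
A minimal sketch of that loop, assuming the State/Action classes sketched above (here the search regresses backward from the goal conditions, which is where the affects-list pruning pays off; the helper and example actions are made up for illustration):

import heapq
import itertools

def plan(start, actions):
    """A* over States; `start` holds the current world variables plus the
    goal conditions. Returns action names in execution order, or None."""
    tie = itertools.count()  # tie-breaker so the heap never compares States
    frontier = [(0, next(tie), 0, start)]
    visited = set()
    while frontier:
        _f, _, g, state = heapq.heappop(frontier)
        if state.satisfied():
            # walk back the previous-State pointers; since we regressed from
            # the goal, this walk already yields execution order
            steps = []
            while state.previous is not None:
                steps.append(state.via_action)
                state = state.previous
            return steps
        key = frozenset(state.conditions.items())
        if key in visited:
            continue
        visited.add(key)
        for action in actions:
            if not action.relevant(state):
                continue  # prune: can't satisfy any current condition
            succ = action.operator(state)
            cost = g + action.cost_rule(state)  # the per-action "rule" as a cost
            h = sum(1 for n, v in succ.conditions.items()
                    if succ.variables.get(n) != v)  # unsatisfied conditions left
            heapq.heappush(frontier, (cost + h, next(tie), cost, succ))
    return None  # no plan found

def make_regress_op(name, effects, requires):
    # generic operator: drop the conditions this action's effects satisfy and
    # add its requirements as new conditions (a real implementation should
    # also check that requirements don't conflict with remaining conditions)
    def op(s):
        remaining = {n: v for n, v in s.conditions.items() if effects.get(n) != v}
        return State(s.variables, {**remaining, **requires},
                     previous=s, via_action=name)
    return op

# Example in the spirit of the PickupWeapon XML above:
pickup = Action("PickupWeapon", affects={"HaveWeapon"},
                requires={"AtObject": "Weapon"}, cost_rule=lambda s: 1,
                operator=make_regress_op("PickupWeapon",
                                         {"HaveWeapon": True},
                                         {"AtObject": "Weapon"}))
goto = Action("GotoWeapon", affects={"AtObject"}, requires={},
              cost_rule=lambda s: 1,
              operator=make_regress_op("GotoWeapon", {"AtObject": "Weapon"}, {}))

start = State(variables={}, conditions={"HaveWeapon": True})
print(plan(start, [pickup, goto]))  # -> ['GotoWeapon', 'PickupWeapon']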

---

Good advice and info, peeps.

One thing that I am interested in is goal-based planning. That is, backward search from the goal rather than forward search from the current state.

The concepts I understand, but I'm just looking for different ways of implementing the data structure and code. I will be doing some more research on it here soon.

Any of you folks going to GDC?

---

A simple implementation of that is easy. Just encode your goal as a search-space state, have a method to determine the distance between your initial state and the goal state, and do a search (either forward or backward) between the states using the actions as operators.

At least in GOAP, goals were special because they were prioritized outside of the planning system. Basically, the AI decided what to do via goals, and then how to do it via the planner.
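
To sketch that split (borrowing the plan()/State sketch from the post above; the goal tuples here are made up for illustration):

def choose_and_plan(world_vars, goals, actions):
    """goals: list of (priority, relevance_test, goal_conditions) tuples."""
    for _priority, is_relevant, conditions in sorted(goals, key=lambda g: -g[0]):
        if not is_relevant(world_vars):
            continue  # goal doesn't apply right now (e.g. no enemy in sight)
        steps = plan(State(world_vars, conditions), actions)
        if steps is not None:
            return steps  # highest-priority goal we can actually plan for wins
    return None  # nothing relevant and achievable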

The search direction itself shouldn't matter very much if you use an admissible search algorithm. At least behaviorally - the size of the space you search may vary.

---

If you're interested in understanding STRIPS-style planners then yes, it's best just to implement your own. I'm not aware of any publicly available at the moment, mostly because those working in the planning community have been focusing on post-STRIPS-style planners for the past decade or so.

Just on the terminology, which might help your lit review... searching from the goal backwards is generally called "regression planning", as opposed to progression or forward planning (planning from the starting state).

Cheers,

Timkin

---

Quote:
Original post by BrianL
The search direction itself shouldn't matter very much if you use an admissible search algorithm. At least behaviorally - the size of the space you search may vary.


May vary a *lot*, since you have to consider all actions at every step if you search forward, instead of just those actions that solve a current goal when searching backward, no? Friends of mine always suggest doing a bi-directional search and picking the first one that finds a plan, or merging the plans if they meet, but I've found that regression search / backtracking alone is always faster when using the optimization described above. It limits the plan-space a bit, of course...

---

Quote:
Original post by BrianL
Wouldn't an informed forward search (ie A*) address the 'only consider actions which solve towards the goal' issue?

Since the possibility space is not geometric, there isn't really a good heuristic that can tell the algorithm whether you are moving "towards the goal".

---

Yeah. For the heuristic, you would have to look at actions that solve subgoals of actions that solve subgoals of actions that solve... and so on, up to one of the current goals. You would have to plan in order to plan. A forward search is much more of a "blind search", in the action space at least.

---

Quote:
Original post by Steadtler
Yeah. For the heuristic, you would have to look at actions that solve subgoals of actions that solve subgoals of actions that solve... and so on, up to one of the current goals. You would have to plan in order to plan. A forward search is much more of a "blind search", in the action space at least.

I don't remember where I heard it, but someone suggested an analogy of finding an ant on the tip of a leaf of a tree by starting at the trunk. The ant has a better chance of finding you at the trunk than you have of finding the ant. That doesn't completely fit, but it's something to think about.

---

Quote:
Original post by alexjc
If performance is a concern, you should be thinking about hierarchical planners anyway rather than forwards/backwards A* :-)

Alex

Yes and no. You can only take a hierarchy so far up. At some point, you still have to determine what components to use and what order to put them in. I agree with the premise, however, that the expense comes from planning over only the most granular-level actions.

---

Quote:
Original post by alexjc
If performance is a concern, you should be thinking about hierarchical planners anyway rather than forwards/backwards A* :-)

Alex


We're not talking about a simple performance tradeoff here. Forward search is exponentially more costly than backtracking with that optimization, so it's a matter of even being able to find a plan at all.

I also think the expressiveness gap between hierarchical planners and regression planners is far greater than the gap between regression planners and forward planners.

---

There is an implementation of a GOAP-like algorithm, with source:
report - http://www.edmundlong.com/edsWiki/
source - http://www.edmundlong.com/GOAPSystemMasters.rar
It's a 2D deathmatch bot game; each team of bots has its own AI, drawn from: FSM, GOAP, team FSM, team GOAP.

---

Heh, actually my article was written after I read Ed Long's blog about his thesis (mentioned here), and he mentioned the F.E.A.R. source.

I didn't follow up as I figured you read the original article anyway ;-)

Alex
