
# A* for multiple targets


## Recommended Posts

Hello all, I'm interested in using an A* planner to choose between different targets in a 2D grid world with a distance-based heuristic.

1) My first hypothesis is that A* works well for 2 or more targets if h(u,G) = min( distance(u,gi) ) for gi in G. I haven't run many tests, but it seems to work. Does that sound right to you?

2) My second hypothesis: for the two goals I defined a preference (a float) and tried to put it into the heuristic in order to drive the state exploration toward the most preferred goal: h(u,G) = min( distance(u,gi) / gi.preference ) for gi in G. But it doesn't work as I expected :( It seems the preference weighting makes my heuristic too small: the higher the preference, the closer h gets to 0, and the more A* degenerates into Dijkstra.

Does anyone have an idea what to do? Is the only way to select between my two weighted targets an a-priori selection before running A*, or can I use A* itself to make this choice?

Quote:
 Original post by mp3butcher: 1) My first hypothesis is that A* works well for 2 or more targets if h(u,G) = min( distance(u,gi) ) for gi in G. I haven't run many tests, but it seems to work. Does that sound right to you?
Yes. Mathematically, the distance to a set of points is precisely min(distance(u,gi)) = distance(u,G), so the heuristic remains admissible.
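A minimal sketch of that multi-target heuristic, assuming a 2D grid with Euclidean distances (the function name is illustrative):

```python
import math

def multi_target_heuristic(u, targets):
    """Smallest straight-line distance from cell u to any goal in
    `targets`.  This never overestimates the true remaining path
    cost, so A* with this heuristic stays optimal."""
    return min(math.dist(u, g) for g in targets)
```

For example, `multi_target_heuristic((0, 0), [(3, 4), (10, 0)])` returns `5.0`: the closer of the two goals dominates the estimate.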

Quote:
 2) My second hypothesis: for the two goals I defined a preference (a float) and tried to put it into the heuristic in order to drive the state exploration toward the most preferred goal: h(u,G) = min( distance(u,gi)/gi.preference ) for gi in G
You can't just divide like that. For A* to stay correct, the heuristic must never overestimate the true remaining distance (it must stay admissible), and dividing by an arbitrary preference does not guarantee that: with a preference below 1 the estimate gets inflated and A* breaks, while with a large preference it collapses toward 0 and you get Dijkstra, as you observed.

You could use a barycentric weighting instead:

h(u,G) = sum(distance(u,gi) * pref(gi)) / sum(pref(gi))

However, this doesn't solve your problem, since A* will still find the shortest path (which is independent of the heuristic) and so there will be no actual selection. What you can do is instead alter your graph so that you have only one new target, but every old target is connected to the new target with an edge—have the weight of that edge be larger for low-preference nodes and smaller for high-preference nodes.
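The graph trick can be sketched like this, using a toy dict-of-dicts graph and a plain Dijkstra search (the `SUPER` node name, the `prefs` values, and the bias formula are all illustrative assumptions, not from the posts above):

```python
import heapq

def shortest_path_cost(graph, start, goal):
    """Plain Dijkstra over a dict-of-dicts graph; a heuristic term
    could be added to turn this into A* without changing the idea."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Hypothetical example: two targets with preferences 2.0 and 1.0.
graph = {"start": {"g1": 4.0, "g2": 3.0}}
prefs = {"g1": 2.0, "g2": 1.0}
max_pref = max(prefs.values())

# Connect every real target to one virtual SUPER goal; a lower
# preference means a heavier "finishing" edge.
for g, p in prefs.items():
    graph.setdefault(g, {})["SUPER"] = (max_pref - p) * 2.0
```

With these numbers the path through g1 costs 4.0 + 0.0 and the path through g2 costs 3.0 + 2.0, so the search commits to the preferred target g1 even though g2 is physically closer.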

Thanks for this fast answer. It seems to work properly.
Do you have any literature references for this type of problem?
I haven't found any myself :(

I don't know of any. Try to think of the A* algorithm in terms of searching through a graph; I believe that could help you a bit.

Quote:
 Original post by mp3butcher: 2) My second hypothesis: for the two goals I defined a preference (a float) and tried to put it into the heuristic in order to drive the state exploration toward the most preferred goal: h(u,G) = min( distance(u,gi)/gi.preference ) for gi in G. But it doesn't work as I expected :( It seems the preference weighting makes my heuristic too small: the higher the preference, the closer h gets to 0, and the more A* degenerates into Dijkstra.

About this second question:
As mentioned before, changing the heuristic cost function won't be a good approach. I think the best solution would be to modify the algorithm's stop condition. You can stop the search after finding the most preferred node, or when you have already found a lesser solution and the overall cost of the nodes currently being searched is too high (so that even if a way to the best node were found, it would be too costly). The limit value for stopping the search should depend on the overall cost of reaching the already-found less-preferred node and that node's priority.
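One way to sketch this stopping rule is a Dijkstra-style expansion with a branch-and-bound cut-off; the `penalty` function (>= 1, down-weighting less preferred goals) and the graph shape are hypothetical illustrations of the idea:

```python
import heapq

def best_target(graph, start, targets, penalty):
    """Expand nodes in cost order and keep the best penalised goal
    seen so far.  Stop once every frontier entry already costs more
    than the best penalised score: since penalty(g) >= 1, no goal
    reached later could ever score better."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    best = (float("inf"), None)   # (penalised cost, goal)
    while pq:
        d, u = heapq.heappop(pq)
        if d >= best[0]:
            break                 # frontier is already too expensive
        if u in targets:
            score = d * penalty(u)
            if score < best[0]:
                best = (score, u)
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return best
```

For instance, with goals `a` (cost 1, heavy penalty 10) and `b` (cost 5, penalty 1), the search keeps going past `a` and correctly settles on `b` with a penalised score of 5.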

If you already have a single-target A* implementation, it might be easier to implement the search backwards: Start the algorithm by putting all the targets in the open list, and use the agent's position as the goal. In order to make a target less desirable, pretend that it already has some cost when you put it in the open list.
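A sketch of this backward idea, using a reversed dict-of-dicts graph (`graph_rev` holds each edge in the target-to-agent direction; `initial_cost` is the hypothetical per-target head start that makes a target less desirable):

```python
import heapq

def backward_search(graph_rev, targets, agent, initial_cost):
    """Multi-source Dijkstra seeded with every target, searching
    backwards toward the agent.  Each target starts with a bias
    cost instead of zero, so less desirable targets must be that
    much closer to win."""
    dist = {}
    pq = [(initial_cost(g), g) for g in targets]
    heapq.heapify(pq)
    for c, g in pq:
        dist[g] = min(dist.get(g, float("inf")), c)
    while pq:
        d, u = heapq.heappop(pq)
        if u == agent:
            return d              # cheapest biased target reached first
        if d > dist.get(u, float("inf")):
            continue              # stale queue entry
        for v, w in graph_rev.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

For example, if `g1` is 4.0 away with no bias and `g2` is 3.0 away with a bias of 2.0, the search reaches the agent through `g1` at total cost 4.0.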

Thanks for your post, Alvaro; it is a good idea.
The problem with backward search, though, is that you reason by induction rather than deduction: it removes the possibility of integrating a state variable into the cost function.
For example, I have a confidence index that is initialized to 0; when I find a sign in the environment I set it to its maximum and then decrease it over time.
I can integrate this confidence index into my cost function, but only in a forward search.
