# A little confusion with A*

## Recommended Posts

Does the A* algorithm support negative edges? The pseudocode seems to support them, to me (the way nodes are removed from the "closed" list and put back onto the "open" list), but I have a little confusion here. Suppose we have the situation below:
```
A ----> B <--.
|       |    |
|       v    |
'-----> C ---'
```


[Sorry for the poor text art.] A has an edge to B and to C, B has an edge to C, and C has an edge back to B. Now suppose A is popped first in the algorithm, so both B and C are pushed onto the open list. Say B has the minimum g value, so B is popped next. Now suppose the edge C→B is negative, or more generally that B's g value becomes smaller when you travel via C. In that case B will be removed from the closed list and put back onto the open list, so at some point we will process B again. WOULDN'T this create an infinite cycle? (Because this time the cost to reach C will be smaller than before, so C will be pushed onto the open list again.) Someone please clear this up for me.
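For concreteness, here is a minimal sketch of the situation above (a Dijkstra-style search with a zero heuristic and reopening; all edge costs are made-up illustration values). The C→B edge is negative, but the B→C→B cycle has positive total cost (4 - 2 = 2), so the search terminates; B is still closed and then reopened once, exactly as the post describes:

```python
import heapq

# Toy graph from the thread, with a hypothetical negative edge C -> B.
graph = {'A': [('B', 1), ('C', 2)], 'B': [('C', 4)], 'C': [('B', -2)]}

def search(start):
    g = {start: 0}
    closed = set()
    expansions = {}                      # how many times each node is expanded
    open_list = [(0, start)]             # (g, node); heuristic h = 0 here
    while open_list:
        cost, node = heapq.heappop(open_list)
        if cost > g.get(node, float('inf')):
            continue                     # stale queue entry, skip it
        closed.add(node)
        expansions[node] = expansions.get(node, 0) + 1
        for succ, w in graph.get(node, []):
            new_g = cost + w
            if new_g < g.get(succ, float('inf')):
                g[succ] = new_g
                closed.discard(succ)     # reopen: closed -> open
                heapq.heappush(open_list, (new_g, succ))
    return g, expansions

g, expansions = search('A')
print(g)                 # {'A': 0, 'B': 0, 'C': 2}
print(expansions['B'])   # 2: expanded once at g=1, again at g=0
```

If the cycle's total cost were negative, the `while` loop would never run out of improvements, which is the non-termination the thread is worried about.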

---
What do you mean by "-ve edges"? If you mean edges with negative cost, I don't think A* supports them.

---
What would be the point of a negative cost anyway? Would someone mind explaining that to me?

---
A* supports negative-cost edges, but you'll need to pick your heuristic very carefully. Remember: you want a heuristic that NEVER overestimates the cost. If that heuristic fails to take into account the potential for a negative-cost edge, you will run into problems.

---
DrEvil, a negative cost could represent some goodies that you can collect in some corridor. In the real world, some friends of mine bought a special train ticket that allowed them to travel through Europe for a month. Sometimes they took an overnight train they didn't need, so they could sleep on the train and not spend money on a hotel. That trip would have a negative cost.

Ahh gotcha

---
Ok, Sneftel, but what about the possibility of an INFINITE CYCLE? It seems there COULD be infinite cycles if the heuristic is not set carefully or if there are negative edges.

However, if we don't have negative edges (let's just assume we are careful enough with the heuristic), then why should we bother with the closed list at all? We can simply follow Dijkstra's method, i.e. "once a node is popped (relaxed), it's never relaxed again." Then we don't have to worry about taking a node from the closed list and putting it back onto the open list. Wouldn't that cut down on the space and time requirements of the algorithm?

---
Quote:
 Original post by browny
 Ok, Sneftel, but what about the possibility of an INFINITE CYCLE? It seems there COULD be infinite cycles if the heuristic is not set carefully or if there are negative edges.
Yep. In those cases there is no finite shortest path, so the algorithm has trouble. I don't know of any pathfinding algorithm that correctly handles this situation (and I suspect that detecting all such negative loops is NP-complete).

---
Instead of using negative costs, add the absolute value of the minimum cost to all of them so that every cost is greater than or equal to zero. It might not have exactly the same effect, but it should act similarly and would make things much easier to process (no need to worry about infinite loops, and there is a finite "best path" if there is one at all).
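The "might not have exactly the same effect" caveat is worth spelling out. A quick sketch (treating the offset as applied to each edge; all numbers are made up): adding a constant to every edge penalizes paths with more edges, so the cheapest path can change:

```python
# Two routes from S to G: a direct edge of cost 4, or a two-edge
# detour of cost 1 + 1 = 2.
direct = [4]
detour = [1, 1]
offset = 3  # hypothetical shift used to clear out negative costs

# Before the shift, the detour is cheaper (2 < 4)...
print(sum(detour) < sum(direct))                                          # True
# ...after it, the detour costs 8 while the direct edge costs 7.
print(sum(w + offset for w in detour) < sum(w + offset for w in direct))  # False
```

So the trick removes negatives, but it biases the search toward paths with fewer edges.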

---
If you want to take rewards into account when performing pathfinding, then one method is to define a value function over possible paths. There was a good thread earlier this year on this exact problem, wherein I explained the procedure and provided one such function for use with A*.

Here's the link

Cheers,

Timkin

---
There is a way of handling negative edges: the Floyd-Warshall algorithm, unless there exists a "negative cycle" somewhere (which can easily be checked by running the algorithm twice and seeing whether any value in the second run is lower than in the first).

The algorithm is very simple:

```python
# dist is an n x n matrix: dist[i][i] = 0, dist[i][j] = the cost of the
# edge from i to j, or math.inf where there is no edge.
for i in range(n):
    for j in range(n):
        for k in range(n):
            if dist[j][i] + dist[i][k] < dist[j][k]:
                dist[j][k] = dist[j][i] + dist[i][k]
```

dist[A][B] now holds the cost of the shortest path from A to B.

Unfortunately, it runs in O(n^3) time, which compares poorly with Dijkstra's algorithm and A*, and it has to be modified to recover the actual path.

But it's still enough for detecting negative cycles... :)

---
For single-source shortest paths with negative edges, handled in finite time, I think the Bellman-Ford algorithm is the best, though it runs in O(VE).
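A minimal sketch of Bellman-Ford with the standard negative-cycle check, run on the thread's A/B/C graph (the edge costs, including the negative C→B edge, are assumptions for illustration):

```python
import math

def bellman_ford(n, edges, source):
    """Single-source shortest paths, negative edges allowed.
    edges is a list of (u, v, w); returns (dist, has_negative_cycle)."""
    dist = [math.inf] * n
    dist[source] = 0
    # Relax every edge n - 1 times: O(V * E) overall.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still be relaxed, a negative cycle is reachable.
    cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, cycle

# A = 0, B = 1, C = 2, with a hypothetical negative edge C -> B.
edges = [(0, 1, 1), (0, 2, 2), (1, 2, 4), (2, 1, -2)]
dist, cycle = bellman_ford(3, edges, 0)
print(dist)   # [0, 0, 2]
print(cycle)  # False: the B->C->B cycle costs 4 - 2 = 2, which is positive
```

Unlike Floyd-Warshall's O(n^3), this answers the single-source case directly and reports negative cycles for free.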

---
And what about this?

---
Quote:
 Original post by browny
 However, if we don't have negative edges (let's just assume we are careful enough with the heuristic), then why should we bother with the closed list at all? We can simply follow Dijkstra's method, i.e. "once a node is popped (relaxed), it's never relaxed again." Then we don't have to worry about taking a node from the closed list and putting it back onto the open list. Wouldn't that cut down on the space and time requirements of the algorithm?

And what about this? Timkin... Sneftel... anyone?

---
The problem is that if you never move a node from closed back to open, you may fail to find the shortest path when the heuristic isn't perfectly accurate (and with a perfect heuristic you wouldn't need to search in the first place). A node might first be reached along an expensive route (one whose cost the heuristic underestimated) and later be reached again along a cheaper route. At that point the node should go back onto the open list, because the best known cost to reach it has dropped, and its neighbors (and their neighbors, and so on) probably need to be reprocessed.

---
Perhaps I've missed something here, but I don't see the need to move a node from the closed list to the open list. Whenever you generate a leaf node and find it already in the closed list, you test at that point which path cost to that node is lower. If the new path is cheaper, you swap the new node for the old one in the closed list and propagate the new costs down the subtree below it. There is no need to discard that subtree and place the rediscovered node back onto the open list. This should be no different when there are negative edge costs.

However, as has been pointed out, if there is a loop with path cost less than zero, then A* will not terminate, and neither will any other search method that seeks to minimise cost without regard to path length.

I seriously urge anyone dealing with rewards in their pathfinding to consider the problem not as simple pathfinding but, more correctly, as path planning. There is a distinct difference. Have a read of the thread I linked. Algorithms like A* can still be utilised to explore the plan or state space in such problems, but not without consideration of the nature of value within the problem space.

Cheers,

Timkin

---
I'm not sure how you would do the "propagate the new costs down the subtree below it" step. To me, propagating the new cost means putting the node found to have a lower cost onto the open list and then resuming the search. Is there a more efficient way of doing it? The change could propagate to many nodes, and it seems you'd essentially have to search all over again to figure out which nodes are affected by the cost change and which are not (unless you store something more than the closed list?).

---
Quote:
 Original post by Extrarius
 I'm not sure how you would do the "propagate the new costs down the subtree below it" step. To me, propagating the new cost means putting the node found to have a lower cost onto the open list and then resuming the search. Is there a more efficient way of doing it?

Yes, there is. Remember that each node's path cost is the sum of the cost from the root node to the current node (typically denoted g) plus the heuristic estimate of the cost from the current node to the goal node (h). Thus, any change to the path cost of an ancestor node affects the g cost of each of its descendants.

So, when you discover a new, lower-cost path to a node i (call its cost g'_i) while i is already in your closed list, simply traverse the subtree below that node and subtract from each descendant the difference between the old and new path costs, g_i - g'_i.

Quote:
 It could propagate to many nodes

Yes, but a data structure utilising pointers and a recursive traversal algorithm make this operation easy to manage. How much of a computational burden it is depends on the connectivity of your state space. However, it will always be cheaper than searching that subtree again, since the number of operations per node is greatly reduced.
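This propagation step can be sketched as follows. The `children` map (each closed node's successors in the search tree), the node names, and the costs are all assumptions for illustration:

```python
def propagate(node, delta, g, children):
    """Lower the g cost of every descendant of `node` by
    delta = g_i - g'_i (old cost minus new, lower cost)."""
    for child in children.get(node, []):
        g[child] -= delta
        propagate(child, delta, g, children)  # recurse down the subtree

# Example: B's cost drops from 5 to 3 (delta = 2); its whole subtree follows.
g = {'B': 3, 'C': 9, 'D': 12}      # B itself already updated to the new cost
children = {'B': ['C'], 'C': ['D']}
propagate('B', 2, g, children)
print(g)  # {'B': 3, 'C': 7, 'D': 10}
```

Note this relies on path cost being a plain sum of edge costs, which is the assumption questioned a few posts below.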

Timkin

---
So basically you store some kind of "closed tree" instead of a true "closed list"? I'm not so sure that is a good idea (at least in the specific case of a standard grid of nodes). I've always used a "closed bit" in each node instead of a full closed list, because it takes much less space and is much faster than maintaining a list, though it does limit the engine to one search at a time; a tree would be even worse. IME (with the typical grid of nodes often used in games), it is rather rare that a lower cost will be found for a node, and when it does happen it only needs to be propagated to a few nodes (maybe 30 max).

---
Quote:
 Original post by Timkin
 Yes, there is. Remember that each node's path cost is the sum of the cost from the root node to the current node (typically denoted g) plus the heuristic estimate of the cost from the current node to the goal node (h). Thus, any change to the path cost of an ancestor node affects the g cost of each of its descendants. So, when you discover a new, lower-cost path to a node i (call its cost g'_i) while i is already in your closed list, simply traverse the subtree below that node and subtract from each descendant the difference g_i - g'_i.
 Timkin

Timkin, you are making the (silent) assumption that the cost of a path g is a linear combination (sum) of all steps along the path, in which case it is possible to subtract g_i - g'_i from the costs of each descendant.
However, cost functions need not be a linear combination of all steps along the path: I've used cost functions that partially depended on the preceding nodes of the path (to differentiate between long and short stretches of cover), and I can think of other cost functions that do the same (vehicles preferring straight lines over tight curves). With these kinds of cost functions, moving a node from the closed list to the open list is the better approach.

William

---
I am trying to understand why A* is the best solution for this problem. It seems that you are trying to figure out the best set of actions to take between states S and G (best meaning greatest reward reaped), not specifically the shortest path between S and G.
Perhaps you would be better served by policy/value iteration on a Markov Decision Process (MDP), which works with rewards and penalties (thus solving the negative-value problem). Furthermore, an MDP might give you the best policy quicker than A*.
Then again, I am just a dumb college kid! Someone explain if I'm totally off, please!
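As a sketch of the value-iteration idea (the states, rewards, and discount factor here are made-up illustration values, not anything from the thread):

```python
# A tiny deterministic MDP: from S you can go to A (reward -1) or
# straight to G (reward -5); from A you can go to G (reward -1).
states = ['S', 'A', 'G']
actions = {'S': {'go_A': ('A', -1), 'go_G': ('G', -5)},  # (next state, reward)
           'A': {'go_G': ('G', -1)},
           'G': {}}                                      # terminal state
gamma = 1.0  # no discounting in this toy example

V = {s: 0.0 for s in states}
for _ in range(50):  # sweep until (well past) convergence
    for s in states:
        if actions[s]:
            V[s] = max(r + gamma * V[nxt] for nxt, r in actions[s].values())

print(V['S'])  # -2.0: the S -> A -> G plan beats the direct -5 edge
```

Negative rewards here play the role of costs, so "negative edges" need no special handling; the catch, as above, is that a reachable loop of positive reward would make values diverge, just like a negative cycle does for A*.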

I learned about MDP's from Dr. Vincent Cicirello, affiliated with Carnegie Mellon and Drexel Universities. I'm sure he'd welcome an e-mail, and even offer some of his notes on MDP's if you'd like them --> cicirello AT cs.drexel.edu

Hope this suggestion helps!

Best Regards,
Adam

