
#### Archived

This topic is now archived and is closed to further replies.

# A* and monotonically increasing cost functions?


## Recommended Posts

Hi! I was wondering... if the cost along a path may not be monotonically increasing, we can't be sure to reach the best path without considering all options, right? Would this make A* very inefficient? Or are negative node costs commonly used with A* pathfinding (accepting, of course, that there could be a much better path that wasn't found)? Thanks, Cesar

##### Share on other sites
In general, doing a graph search with negative cost edges (or vertices) is tricky. For example, you may get a cycle of edges with negative cost, which means that you have to go around that cycle infinitely many times and then jump out and proceed to the destination in order to get the path with the least cost. This would cause A* to never return. One algorithm that does handle negative edges properly (and can detect a negative weight cycle) is the Bellman-Ford algorithm, but it runs in O(V*E) time which can be very slow for a large graph.

[edited by - Matei on May 6, 2004 2:26:00 PM]
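As a minimal sketch of the algorithm mentioned above (the function name and the edge-list representation are my own, not from this thread), Bellman-Ford relaxes every edge V-1 times and flags a negative-weight cycle with one extra pass:

```python
# Hypothetical illustration: Bellman-Ford with negative-cycle detection.
# `edges` is a list of (u, v, weight) triples over vertices 0..n-1.
def bellman_ford(n, edges, source):
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # Relax every edge n-1 times: O(V*E) overall.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If any edge can still be improved, a negative-weight cycle
    # is reachable from the source.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None
    return dist
```

With a negative cycle on the route, this returns `None` instead of looping forever, which is exactly the failure mode A* cannot detect on its own.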

##### Share on other sites
Yes, I know you can run into a loop. But it's possible to make it work if you, for example, make sure not to create a cycle, or do other, simpler things (like changing the cost after you cross a negative-valued node). What I wanted to know is whether it is actually used...

Let me explain. I'm new to game AI. I want to make a node more attractive (this sounds weird!) and I thought I could use the pathfinding algorithm itself, instead of layering some other technique on top of it. A node with a negative value would be a point of attraction, right? But then it occurred to me that things could get weird, and that for the usual A* to work we would need a monotonically increasing path cost...

If a negative cost is not the way to go, what is then? There must be a way to combine this with pathfinding...

##### Share on other sites
Are you looking at a 'good place to move to', or are you trying to make certain paths 'better' than others?

If I am following what you are saying, it sounds like you were definitely on the right track. Avoid transforming a weight's value additively. Scale it instead, as this maintains the true cost but allows modification.

For instance, take the cost you have by default (based on Manhattan distance, or whatever you use) and scale it. A path you determine to be particularly desirable could have its weight scaled by 0.9. Compared to an identical path next to it, the AI would prefer the path you scaled by 0.9.
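To make the scaling idea concrete, here is a small sketch (the grid, the function names, and the test layout are my own assumptions, not Brian's): an A* over per-cell entry costs, where scaling one cell to 0.9 pulls equal-length paths toward it without any cost going negative. Note one subtlety: once a cell can cost less than 1, the plain Manhattan heuristic would overestimate, so it is scaled by the cheapest cell cost to stay admissible.

```python
import heapq

# Hypothetical sketch: A* on a 4-connected grid of per-cell entry costs.
def a_star(costs, start, goal):
    rows, cols = len(costs), len(costs[0])
    min_cost = min(min(row) for row in costs)

    def h(p):
        # Manhattan distance scaled by the cheapest cell, so the
        # heuristic never overestimates the true remaining cost.
        return min_cost * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))

    frontier = [(h(start), 0.0, start, [start])]
    closed = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in closed and closed[node] <= g:
            continue  # stale entry; a cheaper route was already expanded
        closed[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + costs[nr][nc]
                heapq.heappush(
                    frontier,
                    (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None
```

On a uniform 3x2 grid the scaled cell wins the tie against an otherwise identical route, which is the behaviour described above.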

##### Share on other sites
You are right, Brian, I'm trying to make certain paths better than others. But I'm not sure I understand what you are saying.

If I change a node's cost without making it negative, it doesn't change the regular path much. Think about it: will you go out of the best path to go through a node with cost zero? No, unless the cost to get there is zero too (or the node is right next to the best path).

I could multiply the whole path by a factor dependent on the nodes. But then I wouldn't know the cost by the time I reach a particular node, which is very important for the algorithm to work, as you know. (If I apply the factor when I first hit a special node, the cost will be decreasing, with all the implications already discussed...)

##### Share on other sites
Cesar: Think of your problem not as one of minimising cost but one of maximising value. Either directly transform all costs into values, or implement a second utility variable directly (although now you require a function of two utilities over which to evaluate possible paths).

My recommendation would be to transform your costs into rewards using a reciprocal relationship: e.g., R(i) = A(i) + 1/B(i), where B(i) is the resource cost of a path through node i, A(i) is the reward given for using node i in the path, and R(i) is the total reward for node i. Now search for a path that maximises R(goal). This is fairly trivial to implement, although even more trivial if you invert it again so that it is a minimisation problem once more. Thus,
    C(i) = 1/R(i) = B(i) / (A(i)*B(i) + 1)

You can see that if we remove the reward for using a node, A, by setting it to zero, this reduces to the usual cost function for your search: simply C(i) = B(i). You should also be able to see that as A increases, the overall cost decreases, making node i more attractive.

You can play around with different forms of the original equation R(i), but this is the most basic and the one that I have found to work particularly well.

Cheers,

Timkin
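A one-line sketch of the reciprocal transform in this post (the function and argument names are my own): with reward A(i) and base cost B(i), the combined cost is C(i) = 1/R(i) = B(i) / (A(i)*B(i) + 1).

```python
# Hypothetical helper: collapse a node's base cost B and reward A into
# a single non-negative cost C = B / (A*B + 1) = 1 / R.
def combined_cost(base_cost, reward):
    return base_cost / (reward * base_cost + 1)
```

With reward 0 this is just the base cost, and the cost shrinks toward zero (but never goes negative) as the reward grows, so A* keeps a monotonically non-decreasing path cost.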

##### Share on other sites
It's a very nice solution, but it works better for maximizing reward than for minimizing cost, for the same reason I gave for Brian's multiplier idea (at first I thought it would work, because maximizing reward seems to be the same as minimizing cost).

Imagine that the best path, with all nodes having reward zero, is composed of nodes A, B, C, D and E, all with the same base cost. Now suppose node F has a huge reward. Its cost is then very close to zero.

If node F is right next to, say, node D, then the best path will be A->B->C->F->E. Just like we wanted. Now suppose that between node D and node F there's an intermediate node with reward zero and a base cost equal to that of A through E. Then the best path will still be A->B->C->D->E, because to get to F the search has to go through at least one extra node with reward zero.

So a single node with a high reward is attractive only as long as it's really close to the best path (which is not necessarily a bad thing! Just not what I had in mind). Am I wrong?

The difference from maximizing reward is very easy to see. If you walk through a zero-reward path but along the way you find a single node with a reward high enough, it's worth it. But maximizing reward is kind of the same as minimizing cost in a map with negative costs: it has the same loop problem (which can be solved, of course) and we can never be sure to have reached the best solution until we search all connected nodes.

##### Share on other sites
Another possibility is to propagate the 'attractiveness' of a node by multiplying edges with a value (tending toward 1) that increases at a rate proportional to their depth from the node. In this way the decreased cost 'fans out' from the node.

[edited by - fup on May 6, 2004 12:47:24 AM]
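A minimal sketch of that propagation (the adjacency-list representation, the decay schedule, and all names here are assumptions of mine): breadth-first from the attractive node, assigning each ring of neighbours a multiplier that climbs back toward 1 with depth.

```python
from collections import deque

# Hypothetical sketch: propagate a cost multiplier outward from an
# attractive node over an adjacency-list graph. Depth 0 gets the
# strongest discount; the factor tends toward 1.0 with distance.
def propagate_attraction(adjacency, source, base_scale=0.5, max_depth=3):
    scales = {source: base_scale}
    frontier = deque([(source, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # stop fanning out past max_depth
        for nbr in adjacency[node]:
            if nbr not in scales:
                # One assumed schedule: 0.5, 0.75, 0.833..., 0.875, ...
                scales[nbr] = 1.0 - (1.0 - base_scale) / (depth + 2)
                frontier.append((nbr, depth + 1))
    return scales  # multiply each cost of entering node n by scales[n]
```

This is the pre-processing step discussed in the next reply: run it once per turn for whatever nodes are attractive, then feed the scaled costs to the ordinary search.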

##### Share on other sites
I had thought of that, but then I would have to do some preprocessing of the map every time I want to run the pathfinding algorithm. Maybe it's not so bad, because it would be the same for all units on that turn, but still...

Anyway, I'll start playing with that: Timkin's formula with propagation, "Reward propagation pathfinding".

Thanks! Any other ideas will be welcome, but I will start playing with what we got so far.

[edited by - Cesar on May 7, 2004 7:32:50 AM]

##### Share on other sites
If the pre-processing idea is not practical, I'd be tempted to try a simple two-stage system here. First, I'd plot a normal path using A*. Then, while the path is being followed, at regular intervals a quick limited breadth-first or depth-first search is done of the local environment to see if one of these beneficial nodes is close by. You can then plot a short detour to that node, and then another short path back to the original path. This system can't give you any guarantees of path quality, but should work well if path length is a more important factor than the number of beneficial nodes you visit.

I think this is quite an interesting problem, as it applies to many realistic situations that crop up in games, such as an FPS bot returning to base but attempting to pick up health on the way, or an RTS resource harvester trying to fill its capacity without travelling all the way across the map.
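The two-stage idea might be sketched like this (everything here is a simplifying assumption of mine: an unweighted grid with plain BFS standing in for A*, a fixed Manhattan probe radius, and invented names):

```python
from collections import deque

# Stage 1: shortest path start -> goal on an unweighted 4-connected grid.
def bfs_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        node = q.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in prev:
                prev[nxt] = node
                q.append(nxt)
    return None

# Stage 2: walk the path; every `interval` steps probe for a nearby
# bonus cell and splice in a short out-and-back detour to it.
def path_with_detours(grid, start, goal, bonuses, interval=2, radius=2):
    result, remaining = [], set(bonuses)
    for i, node in enumerate(bfs_path(grid, start, goal)):
        result.append(node)
        if i % interval == 0 and remaining:
            near = [b for b in remaining
                    if abs(b[0] - node[0]) + abs(b[1] - node[1]) <= radius]
            if near:
                b = near[0]
                out = bfs_path(grid, node, b)
                back = bfs_path(grid, b, node)
                result.extend(out[1:] + back[1:])  # detour, then rejoin
                remaining.discard(b)
    return result
```

As the post says, this gives no optimality guarantee: the detour is only taken when a bonus happens to fall inside the probe radius of the already-chosen path.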

