A* pathfinding

4 comments, last by Steadtler 16 years, 10 months ago
I am building an implementation of the A* algorithm using triangles for my newest game. So far I know pretty much what I am doing and how to do everything except for one crucial factor: Assuming the equation to get the path square ranking is F = G + H where G is equal to the distance from the current square to the new square and H is the distance from the new square to the end point, how the heck can you get H with triangles? The tutorial I read explained the Manhattan Method for squares, but so far there is nothing for triangles. I considered using point to point distance, but that would be very costly as there is a square root function involved. Does anyone have any ideas on this?
Correction: G is the length of the path from the start state to the newly scanned state, computed by adding the cost of the path from the start to the previous state and the cost of moving from the previous state to the new state.
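As a rough sketch of that bookkeeping (the node layout and function name below are purely illustrative, not from any particular engine):

```cpp
// Minimal A* node bookkeeping for the correction above: G accumulates the
// cost of the path walked so far; H is the heuristic estimate to the goal.
struct Node
{
    float x, y;     // e.g. the centroid of a triangle
    float g;        // cost of the best known path from the start to this node
                    // (initialise to a huge value for unvisited nodes)
    float h;        // estimated cost from this node to the goal
    Node* parent;   // previous node on that best known path

    float f() const { return g + h; }   // F = G + H
};

// When expanding `current` and reaching `neighbour`, the new G is the whole
// path cost so far plus the edge cost, not just the edge cost on its own.
inline void relax(Node& current, Node& neighbour, float edgeCost)
{
    float newG = current.g + edgeCost;
    if (newG < neighbour.g)
    {
        neighbour.g = newG;
        neighbour.parent = &current;
    }
}
```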

You can still use a Manhattan-distance-like heuristic by computing the length the path would have if there were no obstacles. It's a bit trickier with triangles, but still possible. On most platforms the straight-line distance will be faster to compute but less accurate than this triangular Manhattan distance, even with the square root involved.
Remember that H is just a guess at how much you think it is going to cost to get from any given node to the goal node in your graph. So long as it never over-estimates the true cost, A* is guaranteed to return an optimal path. If it over-estimates by only a small amount, A* can still guarantee that the path returned is within a small amount of the optimal path; a useful trick in some simple domains.

In your case, I'd still use the straight-line distance and the square root, but replace the square root operation with a series approximation of the function. Make sure you choose a truncation point in the series that under-estimates the true value and you're set: you get a useful heuristic at a cheap cost. You can make this even faster by using lookup tables for fast function approximation. Try Google for more info on series approximations of functions, or grab any decent mathematics text (particularly those for engineering mathematics, such as Kreyszig).
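As a rough sketch of the idea (the function name and signature below are invented for illustration): write sqrt(dx^2 + dy^2) as a * sqrt(1 + t), where a = max(|dx|, |dy|) and t = (min/max)^2, then keep only the first three terms 1 + t/2 - t^2/8. Because the series alternates with shrinking terms for t in [0, 1], stopping right after a negative term always under-estimates, so the heuristic stays admissible.

```cpp
#include <algorithm>
#include <cmath>

// Straight-line heuristic with the square root replaced by a truncated
// series. Never over-estimates the true straight-line distance, so it never
// over-estimates the real path length either.
float heuristic(float x, float y, float goalX, float goalY)
{
    float dx = std::fabs(goalX - x);
    float dy = std::fabs(goalY - y);
    float a  = std::max(dx, dy);
    float b  = std::min(dx, dy);
    if (a == 0.0f)
        return 0.0f;                    // already at the goal

    float t = (b / a) * (b / a);        // in [0, 1]
    return a * (1.0f + 0.5f * t - 0.125f * t * t);   // a * (1 + t/2 - t^2/8)
}
```

At worst (when dx == dy) this comes out around 2.8% short of the exact distance, which only makes A* expand a few extra nodes; it never returns a worse path.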

Cheers,

Timkin

[Edited by - Timkin on June 7, 2007 11:10:04 PM]
This is the Taylor series for the square root function:

sqrt(1 + x) = sum from n = 0 to infinity of [ (-1)^n * (2n)! / ((1 - 2n) * (n!)^2 * 4^n) ] * x^n

The only problem is that I have not yet completed high school and we have not covered factorials (I think that is the funky E). Could someone eliminate the E from this? I get the basic purpose of the factorial, but I do not know how to expand it so that it could be eliminated. Also, when does the factorial stop? From a simplification of a different problem, I saw that it went on to infinity.
Just use the sqrt function. It really isn't that expensive.

The "funky E" is a capital sigma, representing the summation of an infinite series. A Taylor series approximation would only use the first few terms of the series. Factorial is the "!" after a number. n! is n * (n-1)!, and 0! is 1.
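If it helps to see that without the sigma, here is a small illustrative snippet (the names are made up) that computes n! and sums just the first few terms of the series for sqrt(1 + x), instead of carrying it on to infinity:

```cpp
// n! = n * (n - 1) * ... * 2 * 1, with 0! = 1.
unsigned long long factorial(unsigned n)
{
    unsigned long long result = 1;
    for (unsigned i = 2; i <= n; ++i)
        result *= i;
    return result;
}

// Partial sum of sqrt(1 + x) = sum over n >= 0 of
//   (-1)^n * (2n)! / ((1 - 2n) * (n!)^2 * 4^n) * x^n,   for |x| <= 1.
// Keeping only the first `terms` terms gives an approximation; e.g.
// sqrtSeries(1.0, 3) == 1.375, versus sqrt(2) ~= 1.41421.
double sqrtSeries(double x, unsigned terms)
{
    double sum = 0.0;
    double xPow = 1.0;        // x^n, starting at n = 0
    double fourPow = 1.0;     // 4^n
    for (unsigned n = 0; n < terms; ++n)
    {
        double sign  = (n % 2 == 0) ? 1.0 : -1.0;
        double nFact = static_cast<double>(factorial(n));
        double coeff = sign * static_cast<double>(factorial(2 * n))
                     / ((1.0 - 2.0 * n) * nFact * nFact * fourPow);
        sum     += coeff * xPow;
        xPow    *= x;
        fourPow *= 4.0;
    }
    return sum;
}
```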
Quote:Original post by Vorpy
Just use the sqrt function. It really isn't that expensive.

The "funky E" is a capital sigma, representing the summation of an infinite series. A Taylor series approximation would only use the first few terms of the series. Factorial is the "!" after a number. n! is n * (n-1)!, and 0! is 1.


Plus, depending on your compiler and your optimization settings, it might already be doing just that: using only the first few terms of the series, or even pre-computed tables. Also, modern processors' instruction sets have a fast approximation for 1/sqrt(), which might be used.
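For example, on x86 the SSE instruction rsqrtss gives a rough (about 12-bit) approximation of 1/sqrt(x). A hypothetical use of it through compiler intrinsics could look like the snippet below; note the result can land slightly above or below the true distance, so it is a speed trick rather than a guaranteed under-estimate:

```cpp
#include <xmmintrin.h>   // SSE intrinsics

// Approximate sqrt(d2) as d2 * (1 / sqrt(d2)) using the hardware
// reciprocal-square-root estimate. Assumes d2 > 0.
inline float fastDistance(float d2)
{
    __m128 r = _mm_rsqrt_ss(_mm_set_ss(d2));   // ~1/sqrt(d2), ~12 bits
    return d2 * _mm_cvtss_f32(r);              // d2 / sqrt(d2) = sqrt(d2)
}
```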

On the other hand, Taylor series are something any programmer should learn.
