Archived

This topic is now archived and is closed to further replies.

Greg K

A* on a quad-tree

I am developing A* pathfinding to run on a fairly large map, using a quad-tree to partition the space. The map is not tile based, so as far as I know an array implementation would not work as well as a quad-tree (anyone done testing on this?). Anyway, I am having trouble with this quad-tree solver. The quad-tree works fine, the A* works fine, and together they work just as I programmed it... fine. Unfortunately the paths are not optimal. Here is my problem: to calculate the distance between blocks, I calculate the distance between the two blocks' centers. What if the optimal path should pass through the very corner of a large block? The distance would be completely wrong, much higher in most cases. This causes some pretty ugly paths. I can post some screenshots of a 2D test bed I made if my description was not clear enough. -Greg
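A minimal sketch of the cost model Greg describes: the g-cost between two quad-tree blocks taken as the distance between their centres. The `(x, y, size)` block representation and the layout below are illustrative assumptions, not taken from his map.

```python
from math import hypot

def center(block):
    # A block is an axis-aligned square: (x, y, size) with (x, y) its lower-left corner.
    x, y, size = block
    return (x + size / 2.0, y + size / 2.0)

def center_cost(a, b):
    # Cost between two blocks = distance between their centres.
    (ax, ay), (bx, by) = center(a), center(b)
    return hypot(bx - ax, by - ay)

# A large 8x8 block beside a small 2x2 block: a path clipping the large
# block's near corner is charged as if it detoured through the distant centre.
big, small = (0, 0, 8), (8, 0, 2)
cost = center_cost(big, small)  # distance between (4, 4) and (9, 1)
```

This is exactly the overestimation the thread goes on to discuss: the centre-to-centre distance charges the full trip to the big block's middle even when the real path only grazes its edge.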

I'm not clear on this: are your paths simply aesthetically unpleasing, or are you getting legitimately bad paths?

While this reference doesn't implement A* (but rather a 3D version of the Distance Transform algorithm), it does cover path planning in a binary space partition of the state space (in this case using an oct-tree)... so you might get some pointers out of it.

Williams, M. and Jones, D. I.
"A Rapid Method for Planning Path in Three Dimensions for a Small Aerial Robot"
Robotica, 19 (2001), pp. 125-135


As for your specific problem with costs, I suspect it is caused by your discretisation of the state space into large boxes. If you use a finer resolution, do you get the same sub-optimal paths? (You should, but they should be less sub-optimal the finer the resolution.)

If this is not the cause, then what are you using for a heuristic function?

Cheers,

Timkin

[edited by - Timkin on April 20, 2004 10:26:03 PM]

Legitimately bad paths. I will give an example:



      ----------
      |        |
      |        |
------|        |
|   .||        |
|    ||        |
----------------
|    |  .  |   |
|    |     |   |
----------------


From one point to the other it will go through the smaller square instead of the larger one. It does that because the center point of the small square is closer than the center point of the big square, so the algorithm thinks the big square is less efficient when in fact it is more efficient. Is there a better way to calculate the distance, one that takes into account how far the unit actually has to travel?

[edited by - Greg K on April 20, 2004 11:09:03 PM]

OK, I can think of two changes you can make.

One: instead of considering new A* nodes at the centers of the squares, consider new nodes at the midpoints of the borders of each square. If we number your diagram:

      ----------
      |2       |
      |        |
------|        |
|1  .||        |
|    ||        |
----------------
|3   |4 .  |5  |
|    |     |   |
----------------

Assuming you're pathing from the dot in 1 to the dot in 4, you are currently taking the dot in 1 and comparing it to the center of 2 and the center of 3, so it jumps to 3 before 2. However, if you consider the distance from the dot to the midpoint of the 1-2 border and the 1-3 border, the 1-2 border will win out and the algorithm will push through 2 first.

The other solution is to examine your heuristic: A* only works properly if the heuristic used is either exact or underestimating. If you use an overestimating heuristic, A* may not work properly, and a heuristic that goes through the center of 2 when moving from 1 to 4 would be overestimating. So instead, consider only the horizontal or vertical distance to the border of the next square. This allows you to continue using the quadtree nodes as nodes in A*, instead of the border midpoints, in your search.
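SiCrane's first suggestion, expanding through border midpoints rather than block centres, might be sketched like this for axis-aligned quad-tree blocks. The `(x, y, size)` representation and the function name are illustrative assumptions:

```python
def shared_border_midpoint(a, b):
    """Midpoint of the edge shared by two axis-aligned blocks.

    Each block is (x, y, size) with (x, y) its lower-left corner.
    Returns None if the blocks do not share an edge segment.
    """
    ax0, ay0, asz = a
    bx0, by0, bsz = b
    ax1, ay1 = ax0 + asz, ay0 + asz
    bx1, by1 = bx0 + bsz, by0 + bsz

    # Vertical shared edge: a's right side touches b's left side, or vice versa.
    if ax1 == bx0 or bx1 == ax0:
        x = ax1 if ax1 == bx0 else bx1
        lo, hi = max(ay0, by0), min(ay1, by1)   # vertical overlap of the two blocks
        if lo < hi:
            return (x, (lo + hi) / 2.0)
    # Horizontal shared edge: a's top touches b's bottom, or vice versa.
    if ay1 == by0 or by1 == ay0:
        y = ay1 if ay1 == by0 else by1
        lo, hi = max(ax0, bx0), min(ax1, bx1)   # horizontal overlap
        if lo < hi:
            return ((lo + hi) / 2.0, y)
    return None                                  # corner contact or no contact
```

During node expansion, the g-cost increment would then be the distance from the current point to this midpoint instead of to the neighbour's centre.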

Actually, the optimal way to deal with your problem is a slight extension of SiCrane's suggestion. Optimally, you want to compute the path from one square to another based on the point on the shared border between the two squares that minimises the path length from the entry point of the given node to the exit point of the next node. However, given that these points are themselves interpolations, you need to solve a set of simultaneous linear equations (constraints) along the length of the path, with the start and end points being the constraints that close the system.

The sub-optimal approximation to this is as SiCrane described: simply use the midpoint of the edge as the interpolated point.

There's a fair bit of literature on this sort of problem out there, particularly in robotics and deterministic path planning.

Cheers,

Timkin

[Timkin] Precisely! That is exactly what I would need to do. Unfortunately that would require a great deal of calculation, since it would be needed every time I open a node. On the resulting path I was actually planning to do smoothing using the linear constraints you mentioned.

I think I will try the approximation SiCrane has put forward, in the hope that the paths will at least be tolerable.

I ran into this exact issue with a nav mesh implementation. The cost was based on the center of the poly, which easily overestimated.

I ended up converting it to use the cost to enter the poly:


      ----------
      |        |
      |        |
------|        |
|  1 ||        |
|    ||        |
------X---------
|    Y 0  |    |
|    |    |    |
----------------

In this case, I use the cost from 0 to X and from 0 to Y as the cost for entering. This still isn't perfect, as it overestimates when X isn't collinear with 0 and 1, but it does fix some of the significantly worse cases.

This is really the same solution SiCrane proposed. Just had to mention it again as it worked well for me.

A few thoughts:

1) Do a check for a straight line before you search. If an unobstructed line exists, you can go straight there.

2) Do a post-search optimization pass to 'slide' waypoints along the borders of meshes. For instance, a waypoint at X would be slid to the left to be collinear with 0 and 1.
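The sliding in (2) might be sketched as follows: move a border waypoint along its edge toward the straight line between its two neighbours, clamped so it stays on the edge. The function name and tuple representation are illustrative assumptions:

```python
def slide_waypoint(prev_pt, next_pt, e0, e1):
    """Slide a waypoint along its border edge (segment e0-e1) toward the
    straight line between its neighbours, clamped to the segment."""
    dx, dy = next_pt[0] - prev_pt[0], next_pt[1] - prev_pt[1]  # prev -> next
    ex, ey = e1[0] - e0[0], e1[1] - e0[1]                      # edge direction
    denom = ex * dy - ey * dx          # cross(edge, prev->next)
    if denom == 0:                     # edge parallel to the line: keep the midpoint
        return ((e0[0] + e1[0]) / 2.0, (e0[1] + e1[1]) / 2.0)
    px, py = prev_pt[0] - e0[0], prev_pt[1] - e0[1]
    t = (px * dy - py * dx) / denom    # intersection parameter along the edge
    t = max(0.0, min(1.0, t))          # clamp so the waypoint stays on the edge
    return (e0[0] + t * ex, e0[1] + t * ey)
```

With neighbours at (2, 2) and (6, 2) and a vertical border from (4, 0) to (4, 4), the waypoint lands at (4, 2), exactly on the straight line; if the line misses the edge, the waypoint clamps to the nearer endpoint.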

Edit: Attempted to clean up ASCII art.

[edited by - BrianL on April 21, 2004 11:04:09 AM]

You can use an additional data parameter to get a better estimate of travel cost by taking into account the entering and exiting edges, using an edge-quadrant parameter that classifies an edge from A-H (see diagram).


     A     B
   ------------
 H |    |     | C
   |    |     |
   |----------|
 G |    |     | D
   |    |     |
   ------------
     F     E


As you can see, there are 8 possible edge classes, which can be represented in 3 bits. A small precalculated table (64 entries) can capture all combinations of enter/exit nodes. By combining the enter and exit bits into a single parameter and indexing it into the precalculated travel-cost-modifier table, you should get a better estimate. A node can have multiple edge classes, so just choose the lower-cost edge.

So the traverse estimate = estimate_table[combined_param] * node_length;

This should produce the best performance, as the precalculated table fits in the cache and all the edge classes can be predetermined.
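The table might be built like this. The post leaves the actual cost modifiers open, so the values here, the straight-line distance between half-edge midpoints on a unit square, are my assumption; the A-H labelling follows the diagram:

```python
from math import hypot

# Half-edge midpoints on a unit square, following the A-H labelling above:
# A,B along the top; C,D down the right; E,F along the bottom; G,H up the left.
MIDPOINTS = {
    'A': (0.25, 1.0), 'B': (0.75, 1.0),
    'C': (1.0, 0.75), 'D': (1.0, 0.25),
    'E': (0.75, 0.0), 'F': (0.25, 0.0),
    'G': (0.0, 0.25), 'H': (0.0, 0.75),
}
CLASSES = 'ABCDEFGH'  # 8 classes -> 3 bits each

# 64-entry table indexed by (enter_class << 3) | exit_class.
# Cost modifier here = distance between the two half-edge midpoints (an assumption).
estimate_table = [0.0] * 64
for i, ci in enumerate(CLASSES):
    for j, cj in enumerate(CLASSES):
        (x0, y0), (x1, y1) = MIDPOINTS[ci], MIDPOINTS[cj]
        estimate_table[(i << 3) | j] = hypot(x1 - x0, y1 - y0)

def traverse_estimate(enter_cls, exit_cls, node_length):
    # Combine the two 3-bit classes into one 6-bit index, then scale by node size.
    combined = (CLASSES.index(enter_cls) << 3) | CLASSES.index(exit_cls)
    return estimate_table[combined] * node_length
```

Crossing a size-4 node from the upper-left half-edge (H) straight across to the upper-right half-edge (C) then costs 1.0 * 4 = 4.0.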

Good Luck!

-ddn

[edited by - ddn3 on April 21, 2004 7:11:15 PM]

[edited by - ddn3 on April 21, 2004 7:11:32 PM]

Taking your initial idea of using the centres, you can then optimise the route by checking, for every node n, whether there is an unblocked route between it and node n+2; if there is, you can delete node n+1 and join up n and n+2. Repeat this over and over until no further optimisations can be made.
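A sketch of this pruning pass, assuming a caller-supplied `has_line_of_sight(a, b)` predicate (the visibility test itself depends on your map representation):

```python
def prune_path(path, has_line_of_sight):
    """Repeatedly remove waypoint n+1 whenever n can see n+2 directly.

    has_line_of_sight(a, b) is assumed to return True when the straight
    segment between points a and b is unobstructed.
    """
    pts = list(path)
    changed = True
    while changed:                      # repeat until a full pass removes nothing
        changed = False
        i = 0
        while i + 2 < len(pts):
            if has_line_of_sight(pts[i], pts[i + 2]):
                del pts[i + 1]          # n sees n+2, so n+1 is redundant
                changed = True
            else:
                i += 1
    return pts
```

On open ground this collapses any path to its two endpoints; with obstructions it keeps exactly the waypoints needed to bend around them.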

The problem with this idea, NytzovNee, is that the path between nodes n+1 and n+3 may have been more optimal than the path between nodes n and n+2. If you delete node n+1, you will never observe the optimality of n+1 -> n+3.

Timkin

Using the sides has reduced the overestimation. Is there any way to completely remove it? Even if I have to calculate the distance at each node (worst case), what options do I have?

To visualize the problem, I am thinking of a chain of points, each on the border of a quad-tree block, all connected by a string, connect-the-dots style. I want to tighten the string by moving the dots up and down their borders until I have the best (shortest) possible string (path).

I was thinking of having generations of path smoothing where each time I go through the list I move the points a little in the direction they need to go (n, n+2, move n+1 a bit). After the movement stops (or slows enough) the path will be the best it can be. Am I making sense? Are there easier ways to do this?
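One way to sketch this generational smoothing, assuming each interior point is confined to a known border segment: each pass moves point i to the spot on its segment nearest the midpoint of its neighbours. This is a simple relaxation, an approximation of the true minimum rather than an exact solve, but it converges toward a tight string:

```python
def tighten(points, edges, iterations=100):
    """Iteratively shorten a path whose interior point i is confined to the
    border segment edges[i] = (e0, e1). edges[0] and edges[-1] are unused,
    since the endpoints are fixed."""
    pts = list(points)
    for _ in range(iterations):
        for i in range(1, len(pts) - 1):
            # Target: the midpoint of the two neighbouring points.
            mx = (pts[i - 1][0] + pts[i + 1][0]) / 2.0
            my = (pts[i - 1][1] + pts[i + 1][1]) / 2.0
            # Project the target onto this point's border segment.
            e0, e1 = edges[i]
            ex, ey = e1[0] - e0[0], e1[1] - e0[1]
            denom = ex * ex + ey * ey
            t = 0.0 if denom == 0 else ((mx - e0[0]) * ex + (my - e0[1]) * ey) / denom
            t = max(0.0, min(1.0, t))  # stay on the border segment
            pts[i] = (e0[0] + t * ex, e0[1] + t * ey)
    return pts
```

For example, a path (0,0) -> (2,5) -> (4,0) whose middle point lives on the vertical border from (2,0) to (2,10) relaxes to the straight line through (2,0).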

GregK, there are tonnes of literature on such problems available, even online; you really should be reading it to work out your problem. Essentially you have an optimisation problem subject to constraints. You might want to look up mixed integer linear programming, or other linear constraint satisfaction algorithms.

Cheers,

Timkin
