Zefrieg

WOW, I have REALLY optimized A*


Guest Anonymous Poster
"but with random search spaces of a million nodes and over a million connections"


How much "over a million connections"?


Network complexity = the average number of connections per node.


The higher the complexity, the longer it takes to find a path.


For example, a 2D Cartesian map grid has 4-way or 8-way complexity
(a 1000x1000 grid has 1 million nodes and up to 4/8 million connections).
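For the 4-way 1000x1000 case the exact count is a bit under 4 million, because border nodes have fewer than four neighbours. A quick sketch (a hypothetical helper, not code from this thread) that counts the directed connections of a W x H 4-connected grid:

```cpp
#include <cassert>

// Hypothetical helper: count the directed connections of a w x h
// 4-connected grid by summing each node's in-bounds neighbours.
// Interior nodes have 4, edge nodes 3, corner nodes 2.
static long long count_connections(int w, int h) {
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    long long total = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int d = 0; d < 4; ++d) {
                int nx = x + dx[d], ny = y + dy[d];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h) ++total;
            }
    return total;
}
```

For 1000x1000 this gives 3,996,000 connections, which is the "up to 4 million" figure minus the boundary deficit.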

Actually, I had that running in debug mode, with some things running in the background. Also, my cost function includes a sqrt() call and two multiplies, and the start and goal nodes are usually far apart. Here are some tests with different numbers of nodes:

10,000 nodes = 20 ms
100,000 nodes = 100 ms
1,000,000 nodes = 2342 ms

It seems to grow in time complexity as O(n). There is only one O(log n) operation in the main loop, and one O(log n) and one O(1) operation in the inner loop.

There are at least two connections per node, plus on average an extra two per node, so I guess that would be four.

The 1,000,000 test had 4,000,000 connections.

It isn't done on a 2D grid. Instead, it goes more like this:


............-------------------
.........../
.....------
.../
./oooo
|oooooo
.\oooo
........------
...........\-------------------



o - random (x, y) coordinates where the start can be.
The goal is somewhere on the outer edge of the opposite side.

Anyway, I haven't done any realistic tests. I will try to test against other methods. Any idea what the best way to set up the search space would be?

Quote:
Original post by Kylotan
So, is all you're saying that you're using a more efficient structure than a linked list for storing nodes?


Well, I eliminated the need for searching through an open and closed list of nodes.
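One common way to get that effect (this is a sketch of the general technique, not Zefrieg's actual code) is to keep the open/closed status and best-known cost on the node records themselves and drive the search with a binary heap, so membership tests are O(1) and picking the best node is O(log n) instead of an O(n) list scan:

```cpp
#include <cassert>
#include <cmath>
#include <queue>
#include <utility>
#include <vector>

// Sketch: A* on a w x h unit-cost 4-connected grid with no open/closed
// list searching. Status lives on the nodes; the heap may hold stale
// duplicates, which are skipped when popped (lazy decrease-key).
struct Node {
    float g = 1e30f;     // best known cost from the start
    bool closed = false;
};

// Returns the cost of the cheapest path from (sx, sy) to (gx, gy),
// or -1 if the goal is unreachable.
static float astar(int w, int h, int sx, int sy, int gx, int gy) {
    std::vector<Node> nodes(w * h);
    auto heur = [&](int x, int y) {      // Euclidean heuristic (admissible)
        float dx = float(x - gx), dy = float(y - gy);
        return std::sqrt(dx * dx + dy * dy);
    };
    using Entry = std::pair<float, int>; // (f-cost, node index)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    nodes[sy * w + sx].g = 0.0f;
    open.push({heur(sx, sy), sy * w + sx});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        int idx = open.top().second;
        open.pop();
        if (nodes[idx].closed) continue; // stale heap entry; skip it
        nodes[idx].closed = true;        // O(1) "move to closed"
        int x = idx % w, y = idx / w;
        if (x == gx && y == gy) return nodes[idx].g;
        for (int d = 0; d < 4; ++d) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
            int nidx = ny * w + nx;
            float ng = nodes[idx].g + 1.0f;
            if (!nodes[nidx].closed && ng < nodes[nidx].g) {
                nodes[nidx].g = ng;      // better path found: re-push
                open.push({ng + heur(nx, ny), nidx});
            }
        }
    }
    return -1.0f;
}
```

The point of the sketch is that no container is ever scanned linearly: "is this node open/closed?" is a flag read, and "which open node is best?" is a heap pop.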

Hi.

First, are the paths optimal? If not, how suboptimal are we talking about here?
Second, what is the size of the grid you're testing on? 1000x1000?
Finally, have you checked the results against another algorithm, a stable and easy-to-debug one?

I find those results EXTREMELY difficult to believe!!!

Thanks.

I had an A* algorithm, and while changing some code and testing it, I got results similar to those you just posted. I was intrigued.

It turned out the function was returning errors very early in the processing phase because of a bug, so it spent little time actually processing. ***hint***

Yes, I had some code on here in another thread with an A* algorithm. It was flawed because of the way I handled repeatedly searched nodes.

I will go ahead and test out my new one a bit more. From tests with other A* algorithms, mine seems to output the same results.

I have deleted the first post until I verify it against another A* search.

Well, I coded an alternative A* search like the one found in Game Programming Gems, using open and closed lists, with STL containers for all the required data structures. The results it gave were the same as mine, but that algorithm was quite a bit slower.

As a note, I did notice that my A* implementation is VERY similar to some other optimized versions of A*. I just went about it in a slightly different way.

If you tried to do what I did, then you probably got it wrong somehow. The code I posted is misleading in a couple of parts if you don't know the types of node classes I used.

I should probably call it an optimization of the best-first search. I don't change anything that makes A* what it is; I just change the best-first part and the redundant node checking. I have applied the same optimization to my uniform-cost and greedy searches. Heck, even my uniform-cost search is pretty fast now.

As for whether it is optimal and complete: I am using Euclidean distance, so I guess it is, since the heuristic doesn't overestimate.
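That reasoning can be spot-checked: on a unit-cost 4-connected grid the true shortest path costs the Manhattan distance, and the Euclidean estimate never exceeds it. A minimal sketch (the helper names are mine, not from the thread):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

// Euclidean heuristic between (x1, y1) and (x2, y2).
static double euclid(int x1, int y1, int x2, int y2) {
    double dx = x2 - x1, dy = y2 - y1;
    return std::sqrt(dx * dx + dy * dy);
}

// On a unit-cost 4-connected grid the cheapest possible path costs the
// Manhattan distance, so admissibility means euclid <= manhattan.
static int manhattan(int x1, int y1, int x2, int y2) {
    return std::abs(x2 - x1) + std::abs(y2 - y1);
}
```

Since sqrt(dx^2 + dy^2) <= |dx| + |dy| for all dx, dy, the heuristic never overestimates, which is what makes the search optimal.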

I didn't use a list for my open nodes. Instead I had a priority queue keyed on path cost, so I could always pick the cheapest node trivially (I used std::multimap).

The closed nodes I stored in a std::map from a location (x, y) to the node, so I could quickly find out whether there was a closed node at a given location.

So, in fact, neither of them was a "list".

Mark
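The containers Mark describes might look like this in outline (the names and the small helper are illustrative, not his actual code): the open set is keyed by f-cost so the cheapest node sorts to the front, and the closed set is keyed by location for O(log n) membership tests.

```cpp
#include <cassert>
#include <map>
#include <utility>

using Coord = std::pair<int, int>;
using OpenSet = std::multimap<float, Coord>;   // f-cost -> location
using ClosedSet = std::map<Coord, float>;      // location -> recorded cost

// Pop the cheapest open node (it sorts to open.begin()), record it in
// the closed set, and return its location. No scanning required.
static Coord close_best(OpenSet& open, ClosedSet& closed) {
    auto best = open.begin();
    Coord where = best->second;
    closed[where] = best->first;
    open.erase(best);
    return where;
}
```

A std::multimap is used rather than std::map because several open nodes can share the same f-cost.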

The point is not whether he is using lists or not; no one uses lists for a decent A* algorithm anyway. The point is that this guy is saying he has an implementation that can calculate 1,000,000 paths in 6 seconds on a 1000x1000 map.

That is just wrong! A million-PIV cluster couldn't do that!

You said you already tested against another implementation and the results were the same, so repost the code.

[Edited by - White Rabbit on September 23, 2004 8:45:38 AM]

Quote:
Original post by White Rabbit
The point is not whether he is using lists or not; no one uses lists for a decent A* algorithm anyway. The point is that this guy is saying he has an implementation that can calculate 1,000,000 paths in 6 seconds on a 1000x1000 map.

That is just wrong! A million-PIV cluster couldn't do that!

You said you already tested against another implementation and the results were the same, so repost the code.


Where did I say those things? The runtime of my A* is roughly sqrt(runtime(regular A*)) in the tests that I have done, simply because I eliminated an O(n) search in the inner loop, which every other implementation performs.

If you want to know, it takes my algorithm about 18 seconds to check all one million nodes in a graph where each vertex has 4 connections. That is the case when the goal is not in the search space. Now, that is ONE (1) search. Where exactly did you get me saying that I generated 1,000,000 paths in 6 seconds?

Guest Anonymous Poster
He didn't say that, WR. I think he said it took an average of 6 seconds to find a path in a graph of 1,000,000 nodes.

Guest Anonymous Poster
Quote:
Original post by Zefrieg
Actually, I had that running in debug mode, with some things running in the background. Also, my cost function includes a sqrt() call and two multiplies, and the start and goal nodes are usually far apart. Here are some tests with different numbers of nodes:

10,000 nodes = 20 ms
100,000 nodes = 100 ms
1,000,000 nodes = 2342 ms

It seems to grow in time complexity as O(n). There is only one O(log n) operation in the main loop, and one O(log n) and one O(1) operation in the inner loop.




Old game programming trick:

If you are comparing distances, you can compare the squared values just as well. You don't need the square-root function; you can simply compare dX*dX + dY*dY to determine which distance is shorter.
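As a sketch of the trick where it is valid, i.e. a plain "which point is nearer?" test: sqrt is monotonic, so comparing squared distances gives the same answer (the helper name is mine):

```cpp
#include <cassert>

// Squared Euclidean distance: dX*dX + dY*dY, with no square root.
// For comparing two distances this preserves the ordering, because
// sqrt is monotonically increasing.
static long long dist_sq(int x1, int y1, int x2, int y2) {
    long long dx = x2 - x1, dy = y2 - y1;
    return dx * dx + dy * dy;
}
```

As the following replies point out, this is only safe when the squared value is compared against another squared value; mixing it into an f = g + h sum is a different matter.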

Quote:
Original post by Anonymous Poster
Old game programming trick:

If you are comparing distances, you can compare the squared values just as well. You don't need the square-root function; you can simply compare dX*dX + dY*dY to determine which distance is shorter.
If I'm reading this section of this article correctly (I'm not very experienced with A*, but I understand the concepts pretty well), you certainly would not want to do this. I suppose most of the people in this thread already realize this, but in case someone else happens along, this may help avoid an hour or more of frustration.

Quote:
Original post by Agony
Quote:
Original post by Anonymous Poster
Old game programming trick:

If you are comparing distances, you can compare the squared values just as well. You don't need the square-root function; you can simply compare dX*dX + dY*dY to determine which distance is shorter.
If I'm reading this section of this article correctly (I'm not very experienced with A*, but I understand the concepts pretty well), you certainly would not want to do this. I suppose most of the people in this thread already realize this, but in case someone else happens along, this may help avoid an hour or more of frustration.

Right, you can't just square H, because then it grows much larger than G.
The formula is F = G + H, where H = sqrt(h) (h being the squared distance), so F = G + sqrt(h).
To get rid of the square root, you have to square the whole sum:
F^2 = (G + sqrt(h))^2 = G^2 + h + 2*G*sqrt(h)
You might be able to get away with using F^2 = G^2 + h, but I'm not sure how significant the last term is. Maybe you could use a very crude approximation of sqrt(h) to get a reasonably accurate F^2 and compare those instead of F (I think a crude approximation would affect F more than F^2, but I'm just guessing blindly).
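A pair of made-up numbers illustrates the pitfall: dropping the 2*G*sqrt(h) cross term can flip the expansion order. (The helpers and values below are my own example, not code from the thread; H here is the unsquared heuristic, so the thread's h equals H*H.)

```cpp
#include <cassert>

// Proper A* key: F = G + H.
static float f_cost(float g, float H) { return g + H; }

// Naive "squared" key G^2 + h, i.e. (G + sqrt(h))^2 with the
// 2*G*sqrt(h) term dropped.
static float f_squared(float g, float H) { return g * g + H * H; }
```

With node A at g = 0, H = 5 (F = 5) and node B at g = 3, H = 3 (F = 6), proper A* expands A first, but the squared scores are 25 for A versus 18 for B, so the naive key would expand B first. The missing cross term matters.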

