Binary heap vs list A* - my humble performance comparison [updated with JPS]


*** UPDATED ***

Added JPS results. I also added some information about the implementation in another post, so be sure to check it.

Quite some time ago I released a C implementation of the A* algorithm using grids as the graph and a linked list for the open list; the topic can be found here:

http://www.gamedev.net/topic/632734-a-c-implementation-code/

After that I got really busy with a lot of different work and couldn't touch the code for quite some time.
Recently I managed to put some work into it and implemented a version using a binary heap for the open list, as well as a version that uses memory pooling. The pool increases its size by 4096 * struct size each time it runs out of elements.
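Roughly, the pool allocation works like this (a simplified sketch, not the actual code; the names are made up for illustration, and recycling freed nodes between searches is omitted):


#include <stdlib.h>

/* Fixed-size-element pool that grows by 4096 elements whenever it
   runs out. Blocks are kept in a list so they can be freed later. */
typedef struct pool_block{
    struct pool_block* next;
    char data[];                /* 4096 * elem_size bytes */
} pool_block;

typedef struct{
    pool_block* blocks;         /* most recent block first */
    size_t elem_size;           /* size of one element (e.g. an A* node) */
    size_t used;                /* elements handed out from the newest block */
} pool_t;

void* poolAlloc(pool_t* p){
    if(p->blocks == NULL || p->used == 4096){
        /* Out of elements: grow by 4096 * struct size. */
        pool_block* b = malloc(sizeof(pool_block) + 4096 * p->elem_size);
        if(b == NULL) return NULL;
        b->next = p->blocks;
        p->blocks = b;
        p->used = 0;
    }
    return p->blocks->data + (p->used++) * p->elem_size;
}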

My objective with this post is to show the performance gain you can expect from optimizing an A* implementation, so that people can decide whether they should optimize their own. I won't release the code just yet because it is still a bit ugly and it depends on a container library that I use (it has no package yet, so people would need to compile it themselves).

Tests Info:

The tests were run on a Core 2 Duo 8400 @ 3.00 GHz; the code runs on a single thread.

The OS used was Ubuntu 12.04, 32-bit.

All times are in microseconds (10^6 µs = 1 s).

Test on a randomly generated 64x64 map:

List implementation: 235.
Heap implementation: 161.
Pool + Heap implementation: (first path) 124 | (following paths) 105.
JPS + Pool + Heap: (first path) 118 | (following paths) 117.

Test on a randomly generated 640x640 map:

List implementation: 322823.
Heap implementation: 2006.
Pool + Heap implementation: (first path) 1823 | (following paths) 1652.
JPS + Pool + Heap: (first path) 1724 | (following paths) 1542.

Test on a randomly generated 3600x1800 map:

List implementation: 8075664.
Heap implementation: 131072.
Pool + Heap implementation: (first path) 126942 | (following paths) 124395.
JPS + Pool + Heap: (first path) 7218 | (following paths) 6321.

The results are pretty much what I expected: as the path size increases, the heap implementation gets a lot faster than the list implementation. The pooling advantage isn't that big as the path size increases, but it does improve performance.

My conclusion is that this kind of optimization matters only if your paths can get big. I used the list-based version in a prototype dungeon game (something like Gauntlet) where the monsters would only aggro players up to 20 squares away, and performance was not an issue at all.

Comments, questions and suggestions are always welcome.

Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).


Thanks for your post, it got me thinking about mine. I was lazy and just used a built-in structure, the SortedDictionary in C#, but looking at it, it's a red-black tree under the hood. The two main operations I do are inserts and removal of the top entry, which are:

Insert: O(log n)
Delete: O(log n)

While for binary heaps, they are:

Insert: O(1)
Delete: O(log n)

Which, as you can see, will probably give me a decent speedup on maps with large numbers of nodes. There is no existing BinaryHeap class in C#, though, so I'd have to write it myself. (And for now, I'd wager it is not the slowest part of my algorithm by a long shot -- but I'm glad this post made me think about it.)

What if you combine it with jump point search?

@ ferrous:

The main problem I found is that a generic heap implementation doesn't support updating a value that is already in the heap (the decrease-key operation A* needs), so you need a modified one.

In my case I used function pointers to get and set each element's index, like this:


typedef struct modified_heap_t{
    void** info;                                /* array of element pointers */
    int (*compareFunction)(void*, void*);       /* priority comparison */
    unsigned (*getIndexFunction)(void*);        /* read an element's heap index */
    void (*setIndexFunction)(void*, unsigned);  /* store an element's heap index */
    unsigned used;                              /* elements currently in the heap */
    unsigned currentSize;                       /* allocated capacity of info */
} modifiedHeap;
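For A* the callbacks just read and write an index field stored on the node itself; something like this (a sketch -- node_t and its fields are illustrative, not the actual implementation):


/* Example node type and callbacks: the heap stores void* elements and
   calls back into these to compare priorities and track positions. */
typedef struct{
    float f;               /* estimated total cost (g + heuristic) */
    unsigned heap_index;   /* current position inside the heap array */
} node_t;

int nodeCompare(void* a, void* b){
    float fa = ((node_t*)a)->f, fb = ((node_t*)b)->f;
    return (fa > fb) - (fa < fb);   /* negative when a has the lower f */
}

unsigned nodeGetIndex(void* n){ return ((node_t*)n)->heap_index; }

void nodeSetIndex(void* n, unsigned i){ ((node_t*)n)->heap_index = i; }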

To update the value we just need to swap it with the first element and call heapify:


void modifiedHeapUpdatedValue(modifiedHeap* h, void* data){

    unsigned index = h->getIndexFunction(data);

    /* Single-element heap: nothing to reorder. */
    if(index == 0 && h->used == 1){
        return;
    }

    /* Swap the updated element with the current root... */
    void* swap = h->info[0];

    h->info[0] = data;
    h->setIndexFunction(h->info[0], 0);
    h->info[index] = swap;
    h->setIndexFunction(h->info[index], index);

    /* ...then heapify to restore the heap property. */
    heapify(h);

}
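heapify isn't shown above; since the swap can leave the displaced root anywhere in the array, the safe version is a full bottom-up heapify. A minimal sketch over the same struct (my sketch, assuming compareFunction returns a negative value when its first argument has the lower f):


/* Sift the element at index i down until the heap property holds below it. */
static void siftDown(modifiedHeap* h, unsigned i){
    for(;;){
        unsigned left = 2 * i + 1, right = 2 * i + 2, best = i;
        if(left < h->used && h->compareFunction(h->info[left], h->info[best]) < 0)
            best = left;
        if(right < h->used && h->compareFunction(h->info[right], h->info[best]) < 0)
            best = right;
        if(best == i)
            break;
        void* tmp = h->info[i];
        h->info[i] = h->info[best];
        h->info[best] = tmp;
        h->setIndexFunction(h->info[i], i);
        h->setIndexFunction(h->info[best], best);
        i = best;
    }
}

/* Rebuild the heap property bottom-up (Floyd's method). */
void heapify(modifiedHeap* h){
    unsigned i;
    for(i = h->used / 2; i-- > 0; )
        siftDown(h, i);
}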

This link gives you more details:

http://www.policyalmanac.org/games/binaryHeaps.htm

@ polyfrag:

Had never heard of it, to be honest.

Right now I am working on a navmesh-like approach. I have coded the graph and the A* on the graph with a heap, and both are working fine. The main problem is determining the region that each node represents: I used an ad hoc grid flood algorithm, but it is too slow for bigger nodes (I ran it for 20 minutes yesterday on the 3600x1800 map and it didn't finish); I believe it is O(n^4). I will try to use this one instead:

http://www.math.unl.edu/~s-dstolee1/Presentations/Sto08-MinRectPart.pdf

But I will take a look at jump point search and check if it is easier to implement.

Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).

(quoting ferrous) While for binary heaps, they are: Insert O(1), Delete O(log n).

Your running times for binary heaps look wrong. I'm not aware of any binary heap that functions with those running times. Fibonacci heaps may give you that insert time (if my memory serves correctly), but I don't think the delete is O(log n) in the worst case.

I found it strange too; here is Wikipedia's explanation:

"The number of operations required is dependent on the number of levels the new element must rise to satisfy the heap property, thus the insertion operation has a time complexity of O(log n). However, since approximately 50% of the elements are leaves and approximately 75% of the elements are in the bottom two levels, it is likely that the new element to be inserted will only move a few levels upwards to maintain the heap. Thus, binary heaps support insertion in average constant time, O(1)."

http://en.wikipedia.org/wiki/Binary_heap#Insert

Anyway, as I said, for A* he will need to implement his own or find one that supports value updates.
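To make it concrete, insertion just appends at the end and sifts up, and since most elements sit near the bottom of the tree the loop usually stops after a level or two. A minimal sketch against the modified heap struct from my earlier post (growing the info array when used reaches currentSize is omitted):


/* Append at the end, then sift up while the new element beats its parent.
   Most inserts stop after a level or two, hence the O(1) average. */
void modifiedHeapInsert(modifiedHeap* h, void* data){
    unsigned i = h->used++;
    h->info[i] = data;
    h->setIndexFunction(data, i);

    while(i > 0){
        unsigned parent = (i - 1) / 2;
        if(h->compareFunction(h->info[i], h->info[parent]) >= 0)
            break;                       /* parent already has priority */
        void* tmp = h->info[i];          /* swap with parent */
        h->info[i] = h->info[parent];
        h->info[parent] = tmp;
        h->setIndexFunction(h->info[i], i);
        h->setIndexFunction(h->info[parent], parent);
        i = parent;
    }
}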

Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).

(quoting the exchange above) Your running times for binary heaps look wrong. I'm not aware of any binary heap that functions with those running times.

I just snagged them from Wikipedia, and I snagged the average times, not the worst case.

Binary heap:

         Average     Worst case
Insert   O(1)        O(log n)
Delete   O(log n)    O(log n)

@KnolanCross

Why would I need a version with value update? In my current implementation I am never updating a node that is already in the open list.

Nice. If you make these numbers relative (so the highest is 100%) or if you plot them, the results become even more interesting. First, you see that for small datasets the difference between list and heap is not all that pronounced (not even a 2x gain, hardly worth mentioning), whereas for large datasets any of the "heap" results are barely visible any more next to the list figure:

[untitled.png: plot of the timings relative to the list implementation]

Now that is an improvement worth showing!

What is further interesting is that heap also becomes slightly slower again relative to list as the dataset becomes huge. That very much suggests that most (or at least a significant amount) of the performance improvement doesn't come from big-O but comes from the more cache-friendly access pattern employed by a heap. Since caches are of finite size, the effect eventually limits itself, which explains why it gets "worse" for huge datasets.

@ ferrous

When you are checking a node's neighbors, they may already be in the open list; if so, and the new path is better, you need to update the score.

Here is the pseudocode from the Wikipedia A* article; the part I am referring to is marked.


function A*(start,goal)
    closedset := the empty set    // The set of nodes already evaluated.
    openset := {start}    // The set of tentative nodes to be evaluated, initially containing the start node
    came_from := the empty map    // The map of navigated nodes.
 
    g_score[start] := 0    // Cost from start along best known path.
    // Estimated total cost from start to goal through y.
    f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal)
 
    while openset is not empty
        current := the node in openset having the lowest f_score[] value
        if current = goal
            return reconstruct_path(came_from, goal)
 
        remove current from openset
        add current to closedset
        for each neighbor in neighbor_nodes(current)
            if neighbor in closedset
                continue
            tentative_g_score := g_score[current] + dist_between(current,neighbor)

>>>>>>>>>>>>>>>>> HERE 
            if neighbor not in openset or tentative_g_score < g_score[neighbor] 
                came_from[neighbor] := current
                g_score[neighbor] := tentative_g_score
                f_score[neighbor] := g_score[neighbor] + heuristic_cost_estimate(neighbor, goal)
                if neighbor not in openset
                    add neighbor to openset
<<<<<<<<<<<<<<<<<
 
    return failure
 
function reconstruct_path(came_from, current_node)
    if current_node in came_from
        p := reconstruct_path(came_from, came_from[current_node])
        return (p + current_node)
    else
        return current_node
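In C, against the modified heap from earlier in the thread, the marked branch would look roughly like this (a sketch; the node fields in_open, g, f and came_from, and the heuristic and dist_between helpers, are made up for illustration):


/* Inside the neighbor loop: 'neighbor' carries its own scores and heap
   index. Field names here are illustrative, not from my actual code. */
float tentative_g = current->g + dist_between(current, neighbor);

if(!neighbor->in_open || tentative_g < neighbor->g){
    neighbor->came_from = current;
    neighbor->g = tentative_g;
    neighbor->f = tentative_g + heuristic(neighbor, goal);
    if(!neighbor->in_open){
        neighbor->in_open = 1;
        modifiedHeapInsert(openset, neighbor);        /* O(log n) worst case */
    }else{
        modifiedHeapUpdatedValue(openset, neighbor);  /* reposition in the heap */
    }
}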

Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).

On a 640x640 map your method was 195 times faster. But on a 3600x1800 it was only 65 times faster. Do you know why?

