Binary heap - resorting

Started by
23 comments, last by FlowingOoze 18 years, 8 months ago
Quote:Original post by DauntlessCrowd
You should try it out on paper... The explanation from the tutorial actually works... You still end up with a valid heap after the change... Except, of course, when you want to change the first node.
It might just work; it does remind me of the insertion algorithm anyway...
Why didn't they teach us that trick in school?
Realize that in A*, you normally insert MANY MANY MANY more nodes than you extract from the open list.
A common optimization I've seen is to store an unordered open list along with a small sorted list of 10 or so elements from the open list. When you need the next node, get it from the sorted list. If you need the next node and the sorted list is empty, use an insertion sort to move items from the unordered list into the small sorted list (moving the worst nodes back to the unordered list when the sorted part has too many).

When you insert a node to the open list, check if the new node is better than one in the small sorted part. If so (or if the sorted list isn't full), insert the node into the short list at the appropriate spot. Otherwise, put the node in the unordered part of the open list.

This way you only sort a few nodes at a time and only occasionally have to sort the whole thing.
"Walk not the trodden path, for it has borne its burden." -John, Flying Monk
Quote:Original post by Extrarius
Realize that in A*, you normally insert MANY MANY MANY more nodes than you extract from the open list.
A common optimization I've seen is to store an unordered open list along with a small sorted list of 10 or so elements from the open list. When you need the next node, get it from the sorted list. If you need the next node and the sorted list is empty, use an insertion sort to move items from the unordered list into the small sorted list (moving the worst nodes back to the unordered list when the sorted part has too many).

When you insert a node to the open list, check if the new node is better than one in the small sorted part. If so (or if the sorted list isn't full), insert the node into the short list at the appropriate spot. Otherwise, put the node in the unordered part of the open list.

This way you only sort a few nodes at a time and only occasionally have to sort the whole thing.


Interesting :). Just one question: when my sorted list is empty, I have to fill it again. So I loop through the whole unsorted open array, and whenever I find a node that is lower than one of the nodes in the sorted list, I insert it and then remove the last element, or something like that?

It looks a bit weird that this would be faster, since you might as well just sort the whole unsorted open array (you run through it anyhow); now you're doing the same thing, except throwing away all the nodes you don't need...
Basically you delete the last one in the sorted part, yes. The reason it's better not to sort the whole thing is the cost: fully sorting the open list is O(n log n) (or O(n^2) with a naive insertion sort), while insertion-sorting the large unordered part into the small sorted part is only O(m*n), with m = 10 or so and n much larger. Because each element is only compared against the few entries in the short list, it's much faster to sort things into it.
"Walk not the trodden path, for it has borne its burden." -John, Flying Monk
Quote:Original post by Extrarius
Basically you delete the last one in the sorted part, yes. The reason it's better not to sort the whole thing is the cost: fully sorting the open list is O(n log n) (or O(n^2) with a naive insertion sort), while insertion-sorting the large unordered part into the small sorted part is only O(m*n), with m = 10 or so and n much larger. Because each element is only compared against the few entries in the short list, it's much faster to sort things into it.

Wow... I don't understand squat :D. Ehm, where can I find those formula structures (the big-O notation) explained? I don't really get that kind of math in school... I also found those formulas on Wolfram and Wikipedia, but I didn't know how to read them... Some explanation, please? Or a site, or something else...
If you are using C++, why don't you just use std::priority_queue? Here is how you might initialize and use it:

node temp;
std::priority_queue< node, std::vector< node >, std::greater< node > > node_queue;

node_queue.push(temp);
temp = node_queue.top();
node_queue.pop();


You can use any of the comparison operators, but std::greater and std::less are the only ones that would probably make sense.

For searches like A*, I create a node called informed_node that looks a bit like this:

struct informed_node
{
    node* ptr_node;
    int cost;
    int current_cost;
    int projected_cost;

    void clone(node& n)
    {
        ptr_node = &n;
        cost = 0;
        current_cost = 0;
        projected_cost = 0;
    }

    informed_node() : ptr_node(0), cost(0), current_cost(0), projected_cost(0) {}
    informed_node(node& n) { clone(n); }

    const informed_node& operator=(node& n) { clone(n); return *this; }
    bool operator>(const informed_node& n) const { return cost > n.cost; }
};


Then I define my priority queue like this:

std::priority_queue< informed_node, std::vector< informed_node >, std::greater< informed_node > > 


When you push regular "node" objects into the priority queue, it creates an informed_node by calling the converting constructor. Why do this? So you can skip having a closed list of nodes. You will have to keep some internal information in the node pointed to by ptr_node: basically just the current cost and whether or not it has been visited. You will also need to keep a list of altered nodes, so you can change the current cost and the visited status back to their defaults. Also, whenever you come to a previously searched node, you can effectively update EVERY SINGLE informed_node whose ptr_node points to it, all at once.

Doing this removes the need for a closed list (because you can use the node pointed to by ptr_node to get the current cost and visited status), and it turns having to reset every node in the search space to default values at the beginning into having to reset only the searched nodes at the end. This will speed up your searches quite a bit.

Here, I'll just show you mine:

template <class Object, class Cost>
int search<Object, Cost>::a_star(node<Object, Cost>& start_node, node<Object, Cost>& goal_node, CostFunc cost_func)
{
	int result;

	m_nodes_searched = 0;
	result = a_star_search(&start_node, &goal_node, cost_func);

	if(result)
	{
		this->store_goal(&start_node, m_goal);
	}

	this->revert_altered_nodes();

	return result;
}

template <class Object, class Cost>
int search<Object, Cost>::a_star_search(node<Object, Cost>* start_node, node<Object, Cost>* goal_node, CostFunc cost_func)
{
	std::priority_queue< informed_node<Object, Cost>, std::vector< informed_node<Object, Cost> >, std::greater< informed_node<Object, Cost> > > node_queue;
	typename std::list< node<Object, Cost>* >::const_iterator itr;
	informed_node<Object, Cost> child;
	informed_node<Object, Cost> node;
	Cost current_cost;

	start_node->m_visited = true;
	start_node->m_cost = Cost();
	m_altered_nodes.push(start_node);
	node_queue.push(*start_node);

	while(!node_queue.empty())
	{
		node = node_queue.top();
		node_queue.pop();

		if(node.m_node->m_object == goal_node->m_object)
		{
			m_goal = node.m_node;
			return 1;
		}

		itr = node.m_node->get_connections().begin();
		m_nodes_searched++;

		while(itr != node.m_node->get_connections().end())
		{
			child.m_node = (*itr);

			if(child.m_node->m_parent != node.m_node)
			{
				if(!child.m_node->m_visited)
				{
					child.m_current_cost = node.m_current_cost + cost_func(*node.m_node, *child.m_node);
					child.m_projected_cost = cost_func(*child.m_node, *goal_node);
					child.m_cost = child.m_current_cost + child.m_projected_cost;
					child.m_node->m_parent = node.m_node;
					child.m_node->m_visited = true;
					child.m_node->m_cost = child.m_current_cost;
					node_queue.push(child);
					m_altered_nodes.push(child.m_node);
				}
				else
				{
					current_cost = node.m_current_cost + cost_func(*node.m_node, *child.m_node);

					if(current_cost < child.m_node->m_cost)
					{
						child.m_current_cost = current_cost;
						child.m_projected_cost = cost_func(*child.m_node, *goal_node);
						child.m_cost = child.m_current_cost + child.m_projected_cost;
						child.m_node->m_parent = node.m_node;
						child.m_node->m_cost = current_cost;
						node_queue.push(child);
					}
				}
			}

			itr++;
		}
	}

	return 0;
}

Quote:Code...

Actually, I'm using a super crappy language called ActionScript (it comes with Flash, by Macromedia). So... I AM planning on learning Java, though...
Well, nevertheless, you shouldn't have to rebuild (build-heap) any kind of binary heap in an A* search. I would suggest looking into the functions called percolate up and percolate down, and trying to understand how they work. They are all you need to insert into and delete from a priority queue, which is what you need for A*.
You will also need to reconsider any nodes that you come to more than once. Quite often you will reach a previously searched node with a longer path than the previous time; if you update the current path without checking whether the newly found path is shorter than the previous one, you will end up with a messed-up path. You might also reach a previously searched node with a shorter path; if that occurs, you will need to switch the path to the shorter one.

What is your search space like, anyway? Is it just a grid? Is everything uniform cost (i.e. moving from tile to tile always costs one)? If so, then you probably won't get cases where coming across a previously searched node results in a lower cost, and you could probably ignore that case completely. If not, then you will have to do what I described above.

Quote:Original post by Zefrieg
You will also need to reconsider any nodes that you come to more than once. Quite often you will reach a previously searched node with a longer path than the previous time; if you update the current path without checking whether the newly found path is shorter than the previous one, you will end up with a messed-up path. You might also reach a previously searched node with a shorter path; if that occurs, you will need to switch the path to the shorter one.

What is your search space like, anyway? Is it just a grid? Is everything uniform cost (i.e. moving from tile to tile always costs one)? If so, then you probably won't get cases where coming across a previously searched node results in a lower cost, and you could probably ignore that case completely. If not, then you will have to do what I described above.


Both variable costs and uniform costs are possible... But actually, my A* on its own is finished now; it works perfectly and does the thing you describe (if(newPath.f < oldPath.f){ changeParent; etc; }).

All I need now are optimisation tips, 'cause Flash really isn't fast, and I do WANT to make it FAST. Any general topics on that issue?

