Out of ideas for optimizing A* search, got any?


Here are the two main files:

A few things I've done to optimize the search:

1. Rather than a dictionary to keep track of scores and parents, I use a 2D array of objects mapped to the grid, i.e. [x,y] represents grid point (x,y).  Objects have F, G, H, and Parent fields, which are reset to default values during each search.

2. Cheap weighted heuristic

3. Orthogonal neighbors (searching 4 neighbors rather than 8)
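The setup in points 1-3 could be sketched like this in Python (illustrative only; the project's actual code is C#, and the lazy stamp-based reset is a common trick for skipping a full-grid reset, not necessarily what the original code does):

```python
import itertools

class Node:
    # One object per grid cell, allocated once up front.
    __slots__ = ("g", "f", "parent", "stamp")
    def __init__(self):
        self.g = self.f = 0.0
        self.parent = None
        self.stamp = 0   # id of the last search that touched this node

W = H = 250
grid = [[Node() for _ in range(H)] for _ in range(W)]   # grid[x][y] <-> point (x, y)
search_id = itertools.count(1)

def touch(node, current_search):
    # Reset fields lazily, the first time this search sees the node,
    # instead of clearing all 62,500 nodes before every search.
    if node.stamp != current_search:
        node.g = node.f = 0.0
        node.parent = None
        node.stamp = current_search

def heuristic(x, y, gx, gy, weight=1.5):
    # Cheap weighted Manhattan distance: weight > 1 biases the search
    # toward the goal at the cost of strict optimality.
    return weight * (abs(x - gx) + abs(y - gy))
```

With 4-neighbor expansion, a node's successors are just (x±1, y) and (x, y±1), bounds-checked against the walkable grid.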

Paths don't have to be beautiful because the point reduction smooths them out quite nicely.

In Path.cs you can see the two functions to reduce points, Incremental and Stretch.  Stretch is much faster because it has the potential to reduce points all the way to the end right off the bat, or at least early on, which cuts down on calls to the LineOfSight function; however, it isn't as reliable around small areas and corners.  The Incremental function is reliable, but it calls LineOfSight for each and every point along the path, which makes the point reduction just about as slow as calculating the path itself.
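Since Path.cs isn't quoted here, this is only a guess at the Stretch-style reduction, sketched in Python: from each kept point, scan back-to-front for the farthest still-visible point, so an unobstructed straight shot to the goal costs a single visibility test (`line_of_sight` is assumed to be supplied as a predicate):

```python
def reduce_stretch(path, line_of_sight):
    # From each kept point, jump to the farthest later point that is
    # still visible (checked back-to-front, so a clear straight shot
    # to the goal costs a single line-of-sight test).
    if len(path) < 3:
        return list(path)
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(path[i], path[j]):
            j -= 1
        out.append(path[j])   # adjacent points are always "visible"
        i = j
    return out
```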

Line of sight is determined by getting all points between two points and checking that they are all flagged as Walkable.
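The "all points between two points" test is usually done with Bresenham's line algorithm; a self-contained Python sketch (the `walkable` predicate is an assumption, standing in for the grid's Walkable flag):

```python
def line_of_sight(a, b, walkable):
    # Walk Bresenham's line between grid points a and b; the segment is
    # clear only if every cell on it is flagged walkable.
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        if not walkable(x0, y0):
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
```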

My grid is dynamic and roughly 250x250.  I tried out Theta* and JPS, and both were slower than my current A*.  I'm trying to avoid using a navmesh or a worker thread; mostly I'm just looking for ideas to tighten up my code.

Edited by Crayz92


Have you run a profiler against your code?

50 minutes ago, ApochPiQ said:

Have you run a profiler against your code?

I have.  The two biggest bottlenecks are in point reduction and the priority queue (I'm using BlueRaja's, which claims to be the fastest).

Here's a screenshot from the Unity profiler:

Profiling alone adds a lot of overhead, but I think it gives a good idea of where things sit.  I've never used a profiler other than Unity's.


Actually, the profiler is showing you something slightly different.

Keep in mind that what you care about in this context is not the sum total cost of every call to a function - just the average cost of a single call. If you divide the time spent in a function by its call count, you'll see that the priority queue is dirt cheap per call; you just call it a whole lot.

This is important data on two fronts. First, you should focus your effort primarily on your algorithm: look for ways to avoid calling code that has no effect on the final results of the path search. Second, if you look at the actual places your code is burning a lot of time, you'll get much more visible performance wins.

Unity has a "deep" profiler option that you should also look into.


I took a look at some of the code:

- Heuristic.  Is distance-squared admissible for A*?  I've never seen it used for A* before and that is a warning sign to me.

- FastPriorityQueue is being called an extremely high number of times, but each operation should be EXTREMELY short, so the profiler overhead is likely to be polluting the numbers to the point where you should no longer trust them.  You'll have to move your A* code into a standalone .Net project in order to use a sampling profiler with it, as far as I know.

- I looked at the github repo for FastPriorityQueue, and there are some obvious microoptimizations that can be made to it if you determine that it really is worth it.  Change the Node to a struct, change those properties to plain old fields, perhaps change the binary heap to a cache-friendly heap (after the first few layers of a binary heap you will cache miss on every single layer).

- JPS+ should be tremendously faster than A*, so if it wasn't then there's something significantly wrong with the implementation that you tried.
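On the admissibility question above: distance-squared is not admissible, since it overestimates as soon as the remaining distance exceeds one step, which makes A* behave like an aggressively weighted (greedy) search - often faster, but with no optimality guarantee. A quick worked check in Python:

```python
import math

# Two cells apart on an open 4-connected grid: the true remaining cost
# is 2 unit moves.
true_cost = 2.0
h_squared = (2 - 0) ** 2 + (0 - 0) ** 2   # distance-squared "heuristic" = 4
h_euclid = math.hypot(2, 0)               # Euclidean distance = 2

assert h_squared > true_cost   # overestimates -> inadmissible
assert h_euclid <= true_cost   # never overestimates -> admissible
```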

Edited by Nypyren


I might be wrong here, but it looks like you might be enqueueing more nodes than you really ought to need to, unless you're hitting a really pathological layout.  250x250 is 62500 total nodes, and your profiler result is showing 59600 calls to Enqueue(), which appears to indicate that it is considering a pretty hefty fraction of the total nodes available.  Assuming no obstacles and an appropriate heuristic, A* ought to pretty much go in a straight line from the start to the goal, and only have to put the neighboring nodes of the ideal path on the open (priority queue) set.

You might want to try tweaking your code so that you can step through the algorithm one iteration at a time, and visualize how nodes are getting added to your open and closed sets at each step, so you can see if it is clearly doing something weird.  Five seconds is really far too long for this to take.

Since your connections all appear to be orthogonal, you might want to look at something like Manhattan distance as a heuristic, which tends to work very well with A* in such situations.  That would look something like this, lifted from my pathfinding code:

public static float ManhattanDistance(Point start, Point goal) {
    var dx = Math.Abs(start.X - goal.X);
    var dy = Math.Abs(start.Y - goal.Y);
    var h = dx + dy;

    return h;
}

1 hour ago, Nypyren said:


I've played around with a variety of heuristics; this is the fastest I've come across and it produces acceptable results.  I'll give JPS another try, it's very possible my implementation is messed up.

1 hour ago, ericrrichards22 said:


Ah, I didn't mention that the profiler screenshot is the result of 100 paths in a single frame.  Here's a better shot for a single path:

I fixed up some code and gave Manhattan distance a test run.  Here's a comparison between my current heuristic and Manhattan:

Manhattan without weight puts it at about 2,200ms.  While Manhattan makes for a better-looking path, post-processing takes care of that, so I'm mostly interested in speed.

Edited by Crayz92


Use an instance of the Stopwatch class to time SetDestination with and without deep profiling enabled, and see how much it differs.  I still think the instrumenting profiler is causing too much overhead on the FastPriorityQueue and it would be good to find out how much there is.  It's the biggest offender if the profiler is telling the truth.

3 minutes ago, Nypyren said:


I have my debug script set up just like that.

The results in the two screenshots above are from this Stopwatch snippet, with deep profiling and the profiler disabled completely.


Ok, so with profiling one call takes 31ms, and without it you're at 1-2ms.  Definitely too much overhead from Unity's instrumenting profiler.

If you can, take all of your pathfinding code, make a standalone .Net app out of it, and use a sampling profiler on it.  I think newer versions of Visual Studio have a profiler built in.  I can't tell from the screenshots whether you're on Windows, though; if not, there should also be some free profilers available for other OSes.


• Similar Content

• I recently worked on a pathfinding algorithm used to move an AI agent through an organically generated dungeon.
It's not an easy task, but because I had already worked on Team Fortress 2 maps in the past, I already knew about navigation meshes (navmeshes) and their capabilities.
Why Not Waypoints?
As described in this paper, waypoint networks were used in video games in the past to save valuable resources. It was an acceptable compromise: level designers already knew where NPCs could and could not go. However, as technology evolved, computers got more memory that became faster and cheaper.
In other words, there was a shift from efficiency to flexibility.
In a way, navigation meshes are the evolution of waypoint networks because they fulfill the same need in a different way.
One of the advantages of using a navigation mesh is that an agent can go anywhere within a cell, because the cell is convex: any straight segment between two points in the cell stays inside it. It also means that the agent is not limited to a specific waypoint network, so if the destination is outside the waypoint network, it can go directly to it instead of going to the nearest point in the network. A single navigation mesh can also be used by many types of agents of different sizes, rather than needing a separate waypoint network per agent size.
Using a navigation mesh also speeds up graph exploration because, technically, a navigation mesh has fewer nodes than an equivalent waypoint network (that is, a network that has enough points to cover a navigation mesh).
To summarize, a navigation mesh is a mesh that represents where an NPC can walk. A navigation mesh contains convex polygonal nodes (called cells). Cells are connected to each other through connections defined by an edge shared between them (a portal edge). In a navigation mesh, each cell can contain information about itself; for example, a cell may be labeled as toxic, so that only units capable of resisting the toxicity can move across it.
Personally, because of my experience, I view navigation meshes like the ones found in most Source games.

However, all cells in Source's navigation meshes are rectangular. Our implementation is more flexible because the cells can be irregular polygons (as long as they're convex).
A navigation mesh implementation is actually three algorithms:
- A graph search algorithm
- A string pulling algorithm
- A steering/path-smoothing algorithm
In our case, we used A*, the Simple Stupid Funnel algorithm, and a traditional steering algorithm that is still in development.
Finding our cells
Before doing any graph search, we need to find two things:
- Our starting cell
- Our destination cell
For example, let's use this navigation mesh:

In this navigation mesh, every edge shared between two cells is also a portal edge, which will be used by the string pulling algorithm later on.
Also, let's use these points as our starting and destination points:

Where our buddy (let's name him Buddy) stands is our starting point, while the flag represents our destination.
Because we already have our starting point and our destination point, we just need to check which cell is closest to each point using an octree. Once we know our nearest cells, we must project the starting and destination points onto their respective closest cells.
In practice, we simply project both the starting and destination points onto the planes of their respective cells, along each cell's normal.
Before snapping a projected point, we must first know whether it is outside its cell, by comparing the area of the cell with the sum of the areas of the triangles formed by that point and each edge of the cell. If the latter is noticeably larger than the former, the point is outside its cell.
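The area comparison described above can be sketched like this (Python for illustration, in 2D; the function names are made up, not the project's actual code):

```python
def polygon_area(poly):
    # Shoelace formula for a convex polygon given as (x, y) vertices.
    n = len(poly)
    s = 0.0
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def point_in_convex_cell(p, poly, eps=1e-6):
    # The sum of the triangle areas (p, edge) equals the cell's own area
    # iff p lies inside; a noticeably larger sum means p is outside.
    tri_sum = 0.0
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        tri_sum += polygon_area([p, a, b])
    return tri_sum <= polygon_area(poly) + eps
```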
The snapping then simply consists of interpolating between the vertices of the edge of the cell closest to the projected point. In terms of code, we do this:

Vector3f lineToPoint = pointToProject.subtract(start);
Vector3f line = end.subtract(start);
Vector3f returnedVector3f = new Vector3f().interpolateLocal(start, end, lineToPoint.dot(line) / line.dot(line));

In our example, the starting and destination cells are C1 and C8 respectively:

Graph Search Algorithm
A navigation mesh is actually a 2D graph of unknown or even infinite size. In a 3D game, it is common to represent a navigation mesh as a graph of flat polygons that aren't orthogonal to each other.
There are games that use 3D navigation meshes, like games that use flying AI, but in our case it's a simple grid.
For this reason, the use of the A* algorithm is probably the right solution.
We chose A* because it's the most generic and flexible algorithm. Technically, we still do not know how our navigation mesh will be used, so going with something more generic can have its benefits...
A* works by assigning each cell a cost and a heuristic. The heuristic estimates how far a cell is from our destination: the closer the cell, the lower the value. The cost is accumulated along the path: each cell's cost builds on the cost of the cell we came from. This means that the longer a path gets, the greater its accumulated cost, and the more likely it is that this path is not an optimal one.
We begin the algorithm by traversing the connections to each of the neighboring cells of the current cell until we arrive at the destination cell, doing a sort of exploration/flood fill. Each cell begins with an infinite estimate but, as we explore the mesh, it's updated according to the information we learn. At the start, our starting cell gets a cost and a heuristic of 0 because the agent is already inside it.
We keep a queue of cells ordered by their estimates, so that the next cell to use as the current cell is the best candidate for an optimal path. When a cell is processed, it is moved from that queue into another one that contains the closed cells.
While continuing to explore, we also keep a reference of the connection used to move from the current cell to its neighbor. This will be useful later.
We do this until we end up in the destination cell. Then we "reel" back up to our starting cell, saving each cell we landed on, which gives an optimal path.
A* is a very popular algorithm and the pseudocode can easily be found. Even Wikipedia has a pseudocode that is easy to understand.
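For reference, the whole search described above fits in a few lines; this is a generic Python sketch over any cell graph (not the project's actual code - `neighbors`, `cost`, and `heuristic` are supplied by the caller):

```python
import heapq

def astar(neighbors, cost, heuristic, start, goal):
    # Textbook A* over a cell graph. neighbors(c) yields adjacent cells,
    # cost(a, b) is the traversal cost of a connection, and heuristic(c)
    # estimates the remaining cost to the goal.
    open_heap = [(heuristic(start), start)]
    g = {start: 0.0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:   # "reel" back to the start
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if cur in closed:
            continue
        closed.add(cur)
        for nxt in neighbors(cur):
            tentative = g[cur] + cost(cur, nxt)
            if nxt not in g or tentative < g[nxt]:
                g[nxt] = tentative
                parent[nxt] = cur
                heapq.heappush(open_heap, (tentative + heuristic(nxt), nxt))
    return None   # goal unreachable
```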
In our example, we find that this is our path:

And here are highlighted (in pink) the traversed connections:

The String Pulling Algorithm
String pulling is the next step in the navigation mesh algorithm. Now that we have a queue of cells that describes an optimal path, we have to find a queue of points that an AI agent can travel to. This is where string pulling is needed.
String pulling is in fact not linked to characters at all: it is rather a metaphor.
Imagine a cross. Let's say you wrap a silk thread around this cross and put tension on it. You will find that the string does not follow the inner corners; rather, it goes from outer corner to outer corner of the cross.

This is precisely what we're doing but with a string that goes from one point to another.
There are many different algorithms that let us do this. We chose the Simple Stupid Funnel algorithm because it's actually...
...stupidly simple.
To put it simply (no pun intended), we create a funnel and check each time whether the next point is inside the funnel or not. The funnel is composed of three points: a central apex, a left point (the left apex), and a right point (the right apex). At the beginning, the tested point is on the right side; then we alternate to the left and so on until we reach our destination point (as if we were walking).

When a point is inside the funnel, we continue the algorithm with the other side.
If the point is outside the funnel then, depending on which side the tested point belongs to, we take the apex from the other side of the funnel and add it to the list of final waypoints.
The algorithm works correctly most of the time. However, it had a bug that added the last point twice if none of the vertices of the last connection before the destination point were added to the list of final waypoints. For now we just added an if check, but we could come back later and clean it up.
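For reference, here is a compact Python port of the Simple Stupid Funnel algorithm (after Mikko Mononen's well-known public-domain version, not the project's code), with a final-point guard for the duplicate-last-point case mentioned above:

```python
def triarea2(a, b, c):
    # Twice the signed area of triangle abc; the sign gives the turn direction.
    return (c[0] - a[0]) * (b[1] - a[1]) - (b[0] - a[0]) * (c[1] - a[1])

def string_pull(portals):
    # portals: list of (left, right) vertex pairs, starting with the
    # degenerate portal (start, start) and ending with (goal, goal).
    pts = []
    apex = left = portals[0][0]
    right = portals[0][1]
    apex_i = left_i = right_i = 0
    pts.append(apex)
    i = 1
    while i < len(portals):
        pl, pr = portals[i]
        # Try to narrow the funnel from the right.
        if triarea2(apex, right, pr) <= 0.0:
            if apex == right or triarea2(apex, left, pr) > 0.0:
                right, right_i = pr, i
            else:
                # Right crossed left: the left vertex becomes a waypoint
                # and the new apex; restart the scan from there.
                pts.append(left)
                apex, apex_i = left, left_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        # Mirror case: narrow the funnel from the left.
        if triarea2(apex, left, pl) >= 0.0:
            if apex == left or triarea2(apex, right, pl) < 0.0:
                left, left_i = pl, i
            else:
                pts.append(right)
                apex, apex_i = right, right_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        i += 1
    goal = portals[-1][0]
    if not pts or pts[-1] != goal:   # guard against the duplicated last point
        pts.append(goal)
    return pts
```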
In our case, the funnel algorithm gives this path:

The Steering Algorithm
Now that we have a list of waypoints, we could simply run our character from point to point.
But if there were walls in our geometry, Buddy would run straight into a corner. He wouldn't be able to reach his destination because he isn't small enough to clear the corner walls.
That's the role of the steering algorithm.
Our algorithm is still in heavy development, but its main gist is that we check whether the agent's next position is still inside the navigation mesh. If it isn't, we change the agent's direction so that it doesn't hit the wall like an idiot. There is also a path curving algorithm, but it's still too early to know if we'll use it at all...
We relied on this good document to program the steering algorithm. It's from 1999, but it's still interesting...
With the steering algorithm, we make sure that Buddy moves safely to his destination. (Look how proud he is!)

So, this is the navigation mesh algorithm.
I must say that, throughout my research, there wasn't much pseudocode or code describing the algorithm as a whole. Only then did we realize that what people call a "navmesh" is actually a collage of algorithms rather than a single monolithic one.
We also tried to have a cyclic grid with orthogonal cells (i.e. cells on the walls and ceiling), but it looked like A* wasn't intended to be used in a 3D environment with flat orthogonal cells. My hypothesis is that we need 3D cells for this kind of navigation mesh; otherwise the heuristic value of each cell can be off, depending on the actual 3D distance between the center of a flat cell and the destination point.
So we reduced the scope of our navigation meshes and we were able to move an AI agent in our organic dungeon. Here's a picture :

Each cyan cube is a final waypoint found by the string pulling, and the blue lines represent collision meshes. Our AI currently still walks into walls, but the steering is still being implemented.

• Let's say, on an abstract level, the path A->B->C->D->E is valid, but the agent must choose portal #1 to reach E... Presumably the agent has chosen portal #2, gone to B, C, and D, and finally ended up stuck at D, unable to move over to E... The whole computation is wasted. How do I avoid this problem?
thanks
Jack

• There are a bunch of pathfinding implementations online. But, to be honest, I wasn't much satisfied with most of them, for one of these reasons:
- Dynamic memory allocation in the middle of the algorithm
- An algorithm that does too much (more than what is needed)
- Too many files for just a single task
So I made this two-file (uastar.c and uastar.h) library: https://github.com/ferreiradaselva/uastar
No dynamic memory allocation. Straight to the point (the README.md explains how to use it).
It's nothing biggie, but certainly useful.
Path finder at work:

I'm leaving this in announcements, because I probably won't add more features (it's pretty much done).

• I am not sure I can ask questions about a specific library here, but here goes. I'd like to tag some polys in a navigation mesh as grass, road, etc. I can give an extent to do so, or alternatively feed a geometry in and have the polys tagged that way. But I am also looking into alternative approaches, such as letting the user tag the polys using a text file or a bitmap file (the way heightfields are done). Say I define an area map as a grayscale image with values ranging from 0-255; if the value of the first char is 0, I could map that index to a certain place in the navigation mesh and say it is walkable ground, and so on. But unlike heightfields, where you start with an image and the result is some terrain, when you start with a bitmap for an area map, what do you end up with? You see, I had the geometry already, so the area map probably doesn't make sense here, in the same way the text file doesn't...
Any ideas?
Jack

• Hello guys, I just registered on this site. I heard from my lecturer that this is a good place to discuss these topics, since my research mostly targets programmers experienced with AI who can answer the survey.

The purpose of the survey below is to understand which solution is suitable for 2D platformer pathfinding AI, and which one is easier to implement for a 2D platformer.

I would appreciate it if you gave your responses via the survey link shared, and thank you for spending time answering it. Sorry if the survey is a bit hard to understand; I tried to make it as understandable as I could. Again, thank you!

https://goo.gl/forms/S0etAlAAHL6S5kTI2
• By baabaa
Hello hello. I'm in the preliminary design phase for a space based game, and I need some advice on how to approach the AI side of things.
Here's the situation in a nutshell: say I'm a space explorer with a spaceship, competing with other space explorers to be the first to discover things. I have a procedurally generated 2D top-down solar system, and to make things a little simpler, let's say all the planets in the system are static, i.e. not orbiting their sun. But they all have gravity wells of varying strength. As a player I have to negotiate Newtonian physics around these planets, using engine thrust in the right amounts and with the right timing, to get where I want. That part is not a problem. I'm also willing to assume non-Newtonian rotation, so that neither the AI nor the player needs to account for applying torque to get a correct bearing.
So far I have not mentioned whether this is real-time or turn-based and that's because of my uncertainty around AI.
Problem is I'm not sure how to approach the AI side of things either way. Ideally I'd like to have an AI that can optimize trajectory for speed and/or fuel efficiency, but I have been able to find precious little on the topic on the interwebs. The best I've found so far is the following article from a decade ago, and it does not really point to a solution: http://playtechs.blogspot.ca/2007/05/pathfinding-in-space.html
If I can find a good resource on how to pull this off in realtime, I'd like to go that route. At the moment my fallback is using a turn-based system for exploration and visualizing the system as a hex grid. Then, using A*, I could get AI agents to naively come to a standstill each time they want to change trajectory, then add extra thrust to move in the next direction; but I'm worried this would be extremely under-optimized in terms of fuel efficiency, and the reach of rival ships would suffer as a result. I could also factor the AI ship's current velocity into the graph search, which would likely greatly increase the search space, but I haven't experimented with it yet to be able to say whether it's viable.
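The "factor velocity into the graph search" idea can be sketched as a Dijkstra over (position, velocity) states. This is a hypothetical illustration on a square grid rather than hexes; note that the world must be bounded by the `passable` predicate, or free coasting makes the state space infinite and the search never terminates:

```python
import heapq

def plan_burns(start, goal, passable, max_speed=3):
    # Dijkstra over (position, velocity) states. Each turn we pick a
    # thrust of -1/0/+1 per axis (cost = fuel burned), then drift:
    # pos' = pos + vel'. The goal is to arrive at `goal` at rest.
    start_state = (start, (0, 0))
    dist = {start_state: 0}
    parent = {start_state: None}
    heap = [(0, start_state)]
    while heap:
        fuel, state = heapq.heappop(heap)
        if fuel > dist.get(state, float("inf")):
            continue   # stale heap entry
        pos, vel = state
        if pos == goal and vel == (0, 0):
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]   # list of (position, velocity) waypoints
        for tx in (-1, 0, 1):
            for ty in (-1, 0, 1):
                nvel = (vel[0] + tx, vel[1] + ty)
                if max(abs(nvel[0]), abs(nvel[1])) > max_speed:
                    continue
                npos = (pos[0] + nvel[0], pos[1] + nvel[1])
                if not passable(npos):
                    continue
                nstate = (npos, nvel)
                nfuel = fuel + abs(tx) + abs(ty)   # coasting is free
                if nfuel < dist.get(nstate, float("inf")):
                    dist[nstate] = nfuel
                    parent[nstate] = state
                    heapq.heappush(heap, (nfuel, nstate))
    return None
```

Because coasting costs nothing, the planner naturally prefers burn-coast-brake trajectories, which is exactly the fuel-efficiency behavior the naive stop-at-every-node approach loses.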
Any thoughts?

• Hello, I'm trying to implement an enemy pathfinding algorithm, but I have a problem with empty-tile collision when moving the enemy to a node.

This image shows how it should move:

But it gets stuck at the last tile:

It happens because the enemy collides with the right side of an "air" tile, and that node is then removed from the node list because it "collided"; it works with non-"air" tiles, of course. How do I fix this problem?
Code:
void Enemy::generateMoveToNode(AStar::Vec2i lastNode) {
    auto lastSave = AStar::Vec2i{ 0.0f, 0.0f };
    while (!target.empty()) {
        if (target.back().y == lastNode.y) {
            lastSave = target.back();
            target.pop_back();
        } else {
            moveToNodes.push_back(lastSave);
            moveToNodes.push_back(target.back());
            generateMoveToNode(target.back());
            return;
        }
    }
    moveToNodes.push_back(lastSave);
}

void Enemy::updateTarget(std::shared_ptr<InputManager> inputManager) {
    if (moveToNodes.empty()) return;

    // Calculate half sizes.
    float halfWidthA = getSize(0) / 2.0f;
    float halfHeightA = getSize(1) / 2.0f;
    float halfWidthB = 32.0f / 2.0f;
    float halfHeightB = 32.0f / 2.0f;

    // Calculate centers.
    auto centerA = glm::vec2(getPosition(0) + halfWidthA, getPosition(1) + halfHeightA);
    auto centerB = glm::vec2((moveToNodes.front().x * 32.0f) + halfWidthB, (moveToNodes.front().y * 32.0f) + halfHeightB);

    // Calculate current and minimum-non-intersecting distances between centers.
    float distanceX = centerA.x - centerB.x;
    float distanceY = centerA.y - centerB.y;
    float minDistanceX = halfWidthA + halfWidthB;
    float minDistanceY = halfHeightA + halfHeightB;

    setKey(inputManager->getKeyBinding("Move Left"), false);
    setKey(inputManager->getKeyBinding("Move Right"), false);
    setKey(inputManager->getKeyBinding("Jump"), false);
    setKey(inputManager->getKeyBinding("Duck"), false);

    // Not intersecting the node yet: steer toward it.
    if (abs(distanceX) >= minDistanceX || abs(distanceY) >= minDistanceY) {
        if (moveToNodes.front().y > ceil(getPosition(1) / 32.0f))
            setKey(inputManager->getKeyBinding("Jump"), true);
        else if (moveToNodes.front().y < ceil(getPosition(1) / 32.0f)) {
            if (getCanClimb())
                setKey(inputManager->getKeyBinding("Duck"), true);
        } else {
            if (moveToNodes.front().x < ceil(getPosition(0) / 32.0f))
                setKey(inputManager->getKeyBinding("Move Left"), true);
            else if (moveToNodes.front().x > floor(getPosition(0) / 32.0f))
                setKey(inputManager->getKeyBinding("Move Right"), true);
        }
        updateInput(inputManager);
        return;
    }

    // Calculate intersection depths.
    float depthX = distanceX > 0 ? minDistanceX - distanceX : -minDistanceX - distanceX;
    float depthY = distanceY > 0 ? minDistanceY - distanceY : -minDistanceY - distanceY;

    updateInput(inputManager);
    moveToNodes.erase(moveToNodes.begin());
}

generateMoveToNode: recursive function that generates all the nodes.
updateTarget: updates the enemy every frame, checks whether it has hit a node, removes that node from the list, and moves on to the next until no nodes are left.

• When my program generates navigation meshes for my geometry, with an extent of 5000x5000 (m) and a cell size of 5.0 meters, it has to generate 250,000 cells, which makes my computer halt. The target object is a truck with a radius of about 20 meters. Should I give the cell size a value of 20? I don't really want to use this navigation mesh for one type of vehicle only; I also want to use it for vehicles like cars, which are only about 5 meters in radius... Generation speed is one problem, and memory consumption is another.
Any ideas on how to improve this?
Thanks
Jack

• 3D Rigger Mixamo Is Making Some Changes
Originally founded in 2008, Mixamo has been a go-to source for 3D animation and rigging for nearly a decade. On August 28th, 2013, Mixamo launched a platform called Face Plus. It was designed to allow developers or animators to turn facial expressions recorded with even a webcam into top-notch 3D animation. This is not unlike what Holotech Studios' "FaceRig" does for streamers and video creators.
Back in May 2017, Adobe (publishers of the Mixamo platform since 2015) announced the end of an era: Mixamo's Face Plus will be removed on August 22nd, 2017, along with several other free features.
In a post on the Mixamo website, Adobe lays out what is changing and what is staying the same.

Set out as bullet points in the post, they seem to be cleaning house on the "free" features, but they are keeping a small handful of their main key points, such as:
Select and use characters from Mixamo’s Character Collection
Upload 3D characters to get them ready to animate using the Auto-Rigger
Apply animations to characters
Noticeably, the list that’s getting 86’ed includes major tools and the ability to save assets from their site:
The ability to save and retrieve assets in “My Assets”.
The Face Plus facial animation plugin
The Decimator polygon reduction tool
Download types for .bvh and .biped that streamline integration with 3rd party applications
Mixamo forums will close and all help articles will be moved to Adobe’s HelpX
Don't Worry! There's still time!

They’re allowing people to still download the tools and plug-ins until August 22nd, as well as to save their assets directly from the site. Those that use Mixamo, utilize Face Plus, or even just do animation are highly encouraged to take advantage and download the tools in the next week.
UPDATE (8/15/17): Adobe would like to stress that Mixamo itself is not going anywhere, just the Face Plus features, the tools mentioned above, and a couple of other features, to make the platform more "streamlined and a little modernized". Once the dust clears next week from the changes, I'll reach out to Adobe for a more guided tour of the new setup. Additionally, I have changed the title of this article to reflect the clarification. Apologies for the inconvenience to those that have already read the article.
