Path finding for a continuously chasing agent?

I am currently coding shortest-path-finding behavior for an agent in a level. This agent is supposed to continuously chase the player character wherever he goes in the level. Since the player character will often change location during the course of the game, the agent will have to continuously re-evaluate its path as the start and destination graph nodes keep changing. I am worried about the cost of re-running the shortest-path code every few frames. Are there any tricks or techniques that let an enemy agent successfully chase the player character without continuously running the path-finding code?
It is highly likely that you can keep the same path as before the player moved, and simply extend it by calculating the shortest path from the old player-position to the new player-position.
Then you need to add some checks to see if the player has changed direction etc., in which case you might need to discard part of the path and re-calculate it from an earlier position. It all depends on the environment and on exactly how accurate and responsive the chasing agent should be. Does it need to instantly and always know the exact closest path to the player, or is a good approximation enough?
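Something like this sketch, assuming you already have a findPath() routine to call (the names and the std::function signature are just illustrative):

#include <functional>
#include <vector>

using Node = int;
using PathFn = std::function<std::vector<Node>(Node, Node)>;

// Extend the cached path by appending the (usually very short) path from the
// player's old position to their new one, instead of replanning from scratch.
void extendPath(std::vector<Node>& cachedPath, Node oldGoal, Node newGoal,
                const PathFn& findPath)
{
    std::vector<Node> extension = findPath(oldGoal, newGoal);
    if (extension.empty())
        return;
    // extension[0] duplicates oldGoal, which is already the cached path's tail.
    cachedPath.insert(cachedPath.end(), extension.begin() + 1, extension.end());
}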
I can see a problem with Erik's proposed solution (EDIT: I realize Erik is aware of the problem, but I am clarifying it, in case it wasn't obvious to the OP). Let's say I am standing near a column. The agent plans to get to me passing the column on one side. Now I move around the column and start running towards the agent. If the agent simply concatenates pieces of path to the original path, he will have to go around the column and then come back, even if we are next to each other. The situation gets even worse: If I decide to loop three times around the column before I flee, the agent may do the same thing. Perhaps this behavior can be patched in some ways, but I would be worried about this type of not-so-smart behavior.

Depending on the complexity of your environment, perhaps you can get by with a much simpler approach. If your environment is quite open, you can probably just use steering behaviors. If it's more like an urban environment but it's static, perhaps you can have a very rough description of your level as a graph of N connected areas (think rooms) and a precomputed NxN table with the distances between any two areas. There are a number of ways in which you can use that information to get decent paths quickly. For instance, you can run A* "on a budget", where you stop the search before it's finished and use the best path found so far, and use the NxN table as part of the heuristic.
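A rough sketch of what A* "on a budget" might look like (the Graph layout, names, and the area-table indexing are assumptions for illustration, not anything from this thread):

#include <algorithm>
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

struct Graph {
    // neighbors[n] = list of (neighbor node, edge cost)
    std::vector<std::vector<std::pair<int, float>>> neighbors;
    std::vector<int> areaOf;   // coarse area containing each node
};

std::vector<int> budgetedAStar(const Graph& g,
                               const std::vector<std::vector<float>>& areaDist,
                               int start, int goal, int maxExpansions)
{
    // Heuristic: precomputed distance between the coarse areas of n and goal.
    auto h = [&](int n) { return areaDist[g.areaOf[n]][g.areaOf[goal]]; };

    using QItem = std::pair<float, int>;   // (f-cost, node)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<>> open;
    std::unordered_map<int, float> gCost{{start, 0.f}};
    std::unordered_map<int, int> parent;

    open.push({h(start), start});
    int best = start;   // node that looks closest to the goal so far

    for (int expanded = 0; !open.empty() && expanded < maxExpansions; ++expanded) {
        int n = open.top().second;
        open.pop();
        if (n == goal) { best = goal; break; }
        if (h(n) < h(best)) best = n;
        for (auto [m, cost] : g.neighbors[n]) {
            float tentative = gCost[n] + cost;
            if (!gCost.count(m) || tentative < gCost[m]) {
                gCost[m] = tentative;
                parent[m] = n;
                open.push({tentative + h(m), m});
            }
        }
    }

    // Walk back from 'best': the full path if we reached the goal, otherwise
    // the best partial path found within the budget.
    std::vector<int> path;
    for (int n = best;; n = parent[n]) {
        path.push_back(n);
        if (n == start) break;
    }
    std::reverse(path.begin(), path.end());
    return path;
}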

Perhaps if you describe the type of environment you are working with, we'll be able to give you better advice.
The level is somewhat open terrain with two open spaces connected by a narrow path with slopes on both sides. In one of the open spaces there is an aeroplane. There are also a few trees in the open terrain.

Originally, the agent's chasing code was simple: the agent just rotates towards the direction of the player character and chases it. The areas where the agent becomes "stuck" are the narrow path with slopes on both sides and the area where the aeroplane is.

Hence, I decided to implement path finding to prevent the agent from getting stuck in these trickier areas.

There are a few things you can do which can reduce the problem.

The first is quite obvious and also the most time consuming: speed up your path finding so that its processing cost for short paths is negligible. Of course, without knowing more about how you represent your environment, it's hard to make any detailed suggestions about this one.

Secondly, you could incorporate steering behaviour as previously suggested. The problem is that your agents might get stuck in concavities, meaning that you'll need to fall back on the path finding to resolve these situations. Thus, meshing the two systems seamlessly together might not be easy, as I imagine identifying that your agent is stuck could only happen once it is stuck, in which case the player will notice that the agent is obviously controlled by a computer.

Thirdly, when you calculate your agent's new destination as a point relative to the player, see if it can move directly there by performing a swept-volume test or a combination of raycasting collision tests. If so, there is no need to run your pathfinder; otherwise, run the pathfinder at regular intervals. This can be a dramatic improvement, because if your agent follows the player closely, the player will be in sight most of the time. It also gives very smooth visual movement, as the agent revises its destination every frame most of the time, and it allows you to use a greater time gap between pathfinds when the player is out of sight.
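A sketch of what that update loop might look like (hasLineOfSight(), findPath(), moveToward(), followPath(), and the 0.5-second replan interval are all placeholders for whatever your engine provides):

#include <vector>

struct Vec3 { float x, y, z; };

bool hasLineOfSight(const Vec3& from, const Vec3& to);        // engine collision query
std::vector<Vec3> findPath(const Vec3& from, const Vec3& to); // your pathfinder

struct Chaser {
    Vec3 position{};
    std::vector<Vec3> path;
    float replanTimer = 0.f;

    void moveToward(const Vec3& target, float dt);   // movement code elsewhere
    void followPath(float dt);

    void update(const Vec3& playerPos, float dt)
    {
        if (hasLineOfSight(position, playerPos)) {
            // Player visible: steer straight at them; no pathfinding at all.
            path.clear();
            moveToward(playerPos, dt);
            return;
        }
        // Player hidden: follow the cached path, replanning only at fixed
        // intervals rather than every frame.
        replanTimer -= dt;
        if (replanTimer <= 0.f || path.empty()) {
            path = findPath(position, playerPos);
            replanTimer = 0.5f;   // assumed interval; tune to taste
        }
        followPath(dt);
    }
};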

Hope this helps,
Jackson Allan
Another thought to throw out there: What about variational methods? Presumably last frame's optimal path is a good guess for this frame's optimal path, since neither the player nor the agent can move very much in one turn.

Rather than trying to get uselessly technical here, I'll use a physical analogy: Pretend that the path between agents is a rubber band. As agents move, you "physically simulate" what happens to the rubber band.
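A rough sketch of that rubber-band relaxation, with isBlocked() and pushOutOfObstacle() standing in for your own collision queries:

#include <vector>

struct Vec2 { float x, y; };
Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
Vec2 operator*(Vec2 a, float s) { return {a.x * s, a.y * s}; }

bool isBlocked(const Vec2& p);            // point inside an obstacle?
Vec2 pushOutOfObstacle(const Vec2& p);    // nearest free point

void relaxRubberBand(std::vector<Vec2>& band,
                     const Vec2& agentPos, const Vec2& playerPos,
                     int iterations = 4, float tension = 0.5f)
{
    if (band.size() < 2)
        return;
    band.front() = agentPos;              // endpoints track the two actors
    band.back()  = playerPos;

    for (int it = 0; it < iterations; ++it) {
        for (size_t i = 1; i + 1 < band.size(); ++i) {
            // Pull each interior point toward the midpoint of its neighbors.
            Vec2 mid = (band[i - 1] + band[i + 1]) * 0.5f;
            band[i] = band[i] * (1.f - tension) + mid * tension;
            // The band snags on geometry rather than passing through it.
            if (isBlocked(band[i]))
                band[i] = pushOutOfObstacle(band[i]);
        }
    }
}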
Quote: Original post by Emergent
Another thought to throw out there: What about variational methods? Presumably last frame's optimal path is a good guess for this frame's optimal path, since neither the player nor the agent can move very much in one turn.

Rather than trying to get uselessly technical here, I'll use a physical analogy: Pretend that the path between agents is a rubber band. As agents move, you "physically simulate" what happens to the rubber band.


Following your analogy, the rubber band is likely to get caught around a column, repeating the problem with Erik's suggestion.
Quote: Original post by alvaro
Following your analogy, the rubber band is likely to get caught around a column, repeating the problem with Erik's suggestion.


Yeah, it's not a total solution; I think you'd still need to do a fresh graph search every once in a while. It would generate better paths than just concatenating paths though; these paths would at least be locally optimal.

Another, unrelated thought: instead of chasing where the player is, you should really be chasing where the player will be, no? If your predictions are good, you may be able to reduce the frequency with which you replan.
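A minimal dead-reckoning sketch of that prediction idea (everything here is illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };

// Extrapolate the player's position by their current velocity over the
// estimated time it would take the agent to reach them.
Vec3 predictTarget(const Vec3& playerPos, const Vec3& playerVel,
                   const Vec3& agentPos, float agentSpeed)
{
    float dx = playerPos.x - agentPos.x;
    float dy = playerPos.y - agentPos.y;
    float dz = playerPos.z - agentPos.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    float eta = (agentSpeed > 0.f) ? dist / agentSpeed : 0.f;

    // Aim at where the player will be in 'eta' seconds, not where they are.
    return { playerPos.x + playerVel.x * eta,
             playerPos.y + playerVel.y * eta,
             playerPos.z + playerVel.z * eta };
}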
One solution that hasn't been mentioned so far is using a hierarchy. If you have sectors/areas, then the high-level path through them won't change much. All you have to recalculate is the path within the last sector around the player.

When the player changes sectors, you need to recalculate the whole thing, but due to the hierarchy you can lazily avoid calculating low-level paths until you need them...
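A sketch of how that caching might look, with sectorOf(), sectorRoute(), detailedPath(), and entranceOf() standing in for your own data structures:

#include <vector>

using Sector = int;
struct Vec2 { float x, y; };

// Assumed helpers:
Sector sectorOf(const Vec2& p);                                    // coarse lookup
std::vector<Sector> sectorRoute(Sector from, Sector to);           // high-level search
std::vector<Vec2> detailedPath(const Vec2& from, const Vec2& to);  // low-level search
Vec2 entranceOf(const std::vector<Sector>& route, Sector current); // next portal

struct HierarchicalChaser {
    std::vector<Sector> route;        // cached coarse route to the player
    Sector cachedPlayerSector = -1;

    std::vector<Vec2> plan(const Vec2& agentPos, const Vec2& playerPos)
    {
        Sector agentSector  = sectorOf(agentPos);
        Sector playerSector = sectorOf(playerPos);

        // Only recompute the coarse route when the player changes sectors.
        if (playerSector != cachedPlayerSector) {
            route = sectorRoute(agentSector, playerSector);
            cachedPlayerSector = playerSector;
        }

        if (agentSector == playerSector)
            // Same sector as the player: this short search is all that runs
            // on a typical update.
            return detailedPath(agentPos, playerPos);

        // Otherwise only plan in detail as far as the next sector portal;
        // the rest of the route stays coarse until we actually get there.
        return detailedPath(agentPos, entranceOf(route, agentSector));
    }
};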


I seem to recall seeing a solution somewhere. You sub-divide your existing path somehow... let's say pick the mid-point... and run your concatenation from there. This has some of the benefits of adding on to the end (in that you are only doing a partial pathfind rather than a full) and it helps to mitigate the situation of doubling back mentioned above.

You can even select your starting point (rather than the mid-point) based on your current path length compared to the travel speed of the target. That is, as you get closer, you potentially repath more of your existing path. If the target is far away, you can pick a point closer to your current endpoint.

Example. If your current path is 100 nodes long and the target moves 1 per turn, you could elect to repath from the 90th step. If your current path is 20 nodes long, you would do it from the 10th step. Either way, your repath is only the final 10 steps.
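In code, that tail-repathing might look something like this (findPath() stands in for your existing pathfinder; replanWindow = 10 matches the example above):

#include <cstddef>
#include <vector>

using Node = int;

std::vector<Node> findPath(Node from, Node to);   // your existing pathfinder

// Recompute only the last 'replanWindow' steps of the cached path, splicing
// the fresh tail onto the untouched prefix: repath from step 90 of 100, or
// from step 10 of 20, and either way only the final 10 steps are redone.
void repathTail(std::vector<Node>& path, Node playerNode,
                std::size_t replanWindow = 10)
{
    if (path.empty())
        return;
    std::size_t start =
        path.size() > replanWindow ? path.size() - replanWindow : 0;
    std::vector<Node> freshTail = findPath(path[start], playerNode);

    path.resize(start);               // keep the prefix we are not revising
    path.insert(path.end(), freshTail.begin(), freshTail.end());
}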

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

