Using flocking to avoid non-circular obstacles in an RTS setting


I'm working on a movement system for a pet project of mine. I'm using an A* algorithm on a navmesh created with Delaunay triangulation (each navmesh vertex lies on the vertex of some obstacle). The A* search also accounts for unit radius and prevents units from moving through choke points and corridors that are too small. The navmesh is only built to path around terrain and immobile structures (anything that is a static object). I've also started implementing steering to keep units from colliding with dynamic objects and to flock allied units.
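[Editor's note: a minimal sketch of the clearance test described above; the Vec2 type and the diameter-vs-edge-length check are illustrative assumptions, not the project's actual code.]

```cpp
#include <cmath>

struct Vec2 { float x = 0, y = 0; };

// A unit of radius r can only cross a navmesh edge if the gap is at
// least as wide as the unit's diameter; narrower edges are rejected
// during the search, which keeps units out of too-tight corridors.
bool CanTraverseEdge(const Vec2& a, const Vec2& b, float unitRadius) {
    float dx = b.x - a.x, dy = b.y - a.y;
    return std::hypot(dx, dy) >= 2.0f * unitRadius;
}
```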

Here's the problem I have. When I don't include steering forces, units are guaranteed not to run into any static objects. When I include steering forces, the units in theory behave nicely in relation to each other, but they can now be pushed into static objects. I would handle static objects with the same avoidance behavior I plan to use for dynamic objects, but that approach requires both objects to have a radius (i.e., be circular).

Is there any nice way to prevent my units from running into walls that has a relatively quick running time? Keep in mind that I have to perform this operation every frame for possibly 50-100 units.


It sounds like you have both systems (A* and steering) affecting things at once. They should at most affect things alternately, not simultaneously. But it also sounds like you have a more fundamental problem of using two rather incompatible systems. As soon as you use steering to avoid a moving object, your A* solution goes out the window. When your heuristics kick in and do collision-avoidance steering, you must immediately select a new location to move to (to steer to) and run A* to that point, adding in the object you're avoiding, since you use A* for static collision avoidance and fundamentally rely on it to get from A to B. This means no real "steering", just recalculating A* when things are about to go wrong.

The other way is to switch from A* to heuristics for collision avoidance, then switch back once done. But the heuristics would have to handle it all (static and dynamic obstacles), or, as in the first method, call A* as needed.
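[Editor's note: a minimal sketch of this alternating scheme; the types, the panic margin, and the head-away-from-threat heuristic are all illustrative assumptions.]

```cpp
#include <cmath>

struct Vec2 { float x = 0, y = 0; };

struct Unit {
    Vec2 pos, vel;
    Vec2 waypoint;        // current A* subgoal
    bool avoiding = false;
};

static float Dist(const Vec2& a, const Vec2& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Alternate, don't blend: either follow the A* waypoint or dodge the
// nearby mover, never both at once. threatPos/threatRadius stand for
// the closest dynamic obstacle this frame.
void UpdateMovement(Unit& u, const Vec2& threatPos, float threatRadius, float dt) {
    const float panicDistance = threatRadius + 2.0f;   // arbitrary margin
    if (Dist(u.pos, threatPos) < panicDistance) {
        // Heuristic avoidance: head directly away from the threat.
        Vec2 away{ u.pos.x - threatPos.x, u.pos.y - threatPos.y };
        float len = std::hypot(away.x, away.y);
        if (len > 0) u.vel = { away.x / len, away.y / len };
        u.avoiding = true;
    } else {
        if (u.avoiding) {
            u.avoiding = false;
            // Clear of the threat: this is the point where you would
            // re-run A* from u.pos before resuming path following.
        }
        Vec2 to{ u.waypoint.x - u.pos.x, u.waypoint.y - u.pos.y };
        float len = std::hypot(to.x, to.y);
        if (len > 0) u.vel = { to.x / len, to.y / len };
    }
    u.pos.x += u.vel.x * dt;
    u.pos.y += u.vel.y * dt;
}
```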

If A* isn't fast enough to be called as often as needed, then you'll have to get clever.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php


It sounds like you have both systems (A* and steering) affecting things at once. They should at most affect things alternately, not simultaneously. But it also sounds like you have a more fundamental problem of using two rather incompatible systems. As soon as you use steering to avoid a moving object, your A* solution goes out the window. When your heuristics kick in and do collision-avoidance steering, you must immediately select a new location to move to (to steer to) and run A* to that point, adding in the object you're avoiding, since you use A* for static collision avoidance and fundamentally rely on it to get from A to B. This means no real "steering", just recalculating A* when things are about to go wrong.

I'm using A* to set subgoals for my units, and steering to arrive at the goals. It's the way many RTS games do pathfinding, so I feel quite confident that the two can be compatible. That's not really the issue, though; I'm pretty sure my problem would occur even in a simulation that only used steering with seek and flocking behaviors.

Edit: Just for clarification, the only thing A* does is set the destination for the "seek" steering behavior.
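[Editor's note: for reference, the classic Reynolds "seek" that such a waypoint would feed, as a minimal sketch; the types are illustrative.]

```cpp
#include <cmath>

struct Vec2 { float x = 0, y = 0; };

// Reynolds' "seek": the desired velocity points at the target (the
// waypoint A* handed us) at full speed; the steering force returned
// is desired velocity minus current velocity.
Vec2 Seek(const Vec2& pos, const Vec2& vel, const Vec2& target, float maxSpeed) {
    Vec2 desired{ target.x - pos.x, target.y - pos.y };
    float len = std::hypot(desired.x, desired.y);
    if (len > 0) {
        desired.x = desired.x / len * maxSpeed;
        desired.y = desired.y / len * maxSpeed;
    }
    return { desired.x - vel.x, desired.y - vel.y };
}
```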

Have you gone through all of Craig Reynolds' stuff? You can use avoidance with other agents as well as with obstacles... just not necessarily with a pure flocking system. Instead, just local avoidance steering with all agents heading toward the same goal should do the trick. After all, if they have the same goal, they will automatically do the "alignment" part of flocking, won't they?
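[Editor's note: a minimal sketch of that suggestion; the names and the linear falloff are assumptions. Each agent seeks the shared goal and adds a separation push away from close neighbours, with no explicit alignment term.]

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

// Local avoidance only: seek the shared goal, push away from anyone
// closer than separationRadius. Heading to the same goal aligns the
// group for free.
Vec2 SteerToGoal(const Vec2& pos, const Vec2& goal,
                 const std::vector<Vec2>& neighbours, float separationRadius) {
    Vec2 steer{ goal.x - pos.x, goal.y - pos.y };
    float len = std::hypot(steer.x, steer.y);
    if (len > 0) { steer.x /= len; steer.y /= len; }

    for (const Vec2& n : neighbours) {
        float dx = pos.x - n.x, dy = pos.y - n.y;
        float d = std::hypot(dx, dy);
        if (d > 0 && d < separationRadius) {
            float push = (separationRadius - d) / separationRadius; // stronger when closer
            steer.x += dx / d * push;
            steer.y += dy / d * push;
        }
    }
    return steer; // caller normalizes/clamps to max force
}
```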

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

 

I'm using A* to set subgoals for my units, and steering to arrive at the goals.

That's the problem: you should do the opposite. Choose goals with heuristics about nice places to be at (e.g. uniformly dispersed units in open spaces), and reach the goals with an algorithm that guarantees obstacle avoidance (A* with mobile obstacles added to the navigation graph).

If A* is expensive, you can "cheat" by not recomputing the whole path and giving up optimality: when an obstacle appears between intermediate waypoints A and B, you can recompute (every turn) a shortest path through moving obstacles from the current position to B (in most practical cases a very small graph). Insisting on reaching the "natural" waypoints should be a reasonably smart behaviour for most units and most environments.
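[Editor's note: a minimal sketch of this local repair, assuming a small adjacency-list graph where nodes occupied by movers this turn are flagged as blocked; plain Dijkstra stands in for A*, since the local graph is tiny and needs no heuristic.]

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; float cost; };

// Re-search only the leg from the current node to the next waypoint B.
// Returns the node sequence start..goalB, or an empty path if B is
// currently cut off by moving obstacles.
std::vector<int> LocalRepair(const std::vector<std::vector<Edge>>& graph,
                             const std::vector<bool>& blocked,
                             int start, int goalB) {
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> dist(graph.size(), INF);
    std::vector<int> prev(graph.size(), -1);
    using Item = std::pair<float, int>;               // (cost so far, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> open;
    dist[start] = 0.0f;
    open.push({0.0f, start});
    while (!open.empty()) {
        auto [d, n] = open.top();
        open.pop();
        if (n == goalB) break;
        if (d > dist[n]) continue;                    // stale queue entry
        for (const Edge& e : graph[n]) {
            if (blocked[e.to]) continue;              // a mover sits on this node
            float nd = d + e.cost;
            if (nd < dist[e.to]) {
                dist[e.to] = nd;
                prev[e.to] = n;
                open.push({nd, e.to});
            }
        }
    }
    std::vector<int> path;
    for (int n = goalB; n != -1 && dist[goalB] < INF; n = prev[n])
        path.insert(path.begin(), n);
    return path;
}
```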

Omae Wa Mou Shindeiru

 

I'm using A* to set subgoals for my units, and steering to arrive at the goals.

That's the problem: you should do the opposite. Choose goals with heuristics about nice places to be at (e.g. uniformly dispersed units in open spaces), and reach the goals with an algorithm that guarantees obstacle avoidance (A* with mobile obstacles added to the navigation graph).

If A* is expensive, you can "cheat" by not recomputing the whole path and giving up optimality: when an obstacle appears between intermediate waypoints A and B, you can recompute (every turn) a shortest path through moving obstacles from the current position to B (in most practical cases a very small graph). Insisting on reaching the "natural" waypoints should be a reasonably smart behaviour for most units and most environments.

Not to sound rude, but I'm pretty sure the way I'm doing it is a very viable way to simulate movement in an RTS. I'm fairly certain StarCraft 2 uses the same method I'm using (in fact, that's where I pulled many of my ideas from). If you could explain why the method I'm using is unfeasible, and point me to a link further detailing the method you are proposing, I'd be very grateful :)

Have you gone through all of Craig Reynolds' stuff? You can use avoidance with other agents as well as with obstacles... just not necessarily with a pure flocking system. Instead, just local avoidance steering with all agents heading toward the same goal should do the trick. After all, if they have the same goal, they will automatically do the "alignment" part of flocking, won't they?

I've gone through his stuff. The obstacle avoidance algorithm he explains relies on the obstacles being circular (or being given a radius), but the obstacles I am using are polygons.

Not to sound rude, but I'm pretty sure the way I'm doing it is a very viable way to simulate movement in an RTS. I'm fairly certain StarCraft 2 uses the same method I'm using (in fact, that's where I pulled many of my ideas from). If you could explain why the method I'm using is unfeasible, and point me to a link further detailing the method you are proposing, I'd be very grateful :)


Not to sound rude, but *you* are worried about flocking forces pushing agents into obstacles, *you* admit that getting guarantees from flocking algorithms is quite impractical, and *you* think that your method isn't good enough.

Applying A* to a graph of moving obstacles is simply the most straightforward way to avoid collisions with obstacles reliably; if you cannot accept collisions with obstacles, the use of flocking-like heuristics needs to be restricted to choosing movement destinations (e.g. surrounded agents want to go to the centroid of their designated neighbours) because they cannot be allowed to move agents directly.

Regarding object and obstacle shapes, you can generalize from point-point distances (two circles) to point-polygon distances (circle and polygon), and, if needed, to polygon-polygon distances. Polygons can be assumed to be convex (after decomposing concave ones appropriately).
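[Editor's note: a minimal sketch of the point-polygon case; the types and names are assumptions. The circle-to-polygon distance is the distance from the centre to the nearest boundary point, minus the radius.]

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

// Closest point to p on segment ab: clamp the projection onto the segment.
static Vec2 ClosestOnSegment(const Vec2& a, const Vec2& b, const Vec2& p) {
    float abx = b.x - a.x, aby = b.y - a.y;
    float denom = abx * abx + aby * aby;
    if (denom <= 0.0f) return a;                      // degenerate edge
    float t = ((p.x - a.x) * abx + (p.y - a.y) * aby) / denom;
    t = std::fmax(0.0f, std::fmin(1.0f, t));
    return { a.x + t * abx, a.y + t * aby };
}

// Distance from a circle (unit) to a polygon boundary: nearest point on
// any edge, minus the radius. Negative means the unit overlaps the edge;
// a complete test would also handle a centre fully inside the polygon.
float CircleToPolygonDistance(const Vec2& centre, float radius,
                              const std::vector<Vec2>& poly) {
    float best = std::numeric_limits<float>::infinity();
    for (std::size_t i = 0; i < poly.size(); ++i) {
        Vec2 q = ClosestOnSegment(poly[i], poly[(i + 1) % poly.size()], centre);
        best = std::fmin(best, std::hypot(centre.x - q.x, centre.y - q.y));
    }
    return best - radius;
}
```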

Omae Wa Mou Shindeiru

Not to sound rude, but I'm pretty sure the way I'm doing it is a very viable way to simulate movement in an RTS. I'm fairly certain StarCraft 2 uses the same method I'm using (in fact, that's where I pulled many of my ideas from). If you could explain why the method I'm using is unfeasible, and point me to a link further detailing the method you are proposing, I'd be very grateful :)


Not to sound rude, but *you* are worried about flocking forces pushing agents into obstacles, *you* admit that getting guarantees from flocking algorithms is quite impractical, and *you* think that your method isn't good enough.

Applying A* to a graph of moving obstacles is simply the most straightforward way to avoid collisions with obstacles reliably; if you cannot accept collisions with obstacles, the use of flocking-like heuristics needs to be restricted to choosing movement destinations (e.g. surrounded agents want to go to the centroid of their designated neighbours) because they cannot be allowed to move agents directly.

Regarding object and obstacle shapes, you can generalize from point-point distances (two circles) to point-polygon distances (circle and polygon), and, if needed, to polygon-polygon distances. Polygons can be assumed to be convex (after decomposing concave ones appropriately).

Sorry if I came across as sounding asshole-ish, but that was not my intention! It seems like you're a bit annoyed at me lol, sorry! When you said "A* with mobile obstacles added to the navigation graph", what did you mean? Having the navigation graph account for mobile obstacles by creating additional meshes (in my case, additional triangles, since I'm using a triangular navmesh)? Or did you mean something else?

Also, when I said that A* determines my subgoals/waypoints, I didn't mean that it determines exactly where a unit goes. A* determines a waypoint destination, but the condition for arriving at the waypoint is not "I am at point (x, y)"; it's more like checking whether the agent is within some radius of the waypoint (I'm not sure if this is the best way to do it yet, I'm still working on arrival behaviors haha).
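[Editor's note: a minimal sketch of that arrival test; the names are assumptions. Comparing squared distances avoids the square root.]

```cpp
struct Vec2 { float x = 0, y = 0; };

// "Close enough" arrival: advance to the next A* subgoal once the unit
// is within acceptRadius of the current one, instead of requiring an
// exact position match.
bool ReachedWaypoint(const Vec2& pos, const Vec2& waypoint, float acceptRadius) {
    float dx = waypoint.x - pos.x, dy = waypoint.y - pos.y;
    return dx * dx + dy * dy <= acceptRadius * acceptRadius;
}
```

One plausible choice is an acceptRadius a little larger than the unit's own radius, so grouped units don't all fight to stand on the exact waypoint.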

Finally, could you link some example of point-polygon obstacle avoidance, if you don't mind? On the Reynolds webpage there isn't any method for dealing with point-polygon that I could easily find. Thanks a bunch!

Feeler-based steering works with any type of object. I don't know where you're getting this "circular" bit from. See obstacle. Steer away from obstacle.

http://www.red3d.com/cwr/steer/Containment.html

http://www.red3d.com/cwr/steer/Unaligned.html

http://www.red3d.com/cwr/steer/Doorway.html

The typical method of doing long-distance navigation is A* for the path and local steering for dynamic obstacle avoidance. In fact, that's what StarCraft 2 did. They only used a subset of the traditional flocking stuff here and there for effect.

Looking back at your OP, it sounds like you are thinking the wrong way about steering forces. If things are getting "pushed" by steering, something is wrong. Things should just turn and change speed based on potential collision (as in the above links).
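[Editor's note: a minimal sketch of a single feeler against polygonal walls; the segment-intersection test and the push-back response are illustrative assumptions. Reynolds' containment behavior (linked above) steers along the wall normal instead.]

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x = 0, y = 0; };
struct Segment { Vec2 a, b; };  // one wall edge of a polygonal obstacle

// Segment-vs-segment intersection (feeler vs. wall edge); returns true
// and the hit point if the segments cross.
static bool Intersect(const Segment& s1, const Segment& s2, Vec2& hit) {
    float d1x = s1.b.x - s1.a.x, d1y = s1.b.y - s1.a.y;
    float d2x = s2.b.x - s2.a.x, d2y = s2.b.y - s2.a.y;
    float denom = d1x * d2y - d1y * d2x;
    if (std::fabs(denom) < 1e-6f) return false;       // parallel
    float t = ((s2.a.x - s1.a.x) * d2y - (s2.a.y - s1.a.y) * d2x) / denom;
    float u = ((s2.a.x - s1.a.x) * d1y - (s2.a.y - s1.a.y) * d1x) / denom;
    if (t < 0 || t > 1 || u < 0 || u > 1) return false;
    hit = { s1.a.x + t * d1x, s1.a.y + t * d1y };
    return true;
}

// Feeler-based wall avoidance: project a probe ahead of the unit; if it
// crosses any wall edge, steer away in proportion to the penetration.
// Works for polygonal obstacles because it only needs their edges.
Vec2 WallAvoidance(const Vec2& pos, const Vec2& vel, float feelerLength,
                   const std::vector<Segment>& walls) {
    float speed = std::hypot(vel.x, vel.y);
    if (speed <= 0) return {};
    Segment feeler{ pos, { pos.x + vel.x / speed * feelerLength,
                           pos.y + vel.y / speed * feelerLength } };
    Vec2 hit;
    for (const Segment& w : walls) {
        if (Intersect(feeler, w, hit)) {
            float overshoot = feelerLength - std::hypot(hit.x - pos.x, hit.y - pos.y);
            Vec2 away{ pos.x - hit.x, pos.y - hit.y };
            float len = std::hypot(away.x, away.y);
            if (len > 0) return { away.x / len * overshoot, away.y / len * overshoot };
        }
    }
    return {};
}
```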

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

