I should begin by giving credit where credit is due: the idea for contextual steering described herein is lifted heavily from work by Andrew Fray, as described in his excellent GDC 2013 AI Summit lecture. For the most part the ideas and hard work are based on his lecture, although I've added a few twists of my own. (As a bonus, that lecture includes a great introduction to steering systems in general, which is a solid starting point if you're not already familiar with the concepts.)
Now that that's taken care of, let's get started!
The Problem With Local Steering
Local steering is a powerful model, well worth exploring for almost any game that requires freeform movement through an environment. Unfortunately, one of the biggest difficulties in getting good results from local steering systems lies in their context-free nature. Each steering force that is implemented needs its own redundant logic for obstacle and collision avoidance, for instance. As the number of forces increases and the complexity of the logic scales up, it becomes cumbersome to write clean code that is both efficient and does not needlessly repeat complex calculations.
Generally, good implementations of steering become a layered, twisted maze of caches, shared state, and order-dependent calculations - all of which fly in the face of good engineering practices.
Thankfully, there's a way to have steering and clean code - and it's actually very straightforward.
Enter the Context
The solution to these problems lies in adding contextual information to a steering decision maker. Instead of taking the weighted sum of a number of input forces, we flip the problem on its head. First, we generate a series of slots which correspond to fixed directions. These directions can simply be compass headings; there can be as few as 4 or as many as hundreds of slots, depending on the resolution your steering needs. I find that for most purposes 8 slots is about perfect, but 32 is another good choice if you're willing to do a little more computation to get better, smoother results.
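As a quick sketch (in Python for illustration, assuming a 2D world; the function name is my own), generating the slots is just a matter of dividing the circle evenly:

```python
import math

def make_slots(count):
    """Return `count` unit direction vectors evenly spaced around the circle."""
    step = 2 * math.pi / count
    return [(math.cos(i * step), math.sin(i * step)) for i in range(count)]

slots = make_slots(8)  # 8 compass-style headings, starting at "east"
```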
Once these directions are established, we pass them to a series of evaluators. Each evaluator determines a score for the direction. This can be thought of as analogous to the idea of steering forces: the more likely a character is to want to go in the given direction, the higher the score should be. I tend to normalize scores to [0, 1] for simplicity, but you can use any scoring method that makes sense.
Scoring can also be used to give a low score to a direction that should be avoided. This is excellent for steering around traps, obstacles, other characters in a flock, and so on.
Now we have a list of potential directions to take, along with a series of scores from each evaluator. The next step is to combine the decisions made by each evaluator. This is where game-specific logic can come into play. For example, if your directions are evenly distributed around a circle (as would be the case for moving in a free 2D world, or along the ground in a 3D environment) combining evaluated scores is pretty simple. If you use normalized scores, you can simply multiply all of them together for a given direction to determine the overall "desire" for a character to move in that direction.
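Assuming normalized scores and one score list per evaluator, the combination step can be sketched as a per-slot product (Python for illustration, names my own):

```python
def combine_scores(score_lists):
    """Multiply per-slot scores from each evaluator into one 'desire' list."""
    combined = list(score_lists[0])
    for scores in score_lists[1:]:
        combined = [a * b for a, b in zip(combined, scores)]
    return combined

# Two evaluators over four slots; a zero from either evaluator vetoes a slot.
desire = combine_scores([[0.9, 0.5, 0.1, 0.0],
                         [1.0, 0.2, 0.8, 0.7]])
```

A nice property of the product is that any single evaluator can veto a direction outright by scoring it zero.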
Andrew Fray's original lecture also describes doing this a different way for a racing game where most directions are roughly "forwards". In this case, you can do more sophisticated things like eliminate entire sets of directions based on the presence of obstacles. Normalized scores are slightly less useful here, but still handy for simplicity's sake.
Either way, once all of our directions are scored and the scores are combined, it's time to decide where to go. The basic principle is to take the highest-scoring direction and move towards it, but there are other tricks that can lead to smoother steering without needing to score a large number of directions. For instance, you can look at the highest-scoring direction, then at the scores of the two or three directions to either side, and do a weighted average to pick a general trending direction instead of just going straight for the highest-scoring vector.
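One way to sketch that neighborhood averaging (assuming the slots are unit vectors laid out in circular order; the helper name is hypothetical):

```python
import math

def smoothed_direction(slots, scores, neighbors=1):
    """Score-weighted average of the best slot and its immediate neighbors."""
    best = max(range(len(slots)), key=lambda i: scores[i])
    x = y = 0.0
    for offset in range(-neighbors, neighbors + 1):
        i = (best + offset) % len(slots)  # wrap around the circle
        x += slots[i][0] * scores[i]
        y += slots[i][1] * scores[i]
    length = math.hypot(x, y)
    # Renormalize; fall back to the best slot if everything cancelled out.
    return (x / length, y / length) if length > 1e-9 else slots[best]
```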
Once the final direction is chosen, you can easily blend it with the character's current orientation to achieve gentle turning. I do this by simply performing a linear interpolation between the chosen direction and the current direction of movement; by adjusting the weights of the interpolation, it's easy to get characters to turn faster or slower depending on what looks and feels best.
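That blending step might look something like this (a minimal sketch in Python; the interpolation weight `t` is the tuning knob mentioned above):

```python
import math

def blend_heading(current, target, t):
    """Lerp between the current and chosen headings, then renormalize.

    t in [0, 1]: small values turn slowly, large values snap to the target.
    """
    x = current[0] + (target[0] - current[0]) * t
    y = current[1] + (target[1] - current[1]) * t
    length = math.hypot(x, y)
    # Opposite headings can cancel to zero; just take the target then.
    return (x / length, y / length) if length > 1e-9 else target
```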
Examples and Tricks
So we have a general framework for this "contextual steering", but that leaves a major question: what, exactly, is context?

This will vary heavily based on the type of game you're building, but the basic idea is straightforward: context is anything that might influence how likely a character is to move in a certain direction. This can be expressed in terms of "desirability" and "danger" - the more desirable a direction, the higher its score; the more dangerous a direction (or undesirable, if you prefer), the lower its score.
Pursuing a Target
Steering towards a target position is easy: for a given direction being evaluated, take the dot product of the (normalized) direction vector and the (normalized) vector from the character to the target point. You can clamp this score to [0, 1] if you like, or keep the negative scores for directions that face away from the target, depending on how you want to combine the results of each evaluator.
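A sketch of that pursuit evaluator (Python for illustration; here the scores are clamped to [0, 1]):

```python
import math

def pursue_scores(slots, position, target):
    """Score each slot by its alignment with the direction to the target."""
    tx, ty = target[0] - position[0], target[1] - position[1]
    length = math.hypot(tx, ty)
    if length < 1e-9:
        return [1.0] * len(slots)  # already at the target; any way is fine
    tx, ty = tx / length, ty / length
    # Dot product lands in [-1, 1]; clamp away the negative half here.
    return [max(0.0, dx * tx + dy * ty) for dx, dy in slots]
```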
Avoiding a Target
As before, take the dot product of the candidate direction vector with the vector towards the target to avoid. Flip the sign of the result, and you're done! The character will now faithfully steer away from the given point.
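The avoidance evaluator is the same sketch with the sign flipped (again clamping to [0, 1] after negating):

```python
import math

def avoid_scores(slots, position, threat):
    """Score each slot by its alignment *away* from the threat."""
    tx, ty = threat[0] - position[0], threat[1] - position[1]
    length = math.hypot(tx, ty)
    if length < 1e-9:
        return [1.0] * len(slots)
    tx, ty = tx / length, ty / length
    # Negated dot product: facing the threat scores low, fleeing scores high.
    return [max(0.0, -(dx * tx + dy * ty)) for dx, dy in slots]
```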
Static obstacles that lie between a character and its desired destination can be handled by simply setting the blocked direction's score to zero. The character will naturally steer away from the obstacle and try to move around it instead. A simple way to do this is to cast a ray along the candidate direction, just like any other line-of-sight check, and zero out the score of any direction that is obstructed.
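The masking step itself is tiny; the actual raycast is engine-specific, so the `is_blocked` callback below is a purely hypothetical stand-in for it:

```python
def mask_blocked(slots, scores, is_blocked):
    """Zero out the score of any obstructed direction.

    `is_blocked(direction)` stands in for an engine raycast / line-of-sight
    query and is assumed, not real, here.
    """
    return [0.0 if is_blocked(d) else s for d, s in zip(slots, scores)]
```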
A simple trick to accomplish speed control is to scale the score of a candidate direction by how fast you want the character to move. If you're using multiplication of normalized scores, speed control is simply a matter of adding an evaluator that chooses how fast to move. This can be combined with other systems to coordinate moving through choke points, for example.
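As a sketch, such a speed evaluator can simply return a uniform score equal to the desired fraction of top speed; under multiplicative combination this damps every direction equally, and the winning slot's combined score can be read back as a movement speed:

```python
def speed_scores(slot_count, desired_speed, max_speed):
    """Uniform per-slot score equal to the desired fraction of max speed."""
    fraction = max(0.0, min(1.0, desired_speed / max_speed))
    return [fraction] * slot_count
```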
If you have a large number of characters all steering with this system, you can coordinate them into "flocks" fairly easily. The trick is to add a pre-processing step which computes a "dispersion" or "separation" force, just like in traditional flocking. Then we add an evaluator which takes the dot product of the dispersion vector with each of the candidate vectors, and adds that score to the other scores produced by the different evaluators. The result is that characters will tend to favor directions which keep them spread apart, leading to very visually pleasing grouping behavior. As a bonus, when combined with movement speed scaling, we can have characters flow in crowds and self-organize with minimal effort.
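A sketch of that separation evaluator (Python for illustration; note these scores are meant to be added to the others, per the text above, rather than multiplied in):

```python
import math

def separation_scores(slots, position, neighbors):
    """Score slots by alignment with a classic separation vector."""
    sx = sy = 0.0
    for nx, ny in neighbors:
        dx, dy = position[0] - nx, position[1] - ny
        dist = math.hypot(dx, dy)
        if dist > 1e-9:
            sx += dx / dist ** 2  # nearer neighbors push harder
            sy += dy / dist ** 2
    length = math.hypot(sx, sy)
    if length < 1e-9:
        return [0.0] * len(slots)  # no crowding; contribute nothing
    sx, sy = sx / length, sy / length
    return [max(0.0, ax * sx + ay * sy) for ax, ay in slots]
```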
Sometimes, when steering huge numbers of characters, it can be impractical to have each character steer every frame. With contextual steering, it's trivial to address this performance problem. Simply have each evaluator score the candidate directions and also provide an estimate of how long the character can move in each direction before the score becomes invalid. When it comes time to choose a final direction, take the lowest of those validity estimates for the chosen direction, and don't steer again until that time elapses. Better yet, combine this with simple movement speed scaling to have characters move at slightly different speeds, and you get free time-slicing of your steering computations!
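The selection step with validity times might be sketched like this (Python for illustration; each evaluator is assumed to return a score list and a matching list of per-slot validity estimates):

```python
def pick_direction_with_validity(slots, evaluations):
    """Combine evaluator results and return (best_slot, revalidate_after).

    `evaluations` is a list of (scores, validity_times) pairs, one per
    evaluator; validity_times[i] estimates how long slot i's score holds.
    """
    count = len(slots)
    combined = [1.0] * count
    validity = [float("inf")] * count
    for scores, times in evaluations:
        for i in range(count):
            combined[i] *= scores[i]
            validity[i] = min(validity[i], times[i])  # most pessimistic wins
    best = max(range(count), key=lambda i: combined[i])
    return slots[best], validity[best]
```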
Steering remains a powerful paradigm for controlling character movement. However, with some simple adjustments to the concept and a little clever application of logic, we can accomplish highly context-specific behavior with a minimum of effort and zero code duplication, since each evaluator only has to run once per direction.
Depending on how many candidate direction slots we use, and depending on the complexity of each steering evaluator, it might be more expensive to do this than to use naive steering in some cases. However, the more complex the steering logic becomes, the better the win for using contextual information. Careful coding can also allow many context-specific decisions to be ignored when they are invalid, dropping the computation overhead substantially.
In any case, contextual steering is an excellent tool to have in your arsenal. A good implementation framework can be built in a day or two, and scoring evaluators added on as needed to produce arbitrarily rich steering behavior.
For extra credit, consider combining context information with flow fields, navmesh pathfinding, or whatever other movement control techniques strike your fancy.