AI for large numbers of RTS-like units

6 comments, last by frob 5 years, 8 months ago

Looking at getting started on some actual AI rather than just pre-programmed drones that do simple things like only move towards the player or follow a pre-made path.

I have a range of different unit types that need their own behaviors (infantry, helicopters, harvesters/resource gatherers, etc.), but also in very large numbers across several factions (Supreme Commander scale would be good). So far I have just made something "that works" with a state machine for all the operations (idle, move, attack, etc.). But I am not really happy with this code, since it is hard to edit and puts everything in one massive piece of code (otherwise I was finding I had to make near-duplicates of the state machine for each unit type/behaviour).

e.g.


void update()
{
  switch (state)
  {
    case State::ATTACK: update_attack(); break;
    case State::MOVING: update_moving(); break;
    case State::VERTICAL_TAKEOFF: update_vertical_takeoff(); break;
    ...
  }
}
void on_takeoff_complete()
{
  if (target_entity)
  {
    state = State::ATTACK;
    ...
  }
  else if (patrol_route)
  {
    start_patrol_route();
  }
  else if (...)
}
void start_patrol_route()
{
  state = State::MOVING;
  move_pos = patrol_route[0];
  patrol_i = 0;
  ...
}

void set_patrol(std::vector<Vector2F> route)
{
  patrol_route = route;
  if (state == State::IDLE || state == ...)
    start_patrol_route();
}

...

I thought of adding separate command objects to do the higher-level stuff by having the command reference the unit and hold its own state (e.g. patrol gets moved out to its own thing that checks for the unit reaching State::IDLE and then issues the next move; also commands for attack-move, guard, etc.), but I'm not sure that really solves all the problems or gives a good general architecture.

I also want nearby units to be able to cooperate, both for performance (e.g. sharing path finding over long distances) and just generally (e.g. not all standing next to each other waiting for that AOE shell to land in the middle), which a state machine does not really seem to help me with, without a lot of extra states or making existing states more complex (e.g. when transitioning to move search around for nearby units with the same destination... maybe...).

 

Is there a simpler architecture that allows each AI "feature" to be self contained and reusable, while still being suitable for large numbers of units? I've seen a lot of talk about FSMs but I'm not seeing how to get away from this central blob. Is an FSM suitable at all for the more complex behaviours?


I haven't worked on any RTS-style projects to speak of, and I imagine there are some standard solutions to the problems you mention that someone might be able to direct you towards. I'll offer a couple thoughts in the meantime though.

I'd at least consider switching to a scripting system for your AI (and perhaps for the majority of your game logic). Lua would be an option, or JavaScript with an embedded JS engine. One advantage of using JavaScript is that if your code is modular enough, you can do at least some development and testing in a browser, which I think can speed up development considerably. Integrating a scripting system can take a little work, but I think it'd be worth it for something like this.

Orthogonal to that, I imagine there are more flexible ways of handling this than e.g. enums and switch statements. You may already be doing something like this, but the first thing that comes to mind is something like an entity/component system. For AI purposes, various behaviors could be components, which you could mix and match in various combinations and add and remove as needed.

You could also use something like the 'run and return successor' idiom to chain behaviors in sequence. For example, you'd have a 'vertical takeoff' component that carried out that particular behavior. When the behavior was complete, the component would remove itself, and possibly add a new component in its place (this would mirror the on_takeoff_complete() function in your code).
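To make the 'run and return successor' idiom concrete, here's a minimal C++ sketch under assumed names (`Behaviour`, `Takeoff`, `tick()` are all hypothetical, not from the original code); a real component system would of course pass in the entity rather than run in isolation:

```cpp
#include <memory>

// Hypothetical sketch: a behaviour runs per tick; when finished it hands back
// the behaviour that should replace it (or nullptr to simply stop), which
// mirrors the on_takeoff_complete() hand-off in the original code.
struct Behaviour
{
    virtual ~Behaviour() = default;
    virtual void update() = 0;
    virtual bool finished() const = 0;
    virtual std::unique_ptr<Behaviour> successor() = 0;  // called once finished
};

struct Takeoff : Behaviour
{
    int altitude = 0;
    std::unique_ptr<Behaviour> next;  // e.g. an Attack or Patrol behaviour
    explicit Takeoff(std::unique_ptr<Behaviour> n) : next(std::move(n)) {}
    void update() override { ++altitude; }
    bool finished() const override { return altitude >= 10; }
    std::unique_ptr<Behaviour> successor() override { return std::move(next); }
};

// The owner just swaps components as they complete:
void tick(std::unique_ptr<Behaviour>& current)
{
    if (!current) return;
    current->update();
    if (current->finished())
        current = current->successor();
}
```

The key point is that the chaining knowledge lives in the component being replaced, not in a central blob.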

That obviously doesn't address all the issues you mentioned, but I think something more modular like an entity/component system, possibly along with a scripting system, might be a step in the right direction.

11 hours ago, SyncViews said:

Is there a simpler architecture that allows each AI "feature" to be self contained and reusable, while still being suitable for large numbers of units? I've seen a lot of talk about FSMs but I'm not seeing how to get away from this central blob. Is an FSM suitable at all for the more complex behaviours?

Yes, it is very suitable.

Describing it sounds more complex than it really is. Here is an article with a few variations; the last example, with animals running around, is closest to what you describe.

Several of the games I've worked on included variations of nested state machines mixed with some degree of utility functions.

Hopefully they don't scare you off with descriptions. The individual state machines can be self contained and composed neatly.  One state machine goes through all the actions it needs to complete its goal. Actions are made available as part of a pool of actions associated with the entity, and behaviors are chained together through state machines both by nesting and by utility functions.

 

You're talking about a large-scale game with many unit types; this type of system tends to grow rapidly and is best suited for teams of programmers and artists/animators.

In your example you've got attack, patrol along a route, move, take off, and more. Each one of those can be broken down into smaller bits. You will probably have several move actions: walk, run, search, crawl, fly, etc. Those can be encapsulated by a bigger command like MoveTo(), which would accept a target to move to along with a preferred moving style. Infantry may have available actions of walk, run, search, and crawl. Tanks may have a "walk" that moves slowly but retains the ability to fire, and a "run" that moves quickly but cannot fire. Aircraft may only have fly, which may require transitions for landing and launching. MoveTo() would look at the unit's available actions and choose an appropriate movement to get somewhere. If moving somewhere requires moving along waypoints, it could run one move action to the first point, then another to the next point, and so on. MoveTo() could also be smart about distances: if a unit has a short distance to travel it could choose the walk action; if it has a long distance to travel it could choose the run action. Some units may have complex forms of movement, such as tanks or aircraft that can accurately fire while moving.
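As a rough illustration of that dispatch, here's a hedged sketch of how a MoveTo() might pick from a unit's pool of available actions. The action names and the distance threshold are made up for the example:

```cpp
#include <set>
#include <string>

// Hypothetical: a unit carries a pool of the movement actions it supports.
struct UnitActions
{
    std::set<std::string> available;  // e.g. {"walk","run"} or {"fly"}
};

// MoveTo() consults the pool: aircraft only fly; ground units walk for
// short trips and run for long ones, falling back to whatever remains.
std::string choose_move_action(const UnitActions& u, float distance)
{
    if (u.available.count("fly"))
        return "fly";
    if (distance < 50.0f && u.available.count("walk"))
        return "walk";
    if (u.available.count("run"))
        return "run";
    return *u.available.begin();  // assumes the pool is never empty
}
```

The point is that MoveTo() stays one reusable command; only the per-unit action pool varies.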

Now you can compose that MoveTo() into bigger behaviors. Patrolling means running MoveTo() with a particular set of parameters, plus searching for enemies on every update. Attacking can check the distance, and if the unit is too far away it can call MoveTo().

After that you can compose behaviors out of a longer series of actions.  "Collect resource" behavior means the action of identifying a nearby resource, moving to it, harvesting the resource, traveling back to the storage location, and depositing the resource. 
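That chain could be sketched as a simple queue of named actions. Everything here is hypothetical (real actions would be objects with their own update logic, not strings), but it shows the shape of the composition:

```cpp
#include <queue>
#include <string>
#include <vector>

// Hypothetical: a behaviour is just an ordered queue of actions.
struct ActionChain
{
    std::queue<std::string> actions;
    std::vector<std::string> log;  // what actually ran, for illustration

    bool update()  // returns true while there is still work left
    {
        if (actions.empty()) return false;
        log.push_back(actions.front());  // pretend the action ran to completion
        actions.pop();
        return !actions.empty();
    }
};

// "Collect resource" = MoveTo -> Harvest -> MoveTo -> Deposit.
ActionChain make_collect_resource()
{
    ActionChain b;
    for (const char* a : {"move_to_resource", "harvest",
                          "move_to_storage", "deposit"})
        b.actions.push(a);
    return b;
}
```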

Occasionally you'll want units to look for other actions to do. Generally you don't want this check to run continuously; staggering it distributes the workload over time. A patrolling soldier may re-evaluate whether it should do something else every time it completes a walk cycle. A unit playing its idle animation could wait until the full animation has completed before re-evaluating.

Having a "keep doing this behavior" option is important for chaining. When the unit is done harvesting a resource it might repeat the behavior. But it is also important that actors constantly re-evaluate what they are currently doing, and they generally need a way to stop doing whatever they are doing. For example, units that are harvesting resources or patrolling should be able to stop and re-evaluate their actions when they're attacked.

When re-evaluating the system can make a short list of all available actions. Exactly how you do that will depend on the game, maybe attaching actions to game objects like enemy actors or to invisible spawn points, maybe having free actions that search for proximity around characters, or maybe through some other way that works for your game. Use a utility function to see how much the character wants to do something.  Actions like idle have very low desire, actions like "defend myself" are very high desire.  When you want units to behave nearly autonomously you want many options with high levels of desire. In life simulators like The Sims the actors should have a long list of potentially interesting activities to choose from. In military simulations where characters do very little unless commanded, the action should usually be "keep doing whatever I was doing before".
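That re-evaluation step can be sketched as a small utility function. The desire values and action names are invented for the example; the one design point worth noting is that the current action competes in the scoring too, which gives you the "keep doing whatever I was doing before" default for free:

```cpp
#include <string>
#include <vector>

// Hypothetical candidate action with a utility ("desire") score.
struct Candidate
{
    std::string name;
    float desire;  // e.g. idle is very low, "defend myself" is very high
};

// Keep the current action unless some candidate clearly beats it.
std::string pick_action(const std::vector<Candidate>& options,
                        const std::string& current, float current_desire)
{
    std::string best = current;
    float best_desire = current_desire;  // inertia: current action competes too
    for (const auto& c : options)
        if (c.desire > best_desire)
        {
            best = c.name;
            best_desire = c.desire;
        }
    return best;
}
```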

 

As these types of games grow, they typically start out with a small number of behaviors and a rapidly-growing number of actions. As already mentioned, there will be several actions for moving between places. Aircraft will have different move actions than tanks, which will have different move actions than infantry. Moving while carrying resources may be different than moving while unloaded. As the number of actions increases, you gain the ability to build more complex behaviors out of them. As mentioned, harvesting resources involves at least four actions in a chain. Building a structure may involve several steps as well.

 

In modern gameplay, those tasks are done via a behavior tree rather than a simple state machine. I worked on a game with a very large number of units that had to be controlled (500 AI units) for a traffic simulation. Each unit had driver behavior ranging from simply following its road, changing lanes, and taking different routes at crossroads, up to complex traffic behavior and seeking a parking spot.

This was all done using a single behavior tree of actions and conditions. It has certain types of nodes to code your actions with: split nodes to link sub-trees together, condition nodes to evaluate which sub-tree to follow, and action nodes that map to functions in your gameplay code.

If you have several units sharing the same tree, there should be a way to make your BT a singleton object while each unit manages its state on its own. This way you keep your units from wasting memory on multiple instances of the same tree; each unit just holds the information it needs. Make several different trees for different kinds of units, or link trees together via conditional actions, so you can set up your AI as an additive system from basic actions to more complex ones for certain types of unit.
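A hedged sketch of that layout: the tree nodes are stateless and `const`, so one tree can be shared by every unit, while each unit only carries a small blackboard that the nodes read and write. All names here are hypothetical, and real behavior tree libraries have richer node types (sequences, decorators, running states):

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Per-unit state: the only thing each unit owns.
struct Blackboard
{
    std::map<std::string, int> values;  // e.g. {"fuel": 3}
};

// Shared, stateless tree node: tick() is const, all mutation goes to the blackboard.
struct Node
{
    virtual ~Node() = default;
    virtual bool tick(Blackboard& bb) const = 0;
};

struct Condition : Node
{
    std::function<bool(const Blackboard&)> test;
    explicit Condition(std::function<bool(const Blackboard&)> t) : test(std::move(t)) {}
    bool tick(Blackboard& bb) const override { return test(bb); }
};

struct Action : Node
{
    std::function<bool(Blackboard&)> run;
    explicit Action(std::function<bool(Blackboard&)> r) : run(std::move(r)) {}
    bool tick(Blackboard& bb) const override { return run(bb); }
};

// Selector: try children in order until one succeeds.
struct Selector : Node
{
    std::vector<std::shared_ptr<Node>> children;
    bool tick(Blackboard& bb) const override
    {
        for (const auto& c : children)
            if (c->tick(bb)) return true;
        return false;
    }
};
```

Any number of units can tick the same `Selector`, each with its own `Blackboard`.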


16 hours ago, Zakwayda said:

For AI purposes, various behaviors could be components, which you could mix and match in various combinations and add and remove as needed.

Not sure how you're suggesting the mixing here. How are several versions of the same "component" meant to play together? Which one gets to run? Do they both run, and then in what order? My understanding was that for a given entity there should only be zero or one instance of a given component.

16 hours ago, Zakwayda said:

For example, you'd have a 'vertical takeoff' component that carried out that particular behavior. When the behavior was complete, the component would remove itself, and possibly add a new component in its place (this would mirror the on_takeoff_complete() function in your code).

Kinda along the lines of some ideas I was thinking of, but the on_*_complete functions currently also need a "view of everything" that I don't like (e.g. that if/else-if chain in the little bit of sample code I had).

11 hours ago, frob said:

Describing it sounds more complex than it really is. Here is an article with a few variations; the last example, with animals running around, is closest to what you describe.

Several of the games I've worked on included variations of nested state machines mixed with some degree of utility functions.

Will look at those articles and what nested state machines might look like.

11 hours ago, frob said:

You're talking about a large-scale game with many unit types; this type of system tends to grow rapidly and is best suited for teams of programmers and artists/animators.

Well OK, not initially, but I want to get out of this situation where every time I do something different it's almost from scratch, because the state machine and rules don't match, so I either make a new FSM entirely or have to refactor a whole bunch of stuff. For example, adding in the "helicopters need to take off" rule affected nearly every basic order they could be issued and a lot of the state transitions.

8 hours ago, Shaarigan said:

In modern gameplay, those tasks are done via a behavior tree rather than a simple state machine. I worked on a game with a very large number of units that had to be controlled (500 AI units) for a traffic simulation. Each unit had driver behavior ranging from simply following its road, changing lanes, and taking different routes at crossroads, up to complex traffic behavior and seeking a parking spot.

Will look into behaviour trees as soon as I get a chance as well. They look suspiciously like what I was thinking of with separate commands, but I've not had a chance to read many articles yet. Putting the state data for them alongside the unit data so the tree does not need to be replicated also seems a good idea, although I'm not sure if in practice that would mean having all the possible variables there in one place... need to think about that bit.

 

What I had could look a bit like:


class VerticalTakeoffCommand : public Command
{
public:
  void update() override
  {
    if (unit->vertical_takeoff_step())
      unit->set_command(do_after);
  }
private:
  Command* do_after;
};
class PatrolCommand : public Command
{
public:
  void update() override
  {
    if (unit->move_step())
    {
      if (++route_i == route.size())
        return unit->set_command(do_after);
      else unit->set_move_destination(route[route_i]);
    }
    if (auto* target = find_target_in_range())
    {
      unit->set_command(new AttackCommand(this, target, max_attack_range));
    }
  }
private:
  std::size_t route_i = 0;
  std::vector<Vector2F> route;
  Command* do_after;
};

 

The entity/component idea may be tangential to the problems you're trying to solve here, but just to answer this question:

Quote

Not sure how you're suggesting the mixing here. How are several versions of the same "component" meant to play together? Which one gets to run? Do they both run, and then in what order? My understanding was that for a given entity there should only be zero or one instance of a given component.

You're correct that you'd typically have at most one instance of a given component type per entity. What I mean by 'mix and match' is that you can combine different component types in different ways (with each entity having at most one of a particular component type).

Again though, this may be somewhat orthogonal to the issues you're concerned with.

What I think we're all saying is that having one state machine is what blocks extensibility.  Having a collection of state machines that do the work ends up being easier as projects grow.

Large behaviors are composed of many smaller state machines. The example was harvesting: run the MoveTo state machine to get to where harvesting takes place, then the harvest loop, then the MoveTo state machine again, then the depositing loop.

Even inside those sub-machines another sub-machine can be useful. When a unit moving to a location is blocked, a sub-machine could trigger behavior to wait, or replace the current action. In most systems I've worked with, the state machines were able to replace themselves; the easiest example is where a target moved, and the MoveTo action could replace itself with a new MoveTo action, or with a Wait action followed by a new MoveTo action.
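That self-replacement can be sketched very compactly. The names and the "blocked" flag below are hypothetical, and a real version would likely queue the Wait-then-MoveTo pair rather than just the Wait, but it shows the machine deciding its own replacement:

```cpp
#include <memory>
#include <string>

// Hypothetical sketch: update() either keeps the current machine running
// (nullptr) or returns the machine that should replace it.
struct Machine
{
    virtual ~Machine() = default;
    virtual std::unique_ptr<Machine> update() = 0;
    virtual std::string name() const = 0;
};

struct Wait : Machine
{
    std::unique_ptr<Machine> update() override { return nullptr; }
    std::string name() const override { return "Wait"; }
};

struct MoveTo : Machine
{
    bool blocked;
    explicit MoveTo(bool b) : blocked(b) {}
    std::unique_ptr<Machine> update() override
    {
        if (blocked)
            return std::make_unique<Wait>();  // replace myself while blocked
        return nullptr;                       // keep moving
    }
    std::string name() const override { return "MoveTo"; }
};

// The owner never needs to know the transition rules:
void tick(std::unique_ptr<Machine>& current)
{
    if (auto replacement = current->update())
        current = std::move(replacement);
}
```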

 

Having one giant state machine can work when systems are small, but it doesn't grow very well.

