AI component-based architecture

7 comments, last by wodinoneeye 11 years, 5 months ago
I've recently started implementing a real-time simulation, and I've realized that I will have to design the basic AI architecture soon.

I've found many resources about the different AI algorithms and their implementations, but what I'm missing is the overall, component-based design. The final game AI will be complex, so I want to design it from the beginning in a way that it can later be easily extended.

Can anybody recommend a resource? I'm not looking for high-level slide decks but for a concrete example.

Thanks.
The most important thing is a clear definition of the interface between the AI and the rest of the game. You will typically end up with some notion of an agent, which gets sensory data from its environment and picks actions. You then need to figure out how to implement the action selection, but that's probably what all those resources you found talk about.
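
Something like this, just to illustrate the shape (the class names are made up, not from any particular engine):

```csharp
// Purely illustrative: the AI sees a read-only snapshot of the world and
// returns the action it wants to perform; the game applies that action.
public sealed class WorldSnapshot { /* whatever the agent is allowed to see */ }
public sealed class AgentAction  { public string Name = "Idle"; }

public interface IAgent
{
    void Sense(WorldSnapshot world);          // sensory data in
    AgentAction PickAction(float deltaTime);  // action selection out
}
```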

Is that what you needed?
No, that part was already pretty clear to me ;) but thanks anyway for your contribution.

I should have written in which areas I see the challenges:

  • real-time simulation: how to wire the AI component into the game loop (it should probably run in its own thread to leverage multiple cores). My idea was that, in the end, one core (if available) is reserved for the AI. (The AI is really crucial for my simulation and should get all the system resources it needs; I only have 2D graphics.)
  • any experience with how to unit test AI code? Any best practices for how to test it in general?
  • layering: is there an AI pattern for an AI master that delegates work to tactical/strategic AI layers?


I'm implementing it with C#/.NET 4.5.

  • real-time simulation: how to wire the AI component into the game loop (it should probably run in its own thread to leverage multiple cores). My idea was that, in the end, one core (if available) is reserved for the AI. (The AI is really crucial for my simulation and should get all the system resources it needs; I only have 2D graphics.)


I have some ideas about that (in short, interfacing with the AI is part of the scene update), but perhaps people who have actually implemented something like this should give you their opinion.
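
Still, the rough shape I have in mind, as an untested C#/.NET sketch: the scene update publishes an immutable snapshot of the world each frame, and the AI task keeps working on the newest snapshot it has seen, so neither side blocks the other. All the names here are invented for illustration.

```csharp
using System.Threading;
using System.Threading.Tasks;

// Sketch only: the game thread publishes an immutable snapshot each frame,
// the AI task consumes the newest one and publishes its decisions back.
public sealed class WorldSnapshot { /* immutable copy of what the AI may see */ }
public sealed class AiDecisions  { /* chosen action per agent */ }

public sealed class AiDriver
{
    private volatile WorldSnapshot _latestSnapshot;
    private volatile AiDecisions _latestDecisions = new AiDecisions();
    private WorldSnapshot _lastProcessed;   // only touched by the AI thread

    // Game thread: called once per frame from the scene update.
    public void PublishSnapshot(WorldSnapshot snapshot)
    {
        _latestSnapshot = snapshot;
    }

    // Game thread: read whatever the AI has decided most recently.
    public AiDecisions LatestDecisions
    {
        get { return _latestDecisions; }
    }

    // AI thread: loops on its own core until cancelled.
    public Task RunAsync(CancellationToken token)
    {
        return Task.Run(() =>
        {
            while (!token.IsCancellationRequested)
            {
                WorldSnapshot snapshot = _latestSnapshot;
                if (snapshot == null || snapshot == _lastProcessed) { Thread.Yield(); continue; }

                _latestDecisions = Think(snapshot);   // all the expensive reasoning happens here
                _lastProcessed = snapshot;
            }
        }, token);
    }

    private AiDecisions Think(WorldSnapshot snapshot)
    {
        /* proximity scans, pathfinding, action selection, ... */
        return new AiDecisions();
    }
}
```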


  • any experience with how to unit test AI code? Any best practices for how to test it in general?

This is not specific to AI. Designing your AI as something that has clearly defined inputs and outputs will make it much easier to do unit testing.
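
For example, if the agent only sees a snapshot and only returns an action, a test can feed it a hand-built situation directly. This is hypothetical (a made-up SoldierAgent in the style of the IAgent sketch above, with NUnit-style asserts), but it shows the idea:

```csharp
[Test]
public void WoundedAgentWithNoAmmoRetreats()
{
    // Hand-built situation: low health, empty weapon, one enemy nearby.
    var world = new WorldSnapshot();
    var agent = new SoldierAgent();   // hypothetical concrete agent

    agent.Sense(world);
    var action = agent.PickAction(0.1f);

    Assert.AreEqual("Retreat", action.Name);
}
```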


  • layering: is there an AI pattern for an AI master that delegates work to tactical/strategic AI layers?


There's something called "subsumption architecture" which sounds pretty much like what you are looking for.
Kevin Dill has been writing and speaking lately about component-based architectures where all your pieces and parts are modular. This really comes in handy for large, sprawling AI like RTS games with many different units that may share some of the same logic reasoners.

I will attempt to summon him.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"


I will attempt to summon him.


Thanks Dave.
I've read a third of your book, "Behavioral Mathematics for Game AI", and so far I can say it's a great book :)
Your summons has been successful. :)

This is indeed something I've been thinking a lot about - and writing a lot about - in recent years. I've found that the more I can make my AI modular, where I'm plugging behavior together out of large, reusable pieces, the faster I can implement AI (we did all of the boss AI - something like 8 unique bosses - for Iron Man in under 4 months, from scratch, without even a working path planner to start with), the more of my code I can reuse in future projects, the easier it is to think about what I'm doing (because I'm thinking in terms of human-sized concepts, not lines of C++ code), and the easier it is to tune and debug my behavior.

The Papers:

I've written a fair bit on this, so I'll give a quick overview, but if you're interested in more detail then you can look at the papers. The original modular AI paper was one I wrote for Game Programming Gems 8. It was based on the Iron Man AI (although I didn't have permission to talk about it directly). It's early work, however, and my thinking has continued to evolve. More recent papers include the one I wrote for I/ITSEC 2011 (which won best paper) and the one I wrote for SISO/SIW 2012.

http://www.iitsec.org/about/PublicationsProceedings/Documents/11136_Paper.pdf
http://www.sisostds.org/conference/View_Public_Document_Info.cfm?Document_Num=12S-SIW-046&Phase_ID=2

I have two more papers coming out at this year's I/ITSEC (in December), one of which is specifically about modularity (the other is about design patterns for utility-based AI). Ask me again (or get Dave to ask me again ;) ) after the conference and I'll share them. You might also look at the work of Christopher Dragert (http://www.cs.mcgill.ca/~cdrage/), who is a graduate student at McGill University and is also interested in modular AI.

Quick Overview:

So the key to modular AI, as I practice it anyway, is to split your AI into four categories of things: reasoners, options, considerations, and actions. A reasoner is a decision-maker. It can use any decision-making algorithm, whether that be a finite state machine, a utility-based AI, or just a list of simple rules or even if statements. Each reasoner contains a collection of options and selects one option at a time (or alternately, multiple options - but that adds more complexity than I think is necessary). How it goes about doing that is an implementation detail of the particular reasoner.

Options are really just containers that hold considerations and actions.

Actions are either subreasoners (allowing us to build hierarchical AI, which is a trick that's been applied to pretty much every architecture ever created - but now we can do it in an architecture-independent way) or hooks back into the engine. For example, the Move action moves us to a specified position. The Fire action fires at a specified enemy. The LookAt action looks at a specified target. The Say action plays a line of dialog. And so on. What your actions are is a little bit game-specific - but I've found that there are a great many actions which are fairly universal, and so we can define fixed interfaces for them and require the game to implement those interfaces. This makes it *much* easier to move our AI to a new game (especially if we do the same thing on the sensing side, which we do with considerations).

The last item is the considerations. These are pieces of decision-making logic that are used by the reasoner to make its decision. For instance, a consideration might look at the distance between two targets. It might look at the elapsed time since this option was last selected, or the amount of time that it has been executing (if it's currently selected). It might check whether you have any ammo in your weapon, or check how much health you have left, or how many of your allies remain in the fight. Each consideration will do exactly one of these things, and then the results will be combined with those of the other considerations on that option, and used by the reasoner to select an option to execute. Again, exactly how that happens is reasoner specific - but I've found that the scheme I outline in my papers (particularly the SISO paper iirc) works pretty well with a wide variety of reasoners - and can be extended if you need something fancier.
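
To make that concrete, here is a stripped-down sketch of the four pieces in C# (since that's what you're using). This is just the shape of the idea, not code from any of the projects I mentioned:

```csharp
using System.Collections.Generic;

// The four modular pieces, stripped down to their bare shape.
public sealed class DecisionContext { /* agent, world snapshot, timers, ... */ }

public interface IConsideration
{
    // One piece of decision-making logic: distance, elapsed time, ammo, health, ...
    // How the results get combined is up to the reasoner.
    float Evaluate(DecisionContext context);
}

public interface IAction
{
    // Either a hook back into the engine (Move, Fire, LookAt, Say, ...)
    // or a subreasoner, which is what gives you hierarchy.
    void Execute(DecisionContext context);
}

public sealed class Option
{
    // Options are really just containers for considerations and actions.
    public readonly List<IConsideration> Considerations = new List<IConsideration>();
    public readonly List<IAction> Actions = new List<IAction>();
}

public interface IReasoner
{
    List<Option> Options { get; }             // each reasoner owns a collection of options
    Option Select(DecisionContext context);   // FSM, utility, rule list, plain if statements...
}
```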

The process of building behavior then becomes one of first selecting the type of reasoner to use for each decision, and then enumerating the actions that the AI might take, placing those into options (and note that an option can have more than one action, making it possible to walk and chew gum at the same time), applying the considerations that specify when that option is appropriate, and putting the resulting options onto the reasoner.
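
Continuing that sketch, wiring up one option for a hypothetical guard might look something like this - every concrete class in it is invented for the example:

```csharp
// Hypothetical guard: shoot the current enemy, but only when it makes sense.
var shoot = new Option();
shoot.Considerations.Add(new HasLineOfSightConsideration());
shoot.Considerations.Add(new HasAmmoConsideration());
shoot.Considerations.Add(new DistanceToTargetConsideration(maxRange: 30f));
shoot.Actions.Add(new FireAtTargetAction());
shoot.Actions.Add(new SayAction("TakingFire"));   // more than one action per option is fine

IReasoner guardBrain = new UtilityReasoner();     // or an FSM reasoner, or a rule list
guardBrain.Options.Add(shoot);
```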

Since I started working this way, I've found numerous other opportunities to create various aspects of my AI in a modular way. For instance I have modular targets, modular utility function specifications, a modular character filter system, and so forth. I've built a generic factory system that lets me specify the AI in XML and load the modules in without anybody other than the factory itself having to know which instance of a modular concept is being used (so the option and the reasoner don't know anything about the types of considerations they contain, and the considerations don't know anything about the specific types of targets or utility functions that they use - that's all just specified in the XML).
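
The factory itself can stay very small. Something in this spirit (illustrative only - my real version does considerably more, and the names here are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Linq;

// Illustrative only: XML type names map to creation functions, so the option
// never needs to know which concrete consideration class it ends up holding.
public static class ConsiderationFactory
{
    private static readonly Dictionary<string, Func<XElement, IConsideration>> Creators =
        new Dictionary<string, Func<XElement, IConsideration>>();

    public static void Register(string typeName, Func<XElement, IConsideration> creator)
    {
        Creators[typeName] = creator;
    }

    public static IConsideration Create(XElement element)
    {
        return Creators[(string)element.Attribute("type")](element);
    }
}

// e.g. ConsiderationFactory.Register("Distance",
//          e => new DistanceToTargetConsideration(maxRange: (float)e.Attribute("maxRange")));
```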

In any case... hope that helps.
Thank you so much, Kevin!!! :)


  • any experience with how to unit test AI code? Any best practices for how to test it in general?

Visualization ...

I was doing an FSM-driven behavior system for a simulation, and because of the complexity of how the objects interacted, a VERY important part of correcting/composing the AI logic was being able to visualize what all the objects were actually doing in the simulation (graphically, instead of reams of numbers). Beyond that was a way to show selected state data (visually/immediately, instead of stop-motion poking in the debugger), and then, when all else failed, falling back to some kind of ad hoc logging to show the critical calculations (to spot why it wasn't behaving the way I wanted it to).

The FSM (the high-level logic) was part of a system which used proximity scanning to build target lists, pathfinding to sort target priorities/validity, and multiple tasks/target types that were all being considered in parallel (priority picking). It included the ability for an object to abandon a current task (a multi-turn sequence of actions) when a better opportunity was newly detected. Each object would potentially consider dozens of objects in its proximity. (An object's internal state also changed goals, which shifted priorities.)

So the visualization was the way I spotted something that looked like it wasn't acting right, and then the internal data presentation could be turned on to try to trace how/what decision was being made (with logging for very complex cases).

---

Unit testing would be having an 'arena' setup to place the required mix of objects in a canned situation to force them to interact. Usually you do it in increasing complexity: first test undistracted proper behavior, then try cases where they conflict to make sure decision priorities are correct and the transitions (like abandoning a previous 'task') are done correctly.

Unit testing the tools used by the AI logic was fairly straightforward - by displaying target lists, priority orderings, and A* paths, it was easy to see whether they were behaving properly (for a particular simulation situation).

---

For your AI structuring, you need things set up (from the start) to facilitate the testing (like piping logging info to the screen and to files).

The AI logic (code) should also be presented so it is easily readable (for when you are tracing the logic to figure out why something didn't act the way you think it should...). I created a high-level language for the 'scripting' that was actually a macro expansion system (it was in C/C++) -- the high-level stuff was much simpler to read/understand (and went far beyond the 'high-level subroutine calls' method in what it simplified).


---

One thing that complicated matters was that the AI logic for object behavior was broken up into 'phases', where execution was split into grouped processing (I did a lock-step calculate - act - resolve ... to get rid of the problem of the AI having to handle changing world data). The macro system allowed each 'state' to have its associated logic visually grouped, even though it executed in different places (phases) of the 'turn' logic.

Organizing the execution framework that way also allowed multi-threading, as each object's logic was independent, with the input 'world state' data static for each 'turn'. They could all be executed independently, with no 'data change' interlocking needed (which can be a huge source of overhead).
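
Roughly, translated into the C#/.NET world you are working in (mine was C/C++ with the macro system), the shape of that turn was something like this - all the type names are placeholders:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Every object decides against the same frozen world state, then all of the
// resulting actions are resolved together in a single-threaded pass.
public sealed class WorldSnapshot { /* static copy of the world for this turn */ }
public sealed class Decision { /* what the object wants to do this turn */ }

public abstract class SimObject
{
    public abstract Decision Decide(WorldSnapshot world);   // independent, no shared writes
}

public sealed class WorldState
{
    public WorldSnapshot Freeze() { return new WorldSnapshot(); }
    public void Apply(SimObject obj, Decision decision) { /* mutate the real world */ }
}

public static class TurnRunner
{
    public static void RunTurn(IList<SimObject> objects, WorldState world)
    {
        WorldSnapshot frozen = world.Freeze();             // calculate phase input

        var decisions = new Decision[objects.Count];
        Parallel.For(0, objects.Count, i =>
        {
            decisions[i] = objects[i].Decide(frozen);      // no interlocking needed
        });

        for (int i = 0; i < objects.Count; i++)            // resolve phase
        {
            world.Apply(objects[i], decisions[i]);
        }
    }
}
```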


--------------------

Other data constructs for your all-in-one system:

Target Lists

Priority assignment and sorting and picking (for both tasks and targets)

Flexible temporary state data used for persistent decisions

Map searching (and building relevant symbolized maps from the simulation's map data)

Object attribute retrievals/scanning

Visualization-oriented data (symbols for visual presentations)

Logging data templates (to facilitate tools for log playbacks)

My project was still fairly simple in the structuring of the AI logic, and I hadn't yet gotten into 'planners' and hierarchical goals/solutions/tasks (but I was headed that way because of the limitations of the simpler system).

If your AI processing requirements are such that you exceed what can be done on one multi-core CPU, then data replication/update systems will be needed for your AI engine (and that gets ugly).

----

I'm not sure how you would structure this - but I found that, because of the huge amount of processing AI takes (my simulation was running in real time), you need methods of culling unneeded processing (i.e. quantum priorities). The data driving that is usually embedded throughout the main AI logic (both the engine code and the AI 'script' data), and the culling itself can happen at many points (in my system it happened in each processing phase, in different ways).
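
One very simple form of that culling, just to illustrate the idea (the priority levels and ids are made up):

```csharp
public enum AiPriority { High, Medium, Low }

// Low-priority objects only get a full think every few turns, staggered by id
// so they don't all think on the same turn.
public static class AiCulling
{
    public static bool ShouldThinkThisTurn(AiPriority priority, int objectId, int turnNumber)
    {
        int stride = priority == AiPriority.High   ? 1
                   : priority == AiPriority.Medium ? 4
                   : 16;
        return (turnNumber + objectId) % stride == 0;
    }
}
```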


I had to have many flags to block certain types of AI processing depending on the current state/actions of the object (which included being in the middle of atomic animations - which could include locking out another object being interacted with). So that's another consideration: how closely you need the AI to interact with other game-engine functions (versus being largely independent).
--------------------------------------------
Ratings are Opinion, not Fact

This topic is closed to new replies.
