Paris Game AI Conference

Published June 23, 2009 by Stefan Maton, posted by Myopic Rhino

Introduction

After a first successful AI-specific conference last year, the people behind AIGamedev.com, Alex Champandard and Petra, managed to invite not only more speakers but also some of the most interesting ones. Within two days there were so many different, high-quality presentations and panels that I fear it will be difficult to condense my 24 pages of notes into an article without stepping on the slides, videos and audio recordings that will be made available to premium members of AIGamedev.com. I hope this article will do the event justice.

The event was held in one of the rooms of the Conservatoire National des Arts et Metiers on the 9th and 10th of June. About 15 speakers and panel contributors participated, informing the roughly 170 listeners per talk about their research and their approaches to different AI problems, and covering themes such as emotions and body language.

State of the industry

To introduce all the speakers to the public, Alex chose to open with a short discussion about the current state of the industry. Every participant had the chance to introduce himself, answer some questions and make a brief statement on the topic. I won't introduce every speaker here; instead I'll do so where appropriate, when covering their talks.

So many interesting statements were made during this first session that it would be worth an article of its own. I'll pick out the short statements that interested me most:

Phil Carlisle mentioned that independent game developers tend to do more experimentation than mainstream developers.

Mieszko Zielinski stated that designers want total control, but when the player does something the designers had not foreseen, the AI looks broken.

Alex J. Champandard said that one of the achievements of the last 10 years is that the technology is now in place and the programmer is not in charge of everything anymore. Designers can actually start focusing on the design without having to interact with programmers for every single requirement.

Emotion in Game Characters

This session was presented by Phil Carlisle, a British independent game developer and lecturer/researcher at the University of Bolton, U.K. He has worked with Team17 on the Worms franchise.

You have to meet Phil in person to appreciate his charismatic presence whenever he starts to talk. It's hard not to listen when he speaks about game characters, the need to give them emotions, and how they can evoke emotions. Perhaps that's why he is also the best person to brief us on the research done so far and where it will probably head in the future.

His motivation is to create "real" emotion and to understand the gamer's emotions, with the ultimate goal of creating actors instead of the insensitive, emotionless puppets that in-game characters are now.

To illustrate this, he showed us three video examples: Fallout 3, Team Fortress 2 and Wall-E. While the character shown in Fallout 3 seemed completely emotionless, the Heavy of Team Fortress 2 made heavy (sigh) use of facial expressions and body language. Finally, Wall-E was an example of how to create the impression of deep emotion even without speech. Sometimes simple movements can express more than a thousand words.

Phil talked to us about "artificial performance", which should create convincing characters by taking into account both verbal communication (language, paralinguistics) and non-verbal communication (facial expression, gaze, posture, ...).

When talking about the current research, he pointed us to Antonio R. Damasio's "Descartes' Error: Emotion, Reason, and the Human Brain". Phil said: "People are not logical beings. People are emotional beings." With this he introduced us to the idea that it's perhaps more important to make use of recognition systems than logic, because "emotions help regulate what we do."

Phil told us about different models for emotions, personality and mood used in psychology (a small code sketch of the PAD idea follows the list):

  • OCEAN ("Big Five")
  • PAD (Pleasure-Arousal-Dominance)
  • OCC Model of Emotions (Ortony, Clore, & Collins)
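
To make one of these concrete: a PAD-style mood can be stored simply as a point in Pleasure-Arousal-Dominance space that emotional events nudge around. The following is a minimal sketch of that idea, assuming my own illustrative names and constants (this is not code from Phil's talk):

```cpp
#include <algorithm>

// Illustrative sketch only: a mood stored as a point in the
// Pleasure-Arousal-Dominance (PAD) space, nudged by emotional events.
struct PadMood {
    float pleasure  = 0.0f;  // -1 (displeasure) .. +1 (pleasure)
    float arousal   = 0.0f;  // -1 (calm)        .. +1 (excited)
    float dominance = 0.0f;  // -1 (submissive)  .. +1 (dominant)

    // Blend an emotional impulse into the current mood.
    void apply(float p, float a, float d, float weight) {
        pleasure  = clamp(pleasure  + p * weight);
        arousal   = clamp(arousal   + a * weight);
        dominance = clamp(dominance + d * weight);
    }

    // Drift back toward neutral over time so moods fade.
    void decay(float dt, float rate) {
        pleasure  -= pleasure  * rate * dt;
        arousal   -= arousal   * rate * dt;
        dominance -= dominance * rate * dt;
    }

private:
    static float clamp(float v) { return std::min(1.0f, std::max(-1.0f, v)); }
};
```
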
He also listed a couple of books which might be interesting when learning about emotions and characters.

Finally, he talked about future work in this field: the player's perception of agent communication (verbal and non-verbal), models of emotion, and procedural animation. He briefly described emotional characters and a possible way to include emotions in AI.

He talked about much more, and it would take far more than a conference report to cover in detail the different approaches Phil mentioned throughout his talk.

Nevertheless, for those who are interested in learning more about all this, I've added a couple of links:

http://en.wikipedia.org/wiki/Paralanguage
http://en.wikipedia.org/wiki/Descartes'_Error
http://en.wikipedia.org/wiki/Big_Five_personality_traits
http://www.springerlink.com/content/g071r4n59240u537/ (PAD)
http://www.bartneck.de/publications/2002/integratingTheOCCModel/bartneckHF2002.pdf
http://en.wikipedia.org/wiki/Procedural_animation
http://www.actormachine.com/products.html
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.6716

Coordinating Agents with Behavior Trees

Ricard Pillosu works at Crytek as Lead Game Programmer, where he has been working on the Crysis franchise. Ricard used an example behavior tree for a soldier in a first-person shooter (what else) as the context of this session, to introduce us to the concept of squad and group coordination.

He mentioned that current graphics create high expectations of the behavior of on-screen characters, and that the AI has to keep up with the increasing complexity of today's computer games. AI must deal with more information, be easier to debug, permit rapid iteration, use fewer hardware resources and enable easy coordination of AI agents.

Behavior trees (BT) can be used to handle some, if not all, of the above-mentioned issues. BTs present a tree-like view of complex behavior and are thus more readable.

Before one can make use of a behavior tree, it's necessary to represent some game data in an abstract way. Ricard used a simple table to represent the agent's knowledge. To generate this knowledge, filter functions are used to simplify the world data, which is then easy to read, easy to pack and easy to debug.

Typical content of this simplified agent knowledge looks like this:

  • current_weapon = 3 (UZI)
  • Health = 100
  • Ammo = 60
  • ...
Ricard explained in a bit more detail how a BT is constructed, which I won't reproduce here (see the links for a more detailed explanation). Let's just say that a BT contains actions and conditions that are checked before those actions are executed.
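
To make the condition/action idea concrete, here is a minimal behavior-tree sketch. This is my own illustration rather than Crytek's implementation; all type names are assumptions:

```cpp
#include <functional>
#include <memory>
#include <vector>

// Minimal behavior-tree sketch (illustrative, not Crytek's implementation).
enum class Status { Success, Failure, Running };

struct Node {
    virtual ~Node() = default;
    virtual Status tick() = 0;
};

// Leaf that checks a condition against the agent's abstracted knowledge
// (e.g. "Ammo > 0") before a sibling action is allowed to run.
struct Condition : Node {
    std::function<bool()> check;
    Status tick() override { return check() ? Status::Success : Status::Failure; }
};

// Leaf that performs an action (e.g. "Reload", "Shoot").
struct Action : Node {
    std::function<Status()> run;
    Status tick() override { return run(); }
};

// Sequence: ticks children in order, stops as soon as one doesn't succeed.
struct Sequence : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& child : children) {
            Status s = child->tick();
            if (s != Status::Success) return s;  // Failure or Running ends the pass
        }
        return Status::Success;
    }
};
```

A "tactic" branch like the one described below would then simply be a subtree guarded by a Condition that checks the squad context.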

Using the above-mentioned example of a soldier, he elaborated on where and why to add a branch to the BT, to introduce the idea of squad/group management. The newly added squad/group behavior is called a "tactic" (e.g. "Flank") and is used when given conditions hold (such as "use this tactic if 2-4 agents are available") and the context makes sense for that tactic.

In order to achieve deeper coordination between the involved characters, a tactic manager is used which monitors BT activity. The tactic manager analyzes the situation, builds the list of possible tactical candidates (agents available for any given tactic) and re-evaluates the agents' BTs once a tactic has been triggered.

I would have liked to add a lot of interesting links about this topic, but unfortunately, when searching for behavior trees, you will mostly find papers related to natural language processing and the like. The best resource (which is also referenced over and over again) is Alex J. Champandard's BT overview: http://aigamedev.com/open/article/bt-overview

Discussion on Squads & Group Behaviors

Right after Ricard's session, a discussion about squads and group behaviors was held. During this discussion it was stated that a top-down approach to managing group behavior makes more sense than a bottom-up one.

It was also stated that it would make sense to incorporate environmental hints to point the AI to good areas for flanking or good places to concentrate suppressive fire.

The AI of Killzone 2's Multiplayer Bots

Remco Straatman, Lead AI Programmer at Guerrilla Games, and Alex J. Champandard, AIGameDev.com, joined forces to give us a two-part presentation of Killzone 2's multiplayer bot AI.

The first part was held by Remco, who told us about the overall scope of the game. He told us that the bots in Killzone 1 were a well-received feature, so they wanted to focus more on bots for multiplayer games in Killzone 2. While the bots in Killzone 1 were mostly meant to accompany the player, the new bots also had the role of a teacher, showing the player how to play the game.

Focusing more on the multiplayer part, with 32 players in a game, also allowed for more team-based game modes. Multiple modes are available per map, such as "Capture and Hold" (Domination), "Body Count" (Team Deathmatch), "Search and Retrieve", "Search and Destroy" and "Assassination". Because of this, the AI had to adapt to the game mode being played on the map. Additionally, the AI had to handle badges that add capabilities depending on the agent's assigned character type (Scout, Tactician, Assault, Engineer, Medic or Saboteur).

While the previous version of the Killzone AI made use of Lua, the new architecture is layered and split between strategic AI, squad AI and individual AI. You can consider this a top-down way of handling situations. The strategic AI hands orders to the squads like "Defend this point", "Advance to that point" or "Attack that unit". The squad tries to carry out those orders and reports back to the strategic AI about the outcome of its operations.

The squad AI, while trying to obey the strategic AI's orders, will order the individual AI to move to a specific location and receive feedback about combat information. The strategic AI may also assign targets to the individual AI and receive requests for assignments from it, but this strategic-AI-to-individual-AI communication does not seem to be the "normal" way of handling things; it's more of a special case for when the situation and game mode make it necessary.

The individual AI makes use of hierarchical task network (HTN) planning after creating a "picture" of the current world state. This picture is created by daemons which collect orders, messages and threats (recognized through stimuli and perception). The planner creates a plan containing tasks, which themselves generate controller input (thus, the game AI uses controller input to move the NPCs around).

Killzone 2's HTN planner creates one domain per individual AI. Each domain contains one or more methods, each containing one or more branches with preconditions and task lists. A task can be a primitive, such as "Shoot" or "Reload", or a compound which is resolved recursively. Using this, a general combat behavior is created which is both opportunistic and ordered.
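
As a rough illustration of that structure (one domain per agent; methods made of branches; branches with a precondition and a task list; primitive vs. compound tasks), a sketch might look like this. The types are my own assumptions, not Guerrilla's code:

```cpp
#include <functional>
#include <string>
#include <vector>

// The agent's "picture" of the world, built by daemons (simplified).
struct WorldState { /* health, ammo, known threats, current orders, ... */ };

struct Task {
    std::string name;              // e.g. "Shoot", "Reload", "AttackTarget"
    bool isPrimitive = true;       // compounds are resolved recursively
};

struct Branch {
    std::function<bool(const WorldState&)> precondition;
    std::vector<Task> taskList;    // expanded into the plan when chosen
};

struct Method {
    std::string compoundName;      // which compound task this method refines
    std::vector<Branch> branches;  // first branch whose precondition holds wins
};

struct Domain {
    std::vector<Method> methods;   // Killzone 2: one domain per individual AI
};
```
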

Some numbers for the individual AI:

  • 360 methods
  • 1048 branches
  • 138 behaviors
  • 147 continue branches.
During multiplayer games, which normally contain about 14 bots, 10 turrets and 6 squads, about 500 plans are generated per second and 8,000 decompositions per second are handled, which makes 15 to 16 decompositions per plan; 24,000 branch evaluations are done per second. Of note: the AI plans at a frequency of 5 Hz, but frequent behavior changes are prevented and unnecessary checks are avoided through some optimizations (e.g. the behavior only changes when a better plan is available or when the current plan is no longer feasible).

The squad AI makes use of the same planning structure. The difference is that its daemons receive strategic orders, and its task execution produces individual orders as member messages, which then trigger individual AI planning as described above.

Alex then took over the presentation and reminded us of the challenge they encountered: 1-14 bots per side, put into squads. He compared Killzone 2's AI with Halo 3's: Halo 3 has disposable bots, which are level-specific and made to be easily handled by designers; its AI is mostly declarative and story-driven. Killzone 2, on the other hand, has persistent bots which are applied in a more general way and have been programmed by programmers for programmers; its AI is mostly procedural and strategic.

As a reminder: the AI has to separate the "what" and the "how" in order to achieve good plans in a goal-driven approach. Killzone 2 did this by creating mission-specific AI (one for "Search and Destroy", one for "Capture and Hold", etc.) consisting of sets of C++ classes. Depending on the game mode, different objectives are generated and specific hints are placed on the maps to tell the AI where to find offensive and defensive locations.

Using these objectives, a base strategy is created, which is used to create tasks for the squad and individual AI. The objectives scale up and down with the number of bots, which need rich and detailed information about the objective to reach. The objectives can be laid out this way:

              Static            Dynamic
  Offensive   AdvanceWaypoint   AttackEntity
  Defensive   DefendMarker      EscortEntity
Alex also provided an algorithm for squad bot assignment:
  1. Calculate ideal distribution of bots, then squads.
  2. Create new squads if necessary.
  3. Remove extra squads if too many assigned to any objective.
  4. Pick an objective for each squad. This is based upon a weighting of each objective.
    1. If the objective is already active, pick a new sub-objective on a regular basis.
    2. Assign the best objective to each squad if the above is not valid.
  5. Un-assign bots if too many for squad or objective.
  6. Process all free bots and assign them to the best squad.
Alex told us that the assignment of bots to squads is based upon class combinations (something rarely remarked upon) and distance to the squad center (the average member location). Objectives are assigned to squads on a first-come, first-served basis, and the badges assigned to the bots follow a global policy chosen by design.
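
As an illustration of the final step and the distance criterion just mentioned, here is a sketch that hands each free bot to the squad with the nearest center (average member location). The class-combination weighting is omitted, and all names are assumptions rather than Guerrilla's code:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

struct Bot   { Vec3 pos; int squad = -1; };            // -1 == unassigned
struct Squad { std::vector<std::size_t> members; };

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Squad center = average member location, as described in the talk.
static Vec3 squadCenter(const Squad& s, const std::vector<Bot>& bots) {
    Vec3 c{0, 0, 0};
    for (std::size_t i : s.members) {
        c.x += bots[i].pos.x; c.y += bots[i].pos.y; c.z += bots[i].pos.z;
    }
    float n = static_cast<float>(s.members.size());
    if (n > 0) { c.x /= n; c.y /= n; c.z /= n; }
    return c;
}

// Step 6 of the algorithm: process all free bots, assign each to the
// nearest squad (a real system would also weigh class combinations).
void assignFreeBots(std::vector<Bot>& bots, std::vector<Squad>& squads) {
    for (std::size_t b = 0; b < bots.size(); ++b) {
        if (bots[b].squad != -1) continue;             // already assigned
        float best = std::numeric_limits<float>::max();
        int bestSquad = -1;
        for (std::size_t s = 0; s < squads.size(); ++s) {
            float d = dist2(bots[b].pos, squadCenter(squads[s], bots));
            if (d < best) { best = d; bestSquad = static_cast<int>(s); }
        }
        if (bestSquad >= 0) {
            bots[b].squad = bestSquad;
            squads[bestSquad].members.push_back(b);
        }
    }
}
```
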

He then elaborated in more detail on the hints placed on the map. The annotations are placed manually ("create the information by hand first, then automate later if necessary") and include regroup locations, mission-specific points to defend, and sniping and hiding locations. Using this information, the level is then automatically processed to create a strategic graph, which supports runtime strategic decision-making and interprets the manual annotations dynamically.

The strategic map contains a set of areas, which are groups of waypoints. These areas are connected, and the number of connections already gives a hint about choke points in the level. The strategic map is used for rough squad movement planning (the actual waypoints are then used for the actual path-finding). This high-level overview of the map also enables an easy representation of faction influence.

The influence map enables the AI to determine who controls which areas on the map. It is calculated using all bots, turrets and deaths. Through this information, the strategic planning can decide where to place regrouping areas, where to attack, what the current progress is, etc. To select, for example, a regrouping area, the strategic AI identifies candidates using locations recently used by other squads and previously selected locations. The influence map is also used to calculate movement on the map: its information provides hints about areas appropriate for flanking or for moving without getting into a fight.
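
A minimal influence-map sketch along these lines might look as follows, assuming two factions and per-area scores fed by bots, turrets and deaths; the names and the decay scheme are my own, not Guerrilla's code:

```cpp
#include <cstddef>
#include <vector>

// Illustrative per-area faction influence over the strategic map's areas.
struct StrategicArea {
    float influence[2] = {0.0f, 0.0f};  // one score per faction
};

class InfluenceMap {
public:
    explicit InfluenceMap(std::size_t areaCount) : areas_(areaCount) {}

    // Accumulate influence from a bot, turret or death event in an area.
    void addInfluence(std::size_t area, int faction, float amount) {
        areas_[area].influence[faction] += amount;
    }

    // Positive result: faction 0 controls the area; negative: faction 1.
    float control(std::size_t area) const {
        return areas_[area].influence[0] - areas_[area].influence[1];
    }

    // Fade old information each update so the picture stays current.
    void decay(float factor) {
        for (auto& a : areas_)
            for (float& f : a.influence) f *= factor;
    }

private:
    std::vector<StrategicArea> areas_;
};
```
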

Finally, Alex told us about future work, which will include data mining to find ideal positions and hot spots. Using bots in early versions of the maps will help improve the level design, because it lets the designers have the bots test-run the levels and find weak points in their design. The bots' role as a teacher will become more important: they will teach us how to play the game. In the future, the interaction between a squad and other friendly squads and/or the human player will become more important, too. Currently a squad is not able to be of meaningful use to a player (from an automated point of view).

To learn more about the topic, see:

"Realistic Autonomous Navigation in Dynamic Environments" (Alex J. Champandard)
"HTN planning for flexible coordination of multiagent team behavior" (Obst, Mass, Boedecker)
"HTN planning: Complexity and Expressivity" (Erol, Hendler, Nau)
"Using Player Models to Improve Robustness of HTN Plans in Multi-Agent Domains" (Semsch, Jakob, Doubek, Pechoucek)
"Hierarchical Task Network Planning: Formalization, Analysism and Implementation" (Erol)

AI Multithreading & Parallelization

Bjoern Knafla, the one and only parallelization consultant in the gaming industry, is a research associate at the University of Kassel in Germany. His talk provided an overview of the concepts and techniques used for parallelizing code.

When working with parallel programming, most of the problems one might encounter are related to race conditions and deadlocks.

Race conditions appear when the outcome of a multi-tasked system depends on the order in which code paths are executed. Race conditions also arise when several threads of a multi-threaded application access data simultaneously and one thread is writing to that data; this can lead to unexpected results. A deadlock describes a condition where two or more processes are each waiting for an event or communication from one of the others, so all threads involved are blocked and can no longer make progress.
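
The classic data-race example is two threads incrementing a shared counter: with a plain int, the read-modify-write increments can interleave and updates get lost, while std::atomic makes each increment indivisible. A minimal sketch:

```cpp
#include <atomic>
#include <thread>

// With a plain "int counter" this program could print fewer than 200000
// increments; std::atomic makes each fetch_add indivisible.
std::atomic<int> counter{0};

void worker() {
    for (int i = 0; i < 100000; ++i)
        counter.fetch_add(1, std::memory_order_relaxed);
}

int main() {
    std::thread a(worker), b(worker);
    a.join();
    b.join();
    return counter.load() == 200000 ? 0 : 1;  // always 200000 with atomics
}
```
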

Another problem that might arise is poor performance in such a multi-threaded application when there are contention points. Bjoern talked about the memory wall in this context: hitting the limits of memory bandwidth and memory access latency. You want to share as little data as possible and minimize how often you synchronize data; you have to find the right balance between minimizing sharing and maximizing locality.

Bjoern elaborated on different methods for synchronization. The first is asynchronous calls, which have the benefit of scaling well (with task pools) and potentially better memory locality. But you then have to take care of determinism and be aware of side effects, because it's difficult to get the syncing right. The second system he introduced was parallel agents. This system is a little more complicated because it not only keeps the idea of task pools (with agents scheduled into them) but also introduces the problem of ownership management.

Ownership management must be done when agents share data and memory (i.e. objects or entities) and must communicate with each other to regulate access to that data and memory. A lot of synchronization is necessary at this point. To solve this issue, a two-step approach is taken: in the first step, the collection stage, agents can only write private data (access to objects and/or shared memory is read-only) and request changes to public data. Between the first and the second step, the system determines who gets access to which object. In the second step, agents are then allowed to alter the objects they have been granted.
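
A minimal sketch of this collect-then-commit idea follows. For brevity the request queue is a single merged buffer and the arbitration policy is simply "last writer wins"; a real parallel implementation would give each agent its own buffer and decide ownership between the two steps. All names are assumptions, not code from the talk:

```cpp
#include <cstddef>
#include <vector>

// One queued change to a piece of public data.
struct ChangeRequest {
    std::size_t objectId;
    float newValue;
};

class TwoPhaseWorld {
public:
    explicit TwoPhaseWorld(std::size_t n) : values_(n, 0.0f) {}

    // Step 1 (collection stage): agents read shared state and queue
    // change *requests* instead of writing directly.
    float read(std::size_t id) const { return values_[id]; }
    void request(std::size_t agent, std::size_t id, float v) {
        requests_.push_back({id, v});
        (void)agent;  // a real system would record the requester for arbitration
    }

    // Between the steps: decide ownership, then apply the granted changes.
    void commit() {
        for (const ChangeRequest& r : requests_)
            values_[r.objectId] = r.newValue;  // trivial policy: last writer wins
        requests_.clear();
    }

private:
    std::vector<float> values_;
    std::vector<ChangeRequest> requests_;
};
```
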

The advantages of this system are that it guides the parallelization, that it can scale, and that it has implicit syncing. The cons are that the implementation effort is high: you have a lot of work to do to build such a system. It also has bad locality, and it's hard to achieve any kind of determinism. You still have to cope with getting the synchronization right and avoiding side effects. And finally, since access to the data is deferred, you don't have immediate access to it but must wait until the next frame (even if no other agent wants to access the object/data).

The last way to synchronize cited by Bjoern was data-parallel systems. This means one might slice the agents and group aspects of them into separate tasks (such as pathfinding, sensing, logic, ...). Basically it works like the parallel agents approach but breaks the agents down into separate tasks. The advantage of this system is that lean data structures are used, which improves scalability and lets us scale the system appropriately. It's deterministic and has implicit synchronization while working in an explicit context. On the other hand, this system requires quite some implementation effort, deferring is still necessary, and getting the synchronization right remains an issue.

He concluded his talk by stating that we really need to understand both our game and our target hardware. We must be sure that our code is error-free before jumping into parallelization (because otherwise it's hard to determine whether a problem arises from a bug in the code or from non-determinism). And finally: KISS (Keep It Simple, Stupid)...

The Art of Concurrency and AI System Architecture

After Bjoern's presentation he was joined by Markus Mohr, Lead AI Systems Programmer in R&D at Crytek, and Julien Hamaide, CEO of Fishing Cactus, an independent studio, and former technology guru at 10tacle Belgium (which was working on Totem).

The contributors of this panel agreed upon the need for a more general sub-system framework which doesn't grant access to low-level threading or to shared data. While we have learned the pros and cons of shared data access (and how to improve that situation), thread masking addresses platform-dependent threading problems: each platform has its own approach to parallel processing, and thus the implementation of threading on those platforms is platform-specific. Masking or hiding this platform dependency from the user might help them concentrate more on the AI than on the technical background.

The participants also agreed that one should design one's AI system with multi-threading in mind. Most often it's problematic, if not impossible, to port an existing, single-thread-oriented AI solution to a multi-threaded environment. When porting, try not to use explicit synchronization. Markus put an emphasis on clean code, which helps with pin-pointing problems.

Julien went a step further and added that you should also have a clean structure. You should not only look at the object-oriented side of the coin but also take the data flow into account (as a personal note, I'd like to direct you to "Object Thinking" by David West, which treats this topic in a highly interesting way).

"Object thinking" by David West

Advice and Tales from the Trenches

This panel again brought several people on stage: Phil Carlisle, Mieszko Zielinski, Alex J. Champandard and Eduardo Jimenez. It soon turned from recollections of personal experience to advice for getting into the industry as a games (AI) programmer. The consensus was that those who want to break into the industry should start with small games to show off their skills, and should focus on their main skills during any job interview. Phil stated that if you've got the right background (you know programming), you can implement almost anything. Therefore (as Mieszko stated) you should start as a gameplay programmer and later move into AI; this way you learn the basics of game development.

Phil was astonished that there aren't more students who start small indie companies. It would show their mental capability to create, design, implement, and actually finish a project, and it would also show off their skills. Eduardo emphasized that this also helps you build a portfolio which proves that you can do the work. Mieszko said that the last few percent of a game take fifty to sixty percent of the time.

The speakers also talked about creating for designers. Since designers are not necessarily technical people, it's better to have a system that prevents them from breaking the content too easily or from creating impossible setups or designs. You should also train designers to build their own examples, which can be used to demonstrate a concept and prove functionality.

Planning Multi-unit Maneuvers using HTN and A*

William van der Sterren is an AI consultant at CGF-AI. He's been a contractor for Killzone 1 and Shellshock: Nam '67. William presented the application of hierarchical task network (HTN) planning and A* to plan and coordinate groups of units. His project "PlannedAssault", an online web-based mission generator, allows for large-scale battle planning where the final result looks like a project plan with tasks for each unit.

HTN planners are great for problems with hierarchy, such as resource or unit planning. William decided to combine HTN planning with A* because this allows finding the best plan. To do so, he starts with an initial top-level plan which he expands until the plan consists of primitive tasks. The planner generates multiple alternative plans and estimates the cost of each based upon the world state. The costs are then compared to select the best possible plan. The goal here is to generate good solutions, not many solutions.

The planner must be efficient, generate useful mission briefings and produce error messages when it fails to generate a valid plan. It must also make it easy to add new methods, and support both forward and military-style reverse planning. The quality of the generated plan is important: while any plan can have bad positions, bad coverage and insufficient use of resources, a good plan will position units well and make optimal use of all of them.

When using HTN planners, the top-down approach to planning reflects the problem domain of the plan to create. Each (partial) plan is assigned costs, calculated from the world state delta plus the risk of executing the plan plus the plan's preferences. The multitude of plans is called the plan space. Each plan in this space keeps track of its world states; this is necessary because if we can define world states for the start and end of the plan, we can estimate the plan's costs.

As stated in the previous paragraph, the plan costs are based upon the delta of the world states. Since planning breaks the plan and sub-plans down into tasks, we have to determine the cost of performing each task (in the end, the tasks create the delta between the world state at the start and at the end of the plan). The task cost is the sum of the task duration and the task risk; to weigh the resulting cost, a task preference influences it (e.g. via a subtraction). But how do we get the task duration? It can be the effective duration, if the task is a primitive and its inputs are available; or, for a task consisting of several sub-tasks, the difference between the maximum and minimum of the children's execution times; or, if none of the above apply, a lower-bound estimate (given by a designer). The plans and their costs are then evaluated using an A* "best first" search.
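
Put into code, the cost rules might look like the following sketch; the field names and the exact combination are my assumptions, the talk only specified duration plus risk, weighted down by preference, with the three duration cases:

```cpp
#include <algorithm>
#include <vector>

// Illustrative cost model: cost = duration + risk - preference.
struct Task {
    float duration   = -1.0f;            // known only for primitives
    float risk       = 0.0f;
    float preference = 0.0f;
    float lowerBoundEstimate = 0.0f;     // designer-supplied fallback
    std::vector<Task> children;          // empty for primitives
    float start = 0.0f, end = 0.0f;      // scheduled times, if computed
};

float taskDuration(const Task& t) {
    if (t.children.empty() && t.duration >= 0.0f)
        return t.duration;               // primitive with known inputs
    if (!t.children.empty()) {
        float lo = t.children.front().start, hi = t.children.front().end;
        for (const Task& c : t.children) {
            lo = std::min(lo, c.start);
            hi = std::max(hi, c.end);
        }
        return hi - lo;                  // span of the children's schedule
    }
    return t.lowerBoundEstimate;         // neither case applies: use the estimate
}

float taskCost(const Task& t) {
    return taskDuration(t) + t.risk - t.preference;
}
```
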

William gave some examples of tasks to plan and how their costs are calculated. In his example he wanted to transport a group of soldiers from point A to B. The created plans included several solutions for where to pick them up, or even whether it's worthwhile to pick them up at all. Each sub-task (e.g. drive to the pick-up location, let the soldiers move to the pick-up location, drive to the delivery point, deploy, ...) is estimated and taken into account.

He talked about some technical issues he had encountered and how he solved them. When working with planners, you encounter binding problems: you have to assign values to variables and check their consistency. STRIPS planners usually have implicit bindings. This binding becomes a problem when you have to cope with a large range of values (he gave the figure of 16,000 waypoints), when you write most of your planner code to constrain bindings, and when you have a lot of branches to handle in order to find a good plan. The workaround consists of using procedural preconditions, abstracting and/or reducing the world state values, and using explicit binding.

Using military-style reverse planning helps limit the options to evaluate. Reverse planning means that I look at, for example, an objective to attain, and ask the question "What do I need to achieve that?" An answer could be "I have to send in 5 units." You then ask yourself what you have to do to send in 5 units. In this way, step by step, you not only work out what you have to do to attain the objective, but also which resources you need (e.g. transportation, support, waypoints, landing points, etc.). When planning this way, you might find it useful to take the output of one planned task as the input of the next.

Compared to STRIPS planners, HTN planners allow more control and limit the branching; HTN even helps limit the combinatorial explosion. To learn more about HTN planners, see the links in the Killzone 2 part of this report.

Interesting links:

STRIPS: A new approach to the application of theorem proving to problem solving.
Classical AI Planning: STRIPS planning
Goal-Oriented Action Planning (GOAP)
Automated planning and scheduling

Approaches to Interactive Narrative Generation and Story Telling

Daniel Kudenko, a computer science lecturer at the University of York, provided an overview of approaches to Interactive Drama.

He started with a short history lesson, telling us about stories in action games, stories in adventure games and finally stories in action-adventure games. For each genre he broke the history down into different stages. Stories in action games, for example, began with a rough background story which was weakly integrated into the gameplay; later action games used the story to connect missions. In either case, the story is pre-written and offers little to no flexibility, and within the missions themselves the story plays no part.

Adventure games also made use of pre-defined stories and allowed for limited player interaction with the environment. They evolved toward better interface usage and more complex story graphs but still made use of a pre-written story. Only the latest adventure games, while still pre-written, have more elaborate narratives using multiple character perspectives. Action-adventures integrate action and story; the story is still pre-written and the mission structure remains.

As a conclusion to this historical overview, one can say that the story is always pre-written and has a small branching factor. The player's influence on the story is limited and not scalable. Daniel then revealed his utopia, where the player is part of the story, changing it through his actions. The story should be scalable and allow large, complex worlds. Domain independence would be ideal: it would allow one to create a general engine which can generate different stories independent of the game domain or genre. This flexibility should bring replayability and let the player immerse more deeply in the game. The goal is not to create better stories, but to create them dynamically, faster and cheaper.

An important part of dynamic story creation is the dramatic structure, which can be described using Gustav Freytag's pyramid. This pyramid analyzes the plot of a story by slicing it into different parts, such as the exposition, the climax and the ending. The different parts are separated by events that lead from one part to another: the exposition passes through an inciting incident to the climax, which consists of a rising action and a falling action; the climax then passes through a resolution to the proper ending. Different solutions can be used to create dynamic storylines: plot graphs, Bayesian networks, planning and drama structures.

A solution presented by Daniel is called GADIN (Generation of Adaptive Dilemma-based Interactive Narrative). The idea behind the project is that specific focal points create dramatic tension, and everything in between leads up to these focal points. This is done using five dilemmas (betrayal, sacrifice, greater good, take down, favor). A story consists of a sequence of dilemmas with story events leading from dilemma to dilemma. The events are controlled by the drama engine and triggered by the player's actions.

The drama engine of GADIN makes use of a knowledge database which contains characters, story actions and dilemmas. Through a narrative generator, which is a planner, and a user model used to predict user actions, the story is generated and presented to the user, who reacts to it. The user's actions are used to trigger new story elements in the narrative generator. To select and generate the new story elements, the narrative generator uses the current game state and a dilemma. It tries to generate a story using those parameters, which may either fail or succeed: if story generation fails, the generator selects another dilemma and re-iterates; once it succeeds, the plan is presented where possible, and the dilemma is presented when valid (through user actions). This leads to a new state which restarts the story generation.

GADIN has been evaluated using a kind of Turing test: one GADIN story containing only NPCs and one soap opera were presented online to 127 users in soap and game forums. 42.5 percent of the users chose the second story (the soap opera) as the one generated by the computer. Now, one has to wonder whether this small number of replies and a comparison of only two stories is valid. Personally, I would prefer giving 10 choices, with 5 computer-generated and 5 soap stories, and letting a larger number of users pick out the computer-generated ones. And even then... the biggest problem of the current GADIN implementation is the fact that it does not produce good storylines.
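
The generate-or-retry loop described above might be sketched like this; all types are assumptions, and the planner is reduced to a stub, since the real system plans over a knowledge base of characters, actions and the five dilemma types:

```cpp
#include <optional>
#include <string>
#include <vector>

enum class Dilemma { Betrayal, Sacrifice, GreaterGood, TakeDown, Favor };

struct GameState  { /* characters, relationships, world facts */ };
struct StoryEvents { std::vector<std::string> actions; };

// Stub standing in for the narrative planner: a real implementation
// searches the knowledge base; here only "Favor" dilemmas succeed.
std::optional<StoryEvents> planToward(const GameState&, Dilemma d) {
    if (d == Dilemma::Favor)
        return StoryEvents{{"neighbour asks the player for a favor"}};
    return std::nullopt;
}

// Try dilemmas in preference order until one yields a playable episode.
std::optional<StoryEvents> nextEpisode(const GameState& state,
                                       const std::vector<Dilemma>& candidates) {
    for (Dilemma d : candidates) {
        if (auto events = planToward(state, d))
            return events;              // success: present these story events
    }
    return std::nullopt;                // no dilemma reachable from this state
}
```
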

Directly after Daniel's talk a panel was held. Daniel said that combining autonomous agents and storytelling is new to academia. It was suggested that it could be a good idea to separate an NPC into a character agent and an actor agent, taking into account the storyline generated by the story planner. But then, where do we draw the line between storyline control and normal AI? The storyline is high-level and determines general goals for the character, which have to be achieved to reach the next dilemma point. The actor AI could take the character's beliefs and goals into account to trigger appropriate actions that push him toward that goal. The story generator could also be used to generate content for the game world, or to avoid re-spawning enemies at places that have already been visited. As you can see, story generation will have a huge influence not only on the gameplay but also on content creation in future games.

Dramatic structure
Freytag's pyramid
Generation of Dilemma-based Narratives: Method and Turing Test Evaluation
Dynamic Generation of Dilemma-based Interactive Narratives (GADIN)
Facade AI

The Racing AI in Pure

Eduardo Jimenez is Senior Programmer at Black Rock Studio. He was responsible for the AI in Disney Interactive's dirt bike racer Pure. He explained how the rider AI is designed to prevent the feeling of rubber-band AI, using Pure as an example and walking us through the iterations they went through to create a believable racing AI.

To achieve this, as Eduardo told us, they chose to make use of a race management mechanism. The reason is that games not using such a system usually have an AI that is either too strong or too weak, and the AI won't react to the player's mistakes. Using a race management system allows developers to create games that are more challenging for everyone, that adapt to different situations, and that actually are fun.

The first technique presented by Eduardo is called the "rubber band". Conceptually, a big rubber band is placed around the player, forcing the AI players to stay near him, usually by any means necessary, whenever he falls too far behind or gets too far ahead. The player perceives this as cheating, because it's quite obvious what is happening. While this solution almost always works and completely removes the "lonely racing" feeling, it usually breaks the illusion of fairness because the cheating used to achieve it is easy to spot.

Eduardo then talked to us about a skill-based system. In a skill system, a skill represents how well a player performs a particular behavior or gameplay aspect. For each player and behavior, one skill is stored, normalized to 0..1. Eventually, every behavior has a skill value, and the set of skills can be used to represent a character's personality. In their first iteration of a skill-based system in Pure, they used static skills, which didn't work out well: the game was either too hard or too easy, and "lonely racing" could still occur.

Therefore they implemented a second skill-based system called "Dynamic Competition Balancing" (DCB). This system modifies the skills dynamically during a race by applying rules to them. In their first approach to DCB, they tried to match the player's position on the leaderboard, but this proved too inflexible and didn't work well. To get around this, the system was coupled with a grouping concept: the riders are split into different groups, each group having a leader and members. The members follow the leader, and the player has the possibility to jump from group to group. Unfortunately this solution wasn't very good either, because the groups were too loose, and if the player jumped ahead of the first group, "lonely racing" still occurred.

This is where the "Race Script" system came in. It's the final method used in Pure, and it worked out quite well. A race script includes a definition of the script and parts of the implementation. It's written by the designers and describes how the race should ideally develop under different circumstances; the different player skill levels must be reflected in the document. The designer must understand that the race ultimately depends on the player's performance, so the document is not a strict script but a set of guidelines.

The implementation is still based upon DCB, but the main rule for grouping is distance. Every rider aims to reach a point X meters in front of or behind the player, and the skills are modified depending on the distance to that point, not the distance to the player. Grouping is achieved by giving similar aiming points to the AI riders, and since the aiming points move during the race, the groups progress. From the player's perspective he is progressing; he doesn't know the game is being made easier for him, because the changes are subtle, which is very rewarding. The difference from the rubber-band technique described at the beginning of this talk is that the skills are limited to a certain range and the race follows a script.
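
An aiming-point rule of this kind might be sketched as follows; the names, constants and the linear skill adjustment are my assumptions, not Black Rock's code:

```cpp
#include <algorithm>

// Each AI rider steers its skill toward a target point a fixed offset
// ahead of or behind the player, with the skill clamped to a believable
// range so the adjustment never looks like cheating.
struct Rider {
    float trackPos;   // distance along the track, in meters
    float skill;      // normalized 0..1
};

void updateRiderSkill(Rider& rider, float playerTrackPos,
                      float offsetMeters,   // +ahead / -behind the player
                      float gain, float minSkill, float maxSkill)
{
    float aimPoint = playerTrackPos + offsetMeters;  // moves as the race progresses
    float error = aimPoint - rider.trackPos;         // >0: rider is behind its point
    // Nudge skill up when behind the aiming point, down when ahead.
    rider.skill = std::clamp(rider.skill + gain * error, minSkill, maxSkill);
}
```

Giving several riders the same offset produces a coherent group around the player; note that the rule reacts to the distance to the aiming point, not to the player directly.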

Other mechanisms were used in the game to ensure good playability: at the beginning of the race each AI rider has only one skill, and the difficulty is changed every lap. In the final meters, the AI riders stop improving their skills to make first place more accessible to the player.

To conclude his session, Eduardo said that the main goals were achieved: challenging, fun races with no "lonely racing", achieved through subtle, rewarding adjustments. Throughout the entire race, coherent groups stay around the player, motivating him to progress. The method itself can be adapted to other types of games if a good set of skills can be defined, and it gives designers more control over the difficulty of the game.

AI Characters From Animation to Behavior

Christiaan Moleman, an animator who has recently been working at Arkane, brought together Phil Carlisle, Julien Hamaide and one additional person whose name I didn't catch (sorry for that...). The panel talked about the challenges of next-gen animation. This was more of a question & answer session, so I'll try to condense the questions and answers into a coherent text, which is not an easy task.

Asked what AI plus animation actually is, Phil said that it is bringing life to characters. This is a nice metaphor because it shows both how far we could already go and how far we actually go in games. Julien emphasized that the animation is really the showcase of the AI, because it displays what the AI wants to do; an AI without appropriate feedback is not enjoyable. He used the game Totem by 10tacle Belgium as an example: in Totem the player was able to make use of his totems, which were animals. The totem's powers were transferred to the player's character, and the character moved according to the totem's animal. The player immediately understood what kind of power he was using just by looking at the character's movements.

Taking the example of The Muppets, Christiaan stated that huge expressiveness is achieved with a single hand gesture and posture; timing is key with a puppet's posture. He stated that puppetry is interesting for animation because it achieves a lot with little effort. Posture (gaze, body movement) seems to be a major point when it comes to generating the feeling of true emotion and real depth in characters. Most games don't get this right because they don't use facial expressions and body language well. Concerning facial expressions, the important point was made that gaze and eyelids play a major role: if someone is talking to me and isn't blinking, or isn't looking at me, it doesn't feel right; it feels like something is missing.

This is also a point where designers have to improve and learn the subtle differences that make a character actually seem to live. What most often hinders more impressive expression, though, is the fact that such animations are quite difficult to achieve, and it's not clear whether the effort is worth the money spent on it (meaning: does it generate more sales?). Everyone is fighting to have the best graphics, but almost no company (if any) is trying to have the best animation system. The consensus was that almost all games that pushed the limits of animation have been successful, both in terms of reviews and sales.

The question is: why don't we integrate better, deeper animation? While some agree that the mentality has to change, others emphasize that it's a question of how much added value it brings to the game.

Voxelization of Polygon Soups for Navigation

The last talk, by Mikko Mononen, introduced us to his R&D project called Recast. The project is based on the idea of converting polygon soups into navigation meshes using voxelization. Mikko is the Lead AI Programmer at Recoil Games and was previously Lead AI on Crysis. The talk was very technical, and I must admit that I didn't take many notes because I was more interested in what was happening on screen. I haven't yet dug into navigation mesh generation, so I had to listen hard to understand half of what he was telling us. Mikko used his open source project to explain his techniques. I've added a link to the project page if you're interested.

I've taken this part from the open source project page: Recast automatically generates navigation meshes. The Recast process starts with constructing a voxel mold from the level geometry and then casting a navigation mesh over it. The process consists of three steps: building the voxel mold, partitioning the mold into simple regions, and peeling off the regions as simple polygons. The voxel mold is built from the input triangle mesh by rasterizing the triangles into a multi-layer heightfield. Some simple filters are then applied to the mold to prune out locations where the character would not be able to move. The walkable areas described by the mold are divided into simple overlaid 2D regions. The resulting regions have only one non-overlapping contour, which simplifies the final step of the process tremendously. Finally, the navigation polygons are peeled off from the regions by first tracing the boundaries and then simplifying them. The resulting polygons are converted to convex polygons, which makes them perfect for pathfinding and spatial reasoning about the level.
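
To picture the first step, here is an illustrative sketch of a multi-layer heightfield data structure (a 2D grid of columns, each holding the vertical spans of geometry rasterized into it); this is a simplification of my own, not Recast's actual code:

```cpp
#include <cstddef>
#include <vector>

// One solid vertical interval in a grid column.
struct Span {
    unsigned short ymin, ymax;  // voxel interval occupied by geometry
    bool walkable;              // top surface flat and clear enough to stand on
};

class MultiLayerHeightfield {
public:
    MultiLayerHeightfield(std::size_t w, std::size_t d)
        : width_(w), depth_(d), columns_(w * d) {}

    // Triangles are rasterized cell by cell; each covered cell gets a span
    // (a real implementation also merges overlapping spans).
    void addSpan(std::size_t x, std::size_t z, Span s) {
        columns_[z * width_ + x].push_back(s);
    }

    const std::vector<Span>& column(std::size_t x, std::size_t z) const {
        return columns_[z * width_ + x];
    }

private:
    std::size_t width_, depth_;
    std::vector<std::vector<Span>> columns_;  // multiple layers per (x, z) cell
};
```
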

Conclusion

This is the second time the event has been held, and it surely was a great experience attending it. Last year's event lasted only one day, and the move to a two-day event really benefits it. There's more of everything: more people, more lectures, more interesting discussions. The coffee breaks are used to talk about the previous sessions, and more than once you get the chance to exchange business cards or just shake hands with one or another known name in the business. Although I've been in the gaming industry myself for almost 13 years now, I'm still impressed when I meet people who have a real influence on it, both from the technical and the game design point of view.

The VIP evening was well attended; I'd say about 60 people came to a relaxed small-talk event in a nice bar called "Chez Claude". If you wonder what typical small-talk topics for AI developers might look like, here are the ones I chatted about that evening: C#, Java and C++, and programmers' difficulties in transitioning smoothly from one language to another; camera movement and direction using HTN planners; the difficulties of developing and distributing for iPhones; human-machine interfaces such as direct brain plugs and brain firewalls; etc., etc. As you can see, even on a relaxed evening we programmers can't help talking about things at least remotely related to AI.

If there is anything I'd like to see changed, it's the time given to each speaker. Currently it's 45 minutes including the Q&A session. Although this is "standard speaker time", I'd like to see it extended to one hour: 45 minutes of talk plus 15 minutes of Q&A. Every so often we ran out of time: either the speaker could not finish his talk and Alex had to ask him to hurry, or the Q&A session had to be cut short. Giving speakers more time would give attendees more time to interact with them, and thus a better understanding of the topic at hand.

I'm already looking forward to next year's iteration of this event with (hopefully) a similarly impressive, high-quality line-up of talks.
