Search the Community

Showing results for tags 'Behavior'.



Found 22 results

  1. (Note to Mods: Could "Input" or "Peripherals" be a good tag to add to the tag list? Just a thought.) Hey, I'm currently working out which keys on a keyboard a user can rebind their actions to. The trick is that I use a Norwegian keyboard, so it's not obvious which keys correspond to the actual C#/XNA Keys enum values. Are there any keys from the XNA Keys enum that, in your opinion, I've neglected to add? I don't need all Keys to be bindable, only the "most commonly used keys on a keyboard". Thanks. https://pastebin.com/n1cz8Y0u
  2. This is an extract from Practical Game AI Programming from Packt. Click here to download the book for free! When humans play games – like chess, for example – they play differently every time. For a game developer this would be impossible to replicate, so if writing an almost infinite number of possibilities isn't a viable solution, game developers have needed to think differently. That's where AI comes in. But while AI might seem like a very new phenomenon in the wider public consciousness, it's actually been part of the games industry for decades.

     Enemy AI in the 1970s

     Single-player games with AI enemies started to appear as early as the 1970s. Very quickly, many games were redefining the standards of what constitutes game AI. Some of those examples were released for arcade machines, such as Speed Race from Taito (a racing video game), or Qwak (a duck-hunting game using a light gun) and Pursuit (an aircraft fighter), both from Atari. Other notable examples are the text-based games released for the first personal computers, such as Hunt the Wumpus and Star Trek, which also had AI enemies. What made those games so enjoyable was precisely that the AI enemies didn't react like any that had come before them. This was because they had random elements mixed with the traditional stored patterns, creating games that felt unpredictable to play. That was only possible thanks to the incorporation of microprocessors, which expanded the capabilities of a programmer at that time. Space Invaders brought movement patterns, and Galaxian improved on them and added more variety, making the AI even more complex. Pac-Man later brought movement patterns to the maze genre – the AI design in Pac-Man was arguably as influential as the game itself. After that, Karate Champ introduced the first AI fighting character and Dragon Quest introduced the tactical system for the RPG genre.
     Over the years, the list of games that have used artificial intelligence to create unique game concepts has expanded. All of that has essentially come from a single question: how can we make a computer capable of beating a human in a game? All of the games mentioned used the same method for their AI, the finite-state machine (FSM). Here, the programmer inputs all the behaviors that are necessary for the computer to challenge the player. The programmer defines exactly how the computer should behave on different occasions in order to move, avoid, attack, or perform any other behavior to challenge the player, and that method is used even in the latest big-budget games.

     From simple to smart and human-like AI

     One of the greatest challenges when it comes to building intelligence into games is adapting the AI movement and behavior in relation to what the player is currently doing, or will do. This can become very complex if the programmer wants to extend the possibilities of the AI's decisions. It's a huge task for the programmer because it's necessary to determine what the player can do and how the AI will react to each action of the player, and that takes a lot of CPU power. To overcome that problem, programmers began to mix possibility maps with probabilities and other techniques that let the AI decide for itself how it should react according to the player's actions. These factors are important to consider when developing an AI that elevates a game's quality. Games continued to evolve and players became even more demanding. To deliver games that met player expectations, programmers had to write more states for each character, creating new and more engaging in-game enemies.

     Metal Gear Solid and the evolution of game AI

     You can start to see now how technological developments are closely connected to the development of new game genres. A great example is Metal Gear Solid; by implementing stealth elements, it moved beyond the traditional shooting genre.
     Of course, those elements couldn't be fully explored as Hideo Kojima probably intended because of the hardware limitations at the time. However, jumping forward from the third to the fifth generation of consoles, Konami and Hideo Kojima presented the same title, only with much greater complexity. Once the necessary computing power was there, the stage was set for Metal Gear Solid to redefine modern gaming.

     Visual and audio awareness

     One of the most important but often underrated elements in the development of Metal Gear Solid was the use of visual and audio awareness for the enemy AI. It was ultimately this feature that established the genre we know today as the stealth game. Yes, the game uses pathfinding and an FSM, features already established in the industry, but to create something new the developers took advantage of some of the most cutting-edge technological innovations. The influence of these features today extends into a range of genres, from sports to racing. After that huge step for game design, developers still faced other problems. Or, more specifically, these new possibilities brought even more problems. The AI still didn't react like a real person, and many other elements were required to make the game feel more realistic.

     Sports games

     This is particularly true when we talk about sports games. After all, interaction with the player is not the only thing we need to care about; most sports involve multiple players, all of whom need to be 'realistic' for a sports game to work well. With this problem in mind, developers started to improve the individual behaviors of each character, not only for the AI that was playing against the player but also for the AI that was playing alongside them. Once again, finite-state machines made up a crucial part of the artificial intelligence, but the decisive elements that helped to cultivate greater realism in the sports genre were anticipation and awareness.
     The computer needed to calculate, for example, what the player was doing and where the ball was going, all while making the 'team' work together with some semblance of tactical alignment. By combining the new features used in stealth games with a vast number of characters on the same screen, it was possible to develop a significant level of realism in sports games. This is a good example of how the same technologies allow for development across very different types of games.

     How AI enables a more immersive gaming experience

     A final useful example of how game realism depends on great AI is F.E.A.R., developed by Monolith Productions. What made this game so special in terms of artificial intelligence was the dialog between enemy characters. While this wasn't strictly a technological improvement, it was something that helped to showcase all of the development work that was built into the characters' AI. This is crucial because if the AI doesn't say it, it didn't happen. Ultimately, this is about taking a further step towards enhanced realism. In the case of F.E.A.R., the dialog transforms how you see in-game characters. When the AI detects the player for the first time, it shouts that it has found the player; when the AI loses sight of the player, it expresses just that. When a group of AI-controlled characters is trying to ambush the player, they talk about it. The game, then, almost seems to be plotting against the person playing it. This is essential because it brings a whole new dimension to gaming. Ultimately, it opens up possibilities for much richer storytelling and complex gameplay, which all of us – as gamers – have come to expect today.
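The finite-state machine method the extract keeps returning to fits in a few lines. This is a minimal sketch; the states ("patrol", "attack", "flee") and transition conditions are illustrative, not taken from any of the games mentioned:

```python
# Minimal finite-state machine for an enemy character.
# Every reaction is spelled out by the programmer in advance,
# which is exactly the property the extract describes.

class EnemyFSM:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, low_health):
        # Each state lists exactly which transitions are allowed.
        if self.state == "patrol":
            if sees_player:
                self.state = "attack"
        elif self.state == "attack":
            if low_health:
                self.state = "flee"
            elif not sees_player:
                self.state = "patrol"
        elif self.state == "flee":
            if not sees_player:
                self.state = "patrol"
        return self.state
```

The appeal (and the limit) of the technique is visible even at this size: behaviour is completely predictable from the table of transitions, so richer behaviour means writing more states.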
  3. Hello, I'm a trainee for software development and in my free time I try to do various things. I decided I wanted to figure out how "Dual Contouring" works, but I haven't been successful yet. I could find some implementations in code which are very hard to follow, and some papers "explaining" it written in a way that I have a hard time even understanding what the input and output are. All I know is that it is used to make a surface/mesh out of voxels. Is there any explanation someone can understand without having studied math/computer science? The problem is also that most of the terms I can't even translate into German (like "Hermite data"), nor can I find an explanation that I can understand. I know how Marching Cubes works, but I just can't find anything on Dual Contouring. I also don't quite get the point of octrees. As far as I'm aware, an octree is a set of data organized like a tree where each element points to 8 more elements. I don't get how that could be helpful in an infinite 3D world and why I shouldn't just use a list of coordinates with zeros and ones for dirt/air or something like that. Thanks, a clueless trainee ^^
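On the octree question above: the usual argument for octrees over a flat list of voxels is that large uniform regions (all air, all dirt) collapse into a single node, so storage follows the complexity of the terrain rather than its volume. A rough sketch, assuming 0 = air and 1 = dirt:

```python
# Sparse octree over a cubic voxel region. A region that is entirely
# one value is stored as a single leaf; it only splits on demand.

class OctreeNode:
    def __init__(self, size, value=0):
        self.size = size        # edge length of this cubic region
        self.value = value      # 0 = air, 1 = dirt (meaningful for leaves)
        self.children = None    # None = uniform leaf, else 8 child octants

    def _child_index(self, x, y, z, half):
        # one bit per axis selects which of the 8 octants holds (x, y, z)
        return (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)

    def set_voxel(self, x, y, z, value):
        if self.size == 1:
            self.value = value
            return
        if self.children is None:
            if value == self.value:
                return  # whole region already holds this value
            # split the uniform region into 8 octants on demand
            self.children = [OctreeNode(self.size // 2, self.value)
                             for _ in range(8)]
        half = self.size // 2
        child = self.children[self._child_index(x, y, z, half)]
        child.set_voxel(x % half, y % half, z % half, value)

    def get_voxel(self, x, y, z):
        if self.children is None:
            return self.value
        half = self.size // 2
        child = self.children[self._child_index(x, y, z, half)]
        return child.get_voxel(x % half, y % half, z % half)

    def node_count(self):
        if self.children is None:
            return 1
        return 1 + sum(c.node_count() for c in self.children)
```

A uniform 8x8x8 region is one node; setting a single voxel only splits the octants along the path down to it. A flat list would pay for every voxel in the volume regardless of how empty it is, which is why octrees scale to very large worlds.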
  4. Flocking Algorithm with Predators

    A predator flying through a flock of boids. Written in C++ using OpenSceneGraph. More details here:
  5. Hi everyone, A while back I read a few articles on flocking algorithms and it piqued my interest, so I decided to write my own demo using OpenSceneGraph. A link to the result is below, I hope you like it!
  6. Newbie - How to start?

    Good day, dear people. I'm completely new here and very nervous, to be honest. I started with 3D design at my traineeship and can slowly start on my ideas for a survival/RPG game in 3D. But since I have only used RPG Makers until now, I wonder what the best way is to start a game, and what I would need for that. Have a wonderful day! ^.^)/) Tootooni
  7. I'm currently testing out behavior trees with the LibGDX AI library. I have read a lot about behavior trees over the past couple of days, but it's hard to work out how I need to build the tree for my specific scenario. I'm looking to use a behavior tree for a RimWorld-like game where the player's units are not directly controllable. In the first place I'm wondering if I should have a single big tree for the complete AI or many smaller trees, for example a separate tree for: moving an item, building a building, crafting an item, resting when sleepy, eating when hungry. The examples I have seen all talk about a "single job", like entering a building: GoTo -> Open Door -> GoTo -> Close Door. But what if I need to check if I have the keys on me? And I need to check a lot of these variables. When a unit is idle I'd like him to maintain his primary needs if he has access to them. If his needs are satisfied enough he can take on certain jobs like building walls or crafting items. I have a lot of different jobs, but jobs like building or crafting items are relatively the same with a different outcome, so I could probably make an abstract job for that; it helps, but I will still end up with a really huge tree. Another issue I'm facing is that when tasks are running and something more important pops up (an enemy spotted or some kind of emergency task), the unit should stop its current task and react accordingly to the interruption. Since the task is running, I need to do those checks on each runnable task, then return failed/cancelled, and further down the sequence I need to do another check for these interruptions and handle them accordingly. I have briefly read into dynamic branches; I'm not sure if gdx-ai supports this, but adding a behavior to the tree to handle an interruption seems a good idea. Dynamic branches also open up the opportunity to keep behaviors with the jobs: once a unit accepts a job, it inserts that branch into its own tree.
     I hope I'm clear; it's hard to explain and to get a global view of a complex behavior tree. I have read several times that behavior trees are very easy to understand and implement. Well, that might be the case for those small trees I find everywhere. On the other hand, I might be overcomplicating things.
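For what it's worth, the interruption problem described above is commonly handled by re-ticking a priority selector from the root every frame, so a higher-priority "emergency" branch can preempt a running job. A minimal sketch (plain Python, not the gdx-ai API; all node and job names are made up):

```python
# Minimal behavior-tree nodes: a selector ticks children in priority
# order each frame, so an emergency branch placed before the job branch
# takes over as soon as its condition succeeds.

SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Condition:
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, unit):
        return SUCCESS if self.predicate(unit) else FAILURE

class Action:
    def __init__(self, name):
        self.name = name
    def tick(self, unit):
        unit["doing"] = self.name   # record what the unit is doing
        return RUNNING

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, unit):
        for child in self.children:
            status = child.tick(unit)
            if status != SUCCESS:
                return status
        return SUCCESS

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, unit):
        for child in self.children:
            status = child.tick(unit)
            if status != FAILURE:
                return status
        return FAILURE

# Emergency branch first, then needs, then regular jobs.
tree = Selector(
    Sequence(Condition(lambda u: u["enemy_spotted"]), Action("flee")),
    Sequence(Condition(lambda u: u["hungry"]), Action("eat")),
    Action("build_wall"),
)
```

Because the whole tree is re-evaluated every tick, the running "build_wall" job is abandoned the moment the emergency condition succeeds, with no per-task interruption checks needed. gdx-ai's dynamic-guard selector follows a similar idea, though its API differs from this sketch.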
  8. I've just posted a pre-release edition of Curvature, my utility-theory AI design tool. Curvature provides a complete end-to-end solution for designing utility-based AI agents; you can specify the knowledge representation for the world, filter the knowledge into "considerations" which affect how an agent will make decisions and choose behaviors, and then plop a few agents into a dummy world and watch them run around and interact with things. Preview 1 (Core) contains the base functionality of the tool but leaves a lot of areas unpolished. My goal with this preview is to get feedback on how well the tool works as a general concept, and start refining the actual UI into something more attractive and fluid. The preview kit contains a data file with a very rudimentary "scenario" set up for you, so you can see how things work without cutting through a bunch of clutter. Give it a test drive, let me know here or via the Issue Tracker on GitHub what you like and don't like, and have fun!
  9. Hello, I'm using BMFont to create a bitmap font for my text rendering. I noticed some strange behaviour when I use the option "force offsets to zero". If I use this option my rendering result looks OK, but without it there is some missing space between some characters. I attached the BMFont configuration files and the font that I used. In the rendering result with variable offsets you can see that there is missing space right next to the "r" letter. To get the source and destination of my render rectangles I basically do the following: void getBakedQuad(const fontchar_t* f, int* x_cursor, int* y_cursor, SDL_Rect* src, SDL_Rect* dest) { dest->x = *x_cursor + f->xoffset; dest->y = *y_cursor + f->yoffset; dest->w = f->width; dest->h = f->height; src->x = f->x; src->y = f->y; src->w = f->width; src->h = f->height; *x_cursor += f->xadvance; } Has somebody noticed a similar behaviour? orbitron-bold.otf SIL Open Font License.txt variable_offset.bmfc variable_offset.fnt zero_offset.bmfc zero_offset.fnt
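For readers following along, here is the same quad-baking routine transliterated into Python, using the standard BMFont glyph fields (x, y, width, height, xoffset, yoffset, xadvance). The glyph values are made up; the point is that xoffset/yoffset shift the destination rectangle relative to the pen position, while xadvance alone moves the pen, which is why dropping the offsets changes the spacing:

```python
# Transliteration of getBakedQuad above. Returns the source rectangle
# (into the font atlas), the destination rectangle (on screen), and
# the advanced cursor x. Rectangles are (x, y, w, h) tuples.

def baked_quad(glyph, cursor_x, cursor_y):
    dest = (cursor_x + glyph["xoffset"], cursor_y + glyph["yoffset"],
            glyph["width"], glyph["height"])
    src = (glyph["x"], glyph["y"], glyph["width"], glyph["height"])
    return src, dest, cursor_x + glyph["xadvance"]
```

With "force offsets to zero" the offsets are baked away, so dest is always at the pen position; with variable offsets, a bug in applying xoffset (or an atlas exported with different padding) shows up exactly as uneven gaps between characters like the "r" described above.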
  10. I'm building an American football simulation (think Football Manager) and am wondering about the best way of implementing AI based on various inputs which are weighted according to the personality of the NPC. I have a version of Raymond Cattell's 16PF model built into the game so I can use the various personality traits to help guide decisions. I am going to use this extensively, so I need it to be both flexible and able to handle many different scenarios. For instance, a GM has to be able to decide whether he wants to re-sign a veteran player for big dollars or try to replace him through the draft. GMs need a coherent system for not only making a single decision in a vacuum but also making decisions as part of a "plan" for how to build the team; it makes no sense for a GM to make decisions that don't align with each other in terms of the big picture. I want the decisions to take in a wide range of variables/personality traits. There is no NPC per se: there aren't going to be any animations connected to this, no shooting, following, etc., just decisions which trigger actions. In a situation like a draft, there is one team "on the clock" and 31 other teams behind the scenes trying to decide if they want to trade up, trade down, etc., which can change based on things like who just got picked, the drop-off between the highest-graded player at their position/group and the next-highest-graded player in that position/next group, whether a player lasts past a certain point, etc. All of these things need to go on simultaneously for all the teams; obviously the team on the clock is going to have to decide whether it wants to make a pick or take any of the offers to move down in the draft from other teams that might want to move up, etc.
     So I am planning on making use of something called Behavior Bricks by Padaone Games (bb.padaonegames.com), which is a behavior tree implementation, but in conversations with others who have worked on AI in major projects like this (EA Sports) they said to combine this with a state machine. My question is: would I be able to do this using just Behavior Bricks, or would I need to build the state machine separately? Is there something else already created for this type of purpose that I could take advantage of?
  11. I remember seeing a name for this, but I can't find it anywhere now. In one of my courses, I use a weighted list to decide the choices for an AI. Basically, each AI class has 2 standard functions: Value and Do. [Value] returns a float, a percentage of how imperative the AI believes it is that it does this; e.g. when your base is under attack, sending troops to defend returns a high value. [Do] executes the AI. Then an AI Manager gets the value for each AI choice and chooses to do the one with the highest value. I've seen a more formal name for this, but I don't recall what it was.
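The Value/Do pattern described here is usually discussed under the name utility AI (or a utility system) — the same family as the utility-theory tool mentioned in item 8. A minimal sketch with made-up choices and scoring functions:

```python
# Utility-system sketch of the Value/Do pattern: each choice scores
# itself for the current world state, the manager runs the highest-
# scoring one. All names and weights are illustrative.

class Choice:
    def __init__(self, name, value_fn, do_fn):
        self.name = name
        self.value = value_fn   # world -> float, the "Value" function
        self.do = do_fn         # world -> action, the "Do" function

class AIManager:
    def __init__(self, choices):
        self.choices = choices

    def run(self, world):
        # pick the choice whose Value is highest, then execute its Do
        best = max(self.choices, key=lambda c: c.value(world))
        return best.do(world)

manager = AIManager([
    Choice("defend",
           lambda w: 0.9 if w["base_under_attack"] else 0.1,
           lambda w: "send troops home"),
    Choice("expand",
           lambda w: 0.5,
           lambda w: "build new base"),
])
```

The scoring functions are where the design effort goes; the manager itself stays this small no matter how many choices are added.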
  12. What worries me is fairness; I don't want an AI with "god eyes". In a first attempt, the AI wasn't smart at all. It had a set of rules that could be customized per possible enemy formation, which makes the AI act as if it has personality. For example: wolves had an 80% probability of using Bite and 20% of using Tackle. Then I tried to make the AI do things with sense, by coding a reusable algorithm; I can then use rules to give different enemy formations different personalities, like this one is more raw-damage focused and this other more status-ailment focused. To achieve this I was trying to make the AI player collect stats about the human player's skills and characters by looking at the log; my game has a console that logs everything, like most RPGs (think Baldur's Gate). An attack looks like this in the console: "Wolf uses Bite / Mina takes 34 points of damage / Mina uses Fire Ball / Wolf takes 200 points of damage". The AI player has a function, run(), that is triggered each time a character it controls is ready to act. It then analyzes the log and collects stats into two tables. One holds stats per skill, like how many times it was used and how many times it hit or was dodged, so the AI player knows which skills are more effective against the human player's team. These skill stats are per target. The second table is for the human player's character stats; the AI player is constantly trying to guess the armor, attack power, etc., the enemies have. But coding that is quite difficult, so I switched to a simpler method: I gave the AI player "god eyes". It now has full access to the human player's stats, and the two tables aren't required anymore as the real character structure is accessible to the AI player.
     But the AI player pretends that it doesn't have "god eyes" by, for example, acting as if it ignores a character's armor against fire until a fire attack finally hits that character; then a boolean flag is set, and the AI player can stop pretending it doesn't know the target's exact armor against fire. Currently, the AI player can assign one of two roles to the characters it controls: Attacker and Explorer. More will be developed, like Healer, Tank, etc., but for now there are Attackers and Explorers. These roles have nothing to do with character classes; they are only for the use of the AI player. Explorer: will try to hit different targets with different skills to "reveal" their stats, like dodge rate and armor against different types of damage. Explorers must choose which skills to use by considering all skills of all characters in the team, but I'm still figuring out the algorithm, so right now they are random, though at least they take care not to try things they have already tried on targets they have already hit. Explorers will switch to Attackers at some point during a battle. Attacker: will try to hit the targets that can be eliminated in the fewest turns. When no stats are known they will choose their own skills based on raw power; once the different types of armor are known to the AI, they will switch to skills that exploit this info, if such skills are available to them. Attackers will try different skills if the damage of a skill chosen for raw power was halved or worse and not all enemy armors are known yet, but won't keep hunting for the best possible skill if one whose power isn't reduced by as much as half is already known. Attackers may be lucky and select the best possible skill on the first attempt, but I expect a formation to work better if there are some Explorers. This system opens up interesting gameplay possibilities.
     If you let a wolf escape, the next pack may come already knowing your stats, which implies that their Attackers will make better decisions sooner, as the AI now requires less exploration. So the description of an enemy formation that you can encounter during the game may be: PackOfWolves1: { formation: ["wolf", null, "wolf", null, "wolf", null, "wolf", null, "wolf", null, null, null, null, null, null], openStatrategy: ["Explorer", null, "Explorer", null, "Explorer", null, "Attacker", null, "Attacker", null, null, null, null, null, null] } What do you think about these ideas? Was something similar tried before? Would you consider an AI with "god eyes" unfair?
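One way to structure the "pretending not to know" bookkeeping described above is a thin filter between the AI and the real stats: full access underneath, but a stat is only returned once an observation has revealed it. All names here are illustrative, not from the poster's code:

```python
# Knowledge filter for a "fair god eyes" AI: the true stats are fully
# accessible, but the AI only reads a stat after it has been revealed
# by an in-game observation (e.g. a fire attack finally landing).

class KnowledgeFilter:
    def __init__(self, true_stats):
        self.true_stats = true_stats   # full access ("god eyes")
        self.revealed = set()          # stats the AI may legally use

    def observe(self, stat):
        # called when an event reveals the stat, setting the boolean
        # flag the post describes
        self.revealed.add(stat)

    def get(self, stat, default=None):
        if stat in self.revealed:
            return self.true_stats[stat]
        return default                 # still pretending not to know
```

This keeps the fairness rule in one place: Explorers exist precisely to call observe(), and Attackers can only exploit what has passed through get(). Persisting the revealed set between battles gives the escaped-wolf behaviour for free.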
  13. Hello, I have designed an AI system for games that replicates cognitive psychology models and theories, so that NPCs and other virtual characters can behave in more human-like and interesting ways. I have built a prototype in the Unity game engine, and it can produce quite complex behaviour, including learning and creativity. I am now wanting to develop it further and am looking for people or organisations to help. I am thinking about how I could present my AI system, and what would be a good way of demonstrating it. If you have any suggestions it would be great to hear them. I have a website that explains my AI system in detail: www.electronicminds.co.uk If you have any comments about the AI system, or know anyone who might be interested in helping to develop it, I would really appreciate hearing from you. Thanks for the help.
  14. Hi there, it's been a while since my last post. I was creating a bunch of games, but there was always something missing, something which makes the game special (maybe unique). After a few tries I decided to start a side project for a combat system to be used in fighting games. I did a lot of research and programming to finally get something that is actually fun to play. Well, it is only a prototype and I do not want to share it (yet). Now I have decided to share my ideas on the basics of a combat system for fighting games. Don't get me wrong: this is only my way of doing stuff, I want as much feedback as possible, and maybe it will help people with their games. I will provide a few code snippets. They will be some sort of OOP pseudocode and may have typos.

     Content: 1. Introduction 2. Ways of dealing damage

     1. Introduction

     What makes a combat system a combat system? I guess it is easy to explain: you need ways of dealing damage and ways of avoiding damage. At least you need something for the player to know how to beat the opponent or the game. As I mentioned before, I will focus on fighting games. As ever, there is some sort of health and different ways to reduce health. Most of the time you also have possibilities to avoid taking damage. I will focus on these points later on.

     2. Ways of dealing damage

     How do we deal damage, by the way? A common way to do so is by pressing one or more buttons at a time in order to perform an attack. An attack is an animation with a few phases. In my opinion, an attack consists of at least four phases: 1. Perception 2. Action 3. Sustain 4. Release. Here is an example animation I made showing all phases with four frames. Every one of those has its own reason. One tip for the designers out there is to have at least one image per phase. Now we should take a closer look at the phases themselves.

     2.1. Perception

     The perception phase should include everything up to the point where the damage is done.
     Let's say it is some sort of preparation for the actual attack. Example: before you punch something, you get into position before doing the actual action, right? Important note: the longer the perception phase is, the more time the opponent has to prepare a counter or think about ways to avoid the attack. Think of having light and heavy attacks: the heavy attacks mostly have longer perception phases than the light ones, which means that the damage dealt is likely greater compared to the light attacks. You would like to avoid getting hit by the heavy ones, right?

     2.2. Action

     The action phase is the actual phase where damage is dealt. Depending on the attack type, the phase will last longer or shorter; using my previous example, heavy attacks might have a longer action phase than light attacks. In my opinion, the action phase should be as short as possible. One great way to get the most out of the attack animation itself is by using smears. They are often used for showing motion, and there's a ton of reference material for that. I like using decent smears with a small tip at the starting point and a wide end point (where the damage should be dealt). This depends on the artist and the attack.

     2.3. Sustain

     At first sight, the sustain phase may seem irrelevant. It comes directly after the attack. My way of showing the sustain phase is by using the same image as for the action phase, just without any motion going on. The sustain phase should be some sort of stun time. The images during the sustain phase should show no movement - kind of a rigid state. Why is this phase so important? It adds a nice feel to the attack animation. Additionally, if you want to include combos in your game, this is the phase where the next attack should be chained. This means that while the character is in this phase of the attack, the player can press another attack button to do the next attack. The next attack will start at the perception phase.

     2.4. Release

     The release phase is the last phase of the attack. This phase is used to reset the animation to the usual stance (like the idle stance).

     2.5. Dealing damage

     Dealing damage should only be possible during the action phase. How do we know if we land a hit? I like using hit-boxes and damage-boxes.

     2.5.1. Hit-boxes

     A hit-box is an invisible box the character has. It marks its vulnerable spot. By saying "hit-box" we do not mean a box itself; it could be any shape (even multiple boxes together - like head, torso, arms, ...). You should always know the coordinates of your hit-box(es). Here is an example of a hit-box for my character. I am using Game Maker Studio, which automatically creates a collision box for every sprite. If you change the sprite from Idle to Move, you may have a different hit-box. Depending on how you deal with the collisions, you may want to have a static hit-box. A hit-box could look something like this: class HitBox { /* offsetX = the left position of your hit-box relative to the player's x coordinate offsetY = the top position of your hit-box relative to the player's y coordinate width = the width of the hit-box height = the height of the hit-box */ int offsetX, offsetY, width, height; /* Having the player's coordinates is important. You will have to update the player coordinates every frame. */ int playerX, playerY; //initialize the hit-box HitBox(offsetX, offsetY, width, height) { this.offsetX = offsetX; this.offsetY = offsetY; this.width = width; this.height = height; } //Update (will be called every frame) void update(playerX, playerY) { //you can also update the player coordinates by using setter methods this.playerX = playerX; this.playerY = playerY; } //Getter and Setter ...
//Helper methods int getLeft() { return playerX + offsetX; } int getRight() { return playerX + offsetX + width; } int getTop() { return playerY + offsetY; } int getBottom() { return playerY + offsetY + height; } } When using multiple hit-boxes it would be a nice idea to have a list (or array) of boxes. Now one great thing to implement is a collision function like this: //check if a point is within the hit-box boolean isColliding(x, y) { return x > getLeft() && x < getRight() && y > getTop() && y < getBottom(); } //check if a box is within the hit-box boolean isColliding(left, right, top, bottom) { return (right > getLeft() || left < getRight()) && (bottom > getTop() || top < getBottom()); } 2.5.2. Damage-boxes Damage-boxes are, like hit-boxes, not necessarily a box. They could be any shape, even a single point. I use damage-boxes to know, where damage is done. Here is an example of a damage-box: The damage box does look exactly like the hit-box. The hit-box differs a bit to the actual damage-box. A damage-box can have absolute x and y coordinates, because there is (most of the times) no need to update the position of the damage-box. If there is a need to update the damage-box, you can do it through the setter methods. class DamageBox { /* x = absolute x coordinate (if you do not want to update the coordinates of the damage-box) y = absolute y coordinate (if you do not want to update the coordinates of the damage-box) width = the width of the damage-box height = the height of the damage-box */ int x, y, width, height; /* The damage the box will do after colliding */ int damage; //initialize the damage-box DamageBox(x, y, width, height, damage) { this.x = x; this.y = y; this.width = width; this.height = height; this.damage = damage; } //Getter and Setter ... //Helper methods int getLeft() { return x; } int getRight() { return x + width; } int getTop() { return y; } int getBottom() { return y + height; } } 2.5.3. 
Check for collision If damage-boxes and hit-boxes collide, we know, the enemy receives damage. Here is one example of a hit: Now we want to check, if the damage box collides with a hit-box. Within the damage-box we can insert an update() method to check every frame for the collision. void update() { //get all actors you want to damage actors = ...; //use a variable or have a global method (it is up to you, to get the actors) //iterate through all actors foreach(actor in actors) { //lets assume, they only have one hit-box hitBox = actor.getHitBox(); //check for collision if(hitBox.isColliding(getLeft(), getRight(), getTop(), getBottom()) { //do damage to actor actor.life -= damage; } } } To get all actors, you could make a variable which holds every actor or you can use a method you can call everywhere which returns all actors. (Depends on how your game is set up and on the engine / language you use). The damage box will be created as soon as the action phase starts. Of course you will have to destroy the damage-box after the action phase, to not endlessly deal damage. 2.6. Impacts Now that we know, when to deal the damage, we should take a few considerations about how to show it. There are a few basic elements for us to use to make the impact feel like an impact. 2.6.1. Shake the screen I guess, I am one of the biggest fans of shaking the screen. Every time there is some sort of impact (jumping, getting hit, missiles hit ground, ...) I use to shake the screen a little bit. In my opinion, this makes a difference to the gameplay. As usual, this may vary depending on the type of attack or even the type of game. 2.6.2. Stop the game This may sound weird, but one great method for impacts is to stop the game for a few frames. The player doesn't actually know it because of the short time, but it makes a difference. Just give it a try. 2.6.3. Stun animation Of course, if we got hit by a fist, we will not stand in our idle state, right? 
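Before moving on to stun animations, here is one way the screen shake (2.6.1) and the game stop (2.6.2) could be sketched together. This is a minimal, engine-agnostic sketch; all names and constants (ImpactEffects, the 0.9 decay factor, etc.) are illustrative assumptions, not an engine API.

```java
import java.util.Random;

// Decaying screen shake plus a short "hit-stop" that freezes updates
// for a few frames. Illustrative sketch only.
class ImpactEffects {
    private final Random rng = new Random();
    private double shakeAmount = 0;   // current shake magnitude in pixels
    private int freezeFrames = 0;     // frames left to skip game updates

    void shake(double magnitude) { shakeAmount = Math.max(shakeAmount, magnitude); }
    void hitStop(int frames)     { freezeFrames = Math.max(freezeFrames, frames); }

    // Returns true if the game simulation should run this frame.
    boolean beginFrame() {
        if (freezeFrames > 0) { freezeFrames--; return false; }
        return true;
    }

    // Random camera offset within the shake magnitude, decaying each frame.
    double[] cameraOffset() {
        double x = (rng.nextDouble() * 2 - 1) * shakeAmount;
        double y = (rng.nextDouble() * 2 - 1) * shakeAmount;
        shakeAmount *= 0.9;                 // exponential decay
        if (shakeAmount < 0.1) shakeAmount = 0;
        return new double[] { x, y };
    }
}
```

On an impact you would call shake() and hitStop() once, then query beginFrame() and cameraOffset() from your main loop.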
Stun animations are a great way to show the player that we landed a hit. There is only one problem. Let's say the player is a small and fast guy, and our enemy is some sort of big, heavy guy. Will the first punch even itch our enemy? I guess not. But maybe the 10th one will. I like to use a damage build-up system: it describes how much damage a character can take before getting stunned. The damage builds up every time the character gets hit, and over time the built-up damage decays, which means that after a long time without getting hit, the build-up is back to 0.

2.6.4. Effects

Most games use impact animations to show the player that he actually hit the enemy. This could be blood, sparks, whatever looks good. Most engines offer particle systems, which make the implementation very easy. You could use sprites as well.

2.7. Conclusion

By using the four phases, you can create animations ideal for a fighting game. You can prepare to avoid getting hit, you deal damage, you can chain attacks, and you have a smooth transition back to the usual stance. Keep in mind that the character can get hit during phases 1, 3 and 4. This may cancel the attack and put the character into a stun phase (which I will cover later). A simple way to check for damage is by using hit-boxes and damage-boxes.

3. Ways of avoiding damage

Now we are able to deal damage, but there is still something missing. Something that makes the game more interesting... Somehow we want to avoid taking damage, right? There are endless ways of avoiding damage, and I will now cover the most important ones.

3.1. Blocking

Blocking is one of the most used ways to avoid damage (at least partially). As the enemy starts to attack (perception phase) we know which attack he is going to use, so we should use some sort of block to reduce the damage taken. Blocking depends on the direction the player is facing. Take a look at this example: if the enemy attacks from the right side, we should take no damage.
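As a side note, the damage build-up system described in 2.6.3 could be sketched like this. The class name, threshold and decay rate are illustrative assumptions, not from the original article.

```java
// Damage accumulates per hit, decays over time, and the character is
// stunned once a per-character threshold is exceeded. Illustrative sketch.
class DamageBuildup {
    private final int stunThreshold;   // built-up damage that causes a stun
    private final int decayPerSecond;  // how fast the build-up bleeds off
    private double buildup = 0;

    DamageBuildup(int stunThreshold, int decayPerSecond) {
        this.stunThreshold = stunThreshold;
        this.decayPerSecond = decayPerSecond;
    }

    void addDamage(int damage) { buildup += damage; }

    // Call once per frame with the elapsed time.
    void update(double deltaSeconds) {
        buildup = Math.max(0, buildup - decayPerSecond * deltaSeconds);
    }

    boolean isStunned() { return buildup >= stunThreshold; }
}
```

A heavy enemy would simply get a higher stunThreshold than a small, fast one.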
On the other hand, if the enemy hits the character in the back, we should take damage. A simple way to check this is by comparing the x coordinates of attacker and defender. Now you should think about how long the character is able to block. Should he be able to block infinitely? You can add some sort of block damage build-up: the amount of damage the character can block within a specific time (like the damage build-up). If the damage gets too high, the character goes into a stun phase or something like that.

3.2. Dodging

Every Dark Souls player should be familiar with the term dodging. Now what is dodging? Dodging is a mechanism to quickly get away from the current location in order to avoid a collision with the damage-box (rolling, teleportation, ...). Sometimes the character is also invulnerable while dodging. I prefer making the character briefly invulnerable, especially in a 2D game, because of the limited movement directions.

3.3. Shields

Shields are another good way to avoid taking damage. Just to make it clear: I do not mean a physical shield like Link has in The Legend of Zelda (that would be a form of blocking). I mean the sort of shield you have in shooters. Some may refill within a specific time, others may not. They could be always active, or the player may have to press a button to use them. This depends on your preferences. While a shield is active, the character should not take any damage. Keep in mind that you do not want to make the character unbeatable. By combining always-active shields (maybe even with fast regeneration) with a high maximum damage build-up / block damage build-up, you may end up with an almost invulnerable character.

3.4. Jump / duck

These alternatives are, in my opinion, a form of dodging. The difference is that you do not move away from your position quickly. In the case of ducking, you just set another hit-box (a smaller one, of course).
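The x-coordinate comparison for blocking described in 3.1 could be sketched like this. The facing flag and parameter names are assumptions for illustration.

```java
// A block only protects against attacks coming from the side the
// defender is facing; attacks from behind go through. Illustrative sketch.
class BlockCheck {
    // facingRight: which way the defender is looking
    // defenderX / attackerX: world x coordinates of the two characters
    static boolean blocksAttack(boolean blocking, boolean facingRight,
                                int defenderX, int attackerX) {
        if (!blocking) return false;
        boolean attackFromRight = attackerX > defenderX;
        return facingRight == attackFromRight;  // blocked only if facing the attacker
    }
}
```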
During a jump, you move relatively slowly (depending on your game). The biggest difference, in my opinion, is that jumping and ducking should have no invulnerability frames.

I hope you enjoyed reading and that some of it is useful to you. Later on, I want to keep updating this post (maybe with your help). If you have any questions or feedback for me, feel free to reply to this topic. Until next time, Lukas
  15. Hey all, As the heading says, I'm trying to get my head around Goal-Oriented Action Planning (GOAP) AI. However, I'm having some issues reverse engineering Brent Owens' code line by line (mainly around the recursive graph building in the GOAPPlanner). I'm assuming that reverse engineering this is the best way to get a comprehensive understanding... thoughts? Does anyone know of an in-depth explanation of this article (found here: https://gamedevelopment.tutsplus.com/tutorials/goal-oriented-action-planning-for-a-smarter-ai--cms-20793), or of another article on this subject? I'd gladly post my specific questions here (on this post, even), but I'm not too sure how much I'm allowed to reference other sites... Any pointers, help or comments would be greatly appreciated. Sincerely, Mike
  16. Below is my preliminary draft design for the AI system within Spellbound. I'm slowly migrating away from scripted expert systems towards a more dynamic and fluid AI system based on machine learning and neural networks. I may be crazy to attempt this, but I find this topic fascinating. I ended up having a mild existential crisis as a result of it. Let me know what you think or if I'm missing something.

Artificial Intelligence:

Objectives: Spellbound is going to be a large open world with many different types of characters, each with different motives and behaviors. We want this open world to feel alive, as if the characters within the world are real inhabitants. If we went with pre-scripted behavioral patterns, the characters would be unable to learn and adapt to changes in their environment. It would also be very labor intensive to write specific AI routines for each character. Ideally, we just give every character a self-adapting brain and let them loose to figure out the rest for themselves.

Core Premise: (very dense, take a minute to soak this in) Intelligence is not a fixed intrinsic property of creatures. Intelligence is an emergent property which results directly from the neural topology of a biological brain. True sentience can be created if the neural topology of an intelligent being is replicated with data structures and the correct intelligence model. If intelligence is an emergent property, and emergent properties are simple rule sets working together, then creating intelligence is a matter of discovering the simple rule sets.

Design: Each character has its own individual Artificial Neural Network (ANN). This is a weighted graph which uses reinforcement learning. Throughout the character's lifespan, the graph will become more weighted towards rewarding actions and away from displeasurable ones. Any time an action causes a displeasure to go away or brings a pleasure, that neural pathway will be reinforced.
If a neural pathway has not been used in a long time, we reduce its weight. Over time, the creature will learn.

A simple ANN is just a single cluster of connected neurons. Each neuron is a "node" which is connected to nearby neurons. Each neuron receives inputs and generates outputs. When a neuron fires, its outputs activate the connected neurons; when a neuron receives enough inputs, it fires itself and activates its downstream neurons. So, a simple ANN receives inputs and generates outputs which are a reaction to those inputs. At the end of a neural cycle, we have to give response feedback to the ANN. If the neural response was positive, we strengthen the neural pathway by increasing the neural connection weights. If the response was negative, we decrease the weights of the pathway. With enough trial runs, we will find the neural pathway for the given inputs which creates the most positive outcome.

The simple ANN can be considered a single cluster. It can be abstracted into a single node for the purposes of creating a higher layer of connected node networks. When we have multiple source inputs feeding into our neural network cluster and each node is running its most optimal neural pathway depending on the input, we get complex unscripted behavior. A brain is just a very large collection of layered neural nodes connected to each other. We'll call this our "Artificial Brain" (AB).

Motivation, motivators (rule sets): All creatures have a "desired state" they want to achieve and maintain. Think about food. When you have eaten and are full, you are at an optimally desired state. As time passes, you become increasingly hungry. Being just a teensy bit hungry may not be enough to compel you to change your current behavior, but as time goes on and your hunger increases, your motivation to eat increases until it supersedes the motives for all other actions. We can create a few very simple rules to create complex, emergent behavior.
Rule 1: Every creature has a desired state it is trying to achieve and maintain. Some desired states may be unachievable (e.g., infinite wealth).
Rule 2: States are changed by performing actions. An action may change one or more states at once (a one-to-many relationship).
Rule 3: "Motive" is created by a delta between current state (CS) and desired state (DS). The greater the delta between CS and DS, the more powerful the motive. (Is this a linear relationship or an exponential one?)
Rule 4: "Relief" is the sum of all deltas between CS and DS resolved by an action.
Rule 5: A creature can have multiple competing motives. The creature will choose the action which provides the greatest amount of relief.
Rule 6: Some actions are a means to an end and can be chained together (action chains). If you're hungry and the food is 50 feet away from you, you can't just start eating. You first have to move to the food to get within interaction radius, then eat it.

Q: How do we create an action chain?
Q: How do we know that the action chain will result in relief?
A: We generally know what desired result we want, so we work backwards. What action causes the desired result (DR)? Action G does (learned from experience). How do we perform Action G? We have to perform Action D, which causes Action G. How do we cause Action D? We perform Action A, which causes Action D. Therefore, G<-D<-A; so we should do A->D->G->DR. Back propagation may be the contemporary approach to changing graph weights, but it's backwards.

Q: How does long term planning work?
Q: What is a conceptual idea? How can it be represented?
A: A conceptual idea is a set of nodes which is abstracted to become a single node?

Motivators: (why we do the things we do)
Hunger
Body Temperature
Wealth
Knowledge
Power
Social Validation
Sex
Love/Compassion
Anger/Hatred
Pain Relief
Fear
Virtues, Vices & Ethics

Notice that all of these motivators are actually psychological motivators.
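Rules 3-5 above could be sketched as a small utility function: motive is the delta between current and desired state, relief is the summed delta an action removes, and the creature picks the action with the greatest total relief. The arrays and method names are illustrative assumptions, not part of the design doc.

```java
// Relief-based action selection per Rules 3-5. Illustrative sketch only.
class ReliefPlanner {
    // current[i] / desired[i] = current and desired value of motivator i
    // effects[a][i] = how much action a changes motivator i
    static int bestAction(double[] current, double[] desired, double[][] effects) {
        int best = -1;
        double bestRelief = Double.NEGATIVE_INFINITY;
        for (int a = 0; a < effects.length; a++) {
            double relief = 0;
            for (int i = 0; i < current.length; i++) {
                double before = Math.abs(desired[i] - current[i]);
                double after  = Math.abs(desired[i] - (current[i] + effects[a][i]));
                relief += before - after;   // Rule 4: sum of deltas removed
            }
            if (relief > bestRelief) { bestRelief = relief; best = a; }
        }
        return best;   // Rule 5: greatest total relief wins
    }
}
```

For example, a hungry creature near a fire would still choose the "eat" action, because it removes a much larger delta than "warm up".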
That means they happen in the head of the agent rather than being a physical motivator. You can be physically hungry, but psychologically, you can ignore the pains of hunger. The psychological thresholds will differ per agent. Therefore, all of these motivators belong in the "brain" of the character rather than being attributes of the agent's physical body. Hunger and body temperature would be physical attributes, but they would also be "psychological tolerances".

Psychological Tolerances:

{motivator} =>
    0 [------|-----------o----|----] 100
      A      B           C    D    E

A - The lowest possible bound for the motivator.
B - The lower threshold point for the motivator. If the current state falls below this value, the desired state begins to affect actions.
C - The current state of the motivator.
D - The upper threshold point for the motivator. If the current state exceeds this value, the desired state begins to affect actions.
E - The highest bound for the motivator.

The A and E bound values are fixed and universal. The B and D threshold values vary by creature; where you place them can make a huge difference in behavior.

Psychological Profiles: We can assign a class of creatures a list of psychological tolerances and set their current states to some preset values. The behavioral decisions and subsequent actions will be driven by the psychological profile, based upon the actions which create the greatest sum of psychological relief. The psychological profile will be the input to an artificial neural network, and the outputs will be the range of actions which can be performed by the agent. Ideally, the psychological profile state drives the ANN, which drives actions, which change the state of the psychological profile, creating a feedback loop of reinforcement learning.

Final Result: We do not program scripted behaviors; we assign psychological profiles and lists of actions.
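The tolerance band above (points B and D around the current state C) could be sketched like this. The 0..100 bounds and class name are assumptions for illustration.

```java
// A motivator only generates motive when its current state drifts
// outside the [lower, upper] comfort band. Illustrative sketch.
class Tolerance {
    final double lower;  // point B in the diagram
    final double upper;  // point D in the diagram

    Tolerance(double lower, double upper) {
        this.lower = lower;
        this.upper = upper;
    }

    // Motive strength: zero inside the comfort band, growing with the
    // distance outside it (Rule 3's delta).
    double motive(double currentState) {
        if (currentState < lower) return lower - currentState;
        if (currentState > upper) return currentState - upper;
        return 0;
    }
}
```

Moving the lower and upper values per creature class is what varies the behavior between, say, a stoic guard and a skittish villager.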
Characters will have psychological states which drive their behavioral patterns. Simply by tweaking the psychological desires of a creature, we can create emergent behavior resembling intelligence. A zombie would always be hungry, and feasting on flesh would provide temporary relief. A goblin would have a strong compulsion for wealth, so it would be very motivated to perform actions which ultimately result in gold. Rather than spending lots of time writing expert-systems-style AI, we create a machine learning type of AI.

Challenges: I have never created a working artificial neural network type of AI.

Experimental research and development: The following notes are crazy talk which may or may not be feasible. They may need more investigation to measure their merit as viable approaches to AI.

Learning by Observation: Our intelligent character doesn't necessarily have to perform an action itself to learn about its consequences (reward vs. regret). If it watches another character perform an action and receive a reward, the intelligent character creates a connection between that action and its consequence.

Exploration Learning: A very important component of getting a simple ANN to work efficiently is getting the neurons to find and establish new connections with other neurons. If we have a neural connection topology which always results in a negative response, we'll want to generate a new connection at random to a nearby neuron.

Exploration Scheduling: When all other paths are terrible, the new path becomes comparatively better and we "try it out" because there's nothing better. If the new pathway happens to result in a positive outcome, it suddenly gets much stronger. This is how our simple ANN discovers new unscripted behaviors. The danger is that we settle on a sub-optimal behavior pattern which generates some results, but not the best results, and use the same neural pathway over and over again simply because it is a well-travelled path.
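The "well-travelled but sub-optimal path" problem just described is commonly handled with an epsilon-greedy choice: usually follow the strongest pathway, but occasionally try another at random. This is a generic sketch of that standard technique, not the author's design; epsilon and the weights array are illustrative.

```java
import java.util.Random;

// Epsilon-greedy path choice: exploit the heaviest pathway most of the
// time, explore a random one with probability epsilon. Illustrative sketch.
class EpsilonGreedy {
    private final Random rng;
    private final double epsilon;  // probability of exploring

    EpsilonGreedy(double epsilon, long seed) {
        this.epsilon = epsilon;
        this.rng = new Random(seed);
    }

    int choosePath(double[] weights) {
        if (rng.nextDouble() < epsilon) {
            return rng.nextInt(weights.length);  // explore a random path
        }
        int best = 0;  // exploit: follow the heaviest pathway
        for (int i = 1; i < weights.length; i++) {
            if (weights[i] > weights[best]) best = i;
        }
        return best;
    }
}
```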
Exploration Rewards: In order to encourage exploring untravelled paths, we gradually increase the "novelty" reward value for taking such a pathway. If traveling the pathway results in a large reward, it is highly reinforced and may become the most travelled path.

Dynamic Deep Learning: On occasion, we'll also want to create new neurons at random and connect them to at least one other nearby downstream neuron. If a neuron is not connected to any other neurons, it becomes an "island" and must die. When we follow a neural pathway, we are looking at two costs: the connection weight and the path weight. We always choose the shortest path with the least weight. Rarely used pathways will have their weights decrease over a long period of time. If a path weight reaches zero, we break the connection and our brain "forgets" the neural connection.

Evolutionary & Inherited Learning: It takes a lot of effort for a neural pathway to become developed, so we will want to speed up the development. If a child is born to two parents, those parents will rapidly develop the neural pathways of the child by sharing their own pathways. This is one way to "teach". Thus, children will think very much like their parents do. Characters will also share their knowledge with other characters. In order for knowledge to spread, it must be interesting enough to be worth spreading, so a character will generally share the most interesting knowledge they have.

Network Training & Evolutionary Inheritance: An untrained ANN results in an uninteresting character, so we have to have at least a trained base preset for a brain. This is consistent with biological brains: our brains have been pre-configured through evolutionary processes and come pre-wired, with certain regions of the brain universally responsible for processing certain input types.
The training method will be rudimentary at first, just to get something passable, and it can be done as part of the development process. When we release the game to the public, the creatures will still be training. The creatures which had the most "success" will become part of the next generation. These brain configurations can be stored in a central database somewhere in the cloud. When a player begins a new game, we download the most recent generation of brain configurations. Each newly instanced character may have a chance of a random mutation. When the game completes, if any particular brains were more successful than the current strain, we select them for "breeding" with other successful strains, so that the next generation is an amalgamation of the most successful previous generations. We'll probably begin to see some divergence and brain "species" over time.

Predisposition towards Behavior Patterns via Bias: Characters will also have slight predispositions which are assigned at birth. 50% of their predisposition is innate to their creature class, 25% is genetically passed down by their parents, and 25% is randomly chosen. A predisposition causes some pleasures and displeasures to be more or less intense. This will skew the weightings of a developing ANN a bit more heavily to favor particular actions. This is what will create a variety of interests between characters, and will ultimately lead to a variety of personalities. We can create very different behavior patterns in our ABs by tweaking the amount of pleasure and displeasure various outputs generate for a creature. The brain of a goblin could derive much more pleasure from getting gold, so it will develop strong neural pathways which result in getting gold.

AI will be able to interact with interactable objects. An interactable object has a list of ways it can be interacted with, and interactable objects can be used to interact with other interactable objects.
Characters are themselves considered interactable objects. The AI has a sense of ownership over various objects. When it loses an object, that is a displeasurable feeling; when it gains an object, that is a pleasurable feeling. Stealing from an AI will make it unhappy, and it will learn about theft and begin trying to avoid it. Giving a gift to an AI makes it very happy. Trading one object for another transfers ownership of the objects. There is no "intrinsic value" to an object: the value of an object is based on how much the AI wants it compared to how much it wants the other object in question.

Learning through Socialization: AIs will socialize with each other. This is the primary mechanism for knowledge transfer. They will generally tell each other about recent events or interests, choosing to talk about the most interesting events first. If an AI doesn't find a conversation very interesting, it will stop the conversation and leave (a terminating condition). If a threat is nearby, the AI will be very interested in it and will share that information with nearby AIs. If a player has hurt or killed a townsfolk, all of the nearby townsfolk will be very upset and may attack the player on sight. If enough players attack the townsfolk, the townsfolk AI will start to associate all players with negative feelings and may attack a player on sight even if that player did nothing to aggravate them.
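The ownership mechanic described above (pleasure on gaining an object, displeasure on losing one) could be sketched minimally like this. The class name and the +/-1.0 mood values are illustrative assumptions; in the full design the feedback would feed the learning system rather than a single number.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal ownership sense: gaining feels good, losing (e.g. theft)
// feels bad. Illustrative sketch only.
class OwnershipSense {
    private final Set<String> owned = new HashSet<>();
    private double mood = 0;  // positive = pleasure, negative = displeasure

    void gain(String object) {
        if (owned.add(object)) mood += 1.0;   // gift or trade received
    }

    void lose(String object) {
        if (owned.remove(object)) mood -= 1.0; // stolen or traded away
    }

    boolean owns(String object) { return owned.contains(object); }
    double mood() { return mood; }
}
```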
  17. Stop hacking

    Is there a program to ensure that the game I've created does not get hacked by third-party apps such as Lucky Patcher, Game Guardian, Game Killer, etc.? If so, how do I prevent this obstacle from ruining the game? The game is online based, but I just recently found out there are hackers. Is there a program I could use to stop this, or is it done in the code? Thank you
  18. AI and Machine Learning

    So - the last couple of weeks I have been working on building a framework for some AI. In a game like the one I'm building, this is rather important; I estimate 40% of my time is gonna go into the AI. What I want is a hunting game where the AI learns from the player's behaviour. This is actually what is gonna make the game fun to play. It will require some learning from the creatures that the player hunts, plus some collective intelligence per species. Since I am not going to spend oceans of time creating dialogue, tons of cut-scenes, an epic story-line and multiple levels (I can't make something interesting enough to be worth the time - I'd need more man-power for that), what I can do is create some interesting AI and the feeling of being an actual hunter who has to depend on analysis of the animals and experimentation with where to attack from. SO... to make it as generic as possible, I mediated everything, using as many interfaces as possible in the system. You can see the general system here in the UML diagram. I customized it for Unity so that all the scripts have to be added to GameObjects in the game world. This gives a better overview, but requires some setup - not that bothersome. If you add some simple Game Objects and some colors, it could look like this in Unity3D:

Now, this system works beautifully. The abstraction of the Animation Controller and Movement Controller assumes some standard stuff that applies to all creatures - for example, that they all can move, have eating, sleeping and drinking animations, and have a PathFinder script attached somewhere in the hierarchy. It's very generic and easy to customize. At some point I'll upload a video of the flocking behavior and general behavior of this creature. For now, I'm gonna concentrate on finishing the player model and creating a partitioned terrain for everything to exist in. Finally, and equally important, I have to design a learning system for all the creatures.
This will be integrated into the Brain of all the creatures, but I might separate the collective intelligence between the species. It's taking shape, but I still have a lot of modelling to do, generating terrain and modelling/generating trees and vegetation. Thanks for reading, Alpha-
  19. Hello all, My colleagues and I from the Illinois Institute of Technology are conducting a research study to develop a measure of prior videogame experience (PVGE), intended to help develop and use learning games. Currently, we are trying to link situational and dispositional factors with PVGE through an online survey to validate the new scale. Please help us with the research by participating in this brief and confidential survey, approved by the university's institutional review board. The link is provided: https://iitresearchrs.co1.qualtrics.com/jfe/form/SV_6qZvi2oTDyXwbYN I would be glad to answer any questions or concerns you may have. Thank you!
  20. While I have popped onto the site from time to time over the last couple of years - I just checked my journal entries and my last one was 2 years and a day ago - I have been stuck in a weird state of not being able to post much. To fill in the gap I thought I would write a post detailing it all; then I thought it would be more fun to do a kind of Q&A session and see if I could answer my own hard questions.

Q. Why?
A. The project I had been feverishly working on, Portas Aurora: Battleline, came to a halt when I was deployed. Upon returning I had only a couple of weeks remaining before my move to Hawaii and rushed to get everything done that a move of that order requires.

Q. Why didn't you continue your game after your move?
A. I took some time off to sort things out involving a girl, and by Christmas I was hired onto a non-game app project and was no longer with the girl.

Q. Surely you were able to make some progress on your game ideas while working?
A. Originally, I was only working for a month on the project. It was kind of a job interview and a prototype of the company's type of project. It included some Bluetooth beacon technology I had previously worked with for the Air Force and the Army. If anyone would like to hear more about that I am game; it has some interesting possibilities for gaming. However, once the prototype was completed they needed a lead developer that understood all of the problems the system had. I laughed, seeing the issue I had created. Interested in seeing where this could lead, I signed on for another 6 months of work.

Q. Okay, now it is midsummer and you have completed the side quest. What about a game?
A. Well, as luck would have it, the company put in a bid to overhaul a game/social media app and was approved to tackle the rework. I signed up to lead the development, believing that I was chosen for my game ideas and not the fact that I was more of a speed programmer. The rework was planned to take 45-60 days at most.
In fact, the in-house timetable was 37 days from receiving the source material to a final product. This would become a grave misunderstanding between the two companies, as the iOS version was launched and then its "completed" development status stalled on minor changes that needed to be ironed out before the Android version could be built with speed. If you are curious about what a game / social app looks like, check out http://jigsterapp.com. I will do a post mortem on the project in an upcoming article now that I have been cleared to discuss the adventure.

Q. How long did the rework take?
A. Technically, the company completed the work in April, but there is a maintenance period that is still in effect. The game/social media app came out leaps better than the original, but I still wanted to have done more with it. A fellow developer on this site did a large chunk of the art asset upgrades, even if a good chunk went unused for poor reasons.

Q. April was 4 months ago. Where have you been?
A. After some issues over completion payments with the company I was working for, a very random chance popped up at the beginning of May for me to attempt to launch a project of my choosing. I debated a bit and finally decided to throw myself to the winds of chaos.

Q. So... what happened since accepting the offer?
A. As life would have it, about a million random things cropped up as road blocks. I had to move, take on some rush work to make extra money for the move, and handle negotiations for the final deal on the project - funding, goals, and timetables. In the first 2 months, more than twice the number of things went wrong compared to right: my computer was stuck in Hawaii for the dumbest reasons, my phone got taken, family health issues came up, there were Time Warner issues at the company house, I was picking up a new game engine, and getting the final deal in writing dragged on. When the storm clouds finally started to clear around the 20th of July, some of the people I wanted to group with were already involved with other projects.
Still, I have pressed forward, and last night I returned to the site and interacted for the first time in 4 months.

Q. Does that mean that you have some game development news to share?
A. Yes. I have gotten the funding and the go-ahead to launch forward with my game idea. Well, some of the primary ideas have changed, mostly to fit the new timetables and work with the budget, but I think I have a close-to-tight grip on the design and layout of the game.

Q. Okay, going to share any more about this new game?
A. I still want to do the Battleline game, so I went in search of a new title and found: Precursor's Dawn. The story behind the game is that one of the other races has discovered a Dyson sphere encompassing an O-type star, and it is now a race to mine its secrets before they are used to destroy you. I will be doing a more complete article on it shortly.

Q. Final question, for tonight: does this mean you have returned to the site?
A. I never completely left, but I will be posting once again. I'm excited to have something to add to this site once more. That wraps up this entry, but I am excited to see the site so alive, and I have a ton of journal entries to read and comment on. :)
  21. As humans, we like to implement solutions which are familiar to us. We get caught up doing things the way we know how to do them, rather than the "best" way to do them. It's easy to get caught up in thinking like this, and as a result we end up using outdated technologies and implementing features in ways that our modern contemporaries don't understand, or that are simply less effective or efficient. My purpose with this and future papers is to expose readers to a broad spectrum of solutions that will hopefully help them in their own coding. Today I'll be covering Action Lists!

Action Lists are a simple yet powerful type of AI that all game developers should know about. While ultimately not scalable for large AI networks, they allow for relatively complex emergent behavior and are easy to implement. Whether you are just getting into AI programming or are an experienced industry veteran looking to expand your toolkit, this presentation will introduce you to Action Lists and provide concrete examples to help you implement your own solutions. Let's begin the story.

Once Upon A Time...

Several years ago I was starting development of King Randall's Party, a game in which the player builds a castle and tries to defend it against the King, who is trying to knock it down. I needed to create a reasonably smart AI that could look at the player's castle and figure out how to circumvent or destroy it in order to reach its objective - the gold pile the player is defending. This was a big challenge, almost too big to consider all at once. So like any good programmer I broke it down into a more manageable problem set. The first challenge: I needed the units to have a specific set of behaviors that they would act according to. A quick internet search caused my mind to implode, of course. Searching for "Game AI" brought up results on planners, finite state machines, steering behaviors, flocking, A*, pathfinding, etc.
I had no clue where to start, so I did what any reasonable, rational person would do: I asked my dog. This isn't anything new, of course. Whenever I have a tech problem, I ask my dog. Now I know what you're thinking: "Jesse, that's dumb, and you're crazy... what do dogs know about computers?" Well, let's just gloss over that. It may or may not have involved some illegal technology and/or college class auditing. Anyway, I said, "Okay Frankie - there is so much going on with AI, I really don't have a clue where to start. How should I go about creating an AI framework for my game units?"

Frankie gave me a withering look that basically informed me I was an idiot, despite what my high school teacher Mr. Francis may have told me about there never being any stupid questions. I'm pretty sure Frankie wouldn't have had nice thoughts about Mr. Francis either. "Jesse," she asked, "how do you typically start your day?" Well, I write everything I need to do that day down in a list and then prioritize it according to how important each task is and how soon it needs to be done. I explained this to Frankie, and she responded, "And that is an Action List." She told me that an Action List is a list of tasks or behaviors that your game units work their way through one at a time. It is a form of finite state system, and could be described as a simple behavior tree with a single branch level. Here is how they work, according to my dog.

How Action Lists Work

First you write down all the behaviors you want your AI to have. Then you order them according to priority, lowest to highest. Now we iterate over the list, checking each action to see if it can execute, and executing it if it can. We then check the action's Blocking property, and if the item is blocking further actions we exit the list. We will get into why Blocking is important later. In our example here, our first action, Attack Player, will only execute if the AI is close to the player.
Let's say it is not, so the unit checks whether it can (and should) build a ladder at this location, and so on, until it finds an action item it can execute, such as Break Door. It then executes the Break Door code. Here is where Blocking comes into play. If Break Door occurs instantly, it will not block any further actions and the rest of the list can execute. This is typically not the case - actions usually take more than one frame. So in this case the Break Door action item calls unit.Attack(door), which changes the unit's CurrentState from Waiting to BreakDoor, and the action returns true (blocking) until the door is broken.

A Simple Finite State Machine

Well, okay. That sounds workable. Frankie had made a good suggestion, and Action Lists seemed like a great fit for my project. But I'd never heard of them before, so I had my doubts - most of what I hear floating around about AI has to do with transition-based finite state machines, similar to what Unity3D uses for animations in Mecanim. You define a bunch of states, and identify when and how they transition between each other. In exchange for some belly scratches, Frankie explained that when you make a transition-based finite state machine, you need to define all the states you want to have, and then also define all of the transitions to and from each of those individual states. This can get complicated really, really fast. If you have a Hurt state, you need to identify every other state that can transition to it, and when: walking to hurt, jumping to hurt, crouching to hurt, attacking to hurt. This can be useful, but if your AI requirements are fairly simple, it's a lot of potentially unnecessary overhead. Another difficulty with transition-based state machines is debugging.
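To make the list-iteration and Blocking mechanics described above concrete, here is a minimal sketch in TypeScript. The original article's code is not reproduced in this copy, so every name here (IActionItem, canExecute, and so on) is an illustrative assumption, not the author's implementation:

```typescript
// Minimal Action List sketch. All names are illustrative assumptions.
interface IActionItem {
  canExecute(): boolean; // should this action run this frame?
  execute(): void;       // run one frame's worth of the action
  isBlocking(): boolean; // while true, nothing below it may run
}

class ActionList {
  // Items are kept in priority order: earliest entry = highest priority.
  private items: IActionItem[] = [];

  add(item: IActionItem): void {
    this.items.push(item);
  }

  // Called once per frame: walk the list top to bottom.
  update(): void {
    for (const item of this.items) {
      if (item.canExecute()) {
        item.execute();
        // A blocking action stops the rest of the list this frame.
        if (item.isBlocking()) {
          break;
        }
      }
    }
  }
}
```

With this shape, a Break Door behavior would be one IActionItem whose isBlocking() keeps returning true until the door is down, so lower-priority actions stay suppressed while the unit is busy.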
If you set a breakpoint and look at the AI's current state, it is impossible, without additional debug code, to know how the AI got into that state: what the previous states were, and which transition was used to reach the current one.

Drawbacks of Action Lists

As I dug into action lists some more, I realized they were perfect for my implementation, but I also realized they had some drawbacks. The biggest flaw is a direct result of their greatest strength: simplicity. Because an action list is a single ordered list, I couldn't have any sort of complex hierarchy of priorities. If I wanted Attack Player to be a higher priority than Break Door but lower than Move To Objective, while also having Move To Objective be a lower priority than Break Door... that is not a simple problem to solve with action lists, but trivial with finite state machines. In summary, action lists are really useful for reasonably simple AI systems, but the more complex the AI you want to model, the harder it will be to implement with action lists. That being said, there are a few ways you can extend the concept of Action Lists to make them more powerful.

Ways to Extend This Concept

Some of my AI units are multitaskers - they can move and attack at the same time. I could create multiple action lists, one to handle movement and one to handle attacking, but that can be problematic: what if some types of movement preclude attacking, and some attacks require the unit to stand still? This is where Action Lanes come in. Action Lanes are an extension of the concept of Blocking. With Action Lanes, an Action Item can block specific types of Action Items from executing while allowing others to proceed. Let's show this in action.
Each Action Item belongs to one or more lanes, and when its Blocking property returns true it blocks only the other actions that share one of those lanes. As an example, Attack Player belongs to the Attack lane, Move to Goal belongs to the Movement lane, and Build Ladder belongs to both, since the unit must stand still and cannot attack while building. Then we order these items, and when they execute they block subsequent actions appropriately.

Example Implementation

Now that we've gone over the theory, it is useful to step through a practical implementation. First we set up our Action List and Action Items. For Action Items I like to decouple the implementation from the list, so each behavior, such as Break Door, is written as its own implementation of an IActionItem interface. For the Action List itself we can use a simple List, loaded up with the IActionItems. After that, we set up a method that iterates over the list every frame, remembering to handle Blocking. Things get a bit more complicated if you want to use Action Lanes. In that case we define the lanes as a bitfield and extend the IActionItem interface so each item reports which lanes it belongs to. Then we modify the iterator to take these lanes into account: an Action Item is skipped over if its lane is blocked, but otherwise checks as normal, and if all lanes are blocked we break out of the loop.

Conclusion

So that's a lot to take in. While coding the other day, Frankie asked me to summarize my learnings. Thinking about it, there were a few key takeaways for me. Action lists are easier to set up and maintain than small transition-based state systems. They model a recognizable priority system. And there are a few ways they can be extended to handle expanded functionality. As with anything in coding, there is often no best solution.
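The original code listings for the Example Implementation section did not survive in this copy, so here is a hedged TypeScript reconstruction of the lane-aware iterator the prose describes. The lane names, the IActionItem shape, and the skip rule (an item is skipped if any lane it needs is blocked) are assumptions based on the text, not the author's original C# code:

```typescript
// Lanes defined as a bitfield, per the article. Names are assumptions.
const Lane = {
  Attack:   1 << 0,
  Movement: 1 << 1,
} as const;

const ALL_LANES = Lane.Attack | Lane.Movement;

interface IActionItem {
  lanes: number;         // bitfield of lanes this item occupies
  canExecute(): boolean;
  execute(): void;
  isBlocking(): boolean; // while true, this item blocks its lanes
}

// Per-frame iteration with lane-aware blocking.
function updateActionList(items: IActionItem[]): void {
  let blockedLanes = 0;
  for (const item of items) {
    // Skip items that need a lane which is already blocked.
    if ((item.lanes & blockedLanes) !== 0) continue;
    if (item.canExecute()) {
      item.execute();
      if (item.isBlocking()) {
        blockedLanes |= item.lanes;
        // If every lane is blocked, nothing further can run this frame.
        if ((blockedLanes & ALL_LANES) === ALL_LANES) break;
      }
    }
  }
}
```

Under this rule, a blocking Attack Player (Attack lane) suppresses Build Ladder (Attack and Movement lanes) but still lets Move to Goal (Movement lane) run in the same frame, matching the multitasking example above.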
It is important to keep a large variety of coding tools in our toolbox so that we can pick the right one for the problem at hand. Action Lists turned out to be perfect for King Randall's Party. Perhaps they will be the right solution for your project? Originally posted on gorillatactics.com