# Building Brains Into Your Games

Artificial Intelligence

August, 1995

Game developers have always pushed the limits of the hardware when it comes to graphics and sound, but I think we all agree that when it's time to implement artificial intelligence for a game, AI always gets the short end of the stick! In this article, we are going to study a potpourri of AI topics ranging from the simple to the complex.

Along the way, we are going to try out a few demos that use a very rudimentary graphics interface to illustrate some of the simpler concepts. However, most of our discussion will be quasi-theoretical and abstract. This is because AI is not as simple as an algorithm, a data structure, or similar things. Artificial intelligence is a fluid concept that must be shaped by the game it is to be used on. Granted, you may use the same fundamental techniques on myriad games, but the form and implementation may be radically different.

Let's begin our discussion with some simple statements that define what AI is in the context of games. Artificial intelligence in the arena of computer games implies that the computer-controlled opponents and game objects seem to show some kind of cognitive process when taking actions or reacting to the player's actions. These actions may be implemented in a million different ways, but the bottom line, from an observer's point of view, is that they seem to show intelligence.

This brings us to the fundamental definition of intelligence. For our purposes, intelligence is simply the ability to survive and perform tasks in an environment. The tasks may be to hunt down and destroy the player, find food, navigate an asteroid field, or whatever. Nevertheless, this will be our loose definition of intelligence.

Now that we have an idea of what we are trying to accomplish, where on earth should we begin? We will begin by using humans as our models of intelligence because they seem to be reasonably intelligent for carbon units. If we observe a human in an environment, we can extrapolate a few key behaviors of intelligence that we can model using fairly simple computer algorithms and techniques.

These behaviors are blind reflexes, random selection, use of known patterns, environmental analysis, memory-based selections and sequential behaviors that may encompass some or all of the other behaviors. We'll take a look at all of these behaviors and explore how we might implement them in a computer game, but first let's talk about the graphics module we are going to use for some of the demos.

## The Graphics Module

Half the world uses Microsoft C and C++ compilers and the other half uses Borland C and C++ compilers--so it's always a problem publishing demos that depend on the use of either. Hence, we are going to write C code that is totally compiler independent, based on a graphics interface that we are going to write ourselves and that will work on both compilers. The graphics interface will be based on graphics mode 13h, which is 320 by 200 pixels with 256 colors. For the simple demos we are going to write, all we want to do is place the VGA/SVGA card in mode 13h and plot single pixels on the screen. Thus we need two functions:

```c
Set_Graphics_Mode(int mode);
```

and

```c
Plot_Pixel(int x, int y, unsigned char color);
```
We will use video BIOS function 10h to set the video mode, but how can we plot pixels? Plotting pixels in mode 13h is very simple because the graphics are fully memory mapped. Basically, mode 13h presents a totally linear array of memory in which each pixel is represented by a single byte. Further, this video memory starts at location A000:0000 and consists of 200 rows and 320 columns. Therefore, to compute the address of any pixel at (x,y), we simply multiply the Y component by 320 and add the X. Or in other words:

memory offset = y*320+x;
Adding this memory offset to A000:0000 gives us the final memory location to access the desired screen pixel. Hence, if we alias a FAR pointer to the video memory like this:

```c
unsigned char far* video_buffer = (unsigned char far*)0xA0000000L;
```
Then we can access the video memory using a syntax like:

video_buffer[y*320+x] = color;
And that's it. So, using that information, we can then write a simple pixel-plotting function and graphics-mode function. These two functions should be added to each demo so that the demos can perform the graphics-related functions without help from the compiler-dependent graphics library. We're also going to add a little time-delay function based on the PC's internal timer. The function is called Time_Delay() and takes a single parameter, which is the number of clicks to wait for. Listing 1 shows the complete graphics interface named GMOD.H for the demos contained within this article. Simply include the code of the graphics module with each demo and everything should work fine. Now that we have the software we need to do graphics, let's begin our discussion of AI.
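Before moving on, the address arithmetic is easy to sanity-check in isolation. Listing 1 below computes the multiply with shifts, exploiting the identity 320*y = 256*y + 64*y. As a hedged sketch (the helper name `pixel_offset` is mine, not from the article), we can verify the two forms agree for every pixel on the 320x200 screen:

```c
#include <assert.h>

/* Mode 13h pixel addressing: offset = y*320 + x. GMOD.H below computes
   the multiply with shifts, since 320*y = 256*y + 64*y = (y<<8) + (y<<6).
   The function name pixel_offset is my own, for testing purposes. */
static unsigned int pixel_offset(unsigned int x, unsigned int y)
{
    return (y << 8) + (y << 6) + x;
}
```

Because the shift form is exactly equal to the multiply, the two can be swapped freely; on a 1995-era CPU the shifts were the cheaper option.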

Listing 1. The Graphics Module GMOD.H

```c
// GMOD.H graphics module for demos
// note: demos must include dos.h (union REGS, int86) and stdlib.h (labs)
// before this module

unsigned char far *video_buffer = (unsigned char far *)0xA0000000L;

void Plot_Pixel(int x, int y, int color)
{
    // plots the pixel in the desired color a little quicker using binary
    // shifting to accomplish the multiplications
    video_buffer[((y<<8) + (y<<6)) + x] = (unsigned char)color;
} // end Plot_Pixel

void Set_Graphics_Mode(int mode)
{
    // use the video interrupt 10h and the C interrupt function to set
    // the video mode
    union REGS inregs, outregs;

    inregs.h.ah = 0;                   // set video mode sub-function
    inregs.h.al = (unsigned char)mode; // video mode to change to

    int86(0x10, &inregs, &outregs);
} // end Set_Graphics_Mode

void Time_Delay(int clicks)
{
    // this function uses the internal timer to wait a specified number of
    // "clicks"; the actual amount of real time is the number of clicks *
    // (time per click), usually 1/18th of a second or 55ms per click
    long far *clock = (long far *)0x0000046CL, // address of timer
         start_time;                           // starting time

    // get current time
    start_time = *clock;

    // when the current time minus the starting time >= the requested delay
    // then the function can exit
    while(labs(*clock - start_time) < (long)clicks){}
} // end Time_Delay
```
## Deterministic Algorithms

Deterministic algorithms are the simplest of the AI techniques used in games. These algorithms use a set of variables as the input and then use some simple rules to drive the computer-controlled enemies or game objects based on these inputs. We can think of deterministic algorithms as reflexes or very low-level instincts. Activated by some set of conditions in the environment, the algorithms then perform the desired behavior relentlessly, without concern for the outcome, the past, or future events.

The chase algorithm is a classic example of a deterministic algorithm. The chase algorithm is basically a method of intelligence used to hunt down the player or some other object of interest in a game by applying the spatial coordinates of the computer-controlled object and the object to be tracked. Imagine a game with a top-down view of a battleground, on which three computer-controlled bad guys and one player are fighting. The question is, how can we make the computer-controlled bad guys track and move toward the player? One way is to use the coordinates of the bad guys and the coordinates of the player as inputs into a deterministic algorithm that outputs direction changes or direction vectors for the bad guys in real time.

Let's use bad guy one as the example. We see that he is located at coordinates (bx1,by1) and the player is located at coordinates (px,py). Therefore, a simple algorithm to make the bad guy move toward the player would be:

```c
// process x-coords
if (px > bx1) bx1++;
else if (px < bx1) bx1--;

// process y-coords
if (py > by1) by1++;
else if (py < by1) by1--;
```
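As a hedged aside, the same chase step can be packaged as a pure helper function (the name `chase_step` is my own, not from the article) so it can be exercised without switching video modes:

```c
#include <assert.h>

/* One step of the deterministic chase: move the bad guy one unit toward
   the player on each axis. The function name and pointer-based interface
   are my own illustration of the snippet above. */
static void chase_step(int px, int py, int *bx, int *by)
{
    if (px > *bx) (*bx)++;
    else if (px < *bx) (*bx)--;

    if (py > *by) (*by)++;
    else if (py < *by) (*by)--;
}
```

Calling this once per frame reproduces the relentless homing behavior; negating the increments turns chasing into fleeing.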
That's all there is to it. If we wanted to reverse the logic and make the bad guy run away, we could invert either the conditional logic or the increment operators. As an example of deterministic logic, Listing 2 is a complete program that will make a little computer-controlled dot chase a player-controlled dot. Use the numeric keypad to control your player and press ESC to exit the program.

Listing 2. A Demo of Deterministic Logic

```c
// Deterministic chasing algorithm demo
// use numeric keypad to move player, ESC to exit

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <dos.h>

#include "gmod.h" // include our graphics module

int main(void)
{
    int px = 160,  // starting position of player
        py = 100,
        bx = 0,    // starting position of bad guy
        by = 0,
        done = 0;  // exit flag

    // set the video mode to 13h
    Set_Graphics_Mode(0x13);

    // main event loop
    while (!done)
    {
        // perform player logic: get input from keyboard
        if (kbhit())
        {
            // which way is player moving?
            switch (getch())
            {
                case '8': // up
                    if ((py -= 2) < 0) py += 200;
                    break;
                case '2': // down
                    if ((py += 2) >= 200) py -= 200;
                    break;
                case '6': // right
                    if ((px += 2) >= 320) px -= 320;
                    break;
                case '4': // left
                    if ((px -= 2) < 0) px += 320;
                    break;
                case 27:  // ESC - exit
                    done = 1;
                    break;
            } // end switch
        } // end if

        // perform bad guy logic
        if (px > bx) bx++;
        else if (px < bx) bx--;

        if (py > by) by++;
        else if (py < by) by--;

        // draw player and bad guy
        Plot_Pixel(bx, by, 12);
        Plot_Pixel(px, py, 9);

        // wait a bit
        Time_Delay(1);
    } // end while

    // reset graphics back to text mode
    Set_Graphics_Mode(0x03);

    // return success to DOS
    return 0;
} // end main
```

Now let's move on to another typical behavior, which we can categorize as random logic.

## Random Logic

Sometimes an intelligent creature exhibits almost random behaviors. These random behaviors may be the result of any one of a number of internal processes, but there are two main ones that we should touch upon--lack of information and desired randomness. The first premise is an obvious one.
Many times an intelligent creature does not have enough information to make a decision, or may not have any information at all. The creature then simply does the best it can, which is to select a random behavior in hopes that it might be the correct one for the situation. For example, let's say you were dropped into a dungeon and presented with four identical doors. Knowing that all but one meant certain death, you would simply have to select one at random!

The second premise that brings on a random selection is intentional. For example, say you are a spy trying to make a getaway after acquiring some secret documents (this happens to me all the time). Now, imagine you have been seen, and the bad guys start shooting at you! If you run in a straight line, chances are you are going to get shot. However, if during your escape you make many random direction changes and zigzag a bit, you will get away every time!

What we learn from that example is that random logic and selections are often good because they make it harder for the player to determine what the bad guys are going to do next, and they are a good way to help the bad guys make a selection when there isn't enough information to use a deterministic algorithm.

Motion control is a typical place to apply random logic in bad-guy AI. You can use a random number or probability to select a new direction for the bad guy as a function of time. Let's envision a multiplayer game with a single, computer-controlled bad guy surrounded by four human players. This is a great place to apply random motion, using the following logic:

```c
// select a random translation for X axis
bx1 = bx1 + rand()%11 - 5;

// select a random translation for Y axis
by1 = by1 + rand()%11 - 5;
```

The position of the bad guy is translated by a random amount in both X and Y, which in this case is +-5 pixels or units. Of course, we can use random logic for a lot of other things besides direction changes.
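As an illustrative sketch, the random translation above can be wrapped in a helper that also keeps the position on the 320x200 mode-13h screen; the function name and the screen wrapping are my additions, not from the article:

```c
#include <assert.h>
#include <stdlib.h>

/* Random-walk step: translate a position by rand()%11 - 5, i.e. -5..+5
   units on each axis, as in the article's snippet. Wrapping the result
   around the 320x200 screen is my own addition. */
static void random_step(int *x, int *y)
{
    *x += rand() % 11 - 5;
    *y += rand() % 11 - 5;

    if (*x < 0) *x += 320; else if (*x >= 320) *x -= 320;
    if (*y < 0) *y += 200; else if (*y >= 200) *y -= 200;
}
```

Because each axis moves at most 5 units per call, the wrap branches can each fire at most once, so the point always stays on screen.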
Starting positions, power levels, and probability of firing weapons are all good places to apply random logic. It's definitely a good technique that adds a bit of unpredictability to game AI. Listing 3 is a demo of random logic used to control motion. The demo creates an array of flies and uses random logic to move them around. Press any key to exit the demo.

Listing 3. A Bunch of Dumb Flies

```c
// Random logic demo
// moves a flock of flies around
// hit any key to exit

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <dos.h>

#include "gmod.h" // include our graphics module

#define NUM_FLIES 64 // start off with 64 flies

typedef struct fly_typ
{
    int x,y; // position of fly
} fly;

int main(void)
{
    fly flys[NUM_FLIES]; // the array of flies
    int index;           // looping variable

    // set the video mode to 13h
    Set_Graphics_Mode(0x13);

    // initialize all flies to a random position
    for (index = 0; index < NUM_FLIES; index++)
    {
        flys[index].x = rand()%320;
        flys[index].y = rand()%200;
    } // end for index

    // main event loop
    while (!kbhit())
    {
        // erase flies
        for (index = 0; index < NUM_FLIES; index++)
            Plot_Pixel(flys[index].x, flys[index].y, 0);

        // perform fly logic, translate each fly +-2 pixels
        for (index = 0; index < NUM_FLIES; index++)
        {
            flys[index].x += (-2 + rand()%5);
            flys[index].y += (-2 + rand()%5);
        } // end for index

        // draw flies
        for (index = 0; index < NUM_FLIES; index++)
            Plot_Pixel(flys[index].x, flys[index].y, 10);

        // wait a bit
        Time_Delay(2);
    } // end while

    // reset graphics back to text mode
    Set_Graphics_Mode(0x03);

    // return success to DOS
    return 0;
} // end main
```

Now let's talk about patterns.

## Encoded List Processing

Many intelligent creatures have prerecorded patterns or lists of behaviors that they have either learned from experience or that are instinctive. We can think of a pattern as a sequence of steps we perform to accomplish a task. Granted, this sequence may be interrupted if something happens during the sequence that needs attention.
But in general, if we forget about interruptions, then we can think of patterns as a list of encoded instructions that an intelligent creature consumes to accomplish some task. For example, when you drive to work, school, or your girlfriend's or boyfriend's house, you are following a pattern. You get into your car, start it, drive to the destination, stop the car, turn it off, get out, and finally do whatever it is you're going to do. This is a pattern of behavior. Although during the entire experience a billion things may have gone through your head, the observed behavior was actually very simple. Hence, patterns are a good way to implement seemingly complex thought processes in game AI. In fact, many games today still use patterns for much of the game logic.

So how can we implement patterns for game AI? Simply by using an input array to a list processor. The output of the processor is the control of a game object or bad guy. In this case, the encoded list has the following set of valid instructions:

- Turn right
- Turn left
- Move forward
- Move backward
- Sit still
- Fire weapon

Even though we only have six selections, we can construct quite a few patterns with a short input list of 16 elements as in the example. In fact, there are 6^16 different possible patterns, or roughly 2.8 trillion different behaviors. I think that's enough to make something look intelligent!

So how can we use encoded lists and patterns in a game for the AI? One solid way is to use them to control the motion of a bad guy or game object. For example, a deterministic algorithm might decide it's time to make a bad guy perform some complex motion that would be difficult if we used standard conditional logic. Thus, we could use a pattern, which simply reads an encoded list directing the bad guy to make some tricky moves.
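To make the idea concrete, here is a hedged sketch of a list processor for those six instructions; the enum names, the actor structure, and the four-direction heading encoding (0=north, 1=east, 2=south, 3=west) are all my assumptions, not from the article:

```c
#include <assert.h>

/* Sketch of an encoded-list processor for the article's six
   instructions. Names and the heading encoding are my assumptions. */
enum Cmd { TURN_RIGHT, TURN_LEFT, MOVE_FWD, MOVE_BACK, SIT_STILL, FIRE };

struct Actor { int x, y, heading, shots; };

static void run_pattern(struct Actor *a, const enum Cmd *list, int n)
{
    static const int dx[4] = { 0, 1, 0, -1 };  /* N E S W */
    static const int dy[4] = { -1, 0, 1, 0 };  /* screen y grows down */
    int i;

    for (i = 0; i < n; i++)
    {
        switch (list[i])
        {
            case TURN_RIGHT: a->heading = (a->heading + 1) % 4; break;
            case TURN_LEFT:  a->heading = (a->heading + 3) % 4; break;
            case MOVE_FWD:   a->x += dx[a->heading]; a->y += dy[a->heading]; break;
            case MOVE_BACK:  a->x -= dx[a->heading]; a->y -= dy[a->heading]; break;
            case SIT_STILL:  break;
            case FIRE:       a->shots++; break;
        }
    }
}
```

A 16-element array of `enum Cmd` values plays the role of the article's encoded list; the processor simply consumes it in order.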
For example, we might have a simple algorithm like this:

```c
int move_x[16] = {-2, 0, 0, 0, 3, 3, 2, 1, 0,-2,-2,-1, 0, 1, 2, 3};
int move_y[16] = { 0, 0, 0, 1, 1, 1, 0, 0,-1,-1, 2, 3, 4, 0, 0,-1};

// encoded pattern logic for a 16 element list
for (index = 0; index < 16; index++)
{
    bx1 += move_x[index];
    by1 += move_y[index];
} // end for index
```

You'll notice that the encoded pattern is made up simply of X and Y translations. The pattern could just as well have contained complex records with a multitude of data fields. I've written detailed code that will create an example of patterns and list processing, a demo of an ant that can process one of four patterns selected by the keys 1-4. Unfortunately, it's too long to print here. Go to the Game Developer ftp site, and you can download it there.

Now we're starting to get somewhere, but we need an overall control unit with some form of memory, and we must select the appropriate types of behaviors.

## Finite State Machines

Finite state machines, or FSMs, are abstract models that can be implemented either in hardware or software. FSMs are not really machines in the mechanical sense of the word, but rather abstract models with a set of "states" that can be traversed. Within these states, the FSM not only has a special set of outputs but remembers the state and can transition to another state if and only if a set of inputs or premises are met.

Say we have a typical finite state machine in which there is a set of states labeled S0, S1, S2, ..., Sn, and within each state is a set of outputs. These outputs can be anything we wish--from motion controls for a game's bad guys to hard disk commands. How do we model an FSM in software and use it to control the game AI? Let's begin with the first question. We can model an FSM with a single variable and a set of logical conditions used to make state transitions, along with the output for each state.
For example, let's actually build a simple software state machine that controls a computer bad guy differently based on the bad guy's distance to the player. The state machine will have the following four states:

- State 0: Select new state = STATE_NEW
- State 1: Move randomly = STATE_RANDOM
- State 2: Track player = STATE_TRACK
- State 3: Use a pattern = STATE_PATTERN

The FSM's transitions should be set up so that if the bad guy is within 50 units of the player, then the bad guy moves into State 2 and simply attacks. If the bad guy is in the range of 51 to 100 units from the player, then the bad guy goes into State 3 and moves in a pattern. Finally, if the bad guy is farther than 100 units from the player, then chances are the bad guy can't even see the player (in the imaginary computer universe). In that case, the bad guy moves into State 1, which is random motion.

So how can we implement this simple FSM? All we need is a variable to record the current state and some conditional logic to perform the state transitions and outputs. Listing 4 shows a rough algorithm that will do all this.

Listing 4. The Core FSM Logic

```cpp
typedef unsigned short DISTANCE;

const DISTANCE Tracking_Threshold = 50;
const DISTANCE Random_Threshold   = 100;

DISTANCE theDistance;

// define states and initialize
enum State { STATE_NEW, STATE_RANDOM, STATE_TRACK, STATE_PATTERN };
State currentState = STATE_NEW;

// FSM loop
for ( ; ; )
{
    switch (currentState)
    {
        case STATE_NEW:
            // note: switchbox only, causes no behavior
            theDistance = CalcDistanceToPlayer();

            if (theDistance > Random_Threshold)
                currentState = STATE_RANDOM;
            else if (theDistance > Tracking_Threshold)
                currentState = STATE_PATTERN;
            else
                currentState = STATE_TRACK;
            break;

        case STATE_TRACK:
            DoTrackBehavior();
            currentState = STATE_NEW;
            break;

        case STATE_PATTERN:
            DoPatternBehavior();
            currentState = STATE_NEW;
            break;

        case STATE_RANDOM:
            DoRandomBehavior();
            currentState = STATE_NEW;
            break;

        default:
            cerr << "state machine has entered an unknown state\n";
            assert(0);
    } // end switch
} // end for
```

Note that S0 (the new state) does not trigger any behavior on the part of the opponent. Rather, it acts as a state "switchbox," to which all states (except itself) transition. This allows you to localize all the decision making about transitions in a single control block. Although this requires two passes through the FSM loop to produce one behavior, it's well worth it. In the case of a small FSM, the entire loop can stay in the cache, and in the case of a large FSM loop, the localization of the transition logic will more than pay for the performance penalty. If you absolutely refuse to double-loop, you can handcraft the transitions between states. A finite-state-machine diagram will vividly illustrate, in the form of spaghetti transitions, when your transition logic is out of control.

Now that we have an overall thought controller, that is, an FSM, we should discuss simulating sensory excitation in a virtual world.

## Environmental Sensing

One problem that plagues AI game programming is that it can be very unfair--at least to the player.
The reason for this is that the player can only "see" what's on the computer screen, whereas the computer AI system has access to all variables and data that the player can't access. This brings us to the concept of simulated sensory organs for the bad guys and game objects. For example, in a three-dimensional tank game that takes place on a flat plain, the player can only see so far based on his or her field of view. Further, the player can't see through rocks, buildings, and obstacles. However, because the game logic has access to all the system variables and data structures, it is tempting to use this extra data to help with the AI for the bad guys. The question is, is this fair to the player? Well, of course not.

So how can we make sure we supply the AI engine of the bad guys and game objects with the same information the player has? We must use simulated sensory inputs such as vision, hearing, vibration, and the like. Each opponent, like the player, has a cone of vision associated with it, and both the bad guys and the player can only see objects within this cone. The player can only see within this cone as a function of the 3D graphics engine, but the bad guys can only see within this cone as a function of their AI program.

Let's be a little more specific about this. Since we know that we must be fair to the player, what we can do is write a simple algorithm that scans the area in front of each bad guy and determines whether the player is within view. This scanning is similar to the player looking out the viewport or virtual window. Of course, we don't need to perform a full three-dimensional scan with ray tracing or the like--we can simply make sure the player is within the view angle of the bad guy in question, using trigonometry or any technique we wish. Based on the information obtained from each bad guy's scan, the proper AI decision can be made in a more uniform and fair manner.
Of course, we may want to give the computer-controlled AI system more advantage than the human player to make up for the AI system itself being rather primitive when compared to the 100-billion-cell neural network it is competing against, but you get the idea.

Finally, we might ask, "Can we perform other kinds of sensing?" Yes. We can create simulated light detectors, sound detectors, and so forth. I have been experimenting with an underwater game engine, and in total darkness the only way the enemy creatures can "see" you is to listen to your propulsion units. Based on the power level of the player's engines, the game AI determines the sound level that the bad guys hear and moves them toward the sound source or sources.

## Memory and Learning

The final topic we're going to touch upon is memory and learning. Memory is easy enough to understand, but learning is a bit more nebulous. Learning, as far as we are concerned, is the ability to interact in an environment in such a way that behaviors that seem to work better than others under certain conditions are "memorized" and used more often. In essence, learning is based on memory of past actions being good, bad, or whatever.

Imagine that we have written a fairly complex game composed of computer-controlled aliens. These aliens use an FSM-based AI engine and environmental sensing. The problem is that one of the resources in the game is energion cubes, and the player and aliens must compete for these cubes. As the player moves around in the environment, he or she can create a mental map of where energion cubes seem to be plentiful, but the alien creatures have no such ability, so they are at a disadvantage. Can we give them a memory and teach them where these energion cubes are? Of course we can, we are cybergods!
One such implementation would work as follows: We could use a simple data structure that tracks the number of times an alien found energion in each geographical region of the game. Then, when an alien was power hungry, instead of randomly bouncing around, the alien would refer to this memory data structure, select the geographical region with the highest probability of finding energion, and set its trajectory for this region.

The previous example is a simple one, but as we can see, memory and learning are actually very easy to implement. Moreover, we can make the computer AI learn much more than where energion is. It could learn the most common defensive moves of the player and use this information against the player. Well, that's enough for basic AI techniques. Let's take a quick look at how we can put it all together.

## Building Monsters from the Id

We have quite a repertoire of computer AI tricks at our fingertips, so how should we use it all? Basically, when you write a game and are implementing the AI, you should list the types of behaviors that each game object or bad guy needs to exhibit. Simple creatures should use deterministic logic, randomness, and patterns. Complex creatures that will interact with the player should use an FSM-based AI engine. And the main game objects that harass and test the player should use an FSM plus sensory inputs and memory.

## The Future

I see AI as the next frontier to explore. Without a doubt, most game programmers have focused so much on graphics that AI hasn't been researched much. The irony is that researchers have been making leaps and bounds in AI research and Artificial Life, or A-Life. I'm sure you've heard the common terms "genetic algorithms" and "neural networks." Genetic algorithms are simply a method of representing some aspect of a computer-based AI model with a set of "genes," which can represent whatever we wish--aggressiveness, maximum speed, maximum vision distance, and so on.
Then, a population of creatures is generated using an algorithm that adds a little randomness to each of the output creatures' genes. Our game world is then populated with these gene-based creatures. As the creatures interact in the environment, they are killed, survive, and are reborn. The biological analog comes into play during the rebirthing phase. Either manually or by some other means, the computer AI engine "mates" various pairs of creatures and mixes their genes. The resulting offspring then survive another generation, and the process continues. This causes the creatures to evolve so that they are best adapted to the given environment.

Neural networks, on the other hand, are computer abstractions of a collection of brain cells that have firing thresholds. You can enhance or diminish these thresholds and the connections between cells. By "teaching" a neural network, or strengthening and weakening these connections, the neural net can learn something. So we can use these nets to help make decisions and even come up with new methods.
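To make the mating step concrete, here is a toy sketch of gene crossover and mutation; the byte-array gene encoding, the array size, and the mutation scheme are all my assumptions, not from the article:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy genetic-algorithm step: each creature is a fixed array of byte
   "genes" (aggressiveness, speed, vision distance, ...). Crossover
   copies each gene from a randomly chosen parent; mutation occasionally
   nudges a gene by -5..+5. Encoding and rates are my assumptions. */
#define NUM_GENES 8

static void crossover(const unsigned char *mom, const unsigned char *dad,
                      unsigned char *child)
{
    int i;
    for (i = 0; i < NUM_GENES; i++)
        child[i] = (rand() & 1) ? mom[i] : dad[i];
}

static void mutate(unsigned char *genes, int percent_chance)
{
    int i;
    for (i = 0; i < NUM_GENES; i++)
        if (rand() % 100 < percent_chance)
            genes[i] += (unsigned char)(rand() % 11 - 5);
}
```

Running crossover and a low-probability mutation each generation, and letting the environment cull the weak, is the whole evolutionary loop in miniature.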
 
 0   Report Article 
 Sign in to follow this   Followers 0 Go to articles Artificial Intelligence 
 User Feedback 0 Comments 0 Reviews There are no comments to display. Create an account or sign in to comment You need to be a member in order to leave a comment Create an account Sign up for a new account in our community. It's easy! Register a new account Sign in Already have an account? Sign in here. Sign In Now 
•  
 
 Advertisement 
 Advertisement Categories 29Audio 29Music and Sound FX 199Business 125Business and Law 36Career Development 38Production and Management 191Game Design 185Game Design and Theory 3Writing for Games 3UX for Games 233Industry 172Interviews 61Event Coverage 1162Programming 55Artificial Intelligence 615General and Gameplay Programming 321Graphics and GPU Programming 55Engines and Middleware (and 2 more) 64Visual Arts 34Archive Show all categories   Advertisement GameDev.net GameDev.net Articles GameDev.net Event Coverage GameDev.net Forums GameDev.net Blogs GameDev.net Gallery GameDev.net News GameDev.net Projects GDNet Chat All Activity Search In Everywhere This Category This Article More options... Find results that contain... Any of my search term words All of my search term words Find results in... Content titles and body Content titles only Home Articles Programming Artificial Intelligence Building Brains Into Your Games 
 
 
 × Existing user? Sign In Sign Up Browse Back Articles & Tutorials Back All Categories Audio Business Game Design Industry Programming Visual Arts Columns Back GameDev Unboxed Event Coverage Back All Events Game Developers Conference Power Up Digital Games Conference GameDev.Market Links News Podcasts Back All Podcasts Game Dev Loadout Archive Community Back Beginners Blogs Calendar Chat Forums Back All Forums Audio Business Game Design Programming Visual Arts Community Affiliates Topical Workshops Gallery Groups Projects Back All Projects Games Game Assets Game Mods Developer Tools Store Careers Back Contractors Hobby Projects Game Jobs Back Browse on GameDev.Jobs Post a Job Members Back Chat GDNet+ Membership Guidelines Leaderboard Online Users Awards Search Back All Activity My Activity Streams Back Latest Topics Featured Blogs Search var ipsDebug=false;var CKEDITOR_BASEPATH='//www.gamedev.net/applications/core/interface/ckeditor/ckeditor/';var ipsSettings={cookie_path:"/",cookie_prefix:"ips4_",cookie_ssl:true,upload_imgURL:"",message_imgURL:"",notification_imgURL:"",baseURL:"//www.gamedev.net/",jsURL:"//www.gamedev.net/applications/core/interface/js/js.php",csrfKey:"51ed495a35b74077857f153a4116c194",antiCache:"2a361a8a02",disableNotificationSounds:false,useCompiledFiles:true,links_external:true,memberID:0,analyticsProvider:"ga",viewProfiles:true,mapProvider:'google',mapApiKey:"AIzaSyAeT7tk3vnWWmbgVISkLpbhkQvekG19rHM",}; 
ips.setSetting('date_format',jQuery.parseJSON('"mm\/dd\/yy"'));ips.setSetting('date_first_day',jQuery.parseJSON('0'));ips.setSetting('remote_image_proxy',jQuery.parseJSON('1'));ips.setSetting('ipb_url_filter_option',jQuery.parseJSON('"none"'));ips.setSetting('url_filter_any_action',jQuery.parseJSON('"allow"'));ips.setSetting('bypass_profanity',jQuery.parseJSON('0'));ips.setSetting('emoji_style',jQuery.parseJSON('"native"'));ips.setSetting('emoji_shortcodes',jQuery.parseJSON('"1"'));ips.setSetting('emoji_ascii',jQuery.parseJSON('"1"'));ips.setSetting('emoji_cache',jQuery.parseJSON('"1"'));ips.setSetting('quickSearchDefault',jQuery.parseJSON('"all"'));ips.setSetting('quickSearchMinimum',jQuery.parseJSON('3'));ips.setSetting('quickSearchShowAdv',jQuery.parseJSON('true'));ips.setSetting('quickSearchIn',jQuery.parseJSON('"title"')); { "@context": "http://schema.org", "@type": "Article", "url": "https://www.gamedev.net/articles/programming/artificial-intelligence/building-brains-into-your-games-r574/", "discussionUrl": "https://www.gamedev.net/articles/programming/artificial-intelligence/building-brains-into-your-games-r574/", "mainEntityOfPage": "https://www.gamedev.net/articles/programming/artificial-intelligence/building-brains-into-your-games-r574/", "name": "Building Brains Into Your Games", "headline": "Building Brains Into Your Games", "text": "August, 1995\n Game developers have always pushed the limits of the hardware when it comes to graphics and sound, but I think we all agree that when it\u0027s time to implement artificial intelligence for a game, AI always gets the short end of the stick! In this article, we are going to study a potpourri of AI topics ranging from the simple to the complex. \nAlong the way, we are going to try out a few demos that use a very rudimentary graphics interface to illustrate some of the simpler concepts. However, most of our discussion will be quasi-theoretical and abstract. 
This is because AI is not as simple as an algorithm, a data structure, or similar things. Artificial intelligence is a fluid concept that must be shaped by the game it is to be used on. Granted, you may use the same fundamental techniques on myriad games, but the form and implementation may be radically different. \nLet\u0027s begin our discussion with some simple statements that define what AI is in the context of games. Artificial intelligence in the arena of computer games implies that the computer-controlled opponents and game objects seem to show some kind of cognitive process when taking actions or reacting to the player\u0027s actions. These actions may be implemented in a million different ways, but the bottom line, from an observers point of view, is that they seem to show intelligence. \nThis brings us to the fundamental definition of intelligence. For our purposes, intelligence is simply the ability to survive and perform tasks in an environment. The tasks may be to hunt down and destroy the player, find food, navigate an asteroid field, or whatever. Nevertheless, this will be our loose definition of intelligence. \nNow that we have an idea of what we are trying to accomplish, where on earth should we begin? We will begin by using humans as our models of intelligence because they seem to be reasonably intelligent for carbon units. If we observe a human in an environment, we can extrapolate a few key behaviors of intelligence that we can model using fairly simple computer algorithms and techniques. \nThese behaviors are blind reflexes, random selection, use of known patterns, environmental analysis, memory-based selections and sequential behaviors that may encompass some or all of the other behaviors. 
We'll take a look at all of these behaviors and explore how we might implement them in a computer game, but first let's talk about the graphics module we are going to use for some of the demos.

## The Graphics Module

Half the world uses Microsoft C and C++ compilers and the other half uses Borland C and C++ compilers--so it's always a problem publishing demos that depend on either. Hence, we are going to write C code that is totally compiler independent, based on a graphics interface that we will write ourselves and that works with both compilers. The graphics interface will be based on graphics mode 13h, which is 320 by 200 pixels with 256 colors. For the simple demos we are going to write, all we want to do is place the VGA/SVGA card in mode 13h and plot single pixels on the screen. Thus we need two functions: `Set_Graphics_Mode(int mode)` and `Plot_Pixel(int x, int y, unsigned char color)`.

We will use video BIOS function 10h to set the video mode, but how can we plot pixels? Plotting pixels in mode 13h is very simple because the graphics are fully memory mapped. Basically, mode 13h is a totally linear array of memory that represents each pixel with a single byte. Further, this video memory starts at location A000:0000 and consists of 200 rows and 320 columns. Therefore, to compute the address of any pixel at (x,y), we simply multiply the Y component by 320 and add the X. In other words:

```
memory offset = y*320 + x;
```

Adding this offset to A000:0000 gives us the final memory location of the desired screen pixel. Hence, if we alias a FAR pointer to the video memory like this:

```c
unsigned char far *video_buffer = (unsigned char far *)0xA0000000L;
```

then we can access the video memory with syntax like:

```c
video_buffer[y*320+x] = color;
```

And that's it. Using that information, we can write a simple pixel-plotting function and a graphics-mode function.
These two functions should be added to each demo so that the demos can perform the graphics-related work without help from a compiler-dependent graphics library. We're also going to add a little time-delay function based on the PC's internal timer. The function is called `Time_Delay()` and takes a single parameter, the number of clicks to wait. Listing 1 shows the complete graphics interface, named GMOD.H, for the demos in this article. Simply include the graphics module with each demo and everything should work fine. Now that we have the software we need to do graphics, let's begin our discussion of AI.

Listing 1. The Graphics Module GMOD.H

```c
// GMOD.H graphics module for demos

unsigned char far *video_buffer = (unsigned char far *)0xA0000000L;

void Plot_Pixel(int x, int y, int color)
{
// plots the pixel in the desired color a little quicker by using binary
// shifts to accomplish the multiplication: y*320 = (y<<8) + (y<<6)
video_buffer[((y<<8) + (y<<6)) + x] = (unsigned char)color;

} // end Plot_Pixel

void Set_Graphics_Mode(int mode)
{
// use video interrupt 10h and the C interrupt function to set
// the video mode
union REGS inregs, outregs;

inregs.h.ah = 0;                   // set video mode sub-function
inregs.h.al = (unsigned char)mode; // video mode to change to

int86(0x10, &inregs, &outregs);

} // end Set_Graphics_Mode

void Time_Delay(int clicks)
{
// this function uses the internal timer to wait a specified number of "clicks"
// the actual amount of real time is the number of clicks * (time per click)
// usually the time per click is set to 1/18th of a second, or 55ms
long far *clock = (long far *)0x0000046CL, // address of timer
     start_time;                           // starting time

// get current time
start_time = *clock;

// when the current time minus the starting time >= the requested delay
// the function can exit
while(labs(*clock - start_time) < (long)clicks){}

} // end Time_Delay
```

## Deterministic Algorithms

Deterministic algorithms are the simplest of the AI techniques used in games. These algorithms use a set of variables as the input and then apply some simple rules to drive the computer-controlled enemies or game objects based on those inputs. We can think of deterministic algorithms as reflexes or very low-level instincts. Activated by some set of conditions in the environment, the algorithms then perform the desired behavior relentlessly without concern for the outcome, the past, or future events.

The chase algorithm is a classic example of a deterministic algorithm. The chase algorithm is basically a method of intelligence used to hunt down the player or some other object of interest in a game by applying the spatial coordinates of the computer-controlled object and the object to be tracked. Imagine a game with a top-down view of a battleground, on which three computer-controlled bad guys and one player are fighting. The question is, how can we make the computer-controlled bad guys track and move toward the player? One way is to use the coordinates of the bad guys and the coordinates of the player as inputs into a deterministic algorithm that outputs direction changes or direction vectors for the bad guys in real time.

Let's use bad guy one as the example. We see that he is located at coordinates (bx1,by1) and the player is located at coordinates (px,py). Therefore, a simple algorithm to make the bad guy move toward the player would be:

```c
// process x-coords
if (px > bx1) bx1++;
else if (px < bx1) bx1--;

// process y-coords
if (py > by1) by1++;
else if (py < by1) by1--;
```

That's all there is to it. If we wanted to reverse the logic and make the bad guy run away, we could invert the conditional logic or invert the increment operators.
As an example of deterministic logic, Listing 2 is a complete program that makes a little computer-controlled dot chase a player-controlled dot. Use the numeric keypad to control your player and press ESC to exit the program.

Listing 2. A Demo of Deterministic Logic

```c
// Deterministic chasing algorithm demo
// use the numeric keypad to move the player, ESC to exit

#include <stdio.h>
#include <stdlib.h>   // labs() used by gmod.h
#include <conio.h>    // kbhit(), getch()
#include <dos.h>      // union REGS, int86() used by gmod.h

#include "gmod.h"     // include our graphics module

int main(void)
{
    int px=160,  // starting position of player
        py=100,
        bx=0,    // starting position of bad guy
        by=0,
        done=0;  // exit flag

    // set the video mode to 13h
    Set_Graphics_Mode(0x13);

    // main event loop
    while(!done)
    {
        // perform player logic: get input from keyboard
        if (kbhit())
        {
            // which way is the player moving?
            switch(getch())
            {
            case '8': // up
                if ((py-=2) < 0)
                    py+=200;
                break;
            case '2': // down
                if ((py+=2) >= 200)
                    py-=200;
                break;
            case '6': // right
                if ((px+=2) >= 320)
                    px-=320;
                break;
            case '4': // left
                if ((px-=2) < 0)
                    px+=320;
                break;
            case 27:  // ESC: exit
                done=1;
                break;
            } // end switch
        } // end if

        // perform bad guy logic
        if (px > bx) bx++;
        else if (px < bx) bx--;

        if (py > by) by++;
        else if (py < by) by--;

        // draw player and bad guy
        Plot_Pixel(bx,by,12);
        Plot_Pixel(px,py,9);

        // wait a bit
        Time_Delay(1);
    } // end main while

    // reset graphics back to text
    Set_Graphics_Mode(0x03);

    // return success to DOS
    return(0);
} // end main
```

Now let's move on to another typical behavior, which we can categorize as random logic.

## Random Logic

Sometimes an intelligent creature exhibits almost random behaviors.
These random behaviors may be the result of any one of a number of internal processes, but there are two main ones we should touch upon: lack of information and desired randomness.

The first premise is an obvious one. Many times an intelligent creature does not have enough information to make a decision, or may not have any information at all. The creature then simply does the best it can, which is to select a random behavior in hopes that it might be the correct one for the situation. For example, let's say you were dropped into a dungeon and presented with four identical doors. Knowing that all but one meant certain death, you would simply have to pick one at random!

The second premise that brings on a random selection is intentional. For example, say you are a spy trying to make a getaway after acquiring some secret documents (this happens to me all the time). Now, imagine you have been seen, and the bad guys start shooting at you! If you run in a straight line, chances are you are going to get shot. However, if during your escape you make many random direction changes and zigzag a bit, you will get away every time!

What we learn from that example is that random logic and selections are often good because they make it harder for the player to predict what the bad guys will do next, and they are a good way to help the bad guys make a selection when there isn't enough information for a deterministic algorithm. Motion control is a typical place to apply random logic in bad-guy AI. You can use a random number or probability to select a new direction for the bad guy as a function of time. Let's envision a multiplayer game with a single, computer-controlled bad guy surrounded by four human players.
This is a great place to apply random motion, using the following logic:

```c
// select a random translation for the X axis: -5 to +5
bx1 = bx1 + rand()%11 - 5;

// select a random translation for the Y axis: -5 to +5
by1 = by1 + rand()%11 - 5;
```

The position of the bad guy is translated by a random amount in both X and Y, which in this case is +-5 pixels or units.

Of course, we can use random logic for a lot of other things besides direction changes. Starting positions, power levels, and the probability of firing weapons are all good places to apply random logic. It's definitely a good technique that adds a bit of unpredictability to game AI. Listing 3 is a demo of random logic used to control motion. The demo creates an array of flies and uses random logic to move them around. Press any key to exit the demo.

Listing 3. A Bunch of Dumb Flies

```c
// Random logic demo
// moves a flock of flies around
// hit any key to exit

#include <stdlib.h>   // rand(), labs()
#include <conio.h>    // kbhit()
#include <dos.h>      // union REGS, int86() used by gmod.h

#include "gmod.h"     // include our graphics module

#define NUM_FLIES 64  // start off with 64 flies

typedef struct fly_typ
{
    int x,y;  // position of fly
} fly;

int main(void)
{
    fly flys[NUM_FLIES]; // the array of flies
    int index;           // looping variable

    // set the video mode to 13h
    Set_Graphics_Mode(0x13);

    // initialize all flies to random positions
    for (index=0; index<NUM_FLIES; index++)
    {
        flys[index].x = rand()%320;
        flys[index].y = rand()%200;
    } // end for index

    // main event loop
    while(!kbhit())
    {
        // erase flies
        for (index=0; index<NUM_FLIES; index++)
            Plot_Pixel(flys[index].x,flys[index].y,0);

        // perform fly logic: translate each fly +-2 pixels
        for (index=0; index<NUM_FLIES; index++)
        {
            flys[index].x+=(-2+rand()%5);
            flys[index].y+=(-2+rand()%5);
        } // end for index

        // draw flies
        for (index=0; index<NUM_FLIES; index++)
            Plot_Pixel(flys[index].x,flys[index].y,10);

        // wait a bit
        Time_Delay(2);
    } // end main while

    // reset graphics back to text
    Set_Graphics_Mode(0x03);

    // return success to DOS
    return(0);
} // end main
```

Now let's talk about patterns.

## Encoded List Processing

Many intelligent creatures have prerecorded patterns, or lists of behaviors, that they have either learned from experience or know by instinct. We can think of a pattern as a sequence of steps we perform to accomplish a task. Granted, this sequence may be interrupted if something happens along the way that needs attention, but in general, if we forget about interruptions, we can think of a pattern as a list of encoded instructions that an intelligent creature consumes to accomplish some task.

For example, when you drive to work, school, or your girlfriend's or boyfriend's house, you are following a pattern. You get into your car, start it, drive to the destination, stop the car, turn it off, get out, and finally do whatever it is you're going to do. This is a pattern of behavior. Although during the entire experience a billion things may have gone through your head, the observed behavior was actually very simple. Hence, patterns are a good way to implement seemingly complex thought processes in game AI. In fact, many games today still use patterns for much of the game logic.

So how can we implement patterns for game AI? Simply by feeding an input array to a list processor whose output controls a game object or bad guy. In this case, the encoded list has the following set of valid instructions:

- Turn right
- Turn left
- Move forward
- Move backward
- Sit still
- Fire weapon

Even though we only have six selections, we can construct quite a few patterns with a short input list of 16 elements, as in the example. In fact, there are 6^16 different possible patterns, or roughly 2.8 trillion different behaviors. I think that's enough to make something look intelligent! So how can we use encoded lists and patterns in a game for the AI?
One solid way is to use them to control the motion of a bad guy or game object. For example, a deterministic algorithm might decide it's time to make a bad guy perform some complex motion that would be difficult with standard conditional logic. Instead, we could use a pattern, which simply reads an encoded list directing the bad guy to make some tricky moves. For example, we might have a simple algorithm like this:

```c
// encoded pattern: X and Y translations for a 16-element list
int move_x[16] = {-2, 0, 0, 0, 3, 3, 2, 1, 0,-2,-2, 0, 1, 2, 3, 4};
int move_y[16] = { 0, 0, 0, 1, 1, 1, 0, 0,-1,-1, 2, 3, 4, 0, 0,-1};

// encoded pattern logic for a 16-element list
for (index=0; index<16; index++)
{
    bx1+=move_x[index];
    by1+=move_y[index];
} // end for index
```

You'll notice that the encoded pattern is made up simply of X and Y translations. The pattern could just as well have contained complex records with a multitude of data fields. I've written detailed code that creates an example of patterns and list processing: a demo of an ant that can process one of four patterns selected by the keys 1-4. Unfortunately, it's too long to print here. You can download it from the Game Developer ftp site.

Now we're starting to get somewhere, but we need an overall control unit with some form of memory, and we must select the appropriate types of behaviors.

## Finite State Machines

Finite state machines, or FSMs, are abstract models that can be implemented either in hardware or software. FSMs are not really machines in the mechanical sense of the word, but rather abstract models with a set of "states" that can be traversed. Within these states, the FSM not only has a special set of outputs but remembers the state and can transition to another state if and only if a set of inputs or premises is met. Say we have a typical finite state machine with a set of states labeled S0, S1, S2, through Sn, and within each state is a set of outputs.
These outputs can be anything we wish, from motion controls for a game's bad guys to hard disk commands. How do we model an FSM in software and use it to control the game AI? Let's begin with the first question.

We can model an FSM with a single variable and a set of logical conditions used to make state transitions, along with the output for each state. For example, let's actually build a simple software state machine that controls a computer bad guy differently based on the bad guy's distance to the player. The state machine will have the following four states:

- State 0: Select new state = STATE_NEW
- State 1: Move randomly = STATE_RANDOM
- State 2: Track player = STATE_TRACK
- State 3: Use a pattern = STATE_PATTERN

The FSM's transitions should be set up so that if the bad guy is within 50 units of the player, the bad guy moves into State 2 and simply attacks. If the bad guy is in the range of 51 to 100 units from the player, the bad guy goes into State 3 and moves in a pattern. Finally, if the bad guy is farther than 100 units from the player, chances are the bad guy can't even see the player (in the imaginary computer universe); in that case, the bad guy moves into State 1, which is random motion.

So how can we implement this simple FSM? All we need is a variable to record the current state and some conditional logic to perform the state transitions and outputs. Listing 4 shows a rough algorithm that does all this.

Listing 4. The Core FSM Logic

```cpp
typedef unsigned short DISTANCE;

const DISTANCE Tracking_Threshold = 50;
const DISTANCE Random_Threshold   = 100;

DISTANCE theDistance;

// define the states and initialize
enum states {STATE_NEW, STATE_RANDOM, STATE_TRACK, STATE_PATTERN};
states currentState = STATE_NEW;

// FSM loop
for( ; ; )
{
    switch(currentState)
    {
    case STATE_NEW:
        // note: switchbox only, causes no behavior
        theDistance = CalcDistanceToPlayer();
        if (theDistance > Random_Threshold)
            currentState = STATE_RANDOM;
        else if (theDistance > Tracking_Threshold)
            currentState = STATE_PATTERN;
        else
            currentState = STATE_TRACK;
        break;
    case STATE_TRACK:
        DoTrackBehavior();
        currentState = STATE_NEW;
        break;
    case STATE_PATTERN:
        DoPatternBehavior();
        currentState = STATE_NEW;
        break;
    case STATE_RANDOM:
        DoRandomBehavior();
        currentState = STATE_NEW;
        break;
    default:
        cerr << "unknown state";
        assert(0);
    }
}
```

Note that S0 (the new state) does not trigger any behavior on the part of the opponent. Rather, it acts as a state "switchbox" to which all states (except itself) transition. This lets you localize all the decision making about transitions in a single control block.

Although this requires two passes through the FSM loop to create one behavior, it's well worth it. In the case of a small FSM, the entire loop can stay in the cache, and in the case of a large FSM loop, the localization of the transition logic will more than pay for the performance penalty. If you absolutely refuse to double-loop, you can handcraft the transitions between states. A finite-state-machine diagram will vividly illustrate, in the form of spaghetti transitions, when your transition logic is out of control.
Now that we have an overall thought controller, that is, an FSM, we should discuss simulating sensory excitation in a virtual world.

## Environmental Sensing

One problem that plagues AI game programming is that it can be very unfair--at least to the player. The reason is that the player can only "see" what's on the computer screen, whereas the computer AI system has access to all the variables and data that the player can't.

This brings us to the concept of simulated sensory organs for the bad guys and game objects. For example, in a three-dimensional tank game that takes place on a flat plain, the player can only see so far based on his or her field of view. Further, the player can't see through rocks, buildings, and obstacles. However, because the game logic has access to all the system variables and data structures, it is tempting to use this extra data to help with the AI for the bad guys.

The question is, is this fair to the player? Of course not. So how can we make sure we supply the AI engine of the bad guys and game objects with the same information the player has? We must use simulated sensory inputs such as vision, hearing, vibration, and the like. Each opponent, like the player, has a cone of vision associated with it, and both can only see objects within this cone. The player sees within this cone as a function of the 3D graphics engine; the bad guys see within their cones as a function of their AI programs. Let's be a little more specific about this.

Since we know that we must be fair to the player, we can write a simple algorithm that scans the area in front of each bad guy and determines whether the player is within view. This scanning is similar to the player viewing the viewport or looking out the virtual window.
Of course, we don't need to perform a full three-dimensional scan with ray tracing or the like--we can simply make sure the player is within the view angle of the bad guy in question, using trigonometry or any other technique we wish.

Based on the information obtained from each bad guy's scan, the proper AI decision can be made in a more uniform and fair manner. Of course, we may want to give the computer-controlled AI system some advantage over the human player, to make up for the AI system being rather primitive compared to the 100-billion-cell neural network it is competing against, but you get the idea.

Finally, we might ask, "Can we perform other kinds of sensing?" Yes. We can create simulated light detectors, sound detectors, and so forth. I have been experimenting with an underwater game engine in which, in total darkness, the only way the enemy creatures can "see" you is to listen to your propulsion units. Based on the power level of the player's engines, the game AI determines the sound level the bad guys hear and moves them toward the sound source or sources.

## Memory and Learning

The final topic we're going to touch upon is memory and learning. Memory is easy enough to understand, but learning is a bit more nebulous. Learning, as far as we are concerned, is the ability to interact with an environment in such a way that behaviors that seem to work better than others under certain conditions are "memorized" and used more often. In essence, learning is based on a memory of past actions being good or bad. Imagine that we have written a fairly complex game full of computer-controlled aliens. These aliens use an FSM-based AI engine and environmental sensing. The problem is that one of the resources in the game is energion cubes, and the player and aliens must compete for these cubes.
As the player moves around the environment, he or she can build a mental map of where energion cubes seem to be plentiful, but the alien creatures have no such ability, so they are at a disadvantage. Can we give them a memory and teach them where these energion cubes are? Of course we can; we are cybergods!

One implementation would work as follows: we could use a simple data structure that tracks the number of times an alien found energion in each geographical region of the game. Then, when an alien was power hungry, instead of randomly bouncing around, the alien would refer to this memory structure, select the geographical region with the highest probability of containing energion, and set its trajectory for that region.

The previous example is a simple one, but as we can see, memory and learning are actually very easy to implement. Moreover, we can make the computer AI learn much more than where energion is; it could learn the most common defensive moves of the player and use this information against the player.

Well, that's enough for basic AI techniques. Let's take a quick look at how we can put it all together.

## Building Monsters from the Id

We have quite a repertoire of computer AI tricks at our fingertips, so how should we use it all? Basically, when you write a game and implement the AI, you should list the types of behaviors that each game object or bad guy needs to exhibit. Simple creatures should use deterministic logic, randomness, and patterns. Complex creatures that will interact with the player should use an FSM-based AI engine. And the main game objects that harass and test the player should use an FSM along with sensory inputs and memory.

## The Future

I see AI as the next frontier to explore. Without a doubt, most game programmers have focused so much on graphics that AI hasn't been researched much.
The irony is that researchers have been making leaps and bounds in AI research and Artificial Life, or A-Life.

I'm sure you've heard the common terms "genetic algorithms" and "neural networks." Genetic algorithms are simply a method of representing some aspect of a computer-based AI model with a set of "genes," which can represent whatever we wish--aggressiveness, maximum speed, maximum vision distance, and so on. Then a population of creatures is generated using an algorithm that adds a little randomness to each output creature's genes.

Our game world is then populated with these gene-based creatures. As the creatures interact in the environment, they are killed, survive, and are reborn. The biological analog comes into play during the rebirthing phase. Either manually or by some other means, the computer AI engine "mates" various pairs of creatures and mixes their genes. The resulting offspring then survive another generation, and the process continues. This causes the creatures to evolve so that they are best adapted to the given environment.

Neural networks, on the other hand, are computer abstractions of a collection of brain cells that have firing thresholds. You can enhance or diminish these thresholds and the connections between cells. By "teaching" a neural network--strengthening and weakening these connections--the net can learn something. So we can use these nets to help make decisions and even come up with new methods.
\nAlong the way, we are going to try out a few demos that use a very rudimentary graphics interface to illustrate some of the simpler concepts. However, most of our discussion will be quasi-theoretical and abstract. This is because AI is not as simple as an algorithm, a data structure, or similar things. Artificial intelligence is a fluid concept that must be shaped by the game it is to be used on. Granted, you may use the same fundamental techniques on myriad games, but the form and implementation may be radically different. \nLet\u0027s begin our discussion with some simple statements that define what AI is in the context of games. Artificial intelligence in the arena of computer games implies that the computer-controlled opponents and game objects seem to show some kind of cognitive process when taking actions or reacting to the player\u0027s actions. These actions may be implemented in a million different ways, but the bottom line, from an observers point of view, is that they seem to show intelligence. \nThis brings us to the fundamental definition of intelligence. For our purposes, intelligence is simply the ability to survive and perform tasks in an environment. The tasks may be to hunt down and destroy the player, find food, navigate an asteroid field, or whatever. Nevertheless, this will be our loose definition of intelligence. \nNow that we have an idea of what we are trying to accomplish, where on earth should we begin? We will begin by using humans as our models of intelligence because they seem to be reasonably intelligent for carbon units. If we observe a human in an environment, we can extrapolate a few key behaviors of intelligence that we can model using fairly simple computer algorithms and techniques. \nThese behaviors are blind reflexes, random selection, use of known patterns, environmental analysis, memory-based selections and sequential behaviors that may encompass some or all of the other behaviors. 
We\u0027ll take a look at all of these behaviors and explore how we might implement them in a computer game, but first let\u0027s talk about the graphics module we are going to use for some of the demos.\n The Graphics Module Half the world uses Microsoft C and C++ compilers and the other half uses Borland C and C++ compilers--so it\u0027s always a problem publishing demos that depend on the use of either. Hence, we are going to write C code that is totally compiler independent, based on a graphics interface that we are going to write ourselves, and that will work on both compilers. The graphics interface will be based on graphics mode 13h, which is 320 by 200 pixels with 256 colors. For the simple demos we are going to write, all we want to do is place the VGA/SVGA card in mode 13h and plot single pixels on the screen. Thus we need two functions: Set_Video_Mode(int mode); and Plot_Pixel(int x, int y,unsigned char color); We will use the video BIOS function 10h to set the video mode, but how can we plot pixels? Plotting pixels in mode 13h is very simple because the graphics are fully memory mapped. Basically, mode 13h is a totally linear array of memory that represents each pixel with a single byte. Further, this video memory starts at location A000:000, and consists of 200 rows and 320 columns. Therefore, to compute the address of any pixel at (x,y) we simply multiply the Y component by 320 and add the X. Or in other words: memory offset = y*320+x; Adding this memory offset to A000:0000 gives us the final memory location to access the desired screen pixel. Hence, if we alias a FAR pointer to the video memory like this: unsigned char far* video_buffer = (unsigned char far*)A0000000L; Then we can access the video memory using a syntax like: video_buffer[y*320+x] = color; And that\u0027s it. So, using that information, we can then write a simple pixel-plotting function and graphics-mode function. 
Plot_Pixel() and Set_Graphics_Mode() should be added to each demo so that the demos can perform their graphics without help from a compiler-dependent graphics library. We're also going to add a little time-delay function based on the PC's internal timer. The function is called Time_Delay() and takes a single parameter: the number of clock ticks to wait. Listing 1 shows the complete graphics interface, named GMOD.H, for the demos in this article. Simply include the graphics module with each demo and everything should work fine. Now that we have the software we need to do graphics, let's begin our discussion of AI.

Listing 1. The Graphics Module GMOD.H

```c
// GMOD.H graphics module for demos

unsigned char far *video_buffer = (unsigned char far *)0xA0000000L;

void Plot_Pixel(int x, int y, int color)
{
// plots the pixel in the desired color a little quicker by using binary
// shifts to accomplish the multiplication by 320
video_buffer[((y<<8) + (y<<6)) + x] = (unsigned char)color;
} // end Plot_Pixel

void Set_Graphics_Mode(int mode)
{
// use video interrupt 10h and the C int86 function to set the video mode
union REGS inregs, outregs;

inregs.h.ah = 0;                   // set video mode sub-function
inregs.h.al = (unsigned char)mode; // video mode to change to

int86(0x10, &inregs, &outregs);
} // end Set_Graphics_Mode

void Time_Delay(int clicks)
{
// this function uses the internal timer to wait a specified number of "clicks"
// the actual amount of real time is clicks * (time per click);
// the time per click is usually 1/18th of a second, or about 55ms
long far *clock = (long far *)0x0000046CL, // address of timer
     start_time;                           // starting time

// get current time
start_time = *clock;

// when the current time minus the starting time >= the requested delay,
// the function can exit
while (labs(*clock - start_time) < (long)clicks)
   {}
} // end Time_Delay
```

## Deterministic Algorithms

Deterministic algorithms are the simplest of the AI techniques used in games. These algorithms take a set of variables as input and then use some simple rules to drive the computer-controlled enemies or game objects based on those inputs. We can think of deterministic algorithms as reflexes or very low-level instincts. Activated by some set of conditions in the environment, the algorithm performs the desired behavior relentlessly, without concern for the outcome or for past or future events.

The chase algorithm is a classic example of a deterministic algorithm. It is basically a method of intelligence used to hunt down the player or some other object of interest by applying the spatial coordinates of the computer-controlled object and the object to be tracked. Imagine a game with a top-down view of a battleground, on which three computer-controlled bad guys and one player are fighting. The question is, how can we make the computer-controlled bad guys track and move toward the player? One way is to use the coordinates of the bad guys and the coordinates of the player as inputs to a deterministic algorithm that outputs direction changes or direction vectors for the bad guys in real time.

Let's use bad guy one as the example. He is located at coordinates (bx1,by1) and the player is located at coordinates (px,py). Therefore, a simple algorithm to make the bad guy move toward the player would be:

```c
// process x-coords
if (px > bx1) bx1++;
else if (px < bx1) bx1--;

// process y-coords
if (py > by1) by1++;
else if (py < by1) by1--;
```

That's all there is to it. If we wanted to reverse the logic and make the bad guy run away, we could invert either the conditional tests or the increment operators.
As an example of deterministic logic, Listing 2 is a complete program that makes a little computer-controlled dot chase a player-controlled dot. Use the numeric keypad to control your player, and press ESC to exit the program.

Listing 2. A Demo of Deterministic Logic

```c
// Deterministic chasing algorithm demo
// use the numeric keypad to move the player; press ESC to exit
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <dos.h>

#include "gmod.h" // include our graphics module

int main(void)
{
int px = 160, // starting position of player
    py = 100,
    bx = 0,   // starting position of bad guy
    by = 0,
    done = 0; // exit flag

// set the video mode to 13h
Set_Graphics_Mode(0x13);

// main event loop
while (!done)
   {
   // perform player logic: get input from keyboard
   if (kbhit())
      {
      // which way is the player moving?
      switch (getch())
         {
         case '8': // up
            if ((py -= 2) < 0)    py += 200;
            break;
         case '2': // down
            if ((py += 2) >= 200) py -= 200;
            break;
         case '6': // right
            if ((px += 2) >= 320) px -= 320;
            break;
         case '4': // left
            if ((px -= 2) < 0)    px += 320;
            break;
         case 27:  // ESC: exit
            done = 1;
            break;
         } // end switch
      } // end if

   // perform bad guy logic
   if (px > bx) bx++;
   else if (px < bx) bx--;

   if (py > by) by++;
   else if (py < by) by--;

   // draw player and bad guy
   Plot_Pixel(bx, by, 12);
   Plot_Pixel(px, py, 9);

   // wait a bit
   Time_Delay(1);
   } // end while

// reset graphics back to text mode
Set_Graphics_Mode(0x03);

// return success to DOS
return 0;
} // end main
```

Now let's move on to another typical behavior, which we can categorize as random logic.

## Random Logic

Sometimes an intelligent creature exhibits almost random behaviors.
These random behaviors may be the result of any number of internal processes, but there are two main ones we should touch upon: lack of information and desired randomness.

The first premise is an obvious one. Many times an intelligent creature does not have enough information to make a decision, or may not have any information at all. The creature then simply does the best it can: it selects a random behavior in hopes that it might be the correct one for the situation. For example, let's say you were dropped into a dungeon and presented with four identical doors. Knowing that all but one meant certain death, you would simply have to select one at random!

The second premise that brings on a random selection is intentional randomness. For example, say you are a spy trying to make a getaway after acquiring some secret documents (this happens to me all the time). Now imagine you have been seen, and the bad guys start shooting at you! If you run in a straight line, chances are you are going to get shot. However, if during your escape you make many random direction changes and zigzag a bit, you will get away every time!

What we learn from this example is that random logic is often good for two reasons: it makes it harder for the player to predict what the bad guys will do next, and it helps the bad guys make a selection when there isn't enough information for a deterministic algorithm. Motion control is a typical place to apply random logic in bad-guy AI. You can use a random number or probability to select a new direction for the bad guy as a function of time. Let's envision a multiplayer game with a single, computer-controlled bad guy surrounded by four human players.
This is a great place to apply random motion, using the following logic:

```c
// select a random translation for the X axis
bx1 = bx1 + rand() % 11 - 5;

// select a random translation for the Y axis
by1 = by1 + rand() % 11 - 5;
```

The position of the bad guy is translated by a random amount on both the X and Y axes, in this case +-5 pixels or units.

Of course, we can use random logic for a lot of other things besides direction changes. Starting positions, power levels, and the probability of firing weapons are all good places to apply random logic. It's definitely a good technique that adds a bit of unpredictability to game AI. Listing 3 is a demo of random logic used to control motion. The demo creates an array of flies and uses random logic to move them around. Press any key to exit the demo.

Listing 3. A Bunch of Dumb Flies

```c
// Random logic demo
// moves a flock of flies around; hit any key to exit
#include <stdlib.h>
#include <conio.h>
#include <dos.h>

#include "gmod.h" // include our graphics module

#define NUM_FLIES 64 // start off with 64 flies

typedef struct fly_typ
{
int x, y; // position of fly
} fly;

int main(void)
{
fly flys[NUM_FLIES]; // the array of flies
int index;           // looping variable

// set the video mode to 13h
Set_Graphics_Mode(0x13);

// initialize all flies to random positions
for (index = 0; index < NUM_FLIES; index++)
   {
   flys[index].x = rand() % 320;
   flys[index].y = rand() % 200;
   } // end for index

// main event loop
while (!kbhit())
   {
   // erase flies
   for (index = 0; index < NUM_FLIES; index++)
      Plot_Pixel(flys[index].x, flys[index].y, 0);

   // perform fly logic: translate each fly +-2 pixels
   for (index = 0; index < NUM_FLIES; index++)
      {
      flys[index].x += (-2 + rand() % 5);
      flys[index].y += (-2 + rand() % 5);
      } // end for index

   // draw flies
   for (index = 0; index < NUM_FLIES; index++)
      Plot_Pixel(flys[index].x, flys[index].y, 10);

   // wait a bit
   Time_Delay(2);
   } // end while

// reset graphics back to text mode
Set_Graphics_Mode(0x03);

// return success to DOS
return 0;
} // end main
```

Now let's talk about patterns.

## Encoded List Processing

Many intelligent creatures have prerecorded patterns, or lists of behaviors, that they have either learned from experience or know by instinct. We can think of a pattern as a sequence of steps we perform to accomplish a task. Granted, this sequence may be interrupted if something happens along the way that needs attention. But in general, if we forget about interruptions, we can think of patterns as a list of encoded instructions that an intelligent creature consumes to accomplish some task.

For example, when you drive to work, school, or your girlfriend's or boyfriend's house, you are following a pattern. You get into your car, start it, drive to the destination, stop the car, turn it off, get out, and finally do whatever it is you're going to do. This is a pattern of behavior. Although during the entire experience a billion things may have gone through your head, the observed behavior was actually very simple. Hence, patterns are a good way to implement seemingly complex thought processes in game AI. In fact, many games today still use patterns for much of the game logic.

So how can we implement patterns for game AI? Simply by feeding an input array to a list processor whose output controls a game object or bad guy. In this case, the encoded list has the following set of valid instructions:

- Turn right
- Turn left
- Move forward
- Move backward
- Sit still
- Fire weapon

Even though we have only six selections, we can construct quite a few patterns with a short input list of 16 elements, as in the example. In fact, there are 6^16 different possible patterns, or roughly 2.8 trillion different behaviors. I think that's enough to make something look intelligent! So how can we use encoded lists and patterns in a game for the AI?
One solid way is to use them to control the motion of a bad guy or game object. For example, a deterministic algorithm might decide it's time to make a bad guy perform some complex motion that would be difficult to express with standard conditional logic. Instead, we use a pattern, which simply reads an encoded list directing the bad guy to make some tricky moves. For example, we might have a simple algorithm like this:

```c
int move_x[16] = {-2, 0, 0, 0, 3, 3, 2, 1,  0, -2, -2, -1, 0, 1, 2,  3};
int move_y[16] = { 0, 0, 0, 1, 1, 1, 0, 0, -1, -1,  2,  3, 4, 0, 0, -1};

// encoded pattern logic for a 16-element list
for (index = 0; index < 16; index++)
   {
   bx1 += move_x[index];
   by1 += move_y[index];
   } // end for index
```

You'll notice that this encoded pattern is made up simply of X and Y translations; the pattern could just as well have contained complex records with a multitude of data fields. I've written detailed code demonstrating patterns and list processing: a demo of an ant that can process one of four patterns, selected with the keys 1-4. Unfortunately, it's too long to print here, but you can download it from the Game Developer ftp site.

Now we're starting to get somewhere, but we need an overall control unit with some form of memory that can select the appropriate types of behaviors.

## Finite State Machines

Finite state machines, or FSMs, are abstract models that can be implemented either in hardware or software. FSMs are not really machines in the mechanical sense of the word; rather, they are abstract models with a set of "states" that can be traversed. Within each state, the FSM not only has a particular set of outputs but remembers its state and can transition to another state if and only if a set of inputs or premises is met. Say we have a typical finite state machine with a set of states labeled S0, S1, S2, through Sn, and within each state is a set of outputs.
These outputs can be anything we wish, from motion controls for a game's bad guys to hard disk commands. How do we model an FSM in software and use it to control the game AI? Let's begin with the first question.

We can model an FSM with a single variable plus a set of logical conditions that perform the state transitions, along with the output of each state. For example, let's build a simple software state machine that controls a computer bad guy differently based on the bad guy's distance from the player. The state machine will have the following four states:

- State 0: Select new state (STATE_NEW)
- State 1: Move randomly (STATE_RANDOM)
- State 2: Track player (STATE_TRACK)
- State 3: Use a pattern (STATE_PATTERN)

The FSM's transitions should be set up so that if the bad guy is within 50 units of the player, the bad guy moves into State 2 and simply attacks. If the bad guy is in the range of 51 to 100 units from the player, the bad guy goes into State 3 and moves in a pattern. Finally, if the bad guy is farther than 100 units from the player, then chances are the bad guy can't even see the player (in the imaginary computer universe). In that case, the bad guy moves into State 1, which is random motion.

So how can we implement this simple FSM? All we need is a variable to record the current state and some conditional logic to perform the state transitions and outputs. Listing 4 shows a rough algorithm that will do all this.

Listing 4. The Core FSM Logic

```cpp
typedef unsigned short DISTANCE;

const DISTANCE Tracking_Threshold = 50;
const DISTANCE Random_Threshold   = 100;

DISTANCE theDistance;

// define the states and initialize
enum states {STATE_NEW, STATE_RANDOM, STATE_TRACK, STATE_PATTERN};
states currentState = STATE_NEW;

// FSM loop
for ( ; ; )
   {
   switch (currentState)
      {
      case STATE_NEW:
         // Note: switchbox only, causes no behavior
         theDistance = CalcDistanceToPlayer();

         if (theDistance > Random_Threshold)
            currentState = STATE_RANDOM;
         else if (theDistance > Tracking_Threshold)
            currentState = STATE_PATTERN;
         else
            currentState = STATE_TRACK;
         break;

      case STATE_TRACK:
         DoTrackBehavior();
         currentState = STATE_NEW;
         break;

      case STATE_PATTERN:
         DoPatternBehavior();
         currentState = STATE_NEW;
         break;

      case STATE_RANDOM:
         DoRandomBehavior();
         currentState = STATE_NEW;
         break;

      default:
         cerr << "unknown state" << endl;
         assert(0);
      }
   }
```

Note that S0 (STATE_NEW) does not trigger any behavior on the part of the opponent. Rather, it acts as a state "switchbox," to which all states (except itself) transition. This allows you to localize all the decision making about transitions in a single control block.

Although this requires two cycles through the FSM loop to create one behavior, it's well worth it. In the case of a small FSM, the entire loop can stay in the cache, and in the case of a large FSM loop, the localization of the transition logic will more than pay for the performance penalty. If you absolutely refuse to double-loop, you can handcraft the transitions between states. A finite-state machine diagram will vividly illustrate, in the form of spaghetti transitions, when your transition logic is out of control.
Now that we have an overall thought controller (the FSM), we should discuss simulating sensory excitation in a virtual world.

## Environmental Sensing

One problem that plagues AI game programming is that it can be very unfair, at least to the player. The reason is that the player can only "see" what's on the computer screen, whereas the computer AI system has access to all the variables and data structures the player can't access.

This brings us to the concept of simulated sensory organs for the bad guys and game objects. For example, in a three-dimensional tank game that takes place on a flat plain, the player can only see so far based on his or her field of view. Further, the player can't see through rocks, buildings, and other obstacles. However, because the game logic has access to all the system variables and data structures, it is tempting to use this extra data to help with the AI for the bad guys.

The question is, is this fair to the player? Of course not. So how can we make sure we supply the AI engine of the bad guys and game objects with the same information the player has? We must use simulated sensory inputs such as vision, hearing, vibration, and the like. Each opponent, like the player, has a cone of vision associated with it, and both the bad guys and the player can only see objects within this cone. The player sees within this cone as a function of the 3D graphics engine; the bad guys see within their cones as a function of their AI programs. Let's be a little more specific about this.

Since we know we must be fair to the player, we can write a simple algorithm that scans the area in front of each bad guy and determines whether the player is within view. This scanning is similar to the player viewing the viewport or looking out the virtual window.
Of course, we don't need to perform a full three-dimensional scan with ray tracing or the like; we can simply make sure the player is within the view angle of the bad guy in question, using trigonometry or any other technique we wish.

Based on the information obtained from each bad guy's scan, the proper AI decision can be made in a more uniform and fair manner. Of course, we may want to give the computer-controlled AI system some advantage over the human player, to make up for the AI system being rather primitive compared to the 100-billion-cell neural network it is competing against, but you get the idea.

Finally, we might ask, "Can we perform other kinds of sensing?" Yes. We can create simulated light detectors, sound detectors, and so forth. I have been experimenting with an underwater game engine in which, in total darkness, the only way the enemy creatures can "see" you is to listen to your propulsion units. Based on the power level of the player's engines, the game AI determines the sound level the bad guys hear and moves them toward the sound source or sources.

## Memory and Learning

The final topic we're going to touch upon is memory and learning. Memory is easy enough to understand, but learning is a bit more nebulous. Learning, as far as we are concerned, is the ability to interact with an environment in such a way that behaviors that seem to work better than others under certain conditions are "memorized" and used more often. In essence, learning is based on memories of past actions being good or bad. Imagine we have written a fairly complex game full of computer-controlled aliens. These aliens use an FSM-based AI engine and environmental sensing. The problem is that one of the resources in the game is energion cubes, and the player and aliens must compete for these cubes.
As the player moves around the environment, he or she can build a mental map of where energion cubes seem to be plentiful. The alien creatures have no such ability, so they are at a disadvantage. Can we give them a memory and teach them where these energion cubes are? Of course we can; we are cybergods!

One implementation would work as follows: we use a simple data structure to track the number of times an alien has found energion in each geographical region of the game. Then, when an alien is power hungry, instead of bouncing around randomly, the alien refers to this memory structure, selects the geographical region with the highest probability of holding energion, and sets its trajectory for that region.

This is a simple example, but as we can see, memory and learning are actually very easy to implement. Moreover, we can make the computer AI learn much more than where energion is. It could learn the most common defensive moves of the player and use that information against the player.

Well, that's enough basic AI technique. Let's take a quick look at how to put it all together.

## Building Monsters from the Id

We have quite a repertoire of computer AI tricks at our fingertips, so how should we use it all? Basically, when you write a game and are implementing the AI, you should list the types of behaviors that each game object or bad guy needs to exhibit. Simple creatures should use deterministic logic, randomness, and patterns. Complex creatures that interact with the player should use an FSM-based AI engine. And the main game objects that harass and test the player should use an FSM plus sensory inputs and memory.

## The Future

I see AI as the next frontier to explore. Without a doubt, most game programmers have focused so much on graphics that AI hasn't been researched much.
The irony is that researchers have been making leaps and bounds in AI and Artificial Life, or A-Life.

I'm sure you've heard the terms "genetic algorithms" and "neural networks." Genetic algorithms are a method of representing some aspect of a computer-based AI model with a set of "genes," which can encode whatever we wish: aggressiveness, maximum speed, maximum vision distance, and so on. A population of creatures is then generated by an algorithm that adds a little randomness to each output creature's genes.

Our game world is populated with these gene-based creatures. As the creatures interact in the environment, they are killed, survive, and are reborn. The biological analog comes into play during the rebirth phase: either manually or by some other means, the computer AI engine "mates" various pairs of creatures and mixes their genes. The resulting offspring survive another generation, and the process continues. This causes the creatures to evolve so that they are best adapted to the given environment.

Neural networks, on the other hand, are computer abstractions of collections of brain cells that have firing thresholds. The connections between cells can be strengthened or weakened, and by "teaching" a neural network, that is, by adjusting those connections, the net can learn something.
So we can use these nets to help make decisions and even come up with new methods.