
# Faking It: Platform Game AI


Today's blog post starts talking about game AI for my untitled in-progress game, made in Game Maker. I have come to the point in my game where I need to start making enemies that behave somewhat more intelligently than a sack of potatoes.

To start with, I realized that there is a commonality between my player and the enemies when it comes to interacting with platform elements. Both the player and the enemies need to respond to gravity and have similar interactions with platforms in the game. Furthermore, all enemies, regardless of type, will also need this behavior. The approach in some languages might be to create an "entity" class and have both the player and all entities inherit from this base class. I decided, though, that this might be a bit of overkill (after all, while enemies and the player are similar, they are also very different), so instead I am opting to create a set of common routines for checking platform collisions, shared between the player and enemies. I am also creating a base enemy class which all enemies will use as a parent (Game Maker uses the term Parent, but this is the same concept as inheritance in other languages).

Here is check_platform_collision:

```
{
    // argument0 = the instance to test, argument1 = its y position,
    // argument2 = vertical offset below the sprite to probe
    test_obj = argument0;
    return collision_rectangle(
        test_obj.x + test_obj.sprite_width/4,
        argument1 + test_obj.sprite_height + (argument2 - 1),
        test_obj.x + (test_obj.sprite_width * 0.75),
        argument1 + test_obj.sprite_height + argument2,
        platform_base, false, true);
}
```

and check_platform:

```
{
    test_obj = argument0;
    temp_y = test_obj.y;
    col_id = check_platform_collision(test_obj, temp_y, 1);
    if (col_id >= 0 && test_obj.vy > 0)
    {
        // Falling onto a platform: stop, then nudge up out of it
        test_obj.vy = 0;
        while (check_platform_collision(test_obj, temp_y, -1) >= 0)
        {
            temp_y = temp_y - 1;
        }
        test_obj.jump = 0;
    }
    return temp_y;
}
```

Basically, the code returns the new y coordinate of the entity after the collision checking is done. I am still working on a common routine for ladder collisions, which is currently duplicated and looks like this:

```
col2_id = collision_rectangle(x, y, x + sprite_width, y + 1, ladder_bottom_obj, false, true);
if (col2_id >= 0 && climb)
{
    climb = false;
    ypos = ypos - 2;
}
```

So we currently have a base enemy "object" and an enemy type that inherits from that base, called gerbil. In my game the enemies are going to be gerbils with top hats... Currently, though, I have an ugly stand-in programmer graphic of a box.

Alright, so I have set the stage for how I am creating a base enemy to start programming my AI with. I have a common set of routines shared between my hero and enemies so I don't repeat code, and I am inheriting from my enemy base class for all enemy types.

But what AI do I use??

The heart of the question now is HOW do I add enemy AI. For my player, the logic is that I respond to player input and adjust variables (in my case, vector velocity components... fancy math speak for saying I take my velocity vector, represented by the vx and vy variables, perform Euler integration by adding acceleration to my velocity and then adding my velocity to my position... or, even simpler, I made a platform engine type thingy).
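The update described above (add acceleration to velocity, then velocity to position) can be sketched in a few lines. This is a language-agnostic illustration in Python, not the game's actual code; the vx/vy names just mirror the variables mentioned above, and the gravity constant is a made-up value:

```python
# Semi-implicit Euler integration for a platformer entity:
# acceleration -> velocity, then velocity -> position, once per step.

GRAVITY = 0.5  # assumed per-step gravity, tune to taste

def euler_step(x, y, vx, vy, ax=0.0, ay=GRAVITY):
    """One physics step for an entity at (x, y) with velocity (vx, vy)."""
    vx += ax
    vy += ay
    x += vx
    y += vy
    return x, y, vx, vy

# Starting with horizontal speed 2 and no vertical speed, one step
# moves the entity right and starts it falling.
x, y, vx, vy = euler_step(0.0, 0.0, 2.0, 0.0)
# x == 2.0, y == 0.5, vx == 2.0, vy == 0.5
```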

If I were creating a networked game, I could create custom events that are triggered instead of direct user interaction, and then create a queue of "actions", much like you would in a time management game... My enemies could do the same: I could call events that move the character instead of responding to input! But then I am left with the problem of when to call these events. Is it worth doing this, or is direct manipulation of the variables a more valid approach? Both have advantages, and the designer is left to answer such questions. I am going to experiment with Game Maker's user-defined events: my AI scheme will call these events to cause the enemy to move. The logic then "controls" the enemy in much the same way the player character is controlled by the player.
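The "queue of actions" idea above can be sketched roughly as follows. This is a minimal Python illustration of the pattern, not Game Maker code, and all the names (Enemy, queue_action, the action strings) are hypothetical:

```python
from collections import deque

# Instead of reading input directly, the AI (or a network peer) pushes
# named actions onto a queue, and the enemy's step event pops and
# applies them -- the same code path a player-driven character could use.

class Enemy:
    def __init__(self):
        self.vx = 0
        self.actions = deque()

    def queue_action(self, name):
        self.actions.append(name)

    def step(self):
        # Apply at most one queued action per step.
        if self.actions:
            action = self.actions.popleft()
            if action == "move_left":
                self.vx = -2
            elif action == "move_right":
                self.vx = 2
            elif action == "stop":
                self.vx = 0

e = Enemy()
e.queue_action("move_right")
e.step()   # vx is now 2
e.queue_action("stop")
e.step()   # vx is now 0
```

The nice property is that the mover doesn't care who filled the queue: keyboard handler, AI scheduler, or network message.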

Neural Networks and Genetic Algorithms!

So we need to do AI, and anyone who has a casual interest in the subject but no real experience actually making a game might be inclined to say something along the lines of "I will use a neural network, fuzzy logic, and genetic algorithms for my game!", which is really a pointless combination of buzzwords designed to make anyone who says it seem somewhat intelligent.

The problem with this approach, beyond being utterly complicated, is that it will not yield a fun and playable AI. The way most, if not all, game AI is really done is with finite state machines, which sound like some really complex and abstract concept! Let's quickly create a state class and a bunch of other fun states to inherit from it, then use a dynamic array to manage all of our fun states! Then let's create a state transition table and do all sorts of fun stuff while we try to figure out how we actually "use" this mess we just created!

Or... perhaps not. In my mind, the main approach to take when creating an AI in this situation is to layer behaviors. What I mean is that I am going to create a combination of behaviors and responses to conditions that the enemy can encounter, and by crafting enough responses to enough conditions I will eventually arrive at something that looks intelligent. This can be comfortably done using various if/else statements, and there is no real reason, with proper abstraction, to do more unless you're creating a massive project.
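Layering behaviors with plain if/else can look something like this sketch (Python for clarity; every field name, threshold, and behavior string here is invented, not taken from the game). Each later check can override the earlier default, which is what makes the layering work:

```python
# A minimal "layered behavior" sketch: start from a default, then let
# each higher-priority condition override the choice.

def choose_behavior(enemy, player):
    # Default layer: patrol back and forth.
    behavior = "patrol"
    # Layer 2: react if the player is within sight range.
    if abs(player["x"] - enemy["x"]) < enemy["sight_range"]:
        behavior = "chase"
    # Layer 3: survival overrides everything else.
    if enemy["hp"] < enemy["max_hp"] * 0.2:
        behavior = "flee"
    return behavior

gerbil = {"x": 0, "sight_range": 100, "hp": 50, "max_hp": 50}
print(choose_behavior(gerbil, {"x": 300}))  # patrol: player out of range
print(choose_behavior(gerbil, {"x": 40}))   # chase: player in sight
```

Adding a new response to a new condition is just one more branch, which is the "crafting enough responses" part of the approach.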

I can also choose to perform certain behaviors at random intervals... but probably not COMPLETELY random intervals. When it comes to such behaviors, everything is best talked about in terms of weighted probabilities. For instance, the probability that my next action is continuing to type this blog post is higher than the probability that my next action is breaking out in an elaborate dance routine. To implement this, we can use a random number generator that gives equal weight to each numerical outcome, and then check which range of values the result falls into when deciding which state to use.
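The range-checking trick reads like this in a quick Python sketch (the state names and the 60/30/10 weights are made-up examples):

```python
import random

# Weighted state selection: draw one uniform roll, then map ranges of
# that roll to states. The width of each range is the state's weight.

def pick_state(roll):
    """roll is uniform in [0, 1); the ranges below encode the weights."""
    if roll < 0.6:        # 60% chance
        return "wander"
    elif roll < 0.9:      # 30% chance
        return "idle"
    else:                 # 10% chance
        return "dance"

state = pick_state(random.random())
```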

State in this case is not really referring to an object (although I am sure people will disagree; we must make simplifications sometimes) but to an abstract concept... the state is the current values of all variables in the enemy object, and to transition states is to change one or more variables to cause some new action (changing our x velocity to go right instead of left, or zeroing out the velocity to stop). State transitions are the if statements that govern which enemy states to use.

The rules for transitioning from one state to another are either going to be based on random events or on met conditions (such as the enemy being in range of the player).

I am also going to introduce some variables to the enemy to represent certain physical properties. For instance, I could have a variable indicating the enemy's stamina. If the stamina gets too low, then the enemy will go into a resting state. Once the stamina is regained, we can choose to perform a weighted random state transition, OR, if the player is close, we can attack, etc.
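Putting the stamina idea together with the weighted transitions might look like this. Again this is only a sketch in Python; all the field names, the distance threshold, and the 70/30 split are invented for illustration:

```python
import random

# Stamina-driven state choice: rest when exhausted, keep resting until
# fully recovered, then either attack a nearby player or take a
# weighted random transition.

def next_state(enemy, player_distance, roll):
    """Pick the next state from stamina, player distance, and a
    uniform roll in [0, 1)."""
    if enemy["stamina"] <= 0:
        return "rest"
    if enemy["state"] == "rest" and enemy["stamina"] < enemy["max_stamina"]:
        return "rest"  # keep resting until fully recovered
    if player_distance < 32:
        return "attack"
    return "wander" if roll < 0.7 else "idle"

gerbil = {"state": "wander", "stamina": 0, "max_stamina": 100}
print(next_state(gerbil, 200, random.random()))  # rest: out of stamina
```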

The next blog post I write is going to talk about how I go about implementing these concepts in my game. I am currently working on routines to check whether an enemy is about to leave a platform, whether an enemy is over a ladder, and other routines to gather information about the game level. This will also allow me to implement a form of steering behaviors for my enemies.


## 1 Comment

Nice post. It's always nice to read about a more programmatic game approach to AI than an attempt to model the brain. It's a far more fun and understandable view on intelligence.

Seeing how you define your rules and inputs, I immediately start to think about a rule engine (such as JBoss Drools) as a basis for simplified and specific artificial intelligence. Would be cool if you could tell the rule engine the state and let it decide what has to happen based on the state and a set of (configurable) rules. Hmm...
