• 12/06/14 06:55 AM

    Generalized Platformer AI Pathfinding

    Artificial Intelligence

    DotStarMoney

    Preamble

    If you're writing a "jump and run" style platformer game, you're probably thinking about adding some AI. This might constitute bad guys, good guys, something the player has to chase after, etc. All too often, a programmer will forgo intelligent AI for ease of implementation and wind up with AI that just gives up when faced with a tricky jump, a nimble player, or some moving scenery. This article presents a technique to direct AI to any arbitrary static location on a map. The path an AI takes may utilize many well-timed jumps or moving scenery pieces, as long as it starts and ends in a stationary location (though this doesn't always have to be true). We'll cover the basic idea and get an implementation up and running; advanced cases, including moving platforms and destructible walls, will be covered in a future article. This technique is used in the game Nomera, at www.dotstarmoney.com or @DotStarMoney on Twitter.
    e3iKSJ7.png
    Before going any further, make sure you can't use a simpler algorithm thanks to constrained level geometry, e.g. when all level collision is done on a grid of squares (as in most 2D games). In such cases you can get solid AI pathing with simpler techniques; this method is primarily for those who want their game AI to move like a human player.

    Getting Ready

    Before we begin, it's good to have a working knowledge of mathematical graphs and graph traversal algorithms. You'll also need to be comfortable with vector math for pre-processing and finding distances along surfaces. This technique applies to levels composed primarily of static pieces with some moving scenery, not levels that are constantly morphing on the fly. It's important to have access to the static level collision data as line segments; this simplifies things, though the technique could easily be extended to support whatever geometric objects you use for collision.

    The Big Idea

    In layman's terms: as a developer, you jump around the level between platforms, and the engine records the inputs you use from the moment you jump or fall off of a platform until the moment you stand on the next one. It counts this as an "edge," saving the recorded inputs. When an AI wants to path through the level, it treats the series of platforms (we'll call them nodes from here on out) as vertices and the recorded edges between them as a graph. The AI then takes a path by alternating between walking along nodes and replaying the recorded input along edges to reach a destination. There are many important distinctions we'll need to make, but for now, just focus on the broad concepts. The technique is a combination of two algorithms: creating the pathing graph, or "creating the data structure AI will use to path through the level," and traversing the pathing graph, or "guiding the enemy through the level given a destination." Obviously the latter requires the former. Creating the pathing graph is summarized as follows:
    1. Load the level static collision data and compute from it a series of nodes.
    2. Load any recorded edges (paths) for the level and add these to their respective start nodes.
    3. Using the enemy collision model and movement parameters, record paths between nodes and add these to the graph.
    4. When exiting the level, export the recorded edges for the level.
    This might not totally make sense right now, but we'll break it down step by step. For now it's good to get the gist of the steps. Now a summary of traversing the pathing graph:
    1. Receive a destination in the form of a destination node and a distance along that node; calculate similar parameters for the source (starting) node.
    2. Compute a path, using any graph traversal algorithm from source to destination where the path is a series of nodes and edges.
    3. Guide the AI across a node to an edge by walking (or running, whatever the AI knows how to do) to reach the correct starting speed of the next edge in the path.
    4. Once the AI has reached the start location of the next edge in the path, to some tolerance in both position and velocity, relinquish automatic control of the AI and begin control through the edge's frame-by-frame recorded input.
    5. When recorded input ends, give control back to the automatic movement for whichever node upon which the AI stands.
    6. Repeat the last three steps until the destination has been reached.
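    The alternating node/edge structure in the steps above can be captured in a minimal C++ sketch (these types are illustrative, not Nomera's actual API):

```cpp
#include <vector>

// A path is stored as the ordered list of recorded edges to take; the
// node walks between them are implicit, because each edge records where
// on its start node it begins. (Illustrative types, not Nomera's API.)
struct EdgeRef {
    int startNode, endNode; // ids of the nodes this edge connects
};

// A valid path chains through shared nodes: we land on a node, walk
// along it, then take the next edge from that same node.
bool isChained(const std::vector<EdgeRef>& path) {
    for (std::size_t i = 1; i < path.size(); ++i)
        if (path[i].startNode != path[i - 1].endNode)
            return false;
    return true;
}
```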
    Getting a feel for it? Let's break down each step in detail.

    Implementing Pathfinding Step by Step

    Creating the Pathing Graph

    The pathing graph is made up of platforms/nodes, and connecting nodes to nodes are recordings/edges. It is important to first write hard definitions for what constitutes a platform, and what constitutes a recording. A node/platform has the following properties:
    • It is a subset of the line segments forming the level geometry.
    • Assuming normal gravity, all segments in the node are oriented such that their first vertex has a strictly smaller x coordinate than their second (reversed for inverted gravity).
    • Each subsequent segment in the node starts where the last segment ended.
    • Each segment in the node is traversable by an AI walking along its surface.
    What does this add up to? The following key idea: a node can be traversed in its entirety by an AI walking along its surface without jumping or falling, and an AI can walk to any point on the node from any other point. Here is a picture of a level's collision geometry:
    gMek452.png
    And here it is after we have extracted all of the nodes (numbered and separately colored for clarity). In my implementation, node extraction is performed when the level is loaded; this way, when a level is built you don't have to go back and mark any surfaces. You'll notice it's basically an extraction of "all the surfaces we could walk on:"
    MGnhyFZ.png NOTE: this image has a small error: 26 and 1 are two different nodes, but as you can see, they should be the same one.
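    To make the node definition concrete, here's a minimal sketch of a node stored as left-to-right segments, with position measured as a horizontal displacement from the node's leftmost point (names like `surfaceYAt` are my own assumptions, not Nomera's):

```cpp
#include <vector>

// A point and a walkable surface segment (a.x < b.x, per the node
// definition above).
struct Vec2 { double x, y; };
struct Segment { Vec2 a, b; };

// A node: connected, walkable segments ordered left to right.
struct Node {
    int id;
    std::vector<Segment> segments;

    double leftX()  const { return segments.front().a.x; }
    double rightX() const { return segments.back().b.x; }

    // Map a horizontal displacement from the node's leftmost point to a
    // surface height, treating the node as a "bent horizontal line".
    double surfaceYAt(double dx) const {
        double x = leftX() + dx;
        for (const Segment& s : segments) {
            if (x >= s.a.x && x <= s.b.x) {
                double t = (x - s.a.x) / (s.b.x - s.a.x);
                return s.a.y + t * (s.b.y - s.a.y);
            }
        }
        return segments.back().b.y; // clamp past the right end
    }
};

// A generic position: a node id plus horizontal displacement along it.
struct Position {
    int nodeId;
    double dx;
};
```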
    Depending on how your level geometry is stored, this step can take a little extra massaging to transform arbitrary line segments into connected nodes. Another important aside: if you have static geometry that would impede travel along a node (like a wall that doesn't quite touch down to the ground), you'll need to split nodes at that barrier. I don't have any in my example, but this will cause major complications down the road if you don't check for it. Once you have the nodes, you've completed the first step in creating the pathing graph.

    We also need to establish how we quantify position. A position, as used in determining sources and destinations for pathfinding, is a node (by number in this case) plus a horizontal displacement along that node from its leftmost point. Why a horizontal displacement instead of an arc length along the node? Suppose an AI collision body is a square or circle walking along a flat surface toward an upward slope. Could its surface ever touch the interior corner point of the slope? Nope. So instead, position is measured as a horizontal displacement, and we can view a node as a "bent horizontal line".

    To complete the second and third steps, we need to clarify what an edge/recording is. An edge has the following properties:
    • An edge has a start position and a destination position on two different nodes (though it could be the same node if you want to create on-platform jump shortcuts!)
    • An edge has a series of recorded frame inputs that, provided to an AI in the edge starting position and starting velocity, will guide the AI to the position specified by the destination position
    A couple of things here: it is absolutely necessary that whatever generated the recorded frame input series had exactly the same collision and movement properties as the AI for which the edge is being created. The big question is: where do the recorded frame inputs come from? You! Here's the trick: in Nomera's engine in developer mode, recording can be turned on so that as soon as the player jumps from or falls off of a node, a new edge is created with its starting position set to the position that was jumped from or fallen off of. From that point, the player's inputs are recorded every frame. When the player lands on a node from the jump or freefall and stays there for a few frames, the recording ends and is added as an edge between the starting node and the current node (with positions, of course). In other words, you're creating snippets of recorded player input such that, if an AI lines up with the starting position, it can relinquish control to these inputs to reach the destination position. Also important: when recording, the player's collision and movement properties should be momentarily switched to the AI's, and the edge marked as only takeable by the AI type whose properties it was recorded with.

    The second step in creating the pathing graph is just loading any edges you had previously made, while the third is the actual recording process; how you do the recording is entirely up to you. Here is a screenshot of Nomera with the edges drawn on the screen. The lines only connect the starting and ending positions and don't trace the path, but they get the technique across:
    9JQtXym.png?1
    In the upper left you can see marks from the in-game edge editor. This allows deletion of any edges you aren't particularly proud of, or don't want the AI to try to take. It also displays the number of frames each input was recorded for. Of course, an edge needs more properties than just the recorded frames and the starting and ending positions. As previously mentioned, the velocity at the start of the edge is critical, as will become more obvious later. It is also beneficial to have easy access to the number of frames the edge takes, as this is useful in finding the shortest path to a destination. At this point, you should have the knowledge to build a pathing graph of platform nodes and the recorded edges connecting them. What's more interesting, though, is how AI navigates using this graph.
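    An edge and the recording lifecycle might be sketched like this (field and method names are illustrative assumptions, not Nomera's actual code):

```cpp
#include <vector>

// One frame of recorded input, matching the controller interface.
struct FrameInput { int left, right, jump; };

// A recorded edge in the pathing graph, mirroring the properties listed
// above. (Illustrative fields, not Nomera's actual API.)
struct Edge {
    int startNode, endNode;      // node ids
    double startX, endX;         // horizontal displacement along each node
    double startVelX, startVelY; // velocity required at the edge start
    std::vector<FrameInput> frames; // per-frame recorded inputs
    int aiTypeId; // only the AI type it was recorded with may take it

    int frameLength() const { return (int)frames.size(); }
};

// Recording lifecycle: begin when the player leaves a node...
struct EdgeRecorder {
    Edge current;
    bool recording = false;

    void beginRecording(int node, double x, double vx, double vy, int aiType) {
        current = Edge{node, -1, x, 0.0, vx, vy, {}, aiType};
        recording = true;
    }
    void recordFrame(FrameInput in) {
        if (recording) current.frames.push_back(in);
    }
    // ...and finish once the player has rested on a node for a few frames.
    Edge endRecording(int landedNode, double landedX) {
        current.endNode = landedNode;
        current.endX = landedX;
        recording = false;
        return current;
    }
};
```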

    Traversing the Pathing Graph

    Before we dive into how we use the pathing graph, a word on implementation. Since we're essentially recording AI actions across paths, it's a good idea to have your AIs controlled through a similar interface as the player. Let's say you have a player class that looks something like this:

[code]
class Player {
public:
    // ...
    void setInputs(int left, int right, int jump);
    // ...
private:
    // ...
};
[/code]

Where "left", "right", and "jump" come from the keyboard. First of all, these are the values you record per frame during edge recording. Second of all, since the AI will also need a "setInputs" control interface, why not write a real interface? Then it becomes considerably more modular:

[code]
enum PC_ControlMode { MANUAL, RECORDED };

class PlatformController {
public:
    // ...
    void setManualInput(int left, int right, int jump);
    void bindRecordedInput(RecordedFrames newRecord);
    int getLeft();
    int getRight();
    int getJump();
    void step(double timestep);
    // ...
protected:
    PC_ControlMode controlMode;
    RecordedFrames curRecord;
    void setInputs(int left, int right, int jump);
    // ...
};

class Player : public PlatformController {
    // ...
};

class AI : public PlatformController {
    // ...
};
[/code]

Now both the AI and player classes are controlled through an interface that can switch between manual and recorded control. This setup is also convenient for pre-recorded cut scenes where the player loses control. Okay, so we want black-box style methods in our AI controller like:

[code]
void createPath(positionType destination);
void step(double timestep);
[/code]

Where the former sets up a path between the current position and the destination position, and the latter feeds inputs to [tt]setInputs()[/tt] to take the AI to the destination. In our step-by-step outline, [tt]createPath[/tt] covers the first two steps and [tt]step[/tt] the last three. So let's look at creating the path.
A path consists of an ordered sequence of alternating nodes and edges, starting with an edge and ending with the final edge that takes us to the destination node. We first need to be able to identify our current position, whether in the air or resting on a node. When we're on a node, we need a reference to that node and our horizontal position along it (our generic position, remember?).

To build the path, we use a graph traversal algorithm; in my implementation, I used Dijkstra's algorithm. For each node we store, we also store the position we'd wind up in given the edge we took to get there (we'll call this [tt]edgeStartNodeCurrentPositionX[/tt] for posterity's sake). Edge weights are then computed for a given edge like so:

[code]
edgeFrameLength = number of frames in the edge recording
walkToEdgeDist  = abs(edgeStartX - edgeStartNodeCurrentPositionX)
edgeWeight = edgeFrameLength * TIMESTEP + walkToEdgeDist / HORIZONTAL_WALKING_SPEED

if (edgeDestinationNode == destinationPositionNode) {
    edgeWeight += abs(edgeEndX - destinationPositionX) / HORIZONTAL_WALKING_SPEED
}
[/code]

As you can see, the final edge weight is in seconds: the time taken by the recording plus the time taken to walk to the start of the edge. This calculation isn't exact, and would differ if sprinting were part of enemy movement. We also check whether the edge ends on the destination node, and if so, add the walking time from the edge end position to the destination position. If we can calculate edge weights, we can run Dijkstra's! (Or any other graph traversal algorithm; A* works here with a "Euclidean distance to the destination" heuristic.) At this point, you should have a path!

We're almost there, and to cover the remaining four steps of the outline, there's not a lot to do. Basically, we have two procedures that we switch between depending on whether we're standing on a node or being controlled by an edge recording.
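Here's a compact sketch of the search itself, combining Dijkstra's algorithm with the weight formula above. The graph layout and the `timestep`/`walkSpeed` parameters are assumptions on my part, not Nomera's actual code:

```cpp
#include <vector>
#include <queue>
#include <cmath>
#include <limits>
#include <algorithm>
#include <utility>

struct PathEdge {
    int startNode, endNode;
    double startX, endX;
    int frameLength;
};

// Weight of taking `e` when standing at `currentX` on e.startNode,
// heading for (destNode, destX). Matches the formula above.
double edgeWeight(const PathEdge& e, double currentX,
                  int destNode, double destX,
                  double timestep, double walkSpeed) {
    double w = e.frameLength * timestep
             + std::fabs(e.startX - currentX) / walkSpeed;
    if (e.endNode == destNode)
        w += std::fabs(e.endX - destX) / walkSpeed;
    return w;
}

// Dijkstra over nodes; edgesAt[n] holds indices into allEdges for the
// recorded edges leaving node n. Returns the edge indices of the
// cheapest path, or an empty vector if the destination is unreachable.
std::vector<int> findPath(const std::vector<std::vector<int>>& edgesAt,
                          const std::vector<PathEdge>& allEdges,
                          int srcNode, double srcX,
                          int destNode, double destX,
                          double timestep, double walkSpeed) {
    const int n = (int)edgesAt.size();
    std::vector<double> dist(n, std::numeric_limits<double>::infinity());
    std::vector<double> arriveX(n, 0.0); // x we'd stand at after arriving
    std::vector<int> via(n, -1);         // edge used to reach each node
    using QItem = std::pair<double, int>;
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> q;
    dist[srcNode] = 0.0;
    arriveX[srcNode] = srcX;
    q.push({0.0, srcNode});
    while (!q.empty()) {
        auto [d, node] = q.top();
        q.pop();
        if (d > dist[node]) continue;  // stale queue entry
        if (node == destNode) break;
        for (int ei : edgesAt[node]) {
            const PathEdge& e = allEdges[ei];
            double w = edgeWeight(e, arriveX[node], destNode, destX,
                                  timestep, walkSpeed);
            if (d + w < dist[e.endNode]) {
                dist[e.endNode] = d + w;
                arriveX[e.endNode] = e.endX;
                via[e.endNode] = ei;
                q.push({d + w, e.endNode});
            }
        }
    }
    std::vector<int> path;
    if (via[destNode] < 0 && srcNode != destNode) return path; // unreachable
    for (int node = destNode; node != srcNode;
         node = allEdges[via[node]].startNode)
        path.push_back(via[node]);
    std::reverse(path.begin(), path.end());
    return path;
}
```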
If we're on a node, we walk from our current position toward the start of the edge we have to take next. I mentioned previously that we also need to know the starting velocity of recorded edges. This is because, more often than not, your AI has some acceleration or deceleration when starting or stopping from walking, and one of those transitional speeds may have been the AI's speed at the moment the target edge began. Because of this, when walking toward the edge start location, we might have to slow down or back up a bit to take a running/walking start.

Once we reach the start position of the edge we're going to take, our position more than likely won't match the edge start position exactly; in my implementation it was rarely off by more than half a pixel. What matters is that we reach the edge start position within some tolerance, and once we do, we snap the AI's position and velocity to those of the edge start. Now we're ready to relinquish control to the edge recording.

If we're on an edge, each frame we simply adopt the controls provided by the edge recording and advance the index of the recorded frame we read. That's it! Eventually the recording finishes, and if it was frame-perfect, the AI lands on the next node and the node controls take over.
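The two procedures can be condensed into a small per-frame control sketch. This is a simplified 1D model; the half-pixel tolerance comes from my implementation above, but everything else (names, the input convention) is illustrative:

```cpp
#include <cmath>

// Per-frame control: walk toward the next edge's start, snap within
// tolerance, then hand control to the recording. (Illustrative names.)
enum class AIMode { ON_NODE, ON_EDGE };

struct AIWalker {
    AIMode mode = AIMode::ON_NODE;
    double x = 0, vx = 0;   // simplified 1D state along the node
    double tolerance = 0.5; // roughly half a pixel
    int frameIndex = 0;     // next recorded frame to play back

    // Returns this frame's input: -1 walk left, +1 walk right, 0 hold
    // still. Once in ON_EDGE mode, playback would instead read the
    // recorded inputs at frameIndex.
    int stepOnNode(double edgeStartX, double edgeStartVX) {
        if (std::fabs(x - edgeStartX) <= tolerance &&
            std::fabs(vx - edgeStartVX) <= tolerance) {
            // Snap to the recorded start state and begin playback.
            x = edgeStartX;
            vx = edgeStartVX;
            mode = AIMode::ON_EDGE;
            frameIndex = 0;
            return 0;
        }
        return (edgeStartX > x) ? 1 : -1;
    }
};
```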

    Some Odds and Ends

    There are a few things you can do to tune this technique for your game. It's highly recommended that you add an in-game interface for recording and deleting paths to help you easily build level pathing: setting up a level's pathing in Nomera takes about ten minutes, and it's pretty fun, too. It's also convenient to have nodes extracted automatically; while you technically could mark them yourself, automatic extraction makes the workflow vastly easier. For fast retrieval of node parameters, Nomera stores all of the nodes in a hash table and the edges in per-node lists. For easy display, edges are also stored in a master list to show their source/destination lines on the screen. If you didn't notice already, static interactive pieces like ladders or ropes that aren't collidable objects are handled automatically by this technique: if climbing a ladder requires pressing "up", and that "up" press is recorded, an AI using an interface like the one proposed above will register the input and get to climbing.

    Wrap Up

    We've looked at a way to guide AI around a platforming level that works regardless of collision geometry and lets AI use the full potential of their platformer controls. First we generate a pathing graph for a level, then we build a path from the graph, and finally we guide an AI across that path. So does it work? Sure it does! Here's a gif:
    Ynhun7J.gif These guys were set to "hug mode." They're trying to climb into my skin wherever I go.
    If you have any questions or suggestions, please shoot me an email at chris@dotstarmoney.com. Thanks for reading!

    Update Log

    [b]27 Nov 2014[/b]: Initial draft.
    [b]4 Dec 2014[/b]: Removed a line unrelated to article content.

