Search the Community

Showing results for tags 'Pathfinding'.

Found 8 results

  1. Awoken

    Node Loops

Hello GameDev, during my efforts to complete my latest robust challenge I discovered a way, new to me at least, to speed up processing and get much faster path-finding results.

About the technique: I created continuous node loops, and every node carries information about the loop it is a member of. Each node has the following object:

    {
      id: 4,
      loop: 10,
      loopSize: 8,
      loopStart: 0,
      loopEnd: 7
    }

and the following functions:

    left( x ){
      if( this.id - x < this.loopStart ){
        return ( this.id - x + this.loopSize );
      } else {
        return ( this.id - x );
      }
    }

    right( x ){
      if( this.id + x > this.loopEnd ){
        return ( this.id + x - this.loopSize );
      } else {
        return ( this.id + x );
      }
    }

    getDirection( n , M ){
      var m = M || this.id;
      if( n - m > 0 ){
        if( Math.abs( n - m - this.loopSize ) < ( n - m ) ){
          return true;   // to IDa
        } else {
          return false;  // to IDb
        }
      } else {
        if( ( n - m + this.loopSize ) < Math.abs( n - m ) ){
          return false;  // to IDb
        } else {
          return true;   // to IDa
        }
      }
    }

    getNodesBetween( n , M ){
      var m = M || this.id;
      if( n - m > 0 ){
        if( Math.abs( n - m - this.loopSize ) < ( n - m ) ){
          return Math.abs( n - m - this.loopSize );  // to IDa
        } else {
          return ( n - m );                          // to IDb
        }
      } else {
        if( ( n - m + this.loopSize ) < Math.abs( n - m ) ){
          return ( n - m + this.loopSize );          // to IDb
        } else {
          return Math.abs( n - m );                  // to IDa
        }
      }
    }

With information about its parent loop accessible to each node, informed choices can be made about which nodes should be tested on each subsequent iteration. The following animated .gif visually demonstrates the logic:

1: Select the origin and destination points.
2: Check for collisions between them and determine node pairs (node pairs are nodes that are part of the same loop).
3: Determine the smallest number of nodes between the node pairs (are there fewer nodes if we travel left, or right?).
4: Check for collisions between the origin node and each incremental node of the loop (stop once a collision is found).
5: Use the last successful node tested as the new origin point.
6: Check for collisions between the destination node and each incremental node of the loop (stop once a collision is found).
7: Use the last successful node tested as the new destination point.
8: Repeat.

The Good: it is much faster than the blind search method I was using before, especially if both directions are searched when an obstacle is encountered. If only one node loop is encountered it will return the shortest path, with the addition of a refinement function.

The Bad: it doesn't return the shortest path in general. For that it has to be combined with other techniques, and two new 'branches' of the function would be required for each unique node loop encountered.
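To make the loop bookkeeping a bit more concrete, here is a minimal usage sketch of steps 3 through 5 above. It is not taken from the entry's own code: the walkLoop function, the hasCollision callback, and the assumption that a true result from getDirection corresponds to stepping with right() are all hypothetical.

    // Hypothetical sketch: march around a loop from an origin node toward a
    // destination node on the same loop, stopping at the first collision.
    // 'origin' and 'destination' are node objects with the functions shown above;
    // 'hasCollision( fromID, toID )' is an assumed collision test between two nodes.
    function walkLoop( origin, destination, hasCollision ){
      var clockwise = origin.getDirection( destination.id );  // assumed: true means use right()
      var steps = origin.getNodesBetween( destination.id );   // fewest nodes separating the pair
      var lastClear = origin.id;
      for( var i = 1; i <= steps; i++ ){
        var nextID = clockwise ? origin.right( i ) : origin.left( i );
        if( hasCollision( lastClear, nextID ) ){ break; }     // stop once a collision is found
        lastClear = nextID;
      }
      return lastClear;                                       // becomes the new origin (step 5)
    }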
  2. Introduction

In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate on a procedurally generated terrain. Until our last agile sprint, only the A* and string pulling algorithms were used. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe how I achieved a simple way for agents to not walk into walls.

Configuration

• A 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
• Navigation cells and their: polygonal edges, connections (other cells), shared edges (the line segment shared between two connected cells), centroids and normals.
• An A* and string pulled (not tested without string pulling) generated path consisting of waypoints on the navigation mesh.

The Algorithm

The agent is the pink low-poly humanoid and the final destination is the flag. The ideal (yet unoptimized) algorithm would be to cast an oriented rectangle between each pair of consecutive waypoints, where its width is two times the agent's radius. Think of the agent's center position being in the middle of that width. However, this algorithm is too complicated and too long to develop for our game for too little gain, so I thought about another algorithm, which has its drawbacks but is more suited to our game. Psst, check this article if you haven't seen it yet. The algorithm is the following:

1. For each waypoint, pick the current one and the next one, until the next one is the last.
2. Iterate over the edges of the current navigation cell, which is the cell the agent's 3D position falls in. To do that, you need a spatial, optimized way to determine the closest cell to a 3D point. Our way is to first use an octree to partition the navigation mesh, then get all the cells that are close to the point plus an extra buffer. To find the cell closest to the point, project the position onto each candidate cell; this can be done using their centroids and normals. Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the smallest one. (A small sketch of this query appears at the end of this entry.)
3. Convert each edge to a 2D line by discarding the Y component (the UP vector).
4. For each side, left and right, which are defined by the agent's position and its direction towards the next waypoint, cast a 2D line that starts from the agent's position, goes towards one of the two directions perpendicular to the direction of the next waypoint, and has a length equal to the defined radius.
5. If there's an intersection on a connection and it's on the shared part of the connection, continue with the connected cell's edges.
6. If there are any intersections other than #5, create a new waypoint before the next waypoint. This new waypoint's position is the intersection's position translated by a length of two times the radius, projected along the agent's current direction as a 2D line. The same translation is applied to the next waypoint.
7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going in the same direction as the agent, with a length equal to the distance between the current and next waypoint.
8. Check for #5. If there is an intersection on a connection and it's on the unshared part of the connection, then do #6 (unconditionally). If there's an intersection on a simple edge, translate the next waypoint as described in #6.

Here's a video of the algorithm in action:
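Below is a minimal sketch of the closest-cell query from step 2. It is illustrative only: the function and helper names (closestCell, snapToCell, the plain { x, y, z } vectors) are hypothetical, the octree query that produces candidateCells is assumed to exist elsewhere, and unit-length cell normals are assumed.

    // Basic vector helpers for the sketch.
    function subtract( a, b ){ return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
    function scale( v, s ){ return { x: v.x * s, y: v.y * s, z: v.z * s }; }
    function dot( a, b ){ return a.x * b.x + a.y * b.y + a.z * b.z; }
    function length( v ){ return Math.sqrt( dot( v, v ) ); }

    // Hypothetical sketch: given candidate cells from an octree query, project the
    // position onto each cell's plane (centroid + normal), snap the projection onto
    // the cell polygon, and keep the cell with the smallest offset.
    function closestCell( position, candidateCells, snapToCell ){
      var best = null;
      var bestDistance = Infinity;
      for( var i = 0; i < candidateCells.length; i++ ){
        var cell = candidateCells[ i ];
        var d = dot( subtract( position, cell.centroid ), cell.normal );  // distance to the cell's plane
        var projected = subtract( position, scale( cell.normal, d ) );    // point projected onto the plane
        projected = snapToCell( projected, cell );                        // assumed helper: clamp onto the polygon
        var offset = length( subtract( position, projected ) );
        if( offset < bestDistance ){
          bestDistance = offset;
          best = cell;
        }
      }
      return best;
    }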
  3. Awoken

    More Adventures in Robust Coding

Hello GameDev, this entry is going to be a big one for me, and it's going to cover a lot. What I plan to cover of my recent development journey is the following:

1 - Goal of this blog entry.
2 - Lessons learned using Node.js for development and testing as opposed to the Chrome console.
3 - Linear path algorithm for any surface.
4 - Dynamic path finding using nodes for any surface, incorporating user-created, dynamic assets.
5 - Short term goals for the game.

-- - -- - -- - -- - -- - -- - Goal of this Blog entry - -- - -- - -- - -- - -- - --

My goal for this adventure is to create a dynamic path-finding algorithm so that:
- any AI that is to be moved will be able to compute the shortest path between any two points on the surface of the globe;
- the AI will navigate around bodies of water, vegetation, and dynamic user assets such as buildings and walls;
- the path will be computed in less than 250 milliseconds.

There are a few restrictions the AI will have to follow. In the image above you can see land masses that are cut off from one another via rivers and bodies of water, each uniquely colored. If an AI is on a land mass of one color, for now, it will only be able to move to a location on the same colored land mass. However, there are some land masses that take up around 50% of the globe and have very intricate river systems. So the intended goal is to be able to have an AI at one end of a larger land mass find the shortest path to the opposite end within 250 milliseconds. Currently my path finding algorithm can find the shortest path in anywhere from 10 ms and up, and when I say up, I mean upwards of 30 seconds, and that's because of the way I built the algorithm, which is in the process of being optimised.

-- - -- - -- - -- - -- - -- - Lessons learned using Node.js for development and testing - -- - -- - -- - -- - -- - --

As of this writing I am using Node.js to test the efficiency of my algorithms. This has slowed down my development. I am not a programmer by trade; I've taught myself the bulk of what I know, and I often spend my time re-inventing the wheel and learning things the hard way. Last year I made the decision to move my project over to Node.js for continued development, since eventually it all had to be ported over to Node.js anyway. In hindsight I would have done things differently: I would have continued to use the Chrome console for testing and development, small scale, and only after the code was proven to be robust would I port it over to Node.js. If there is one lesson I'd like to pass on to aspiring and new programmers, it's this: use a language and development environment that allows you, the programmer, to jump into the code while it's running and follow each iteration, line by line, as it's being executed; basically, debugging. It is so easy to catch errors in logic that way. Right now I'm throwing darts at a dart board, guessing what I should be sending to the console for feedback to help me learn more about logical errors in Node.js (see: learning the hard way).

-- - -- - -- - -- - -- - -- - Linear Path algorithm for any surface - -- - -- - -- - -- - -- - --

In the blog entry above I go into detail explaining how I create a world. The important thing to take away from it is that every face of the world has information about all surrounding faces sharing vertex pairs. In addition, every vertex has information about the faces that use it in their draw order, and about all vertices adjacent to it.
An example vertex and face object would look like the following:

    Vertices[ 566 ] = {
      ID: 566,
      x: -9.101827364,
      y: 6.112948791,
      z: 0.192387718,
      connectedFaceIDs: [ 90 , 93 , 94 , 1014 , 1015 , 1016 ],  // clockwise order
      adjacentVertices: [ 64 , 65 , 567 , 568 , 299 , 298 ]      // clockwise order
    }

    Face[ 0 ] = {
      ID: 0,
      a: 0,
      b: 14150,
      c: 14149,
      sharedEdgeVertices: [ { a: 14150 , b: 14149 } , { a: 0 , b: 14150 } , { a: 14149 , b: 0 } ],  // named 'cv' in previous blog post
      sharedEdgeFaceIDs: [ 1 , 645 , 646 ],  // named 's' in previous blog post
      drawOrder: [ 1 , 0 , 2 ],              // named 'l' in previous blog post
    }

It turns out the algorithm is speedy for generating shapes of large sizes. My buddy, who is a Solutions Architect, told me I'm a one trick pony, HA! Anyway, this data comes in handy because if I want to identify a linear path along the faces of a surface, marked as a white line in the picture above, I can reduce the number of faces to be tested during raycasting to the number of faces the path travels across * 2.

To illustrate, imagine taking a triangular pizza slice made of two faces, back to back. The tip of the pizza slice touches the center of the shape you want to find a linear path along, and the two outer points of the slice protrude out from the surface of the shape some distance so as to entirely clear the shape. When I select my starting and ending points for the linear path I also retrieve the face information those points fall on, respectively. Then I raycast between the sharedEdgeVertices, targeting the pizza slice. If, say, a hit happens along sharedEdgeVertices[ 2 ], then I know the next face to test for the subsequent raycast is face ID 646. I also know that since the pizza slice comes in at sharedEdgeVertices[ 2 ], it's most likely going out at sharedEdgeVertices[ 1 ] or [ 0 ]. If not [ 1 ], then I know it's 99% likely going to be [ 0 ], and vice-versa. Being able to identify a linear path along any surface was the subject of my first Adventure in Robust Coding. Of course there are exceptions that need to be accounted for, such as when the pizza slice straddles the edge of a face, or when the pizza slice exits a face at a vertex.

Sometimes, though, when I'm dealing with distances along the surface of a given shape where the pizza slice needs to be made up of more than one set of back-to-back faces, another problem can arise: I learned about the limitations of floating point numbers too, or at least that's what it appears to be to me. I'm sure most of you are familiar with some variation of the infinite chocolate bar puzzle. With floating point numbers I learned that you can have two faces share two vertices along one edge, raycast at a point that is directly between the edges of the two connecting faces, and occasionally the raycast will miss hitting either of the two faces. I attribute this in large part to the fact that floating point numbers only capture an approximation of a point, not the exact point. Much like in the infinite chocolate bar puzzle, where there exists a tiny gap along the slice equal in size to the removed piece, that tiny gap sometimes causes a miss for the raycast. If someone else understands this better please correct me.
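To show how the face data above can drive the face-to-face walk just described, here is a minimal sketch. It is not the entry's actual code: walkLinearPath, the faces lookup, and the intersectsEdge callback (standing in for the raycast against the "pizza slice") are all hypothetical.

    // Hypothetical sketch: hop from face to face along a linear path by testing each
    // face's shared edges; the neighbour stored at the same index in sharedEdgeFaceIDs
    // becomes the next face to test.
    function walkLinearPath( faces, startFaceID, endFaceID, intersectsEdge ){
      var path = [ startFaceID ];
      var current = startFaceID;
      var cameFrom = -1;
      while( current !== endFaceID ){
        var face = faces[ current ];
        var next = -1;
        for( var i = 0; i < face.sharedEdgeVertices.length; i++ ){
          var neighbour = face.sharedEdgeFaceIDs[ i ];
          if( neighbour === cameFrom ){ continue; }             // don't step backwards
          if( intersectsEdge( face.sharedEdgeVertices[ i ] ) ){ // assumed raycast against the slice
            next = neighbour;
            break;
          }
        }
        if( next === -1 ){ break; }  // exception cases: exit at a vertex, straddled edge, missed raycast
        cameFrom = current;
        path.push( next );
        current = next;
      }
      return path;  // ordered list of face IDs the linear path crosses
    }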
-- - -- - -- - -- - -- - -- - Dynamic Path Finding using Nodes for any surface - -- - -- - -- - -- - -- - --

Now that I've got the linear path algorithm working in tip-top shape, I use it in conjunction with nodes to create the pathfinding algorithm. First I identify the locations for all nodes. I do this using a class I created called Orientation Vector; I mention it in the blog post above. When they're created, OVectors have a position vector, a pointTo vector, and an axis vector. The beauty of this class is that I can merge OVectors, which averages their position, pointTo, and axis vectors; I can rotate them around any axis; and I can move them any distance along the axis of their pointTo vector.

To create shoreline collision geometry and node collision geometry, illustrated above, and node locations along shorelines, illustrated below, I utilise the Orientation Vector class. First, the water table for the world is set to an arbitrary value, right now 1.08, so if a vertex of a given face falls below the table and one or two vertices are above it, then I know the face is a shoreline face. Then I use simple math to determine the two points at which the face meets the water and create two OVectors, each pointing at the other. Then I rotate them around their y axis by 90 and -90 degrees respectively so that they are now facing inland. Since the shoreline faces touch one another, there will be duplicate OVectors at each point along the shore. However, each OVector will have a pointTo vector relative to its sister OVector from creation. I merge the paired OVectors at each point along the shore, which averages their position, pointTo and axis, and I then move them inland a small distance. The result is the blue arrows above. The blue arrows are the locations of three of the thousands of nodes created for a given world. Each node has information about the shoreline collision geometry, the node collision geometry (the geometry connecting nodes), and the node to its left and the node to its right. Each face of collision geometry is given a node ID to refer to.

So, to create the path-finding algorithm: I first identify the linear path between the starting and ending points. I then test each segment of the linear path against collision geometry. If I get a hit, I retrieve the node ID, which gives me the location of the node associated with that face of collision geometry. I then travel left and right along connecting nodes, checking to see if a new linear path to the end point is possible; if no immediate collision geometry is encountered, the process continues and is repeated as needed. Subsequently, a list of points is established, marking the beginning, the encountered nodes, and the end of the line of travel. The list is then trimmed by testing linear paths between each point and the point two ahead of it; if a valid path is found, the middle point is spliced out (a small sketch of this trimming pass follows below). Then all the trimmed candidate paths are measured for distance, and the shortest one wins. Below is the code for the algorithm I currently use. It's my first attempt at using classes to create an algorithm; previously I just relied on elaborate arrays.

I plan on improving the process mentioned above by keeping track of distance as each path spreads out from its starting location. Only the path which is shortest in distance will go through its next iteration. With this method, once a path to the end is found, I can bet it will be the shortest, so I won't need to compute all possible paths like I am now.
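As an aside, here is a minimal sketch of the trimming pass described above. It is not the blog's own code; trimPath and the hasClearLinearPath callback (standing in for the linear-path collision test) are hypothetical.

    // Hypothetical sketch: remove redundant intermediate points from a path.
    // For each point, test a linear path to the point two ahead; if that path is
    // clear of collision geometry, the point between them is unnecessary.
    function trimPath( points, hasClearLinearPath ){
      var i = 0;
      while( i + 2 < points.length ){
        if( hasClearLinearPath( points[ i ], points[ i + 2 ] ) ){
          points.splice( i + 1, 1 );  // drop the middle point and re-test from the same index
        } else {
          i++;
        }
      }
      return points;
    }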
The challenge I've been facing for the past two months is that sometimes the nodes end up in the water. The picture above shows a shoreline where the distance the OVectors travel would place them in the water. Once a node is in the water, the AI is allowed to move to it; then there is no shoreline collision geometry for it to encounter, which would otherwise keep it on land, and so the AI just walks into the ocean. Big Booo! I've been writing variations of the same function to correct the location of the geometry shown in red and yellow below. But what a long process. I've rewritten this function time and time again. I want it to be, well, as the title of this blog states, robust, but it's slow going. As of today's date, it's not robust, and the optimised path-finding algorithm hasn't been written either. I'll be posting updates in this blog entry as I make progress towards my goal. I'll also mention the best and worst pathfinding times I achieve. Hopefully it'll be below 250 ms.

-- - -- - -- - -- - -- - -- - Short term goals for the game - -- - -- - -- - -- - -- - --

Badly... SO BADLY I want to be focusing on game content; that's all I've been thinking about. Argh. But this all has to get wrapped up before I can. I got ahead of myself; I'm guilty of being too eager. But there is no sense building game content on top of an engine which is prone to errors. My immediate goals for the engine are as follows:

// TO DO's //
// Dec 26th 2017 //
/*
 * << IN PROGRESS >> -update path node geometry so no errors occur
 * -improve path finding alg with new technique
 * -improve client AI display -only one geometry for high detail, and one for tetrahedron.
 * -create ability to select many AI at the same time by drawing a rectangle by holding mouse button.
 * -create animation server to receive a path and process animation, and test out in client with updates.
 * -re-write geometry merging function so that the client vertices and faces have a connected Target ID
 * -incorporate dynamic asset functionality into client.
 * -create a farm and begin writing AI.
 * -program model clusters
 * -synchronize server and client AI. Test how many AI and how quickly AI can be updated. Determine rough estimate of number of players the server can support.
 */

See the third-last one! That's the one; oh, what a special day that'll be. I've created a Project page, please check it out. It gives my best description to date of what the game is going to be about. Originally I was going to name it 'Seed'; a family member made the logo I use as my avatar and came up with the name back in 2014. Then just this week I find out that some studio in Europe is making THE EXACT SAME GAME! WHA??? http://www.pcgamer.com/seed-is-a-hugely-ambitious-in-development-mmo-that-echoes-eve-online-rimworld-and-the-sims/ I'm being facetious, but they're very close to being the same game. Anyways, mine will be better, you read it here first! hahaha. The project is no longer going to be called Seed; it's instead going to be called what I've always called it and will probably always call it: the game.

[ edit: 02/02/18 Some new screenshots to show off. All the new models were created by Brandross. There are now three earth materials: clay, stone and marble. There are also many types of animals and more tree types. ]

Thanks for reading, and if you've got anything to comment on I welcome it all. Awoken
  4. I recently worked on a path-finding algorithm used to move an AI agent through an organically generated dungeon. It's not an easy task, but since I had already worked on Team Fortress 2 maps in the past, I already knew about navigation meshes (navmesh) and their capabilities.

Why Not Waypoints?

As described in this paper, waypoint networks were used in video games in the past to save valuable resources. It was an acceptable compromise: level designers already knew where NPCs could and could not go. However, as technology evolved, computers got more memory that became faster and cheaper. In other words, there was a shift from efficiency to flexibility. In a way, navigation meshes are the evolution of waypoint networks, because they fulfill the same need but in a different way. One of the advantages of using a navigation mesh is that an agent can go anywhere inside a cell as long as the cell is convex; that is essentially the definition of convexity. It also means that the agent is not limited to a specific waypoint network, so if the destination is outside the waypoint network, it can go directly to it instead of going to the nearest point in the network. A navigation mesh can also be used by many types of agents of different sizes, rather than needing a separate waypoint network for each agent size. Using a navigation mesh also speeds up graph exploration because, technically, a navigation mesh has fewer nodes than an equivalent waypoint network (that is, a network with enough points to cover the same area).

The Navigation Mesh Graph

To summarize, a navigation mesh is a mesh that represents where an NPC can walk. A navigation mesh contains convex polygonal nodes (called cells). Each cell can be connected to others using connections defined by an edge shared between them (or portal edge). In a navigation mesh, each cell can contain information about itself. For example, a cell may be labeled as toxic, and therefore only those units capable of resisting that toxicity can move across it. Personally, because of my experience, I view navigation meshes like the ones found in most Source games. However, all cells in Source's navigation meshes are rectangular. Our implementation is more flexible because the cells can be irregular polygons (as long as they're convex).

Navigation Meshes in Practice

A navigation mesh implementation is actually three algorithms:
• a graph navigation algorithm,
• a string pulling algorithm,
• and a steering/path-smoothing algorithm.
In our case, we used A*, the simple stupid funnel algorithm and a traditional steering algorithm that is still in development.

Finding Our Cells

Before doing any graph searches, we need to find two things: our starting cell and our destination cell. For example, let's use this navigation mesh: In this navigation mesh, every edge that is shared between two cells is also a portal edge, which will be used by the string pulling algorithm later on. Also, let's use these points as our starting and destination points: where our buddy (let's name him Buddy) stands is our starting point, while the flag represents our destination. Because we already have our starting point and our destination point, we just need to check which cell is closest to each point using an octree. Once we know our nearest cells, we must project the starting and destination points onto their respective closest cells. In practice, we do a simple projection of both points onto the plane of their respective cells, using the cell normals. Before snapping a projected point, we must first know if the projected point is outside its cell. We do this by finding the difference between the area of the cell and the sum of the areas of the triangles formed by that point and each edge of the cell: if the latter is noticeably larger than the former, the point is outside its cell. The snapping then simply consists of interpolating between the vertices of the edge of the cell closest to the projected point. In terms of code, we do this:

    Vector3f lineToPoint = pointToProject.subtract(start);
    Vector3f line = end.subtract(start);
    Vector3f returnedVector3f = new Vector3f().interpolateLocal(start, end, lineToPoint.dot(line) / line.dot(line));

In our example, the starting and destination cells are C1 and C8 respectively.
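For illustration, here is the containment test and edge snap described above restated as a small sketch (the article's own snippet uses jMonkeyEngine's Vector3f; this sketch uses plain 2D coordinates within the cell's plane). The names isInsideCell, snapToEdge and triangleArea are hypothetical, the cell is assumed to store its ordered vertices and a precomputed area, and the interpolation factor is clamped here even though the original snippet leaves it unclamped.

    // Area of a triangle from three 2D points (half the absolute cross product).
    function triangleArea( a, b, c ){
      return Math.abs( ( b.x - a.x ) * ( c.y - a.y ) - ( c.x - a.x ) * ( b.y - a.y ) ) / 2;
    }

    // The point is inside the convex cell if the triangles it forms with each edge
    // add up to (roughly) the cell's own area; a noticeably larger sum means outside.
    function isInsideCell( point, cell, epsilon ){
      var sum = 0;
      for( var i = 0; i < cell.vertices.length; i++ ){
        var a = cell.vertices[ i ];
        var b = cell.vertices[ ( i + 1 ) % cell.vertices.length ];
        sum += triangleArea( point, a, b );
      }
      return sum <= cell.area + epsilon;
    }

    // Snap a point onto an edge [start, end] by interpolating along the edge,
    // mirroring the interpolateLocal() call shown above (t clamped to the segment).
    function snapToEdge( point, start, end ){
      var ex = end.x - start.x, ey = end.y - start.y;
      var px = point.x - start.x, py = point.y - start.y;
      var t = ( px * ex + py * ey ) / ( ex * ex + ey * ey );
      t = Math.max( 0, Math.min( 1, t ) );
      return { x: start.x + ex * t, y: start.y + ey * t };
    }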
Graph Search Algorithm

A navigation mesh is actually a 2D grid of unknown, or even infinite, size. In a 3D game, it is common to represent a navigation mesh graph as a graph of flat polygons that aren't orthogonal to each other. There are games that use 3D navigation meshes, like games with flying AI, but in our case it's a simple grid. For this reason, the A* algorithm is probably the right solution. We chose A* because it's the most generic and flexible algorithm. Technically, we still do not know how our navigation mesh will be used, so going with something more generic can have its benefits...

A* works by assigning a cost and a heuristic to a cell. The closer the cell is to our destination, the less expensive it is. The heuristic is calculated similarly, but we also take into account the heuristic of the previous cell. This means that the longer a path is, the greater its resulting heuristic will be, and the more likely it becomes that this path is not an optimal one. We begin the algorithm by traversing the connections of each of the current cell's neighboring cells until we arrive at the end cell, doing a sort of exploration / flood filling. Each cell begins with an infinite heuristic, but as we explore the mesh it's updated according to the information we learn. In the beginning, our starting cell gets a cost and a heuristic of 0 because the agent is already inside it. We keep a queue of cells ordered by their heuristics, so that the next cell to use as the current cell is the best candidate for an optimal path. When a cell is processed, it is moved from that queue into another one that contains the closed cells. While exploring, we also keep a reference to the connection used to move from the current cell to its neighbor; this will be useful later. We do this until we end up in the destination cell. Then we "reel" back up to our starting cell, saving each cell we land on, which gives an optimal path. A* is a very popular algorithm and pseudocode for it can easily be found; even Wikipedia has pseudocode that is easy to understand. In our example, we find that this is our path: And here are highlighted (in pink) the traversed connections:
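The article points to published pseudocode for the full details; for convenience, here is a compact, hypothetical sketch of the search loop described above. The cell and connection structures, the centroid-distance heuristic, and every function name are assumptions for illustration, not the article's actual implementation.

    // Hypothetical sketch of A* over navmesh cells.
    // 'cells' is assumed to map a cell ID to { id, centroid: {x,y,z}, connections: [ { toCellID, edge } ] }.
    function distance( a, b ){
      var dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
      return Math.sqrt( dx * dx + dy * dy + dz * dz );
    }

    function findCellPath( cells, startID, goalID ){
      var goal = cells[ goalID ];
      var open = [ startID ];
      var closed = {};
      var gScore = {};    // cost from the start to each cell
      var fScore = {};    // cost plus heuristic (estimated total)
      var cameFrom = {};  // remembers the cell and connection we arrived through
      gScore[ startID ] = 0;
      fScore[ startID ] = distance( cells[ startID ].centroid, goal.centroid );

      while( open.length > 0 ){
        // pick the open cell with the lowest estimated total (a heap would be faster)
        open.sort( function( a, b ){ return fScore[ a ] - fScore[ b ]; } );
        var currentID = open.shift();
        if( currentID === goalID ){
          // "reel" back up to the starting cell to recover the cells and connections used
          var path = [];
          var step = goalID;
          while( step !== undefined ){
            path.unshift( { cellID: step, connection: cameFrom[ step ] ? cameFrom[ step ].connection : null } );
            step = cameFrom[ step ] ? cameFrom[ step ].fromID : undefined;
          }
          return path;
        }
        closed[ currentID ] = true;
        var current = cells[ currentID ];
        for( var i = 0; i < current.connections.length; i++ ){
          var connection = current.connections[ i ];
          var neighborID = connection.toCellID;
          if( closed[ neighborID ] ){ continue; }
          var tentative = gScore[ currentID ] + distance( current.centroid, cells[ neighborID ].centroid );
          if( gScore[ neighborID ] === undefined || tentative < gScore[ neighborID ] ){
            gScore[ neighborID ] = tentative;
            fScore[ neighborID ] = tentative + distance( cells[ neighborID ].centroid, goal.centroid );
            cameFrom[ neighborID ] = { fromID: currentID, connection: connection };
            if( open.indexOf( neighborID ) === -1 ){ open.push( neighborID ); }
          }
        }
      }
      return null;  // no path between the two cells
    }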
The String Pulling Algorithm

String pulling is the next step in the navigation mesh algorithm. Now that we have a queue of cells that describes an optimal path, we have to find a queue of points that an AI agent can travel to. This is where the string pulling is needed. String pulling is in fact not linked to characters at all: it is rather a metaphor. Imagine a cross, and say that you wrap a silk thread around this cross and put tension on it. You will find that the string does not follow the inner corners of the cross, but rather goes from corner to corner of each of its points. This is precisely what we're doing, but with a string that goes from one point to another.

There are many different algorithms that let us do this. We chose the Simple Stupid Funnel algorithm because it's actually... ...stupidly simple. To put it simply (no pun intended), we create a funnel and check each time whether the next point is in the funnel or not. The funnel is composed of 3 points: a central apex, a left point (called the left apex) and a right point (called the right apex). At the beginning, the tested point is on the right side, then we alternate to the left and so on until we reach our destination point (as if we were walking). When a point is in the funnel, we continue the algorithm with the other side. If the point is outside the funnel, then depending on which side the tested point belongs to, we take the apex from the other side of the funnel and add it to a list of final waypoints. The algorithm works correctly most of the time. However, it had a bug that added the last point twice if none of the vertices of the last connection before the destination point were added to the list of final waypoints. We just added an if for the moment, but we could come back later and clean it up. In our case, the funnel algorithm gives this path:

The Steering Algorithm

Now that we have a list of waypoints, we can finally just run our character along these points. But if there were walls in our geometry, then Buddy would run right into a corner wall. He won't be able to reach his destination because he isn't small enough to avoid the corner walls. That's the role of the steering algorithm. Our algorithm is still in heavy development, but its main gist is that we check whether the next position of the agent is still on the navigation mesh. If it isn't, then we change its direction so that the agent doesn't hit the wall like an idiot (a minimal sketch of this check appears at the end of this entry). There is also a path curving algorithm, but it's still too early to know if we'll use that at all... We relied on this good document to program the steering algorithm. It's a 1999 document, but it's still interesting... With the steering algorithm, we make sure that Buddy moves safely to his destination. (Look how proud he is!)

So, this is the navigation mesh algorithm. I must say that, throughout my research, there wasn't much pseudocode or code that described the algorithm as a whole. Only then did we realize that what people call a "navmesh" is actually a collage of algorithms rather than a single monolithic one. We also tried to have a cyclic grid with orthogonal cells (i.e. cells on the walls and ceiling), but it looked like A* wasn't intended to be used in a 3D environment with flat orthogonal cells. My hypothesis is that we would need 3D cells for this kind of navigation mesh; otherwise the heuristic value of each cell can change depending on the actual 3D length between the center of a flat cell and the destination point. So we reduced the scope of our navigation meshes and we were able to move an AI agent in our organic dungeon. Here's a picture: each cyan cube is a final waypoint found by the string pulling, and the blue lines represent collision meshes. Our AI is currently still walking into walls, but the steering is still being implemented.
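As referenced in the steering section above, here is a minimal sketch of the "don't step off the mesh" check. It is hypothetical: steerStep, the navmesh.isOnMesh() helper and the simple stop-instead-of-deflect fallback are illustrative assumptions, not the project's actual steering code.

    // Hypothetical sketch: before applying a movement step, verify that the agent's
    // next position still lies on the navigation mesh; if it doesn't, don't take the
    // step (a fuller implementation would deflect the direction along the wall).
    function steerStep( agent, direction, speed, dt, navmesh ){
      var next = {
        x: agent.position.x + direction.x * speed * dt,
        y: agent.position.y + direction.y * speed * dt,
        z: agent.position.z + direction.z * speed * dt
      };
      if( navmesh.isOnMesh( next ) ){  // assumed query, e.g. octree lookup plus containment test
        agent.position = next;         // safe to move
        return true;
      }
      return false;                    // caller should pick a deflected direction instead
    }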
  5. Just released another version of DRTS. You can play it at https://distilledgames.itch.io/distilled-rts

Picking Maps

I improved the UI to preview maps before starting to play with bots. Now it is easier to find a map you like and see what you are getting into before starting the game.

Camera Controls

In addition to the earlier input methods, you can now also control the camera via the keyboard: use the arrow keys to pan and plus/minus to zoom. To improve onboarding, I added buttons for controlling the camera. This visual addition to the UI should help new players notice that the field of view can be changed.

Under The Hood

To prepare for competitive games, the map generator was improved to produce exact edge symmetry in generated maps. Pathfinding now yields optimal paths in more cases.
  7. 3D Rigger Mixamo Is Making Some Changes

Originally founded in 2008, Mixamo has been a go-to source for 3D animation and rigging for nearly a decade. On August 28th, 2013, Mixamo launched a platform called Face Plus. It was designed to allow developers and animators to turn facial expressions recorded with even a webcam into top-notch 3D animation. This is not unlike what Holotech Studios' "FaceRig" does for streamers and video creators. Back in May 2017, Adobe (publishers of the Mixamo platform since 2015) announced the end of an era: Mixamo's Face Plus will be removed on August 22nd, 2017, along with several other free features. In the Adobe post on the Mixamo website, they lay out what is changing and what is staying the same. Set out as bullet points in the post, they seem to be cleaning house on the "free" features, but they are keeping a small handful of their main key points, such as:

• Select and use characters from Mixamo's Character Collection
• Upload 3D characters to get them ready to animate using the Auto-Rigger
• Apply animations to characters
• Download as an .fbx or .dae

Noticeably, the list that's getting 86'ed includes major tools and the ability to save assets from their site:

• The ability to save and retrieve assets in "My Assets"
• The Face Plus facial animation plugin
• The Decimator polygon reduction tool
• The ability to download Control-rig scripts
• Download types for .bvh and .biped that streamline integration with 3rd party applications
• Mixamo forums will close and all help articles will be moved to Adobe's HelpX

Don't Worry! There's still time! They're allowing people to download the tools and plug-ins until August 22nd, as well as to save their assets directly. Those that use Mixamo, utilize Face Plus, or even just do animation are highly encouraged to take advantage and download the tools within the next week. To download, sign into your (free) Adobe account at the Mixamo website and start downloading. Developers have only one week left to grab as much as they can. For a complete FAQ and how-to for the Mixamo massacre, check out the Adobe post from their Customer Care Team.

UPDATE (8/15/17): Adobe would like to stress that Mixamo itself is not going anywhere, just the Face Plus features, the tools mentioned above, and a couple of other features, to make the platform more "streamlined and a little modernized". Once the dust has cleared next week from the changes, I'll reach out to Adobe for a more guided tour of the new set-up. Additionally, I have changed the title of this article to reflect the clarification. Apologies for the inconvenience to those that have already read the article.