Everything posted by jb-dev

  1. jb-dev

    A simple puzzle

    From the album: Vaporwave Roguelite

    This shows off a fairly simple puzzle. The goal is to turn all of those globes green, which activates the pressure plate in the back. Then it's just a matter of standing on the plate to complete the puzzle.
  2. jb-dev

    newIce.png

    From the album: Vaporwave Roguelite

  3. jb-dev

    Digital Desert's Title Card

    From the album: Vaporwave Roguelite

    This is the third level's title card. Each title card's colours are matched to its level, and every level also has a unique level seal.
  4. jb-dev

    Juicy Jungle's Title Card

    From the album: Vaporwave Roguelite

    This is the first level's title card.
  5. I recently worked on a path-finding algorithm used to move an AI agent through an organically generated dungeon. It's not an easy task, but because I had already worked on Team Fortress 2 maps in the past, I was already familiar with navigation meshes (navmeshes) and their capabilities.

Why Not Waypoints?

As described in this paper, waypoint networks were used in video games in the past to save valuable resources. It was an acceptable compromise: level designers already knew where NPCs could and could not go. However, as technology evolved, computers got more memory that also became faster and cheaper. In other words, there was a shift from efficiency to flexibility.

In a way, navigation meshes are the evolution of waypoint networks because they fulfill the same need, but in a different way. One of the advantages of a navigation mesh is that an agent can go anywhere inside a cell as long as the cell is convex; that is essentially the definition of convexity. It also means that the agent is not limited to a specific waypoint network, so if the destination is outside the waypoint network, it can go directly to it instead of going to the nearest point in the network. A navigation mesh can also be shared by many types of agents of different sizes, rather than needing a separate waypoint network for each agent size. Using a navigation mesh also speeds up graph exploration because, technically, a navigation mesh has fewer nodes than an equivalent waypoint network (that is, a network with enough points to cover the same area as the navigation mesh).

The Navigation Mesh Graph

To summarize, a navigation mesh is a mesh that represents where an NPC can walk. A navigation mesh contains convex polygonal nodes (called cells). Cells are connected to each other through connections defined by the edge they share (a portal edge). Each cell can also carry information about itself. For example, a cell may be labeled as toxic, so that only units capable of resisting that toxicity can move across it.

Personally, because of my experience, I picture navigation meshes like the ones found in most Source games. However, all cells in Source's navigation meshes are rectangular. Our implementation is more flexible because the cells can be irregular polygons (as long as they're convex).

Navigation Meshes in Practice

A navigation mesh implementation is actually three algorithms:

A graph search algorithm
A string pulling algorithm
A steering/path-smoothing algorithm

In our case, we used A*, the Simple Stupid Funnel algorithm and a traditional steering algorithm that is still in development.

Finding Our Cells

Before doing any graph searches, we need to find two things:

Our starting cell
Our destination cell

For example, let's use this navigation mesh: In this navigation mesh, every edge shared between two cells is also a portal edge, which will be used by the string pulling algorithm later on. Also, let's use these points as our starting and destination points: where our buddy (let's name him Buddy) stands is our starting point, while the flag represents our destination.

Because we already have our starting and destination points, we just need to check which cell is closest to each point using an octree. Once we know our nearest cells, we must project the starting and destination points onto their respective closest cells. In practice, we do a simple projection of both points onto the plane of their respective cells, along each cell's normal.
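As a minimal illustration of that projection step, here is a small helper using jMonkeyEngine's Vector3f. The cellPoint and cellNormal parameters are hypothetical names standing in for whatever the cell representation exposes (any point on the cell, such as its centroid, and the cell's unit-length normal); this is a sketch, not our exact implementation.

import com.jme3.math.Vector3f;

final class CellProjection {
    // Project a world-space point onto a cell's plane, along the cell's normal.
    // cellNormal is assumed to be unit length.
    static Vector3f projectOntoCellPlane(Vector3f point, Vector3f cellPoint, Vector3f cellNormal) {
        float signedDistance = point.subtract(cellPoint).dot(cellNormal); // distance from the point to the plane
        return point.subtract(cellNormal.mult(signedDistance));           // remove the normal component
    }
}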
Before snapping a projected point, we first need to know whether the projected point is outside its cell. We do this by comparing the area of the cell with the sum of the areas of the triangles formed by that point and each edge of the cell. If the latter is noticeably larger than the former, the point is outside its cell. The snapping itself then simply consists of interpolating between the vertices of the edge of the cell that is closest to the projected point. In terms of code, we do this:

// Project "pointToProject" onto the cell edge going from "start" to "end".
Vector3f lineToPoint = pointToProject.subtract(start);
Vector3f line = end.subtract(start);
// t is how far along the edge the projection falls; clamping it to [0, 1] keeps the
// snapped point on the edge segment instead of somewhere on its infinite line.
float t = FastMath.clamp(lineToPoint.dot(line) / line.dot(line), 0f, 1f);
Vector3f snappedPoint = new Vector3f().interpolateLocal(start, end, t);

In our example, the starting and destination cells are C1 and C8 respectively.

Graph Search Algorithm

A navigation mesh is essentially a 2D grid of unknown or potentially infinite size. In a 3D game, it is common to represent a navigation mesh graph as a graph of flat polygons that aren't orthogonal to each other. There are games that use true 3D navigation meshes, such as games with flying AI, but in our case it's a simple grid. For this reason, A* is probably the right solution. We chose A* because it's the most generic and flexible algorithm; technically, we still don't know exactly how our navigation mesh will be used, so going with something generic has its benefits...

A* works by assigning a cost and a heuristic to each cell. The heuristic estimates how far a cell is from our destination: the closer the cell, the lower the heuristic. The cost accumulates along the path, since it also takes into account the cost of the previous cell. This means that the longer a path gets, the greater its total estimate becomes, and the more likely it is that this path is not an optimal one.

We begin the algorithm by traversing the connections of each of the current cell's neighbouring cells until we arrive at the destination cell, doing a sort of exploration/flood fill. Each cell begins with an infinite estimate, but as we explore the mesh it gets updated with the information we learn. At the start, our starting cell gets a cost and a heuristic of 0 because the agent is already inside it. We keep a queue of open cells ordered by their estimates, so the next cell to use as the current cell is always the best candidate for an optimal path. When a cell has been processed, it is moved from that queue into another one that contains the closed cells. While exploring, we also keep a reference to the connection used to move from each cell to its neighbour; this will be useful later. We do this until we end up in the destination cell. Then we "reel" back up to our starting cell, saving each cell we landed on, which gives an optimal path.

A* is a very popular algorithm and pseudocode for it is easy to find; even Wikipedia has pseudocode that is easy to understand.
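To make the graph search step above more concrete, here is a minimal A*-over-cells sketch in Java. It is an illustration rather than our actual code: the Cell and Connection types and the cost model (straight-line distance between cell centers) are assumptions made for this sketch.

import com.jme3.math.Vector3f;
import java.util.*;

final class NavmeshAStar {

    // Hypothetical cell: knows its center and its portal connections.
    static final class Cell {
        final Vector3f center;
        final List<Connection> connections = new ArrayList<>();
        Cell(Vector3f center) { this.center = center; }
    }

    // Hypothetical connection: the portal edge leading to a neighbouring cell.
    static final class Connection {
        final Cell target;
        Connection(Cell target) { this.target = target; }
    }

    static List<Cell> findPath(Cell start, Cell goal) {
        Map<Cell, Float> costSoFar = new HashMap<>(); // g: accumulated cost from the start
        Map<Cell, Cell> cameFrom = new HashMap<>();   // parent links used to "reel" the path back
        Set<Cell> closed = new HashSet<>();

        // Open queue ordered by f = g + h, so the most promising cell is processed first.
        Comparator<Cell> byEstimate = Comparator.comparingDouble(
                c -> costSoFar.get(c) + c.center.distance(goal.center));
        PriorityQueue<Cell> open = new PriorityQueue<>(byEstimate);

        costSoFar.put(start, 0f);
        open.add(start);

        while (!open.isEmpty()) {
            Cell current = open.poll();
            if (current == goal) {
                return reconstruct(cameFrom, goal);
            }
            closed.add(current);
            for (Connection connection : current.connections) {
                Cell neighbour = connection.target;
                if (closed.contains(neighbour)) {
                    continue;
                }
                float newCost = costSoFar.get(current) + current.center.distance(neighbour.center);
                if (newCost < costSoFar.getOrDefault(neighbour, Float.POSITIVE_INFINITY)) {
                    costSoFar.put(neighbour, newCost);
                    cameFrom.put(neighbour, current);
                    open.remove(neighbour); // re-insert so the queue sees the updated estimate
                    open.add(neighbour);
                }
            }
        }
        return Collections.emptyList(); // no path between the two cells
    }

    private static List<Cell> reconstruct(Map<Cell, Cell> cameFrom, Cell goal) {
        LinkedList<Cell> path = new LinkedList<>();
        for (Cell cell = goal; cell != null; cell = cameFrom.get(cell)) {
            path.addFirst(cell);
        }
        return path;
    }
}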
In our example, we find that this is our path: And here are the traversed connections, highlighted in pink:

The String Pulling Algorithm

String pulling is the next step in the navigation mesh algorithm. Now that we have a queue of cells describing an optimal path, we have to turn it into a queue of points that an AI agent can travel to. This is where string pulling is needed. The "string" has nothing to do with character strings: it is a metaphor. Imagine a cross, and say you wrap a silk thread around that cross and put tension on it. You will find that the string does not follow the inner corners, but rather goes from outer corner to outer corner of the cross. This is precisely what we're doing, but with a string that goes from one point to another.

There are many different algorithms that let us do this. We chose the Simple Stupid Funnel algorithm because it's actually... ...stupidly simple. To put it simply (no pun intended), we maintain a funnel and check each time whether the next point is inside it or not. The funnel is composed of 3 points: a central apex, a left point (the left apex) and a right point (the right apex). At the beginning, the tested point is on the right side, then we alternate to the left and so on until we reach our destination point (as if we were walking). When a point is inside the funnel, we narrow that side of the funnel and continue the algorithm with the other side. If the point is outside the funnel, depending on which side the tested point belongs to, we take the apex from the other side of the funnel and add it to the list of final waypoints.

The algorithm works correctly most of the time. However, it had a bug that added the last point twice if none of the vertices of the last connection before the destination point had been added to the list of final waypoints. We just added an if for the moment, but we could come back later to clean that up. In our case, the funnel algorithm gives this path:

The Steering Algorithm

Now that we have a list of waypoints, we could finally just run our character from point to point. But if there were walls in our geometry, Buddy would run right into a corner: he isn't small enough to clear corner walls, so he wouldn't be able to reach his destination. That's the role of the steering algorithm. Ours is still in heavy development, but its main gist is that we check whether the agent's next position is still inside the navigation mesh. If it isn't, we change the agent's direction so that it doesn't hit the wall like an idiot. There is also a path curving algorithm, but it's still too early to know if we'll use it at all... We relied on this good document to program the steering algorithm. It's from 1999, but it's still interesting... With the steering algorithm, we make sure that Buddy moves safely to his destination. (Look how proud he is!)

So, this is the navigation mesh algorithm. I must say that, throughout my research, there wasn't much pseudocode or code that described the algorithm as a whole. That's when we realized that what people call a "navmesh" is actually a collage of algorithms rather than a single monolithic one. We also tried to build a cyclic grid with orthogonal cells (i.e. cells on the walls and ceiling), but it looked like A* wasn't meant to be used in a 3D environment with flat orthogonal cells. My hypothesis is that we would need 3D cells for this kind of navigation mesh; otherwise, the heuristic value of each cell can change depending on the actual 3D distance between the center of a flat cell and the destination point. So we reduced the scope of our navigation meshes, and we were able to move an AI agent in our organic dungeon. Here's a picture: each cyan cube is a final waypoint found by the string pulling, and the blue lines represent collision meshes. Our AI currently still walks into walls, but the steering is still being implemented.
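To recap the string pulling step described above, here is a sketch of the Simple Stupid Funnel algorithm adapted to jMonkeyEngine's Vector3f. It assumes the portal edges have already been extracted from the A* path, that the first "portal" is the start point duplicated on both sides and the last one is the destination duplicated on both sides, and that Y is the up axis (the funnel works on X/Z). It is a minimal illustration, not our production code.

import com.jme3.math.Vector3f;
import java.util.ArrayList;
import java.util.List;

final class SimpleStupidFunnel {

    // Twice the signed area of triangle (a, b, c) projected on the X/Z plane.
    // The sign tells us on which side of the funnel a point lies.
    private static float triArea2(Vector3f a, Vector3f b, Vector3f c) {
        float ax = b.x - a.x, az = b.z - a.z;
        float bx = c.x - a.x, bz = c.z - a.z;
        return bx * az - ax * bz;
    }

    // portalLeft[i] / portalRight[i] are the left and right vertices of the i-th portal edge.
    static List<Vector3f> stringPull(Vector3f[] portalLeft, Vector3f[] portalRight) {
        List<Vector3f> waypoints = new ArrayList<>();
        Vector3f apex = portalLeft[0], left = portalLeft[0], right = portalRight[0];
        int apexIndex = 0, leftIndex = 0, rightIndex = 0;
        waypoints.add(apex); // the start point

        for (int i = 1; i < portalLeft.length; i++) {
            Vector3f newLeft = portalLeft[i], newRight = portalRight[i];

            // Try to narrow the right side of the funnel.
            if (triArea2(apex, right, newRight) <= 0f) {
                if (apex.equals(right) || triArea2(apex, left, newRight) > 0f) {
                    right = newRight; rightIndex = i; // tighten the funnel
                } else {
                    // The right side crossed over the left: the left apex becomes a waypoint.
                    waypoints.add(left);
                    apex = left; apexIndex = leftIndex;
                    left = apex; right = apex;
                    leftIndex = apexIndex; rightIndex = apexIndex;
                    i = apexIndex; // restart the scan from the new apex
                    continue;
                }
            }

            // Try to narrow the left side of the funnel (mirror of the above).
            if (triArea2(apex, left, newLeft) >= 0f) {
                if (apex.equals(left) || triArea2(apex, right, newLeft) < 0f) {
                    left = newLeft; leftIndex = i;
                } else {
                    waypoints.add(right);
                    apex = right; apexIndex = rightIndex;
                    left = apex; right = apex;
                    leftIndex = apexIndex; rightIndex = apexIndex;
                    i = apexIndex;
                    continue;
                }
            }
        }

        waypoints.add(portalLeft[portalLeft.length - 1]); // the destination point
        return waypoints;
    }
}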
  6. Now THAT'S the question! It really depends on the type of terrain used. If the terrain geometry is quite simple, then maybe just build the navmesh from the terrain directly. Another idea is that, if everything is static, we could simply use a static navmesh and create a cell for each triangle (something along the lines of the sketch below)... In our case, we had a simple room-corridor layout, so most of our navmesh was rectangular, which kept things quite simple. While I was building the graph traversal algorithm, @thecheeselover was doing all of the navmesh baking, so really it's up to you to find the best approach for the job.
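As a very rough illustration of the "one cell per triangle" idea mentioned above, here is a sketch that creates a cell for every triangle of a static mesh and links any two cells sharing an edge as neighbours. The Cell type and the triangle-soup input (three Vector3f per triangle, with shared vertices being exactly equal) are assumptions for this sketch, not the actual baking code.

import com.jme3.math.Vector3f;
import java.util.*;

final class TriangleNavmeshBaker {

    // Hypothetical cell: one triangle plus its neighbouring cells.
    static final class Cell {
        final Vector3f[] vertices;
        final List<Cell> neighbours = new ArrayList<>();
        Cell(Vector3f a, Vector3f b, Vector3f c) { this.vertices = new Vector3f[] { a, b, c }; }
    }

    // "triangles" holds three consecutive vertices per triangle.
    static List<Cell> bake(List<Vector3f> triangles) {
        List<Cell> cells = new ArrayList<>();
        // Map each undirected edge (as a canonical key) to the cells that use it.
        Map<String, List<Cell>> edgeToCells = new HashMap<>();

        for (int i = 0; i + 2 < triangles.size(); i += 3) {
            Cell cell = new Cell(triangles.get(i), triangles.get(i + 1), triangles.get(i + 2));
            cells.add(cell);
            for (int e = 0; e < 3; e++) {
                String key = edgeKey(cell.vertices[e], cell.vertices[(e + 1) % 3]);
                edgeToCells.computeIfAbsent(key, k -> new ArrayList<>()).add(cell);
            }
        }

        // Any two cells registered under the same edge share a portal edge.
        for (List<Cell> sharing : edgeToCells.values()) {
            for (int a = 0; a < sharing.size(); a++) {
                for (int b = a + 1; b < sharing.size(); b++) {
                    sharing.get(a).neighbours.add(sharing.get(b));
                    sharing.get(b).neighbours.add(sharing.get(a));
                }
            }
        }
        return cells;
    }

    // Order-independent key so (a, b) and (b, a) map to the same edge.
    private static String edgeKey(Vector3f a, Vector3f b) {
        String ka = a.x + "," + a.y + "," + a.z;
        String kb = b.x + "," + b.y + "," + b.z;
        return ka.compareTo(kb) <= 0 ? ka + "|" + kb : kb + "|" + ka;
    }
}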
  7. jb-dev

    hams.png

    From the album: Vaporwave Roguelite

  8. jb-dev

    Travelling Merchant

    From the album: Vaporwave Roguelite

    This is a quick look at a travelling merchant. Although the feature is missing right now, it will let the player trade one of their pieces of equipment for another.
  9. jb-dev

    Travelling Agency

    From the album: Vaporwave Roguelite

    This is a quick look into the travelling agency room.
  10. As the gaming industry advances and evolves, new ideas can suddenly pop up and change the way things go in an instant. I've worked for two years in an Agile-centered startup, and I've experienced first hand what agility can do to boost productivity and reduce development cost. Working with agility is a really simple and pleasant thing, and maybe the gaming industry can benefit from it.

The values

Everybody who knows agility should be able to understand the core values of the Agile Manifesto. To put it simply, we should value:

Individuals and interactions over processes and tools (i.e. work geographically close rather than remotely, or value genuine conversations over Skype);
Working software over comprehensive documentation (in our case, the game MUST be fun, or work correctly, before anything else);
Customer collaboration over contract negotiation (basically, make the player more involved in the game's development);
Responding to change over following a plan (this is, of course, directly dependent on point 3).

Also, keep in mind that it's mostly a hypothesis. As the manifesto states: "while there is value in the items on the right, we value the items on the left more."

Doing Agile vs. Being Agile

During the past two years, our scrum master always warned us about the impostors, or the people that are "doing Agility". Things like using a Kanban board or doing daily scrums won't make your team Agile; it's all about the values. These tools should be used to apply those values, rather than being used just because they're a trend. For example, some teams use a Scrum backlog while locking its content from everybody other than the product owner. This puts the project on rails, and can turn into a waterfall process disguised as something agile.

How can the gaming industry benefit from it?

As video games keep costing more and taking a lot more time to develop, customers grow more and more displeased: there are bugs aplenty in crucial mechanics, while relatively small and insignificant features are well polished and bug-free.

Getting the game into the hands of beta players as fast as possible

The first thing to do is to involve the player as early as possible in the game development cycle. Doing so significantly improves the player's sense of contentment and can make the game better. In the past, this kind of development wasn't practical, but with the spread of broadband it's now possible.

Early access

Nowadays, people talk about "early access". For those living under a rock, early access is a way of distributing a game in which the game itself isn't completed in the traditional sense. The customer buys a somewhat incomplete game, of which many features will come later. This process is by far one of the best ideas the industry has ever had. However, due to some mishandling of the practice, it's now regarded as a sign of cheapness and greed. If the ones doing it are agile impostors, then yes. There are times when games simply get abandoned by their maintainers, and these instances give early access a bad name. Early access should also be used, in agility, as a form of customer collaboration. For example, one could ask a random early access player to participate in sprint planning. If the team isn't comfortable with that idea, then maybe a suggestion box can be plugged directly into the backlog...

Prioritizing mechanics based on feedback

Getting feedback from customers is an important thing in Agile methodologies. The backlog can be prioritized based on player feedback and feature requests. The same thing can be done with bugs.
The more frequently a bug is reported, the more critical it becomes.

Minimal development effort

In most cases, the backlog should be prioritized based on business value rather than complexity. That way, the things developed first are the ones important to the game (or so the team thinks), while less important features aren't implemented right away and have time to simmer. Paired with early access, this becomes a quite powerful tool for testing hypotheses and experimenting with mechanics. Due to the iterative nature of some methodologies, features stay minimal and don't require a whole lot of effort to develop: after all, we have no idea whether a mechanic is going to be loved or hated until players actually play the game.

Change-prone games

Another important argument is that agility is, generally speaking, change-tolerant. In our case, this means that if the beta players dislike a newly implemented feature, then said feature should be removed, and subsequent features should also be re-evaluated. Priorities should always change based on discoveries made by the team or even the product owner. It is considered healthy for a backlog to have its features' priorities change; otherwise, it's a bad omen...

How about me?

Agile is first and foremost a philosophy, or a way of life. Sure, it can easily be imposed on people, but it's in those situations that you get fake agility. It should be a conviction more than a rule: a rule should be static and shouldn't change a lot. Agility starts with the team; if one of its members isn't committed, it will go sour really fast. I've seen many times how agility can be broken by a malicious person (a saboteur, if you will). The best thing you can do for an Agile team is to be, yourself, agile.

------------------------------------------

Agility should be an integral part of game development. With Agile, you can easily prioritize the mechanics that are crucial for having a fun and interesting game.