Pathfinding Navigation Mesh : Wall Collision Avoidance

thecheeselover


Subscribe to our subreddit to get all the updates from the team!

Introduction

In our 3D game (by miimii1205), we use a dynamically created navigation mesh to navigate procedurally generated terrain. Until our last agile sprint, we relied specifically on the A* and string pulling algorithms. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe a simple way I found to keep agents from walking into walls.

Configuration

  • A 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
  • Navigation cells and their polygonal edges, connections (to other cells), shared edges (the segment where two connected cells meet), centroids and normals (sketched in code right after this list).
  • A path generated by A* and string pulling (not tested without string pulling), consisting of waypoints on the navigation mesh.
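
To make the configuration concrete, here is a rough sketch of what such a navigation cell might carry, in Java since that's what our project uses. The type and field names are illustrative, not our actual API; the vector type is jMonkeyEngine's.

import com.jme3.math.Vector3f;
import java.util.List;

// Hypothetical navigation cell holding the data listed above (illustrative only).
final class NavCell {
    List<Vector3f[]> edges;        // polygonal edges, each stored as a pair of endpoints
    List<NavCell> connections;     // neighboring cells reachable from this one
    List<Vector3f[]> sharedEdges;  // for each connection, the segment both cells share
    Vector3f centroid;             // center of the cell's polygon
    Vector3f normal;               // unit-length cell normal
}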

The Algorithm

The agent is the pink low-poly humanoid and the final destination is the flag.

[Screenshots: the pink low-poly agent and the destination flag on the navigation mesh]

The ideal (yet unoptimized) algorithm would be to cast an oriented rectangle between each pair of consecutive waypoints, with a width of two times the agent's radius; think of the agent's center position as the middle of that width. However, that algorithm is too complicated and too long to develop for our game, so I thought about another one, which has its own drawbacks but is better suited to our game.

Psst, check out this article if you haven't seen it yet.

The algorithm is the following:

  1. For each waypoint, pick the current one and the next one, until the next one is the last.
  2. Iterate over the edges of the current navigation cell, i.e. the cell the agent's 3D position lies in. To find it, you need a fast spatial query that returns the cell closest to a 3D point (a sketch of this query follows the list). Our way to do it is to first partition the navigation mesh with an octree. After that, get all the cells that are close to the point, plus an extra buffer. To find the cell that is closest to the point, project the position onto each picked cell; this can be done using their centroids and normals. Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the cell with the smallest one.
  3. Convert each edge to a 2D line by discarding the Y component (the UP vector).
  4. For each side, left and right, defined by the agent's position and its direction towards the next waypoint, cast a 2D line that starts at the agent's position, goes towards one of the two directions perpendicular to the direction to the next waypoint, and has a length equal to the agent's radius (see the side-probe sketch after this list).
  5. If there's an intersection on a connection and it lies on the shared part of that connection, continue with the connected cell's edges.
  6. If there are any intersections other than those of #5, create a new waypoint before the next waypoint. Its position is the intersection's position translated by a length of two times the radius along the agent's current direction as a 2D line. The same translation is applied to the next waypoint.
  7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going in the same direction as the agent, with a length equal to the distance between the current and next waypoints. Check for #5. If there is an intersection on a connection and it lies on the unshared part of that connection, then do #6 (unconditionally). If there's an intersection on a simple edge, translate the next waypoint as described in #6.
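
Here is a minimal sketch of the closest-cell query from step 2, assuming the octree has already returned the nearby candidate cells. The Cell interface and its snapToPolygon helper are assumptions made for illustration, not our actual code.

import com.jme3.math.Vector3f;
import java.util.List;

final class ClosestCellQuery {

    // The pieces of a cell this query needs (names are illustrative).
    interface Cell {
        Vector3f centroid();
        Vector3f normal();                  // unit length
        Vector3f snapToPolygon(Vector3f p); // clamp a point on the cell's plane onto the polygon
    }

    // Among the candidates returned by the octree (plus a buffer), pick the cell
    // whose plane-projected, snapped point is closest to the given position.
    static Cell closestCell(Vector3f position, List<Cell> candidates) {
        Cell best = null;
        float bestDistSq = Float.MAX_VALUE;
        for (Cell cell : candidates) {
            // Project the position onto the cell's plane using its centroid and normal.
            float planeDist = position.subtract(cell.centroid()).dot(cell.normal());
            Vector3f onPlane = position.subtract(cell.normal().mult(planeDist));
            // Snap the projection onto the polygon, then measure the distance.
            Vector3f snapped = cell.snapToPolygon(onPlane);
            float distSq = position.distanceSquared(snapped);
            if (distSq < bestDistSq) {
                bestDistSq = distSq;
                best = cell;
            }
        }
        return best;
    }
}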
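
And here is a loose 2D sketch of the side probes from steps 4 to 6. Step 6's exact translation leaves some room for interpretation, so treat the push direction as one plausible reading; the Edge2D type is also an assumption for illustration.

import com.jme3.math.Vector2f;
import java.util.List;

final class SideProbeSketch {

    // Illustrative 2D edge (Y discarded), flagged when it is shared with a neighbor.
    static final class Edge2D {
        final Vector2f a, b;
        final boolean isConnection;
        Edge2D(Vector2f a, Vector2f b, boolean isConnection) {
            this.a = a; this.b = b; this.isConnection = isConnection;
        }
    }

    // Steps 4-6: probe one radius to the agent's left and right; if a probe hits
    // a plain (non-connection) edge, translate the next waypoint away from it.
    static Vector2f adjustNextWaypoint(Vector2f agent, Vector2f nextWaypoint,
                                       float radius, List<Edge2D> edges) {
        Vector2f dir = nextWaypoint.subtract(agent).normalizeLocal();
        Vector2f[] sides = {
            new Vector2f(-dir.y, dir.x), // left of the travel direction
            new Vector2f(dir.y, -dir.x)  // right of the travel direction
        };
        for (Vector2f side : sides) {
            Vector2f probeEnd = agent.add(side.mult(radius));
            for (Edge2D edge : edges) {
                // A hit on the shared part of a connection would instead continue
                // into the neighboring cell (step 5); that part is omitted here.
                Vector2f hit = segmentIntersection(agent, probeEnd, edge.a, edge.b);
                if (hit != null && !edge.isConnection) {
                    // Step 6: push the next waypoint two radii away from the wall side.
                    return nextWaypoint.add(side.negate().multLocal(2f * radius));
                }
            }
        }
        return nextWaypoint; // no correction needed
    }

    // Standard parametric segment-segment intersection; returns null if none.
    static Vector2f segmentIntersection(Vector2f p1, Vector2f p2,
                                        Vector2f q1, Vector2f q2) {
        float rx = p2.x - p1.x, ry = p2.y - p1.y;
        float sx = q2.x - q1.x, sy = q2.y - q1.y;
        float denom = rx * sy - ry * sx;
        if (Math.abs(denom) < 1e-6f) return null;          // parallel segments
        float t = ((q1.x - p1.x) * sy - (q1.y - p1.y) * sx) / denom;
        float u = ((q1.x - p1.x) * ry - (q1.y - p1.y) * rx) / denom;
        if (t < 0 || t > 1 || u < 0 || u > 1) return null; // miss
        return new Vector2f(p1.x + t * rx, p1.y + t * ry);
    }
}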

[Screenshots illustrating the algorithm's line casts on the navigation mesh]

 

Here's a video of the algorithm in action:

 



12 Comments


Recommended Comments

I feel like there has to be a simpler way to accomplish this. Can you make the corners have non-zero radius during string pulling? Or treat the corners as circles, and move the string-pulled lines to be tangent to the circles (plus an additional connecting chamfer)?

13 hours ago, swiftcoder said:

I feel like there has to be a simpler way to accomplish this. Can you make the corners have non-zero radius during string pulling? Or treat the corners as circles, and move the string-pulled lines to be tangent to the circles (plus an additional connecting chamfer)?

@miimii1205 did the string pulling algorithm, so I'll tag him. As far as I know, the second option seems really complex. The easiest one would actually be the first option you proposed.

 

I'm sure there are static ways that are better than what I just described, static in the sense that they are computed before the path generation algorithm runs. However, my algorithm is dynamic: it runs at run time and can be limited to the next two waypoints.

 

Also, the algorithm that I use to intersect two 2D lines is performant.



Curious why you didn't use the typical nav mesh implementation where the edges of the navmesh are retracted from the walls by the agent radius, so you don't have to do runtime radius compensation? It also nicely handles a set of other problems: for example, areas too small for the agent to fit are culled from the resulting navigation.

3 hours ago, DrEvil said:

Curious why you didn't use the typical nav mesh implementation where the edges of the navmesh are retracted from the walls by the agent radius, so you don't have to do runtime radius compensation? It also nicely handles a set of other problems: for example, areas too small for the agent to fit are culled from the resulting navigation.

There are two reasons [that I can remember]:

  1. Our navigation meshes are generated in-house, and originally we weren't supposed to add the wall collision avoidance module. We don't want the navigation meshes to depend on each agent's radius. There is actually a way to do what you're talking about with multilayered navigation meshes.
  2. Each agent can have a different radius.

DrEvil


I understand, but typically that means layers are built for each agent radius. It's a trade-off of course, but it solves enough of the issues that it is usually worth it for most games out there, especially when there are freely usable navigation mesh libraries like Recast.

I'm mainly curious what your reasoning is, though. I have spent considerable time in the past on my hobby project experimenting with a manually built, un-retracted navigation mesh, but it was annoying enough to handle the agent radius at runtime that I shelved it. That was many years ago, though. For various reasons I've felt like trying to revive it recently for my particular use case: a bot for older Quake-based FPS engines. Although I am parsing the collision data of the maps to generate the navigation, there is a lot of noise and trash in the output that takes considerable effort to trim out, and still more manual effort to provide contextual information (team-specific markup, etc.).

Here's a really old video showing the tile-based, Recast-driven traditional navigation mesh.

https://www.youtube.com/watch?v=5qYgRA5oINs

The problem is that although I can trim some amount of data from these maps (like brushes marked as sky boxes, etc.), the resulting data set still contains enough collision data that I get, for instance, a lot of navigation on "top" of the world, where the brush flags can't automatically be considered relevant or not. Even though the navigation inside the world is reasonable, there is a lot of garbage navigation as well.

https://imgur.com/a/r3BbBxn (etf_2fort Recast auto navmesh; without hiding a lot of the top brush faces you don't even really see the important navigation of the world)

This is another old video of the non-retracted navmesh I was experimenting with way back, once I got dynamic obstacle support working. There was a lot to like about where this was heading, even though the navmesh was manually built, but I ultimately burned out on compensating for the radius stuff at runtime. There are a lot of edge cases in a real-world 3D game map.

https://www.youtube.com/watch?v=UhSNwaTV3II

I've recently gotten re-interested in my bot and am thinking about giving this approach another go. After spending a lot of time on the automatic Recast-driven building, I'm finding that I still have to spend considerable time fixing up the data, trimming it where I don't want it, marking team-specific regions, and so on, so it doesn't save much time to produce a completed navigation. The manual navmesh creation process is not only very fast with the way I have it set up, but it's also very precise and results in far fewer regions in the final data set. Plus, I have various shortcuts that take advantage of my problem space, such as the tendency for symmetry in the game maps the bot supports, so I can lay out half the map and then mirror all the sectors over to the other half.

Here's a map where only the green areas have been laid out, and the cyan sectors have been mirrored for the other half of the map:

https://imgur.com/a/1r1qlYS

 

DrEvil


There is something super appealing about the manual approach. The quality of the output is way more controllable and optimal in terms of the number and size of regions, etc., and it even includes a number of manual "special" sectors, like floating "ledge" sectors that I cast downwards from to make drop connections and such.

https://imgur.com/a/rTYiRTB

https://imgur.com/a/iUtnE9A

Just gotta solve that pesky radius stuff.

On 5/12/2018 at 12:17 PM, DrEvil said:

I understand, but typically that means layers are built for each agent radius. It's a trade-off of course, but it solves enough of the issues that it is usually worth it for most games out there, especially when there are freely usable navigation mesh libraries like Recast.

Initially, I tried what is described in this tutorial, because our project is in Java and uses jMonkeyEngine 3.1. It was such a hassle just to configure the pathfinding libraries, because CritterAI is so obscure. I had to modify the source code.

What is CritterAI? According to the mentioned tutorial:

Quote
Parameter Insight

The jme3AI system uses CritterAI, which is based off Recast and Detour navigation. The author of Recast lays out a few specific rules for NavMesh creation in this blog post, which logically apply to jme3AI. Below is a translation of this post as it pertains to jme3AI.

Also, the navmesh generator just didn't work and is extremely old. Every time I fixed something related to it, something else would break. So I rage quit and decided that we would make our own navmesh module. That allows us to have more control over the AI.

Another thing to note is that our game has different worlds made of randomly generated levels. Each world's geometry is quite different from the others. For example, we use a navigation mesh for the jungle level because of its organic nature, but we can use A* directly in the savannah because it's made up of voxels generated with the marching cubes algorithm.

Making our own library comes with downsides too: we cannot directly use the awesome features of the libGDX AI library, for example.

Furthermore, we have a lot of units with different radii: humanoids, snakes, bigger snakes, tigers, mini-bosses, bosses, legendary bosses and so on. That would make a lot of layers for the navigation mesh.

 

Now, for your problem and hobby project, more specifically the navigation mesh cells on top of buildings, what you can do is either check that each cell is directly or indirectly connected to a root cell, or check that each of them has at least one connection.

My second proposal is easier but will likely still sometimes output garbage. My first proposal is more expensive in terms of performance and also requires a human to flag the root cell, or a root position if you can query the closest cell to a point.
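
For the first proposal, a simple flood fill from the flagged root cell over the connections would do; everything never visited is garbage. A minimal sketch, with an illustrative Cell interface:

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class ReachabilityFilter {

    interface Cell {
        List<Cell> connections();
    }

    // Breadth-first flood fill: returns every cell reachable from the root.
    // Cells outside this set (e.g. navigation on top of the world) can be culled.
    static Set<Cell> reachableFrom(Cell root) {
        Set<Cell> visited = new HashSet<>();
        ArrayDeque<Cell> queue = new ArrayDeque<>();
        visited.add(root);
        queue.add(root);
        while (!queue.isEmpty()) {
            Cell current = queue.poll();
            for (Cell neighbor : current.connections()) {
                if (visited.add(neighbor)) { // add() is false if already visited
                    queue.add(neighbor);
                }
            }
        }
        return visited;
    }
}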

That's cool to have obstacles!

 

Our needs and problems are actually different: you work on a prebuilt level (mesh), whereas we work with a completely generated level. The best designer for our generated world is actually the generator itself, which is why it is the generator that creates the navigation mesh for our game. The only things that are not in the generator are the octree, the automatic connections (which are a lot of work, by the way) and the cell metadata.

 

What do you want to do with your navmesh generator? Is its purpose only to generate the navmesh of several prebuilt maps, or is it meant to serve a community of modders? Do you plan to add a lot of new maps?


My day job is on http://www.creativersegame.com/ so I'm also familiar with navigating procedural data. Given the voxel nature of our game, we haven't had the need or desire to generate any navmesh data; instead we just path on the grid, since much of the pathing looks at metadata associated with the specific block types being pathed through, and that would be lost by abstracting it up to a navmesh. Some blocks allow a mob to move over them, others don't, etc. I recently added support for pathfinding for larger-than-1-block units. It ended up being relatively simple to get good results while still pathfinding at the grid level: the core changes involved a loop-based check along the horizontal axes to confirm spacing for the desired radius, both for clearance and for whether the node is 'grounded'. It's relatively simple and allows me to maintain the per-block logic we use for 1-block pathing. What is the scale of your game that you need to build nav meshes? Even pathfinding within the visible chunk range, we haven't been tempted to generate any higher-level pathing abstraction on top of the voxel data. If we did, I would probably start with a higher-level chunk-based graph, but still keep the path following and such at the block level.
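
In rough Java, that check looks something like the following; this is a simplified sketch of the idea with an assumed isSolid query, not our real block metadata.

final class BlockClearance {

    // Assumed world query; the real block metadata is much richer than a solid flag.
    interface BlockWorld {
        boolean isSolid(int x, int y, int z);
    }

    // A node fits a unit of the given block radius if every column of the
    // horizontal footprint is clear over the unit's height, and the node is
    // 'grounded' if at least one footprint column has a solid block below it.
    static boolean fits(BlockWorld world, int x, int y, int z, int radius, int height) {
        boolean grounded = false;
        for (int dx = -radius; dx <= radius; dx++) {
            for (int dz = -radius; dz <= radius; dz++) {
                for (int dy = 0; dy < height; dy++) {
                    if (world.isSolid(x + dx, y + dy, z + dz)) {
                        return false; // blocked: the footprint is not clear
                    }
                }
                if (world.isSolid(x + dx, y - 1, z + dz)) {
                    grounded = true;
                }
            }
        }
        return grounded;
    }
}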

12 hours ago, DrEvil said:

My day job is on http://www.creativersegame.com/ so I'm also familiar with navigating procedural data. Given the voxel nature of our game, we haven't had the need or desire to generate any navmesh data; instead we just path on the grid, since much of the pathing looks at metadata associated with the specific block types being pathed through, and that would be lost by abstracting it up to a navmesh. Some blocks allow a mob to move over them, others don't, etc. I recently added support for pathfinding for larger-than-1-block units. It ended up being relatively simple to get good results while still pathfinding at the grid level: the core changes involved a loop-based check along the horizontal axes to confirm spacing for the desired radius, both for clearance and for whether the node is 'grounded'. It's relatively simple and allows me to maintain the per-block logic we use for 1-block pathing. What is the scale of your game that you need to build nav meshes? Even pathfinding within the visible chunk range, we haven't been tempted to generate any higher-level pathing abstraction on top of the voxel data. If we did, I would probably start with a higher-level chunk-based graph, but still keep the path following and such at the block level.

We have different worlds in our game and everything is still "work in progress" and can change. Until yesterday, these were the planned worlds:

  1. Savannah - Zones made up of marching cubes voxels applied like a heightmap
  2. Jungle - Organic brute-force dungeon generation
  3. Goblin cave - Marching cubes voxels with a lot of tunnels and with layers
  4. Sky - Extremely organic and relying a lot on assets

The world we are currently working on is the jungle, which needs generated navigation meshes.

We put a lot of effort into the navmesh, and yesterday we changed the design of the jungle level generator to a classic 2D-array dungeon digger. Reading your comment made me think that maybe we wasted our time on the navmesh, because we could have just used the newly designed 2D array for the A* and string pulling. We could still use our work on the sky world, but we will probably work on that in about six months... Damn it! We try to follow Agile methodologies and still fail to avoid wasting our time. It just proves that I'm more of a bottom-up, backend / engine programmer than what I should be for our game.

 

Nice game by the way :)

 


I definitely understand what it's like to follow rabbit holes. They can feel like the right decision in the moment (not saying it isn't in your case). My hobby AI bot represents over a decade of hobby experimentation: multiple restarts on various systems, lots of experimentation, ~4 different navigation systems. Bots are one of the few areas of modding where an aspiring A.I. developer can do solo work to get noticed in the industry, and it got me my first few jobs at Pandemic and id Software. It can be counterproductive when an actual product is your end goal, though. Sounds like you've got a pretty big scope planned out. Not sure what your team size is and such, but best of luck with it.

1 hour ago, DrEvil said:

I definitely understand what it's like to follow rabbit holes. They can feel like the right decision in the moment (not saying it isn't in your case). My hobby AI bot represents over a decade of hobby experimentation: multiple restarts on various systems, lots of experimentation, ~4 different navigation systems. Bots are one of the few areas of modding where an aspiring A.I. developer can do solo work to get noticed in the industry, and it got me my first few jobs at Pandemic and id Software. It can be counterproductive when an actual product is your end goal, though. Sounds like you've got a pretty big scope planned out. Not sure what your team size is and such, but best of luck with it.

We are a small team of two. I'm the "lead" programmer and my friend is the lead designer; while I believe I'm a talented junior programmer, the word "lead" really only means that I do the most programming. I still have a lot of work to do on the maturity / professionalism side, however :/

Congrats on those jobs! My experiments with voxels and procedural generation won me all the internships that I really wanted in university, and I chose one at Ludia, which is, naturally, a video game development company.

It's been an extremely constructive discussion with you, so thank you very much :)  I hope everything you have planned goes well for you too!


There is a much simpler solution for humanoids: just add a Minkowski sum to the nav mesh while creating it.
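
For a convex cell, that idea boils down to insetting each edge by the agent radius at build time. A loose sketch, assuming counter-clockwise winding and ignoring the re-intersection of adjacent offset edges that a full implementation would need:

import com.jme3.math.Vector2f;

final class EdgeInset {

    // Moves a 2D edge inward by the agent radius along its inward normal.
    // With counter-clockwise winding, the interior lies to the left of each
    // directed edge, so the inward normal of (a -> b) is (b - a) rotated 90 degrees left.
    static Vector2f[] insetEdge(Vector2f a, Vector2f b, float radius) {
        Vector2f dir = b.subtract(a).normalizeLocal();
        Vector2f inward = new Vector2f(-dir.y, dir.x).multLocal(radius);
        return new Vector2f[] { a.add(inward), b.add(inward) };
    }
}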

