# Navigation Meshes and Pathfinding


I recently worked on a pathfinding algorithm used to move an AI agent through an organically generated dungeon.

It's not an easy task, but because I had already worked on Team Fortress 2 maps in the past, I was familiar with navigation meshes (navmeshes) and their capabilities.

## Why Not Waypoints?

As described in this paper, waypoint networks were used in older video games to save valuable resources. It was an acceptable compromise: level designers already knew where NPCs could and could not go. However, as technology evolved, computers gained memory that was faster and cheaper.

In other words, there was a shift from efficiency to flexibility.

In a way, navigation meshes are the evolution of waypoints networks because they fulfill the same need but in a different way.

One advantage of a navigation mesh is that an agent can go anywhere inside a cell, because each cell is convex: by definition, any straight line between two points of the cell stays inside it. It also means the agent is not limited to a fixed network of points; if the destination lies off the network, the agent can go directly to it instead of detouring through the nearest network point. A single navigation mesh can also serve many types of agents of different sizes, rather than requiring a separate waypoint network per agent size.

Using a navigation mesh also speeds up graph exploration because a navigation mesh has fewer nodes than an equivalent waypoint network (that is, one with enough points to cover the same walkable area).

To summarize, a navigation mesh is a mesh that represents where an NPC can walk. A navigation mesh contains convex polygonal nodes (called cells). Each cell can be connected to its neighbors through connections defined by a shared edge (called a portal edge). In a navigation mesh, each cell can also carry information about itself. For example, a cell may be labeled as toxic, so that only units capable of resisting that toxicity can move across it.

Personally, because of my experience, I view navigation meshes like the ones found in most Source games.

However, all cells in Source's navigation meshes are rectangular. Our implementation is more flexible because the cells can be irregular polygons (as long as they're convex).

A navigation mesh implementation is actually three algorithms:

1. A graph search algorithm
2. A string pulling algorithm
3. A steering/path-smoothing algorithm

In our case, we used A*, the Simple Stupid Funnel algorithm, and a traditional steering algorithm that is still in development.

### Finding our cells

Before doing any graph searches, we need to find two things:

1. Our starting cell
2. Our destination cell

For example, let's use this navigation mesh :

In this navigation mesh, every edge shared between two cells is also a portal edge, which will be used by the string pulling algorithm later on.

Also, let's use these points as our starting and destination points:

Where our buddy (let's name him Buddy) stands is our starting point, while the flag represents our destination.

Because we already have our starting and destination points, we just need to find the cell closest to each point using an octree. Once we know the nearest cells, we must project the starting and destination points onto their respective closest cells.

In practice, we project both the starting and destination points along the normals of their respective cells, onto each cell's plane.
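As a rough illustration of that projection step (assuming the cell's plane is described by any point on the cell and its unit normal; the helper names here are mine, not the blog's actual API):

```java
public final class PlaneProjection {
    // Drops a 3D point along the plane's unit normal onto the plane.
    public static double[] projectOntoPlane(double[] p, double[] planePoint, double[] unitNormal) {
        // Signed distance from the point to the plane, measured along the normal.
        double d = (p[0] - planePoint[0]) * unitNormal[0]
                 + (p[1] - planePoint[1]) * unitNormal[1]
                 + (p[2] - planePoint[2]) * unitNormal[2];
        // Move the point back along the normal by that distance.
        return new double[]{p[0] - d * unitNormal[0],
                            p[1] - d * unitNormal[1],
                            p[2] - d * unitNormal[2]};
    }
}
```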

Before snapping a projected point, we must first determine whether it lies outside its cell. We do this by comparing the area of the cell with the sum of the areas of the triangles formed by that point and each edge of the cell. If the latter is noticeably larger than the former, the point is outside its cell.
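A minimal sketch of that area test, assuming the convex cell is given as an ordered list of 2D vertices (the class and method names are illustrative, not from our codebase):

```java
public final class CellTest {
    // Area of the triangle (a, b, c), via half the absolute cross product.
    static double triArea(double ax, double ay, double bx, double by, double cx, double cy) {
        return Math.abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) * 0.5;
    }

    // True if the fan of triangles (point, edge) sums to the cell's own area
    // (within epsilon); a larger sum means the point lies outside the cell.
    public static boolean isInside(double[][] cell, double px, double py, double epsilon) {
        double cellArea = 0.0, fanArea = 0.0;
        for (int i = 0; i < cell.length; i++) {
            double[] a = cell[i], b = cell[(i + 1) % cell.length];
            cellArea += a[0] * b[1] - b[0] * a[1];            // shoelace term
            fanArea += triArea(px, py, a[0], a[1], b[0], b[1]); // point-edge triangle
        }
        cellArea = Math.abs(cellArea) * 0.5;
        return fanArea - cellArea <= epsilon;
    }
}
```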

The snapping then simply consists of interpolating between the vertices of the edge of the cell closest to the projected point. In terms of code, we do this:

```java
Vector3f lineToPoint = pointToProject.subtract(start);
Vector3f line = end.subtract(start);
// Clamp t to [0, 1] so the snapped point stays on the edge segment.
float t = FastMath.clamp(lineToPoint.dot(line) / line.dot(line), 0f, 1f);
Vector3f returnedVector3f = new Vector3f().interpolateLocal(start, end, t);
```

In our example, the starting and destination cells are C1 and C8 respectively:

## Graph Search Algorithm

A navigation mesh is essentially a 2D surface of unknown (possibly unbounded) size. In a 3D game, it is common to represent a navigation mesh as a graph of flat polygons that aren't orthogonal to each other.

There are games that use 3D navigation meshes, like games that use flying AI, but in our case it's a simple grid.

For this reason, the use of the A* algorithm is probably the right solution.

We chose A* because it's the most generic and flexible algorithm. Technically, we still do not know how our navigation mesh will be used, so going with something more generic can have its benefits...

A* works by assigning a cost and a heuristic to each cell. The cost is the distance traveled so far from the start, accumulated from the previous cell, while the heuristic estimates how far the cell is from the destination: the closer the cell, the smaller the heuristic. Together they score the cell, which means that the longer a path gets, the higher its score becomes, and the less likely that path is to be optimal.

We begin the algorithm by traversing the connections of each neighboring cell of the current cell until we arrive at the destination cell, doing a sort of exploration/flood fill. Each cell begins with an infinite score but, as we explore the mesh, scores are updated with the information we learn. At the start, our starting cell gets a cost and a heuristic of 0 because the agent is already inside it.

We keep a queue of cells in ascending order of score, so the next cell to use as the current cell is always the best candidate for an optimal path. When a cell is processed, it is moved from that queue into another one that contains the closed cells.

While continuing to explore, we also keep a reference of the connection used to move from the current cell to its neighbor. This will be useful later.

We do this until we end up in the destination cell. Then we "reel" back up to our starting cell, saving each cell we landed on, which gives an optimal path.

A* is a very popular algorithm and the pseudocode can easily be found. Even Wikipedia has a pseudocode that is easy to understand.
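For concreteness, here is a minimal sketch of A* over navmesh cells. Cell centers plus an adjacency list stand in for our actual cell and connection classes, and the straight-line distance serves as both edge cost and heuristic; all the names here are assumptions for illustration, not our real API.

```java
import java.util.*;

public final class AStar {

    // Straight-line distance between two cell centers, used both for edge
    // costs and as the heuristic estimate of the remaining distance.
    static double dist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static List<Integer> findPath(double[][] centers, int[][] neighbors,
                                         int start, int goal) {
        double[] g = new double[centers.length];      // cost so far per cell
        Arrays.fill(g, Double.POSITIVE_INFINITY);
        int[] cameFrom = new int[centers.length];     // traversed connection
        Arrays.fill(cameFrom, -1);
        g[start] = 0.0;

        // Open queue ordered by ascending score f = g + h.
        PriorityQueue<double[]> open =
            new PriorityQueue<>(Comparator.comparingDouble((double[] e) -> e[0]));
        open.add(new double[]{dist(centers[start], centers[goal]), start});
        Set<Integer> closed = new HashSet<>();

        while (!open.isEmpty()) {
            int current = (int) open.poll()[1];
            if (current == goal) {
                // "Reel" back up to the start to recover the optimal path.
                LinkedList<Integer> path = new LinkedList<>();
                for (int c = goal; c != -1; c = cameFrom[c]) path.addFirst(c);
                return path;
            }
            if (!closed.add(current)) continue;       // stale queue entry, skip
            for (int n : neighbors[current]) {
                double tentative = g[current] + dist(centers[current], centers[n]);
                if (tentative < g[n]) {
                    g[n] = tentative;
                    cameFrom[n] = current;
                    open.add(new double[]{tentative + dist(centers[n], centers[goal]), n});
                }
            }
        }
        return Collections.emptyList();               // no path found
    }
}
```

Re-adding a cell with a better score (instead of updating it in place) keeps the queue simple; the closed set discards stale entries.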

In our example, we find that this is our path:

And here are highlighted (in pink) the traversed connections:

### The String Pulling Algorithm

String pulling is the next step in the navigation mesh algorithm. Now that we have a queue of cells describing an optimal path, we have to find a queue of points that an AI agent can actually travel through. This is where string pulling is needed.

String pulling is in fact not linked to characters at all: it is rather a metaphor.

Imagine a cross. If you wrap a silk thread around this cross and put tension on it, you will find that the thread does not follow the inner corners; rather, it goes from outer corner to outer corner of each arm of the cross.

This is precisely what we're doing but with a string that goes from one point to another.

There are many different algorithms that let us do this. We chose the Simple Stupid Funnel algorithm because it's actually...

...stupidly simple.

To put it simply (no pun intended), we sweep a funnel that checks each time whether the next portal vertex is inside the funnel or not. The funnel is composed of 3 points: a central apex, a left point (the left apex) and a right point (the right apex). At the beginning, the tested point is on the right side, then we alternate to the left and so on until we reach our destination point (as if we were walking).

When a point is in the funnel, we continue the algorithm with the other side.

If the point is outside the funnel, then depending on which side the tested point crossed, we take the apex from the other side of the funnel and add it to the list of final waypoints.

The algorithm works correctly most of the time. However, it had a bug that added the last point twice when none of the vertices of the last portal before the destination point had been added to the list of final waypoints. For now we just added an if, but we could come back later and clean that up.
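To make the sweep concrete, here is a hedged Java port of the Simple Stupid Funnel algorithm (following Mikko Mononen's well-known public sketch, in 2D, with portals given as left/right vertex pairs as seen while walking the path; the first portal is the start point twice, the last is the destination twice). It also includes the "added an if" guard for the duplicated last point described above. This is an illustration, not our exact implementation.

```java
import java.util.ArrayList;
import java.util.List;

public final class Funnel {

    // Twice the signed area of triangle (a, b, c); negative when c is left of a->b.
    static double triArea2(double[] a, double[] b, double[] c) {
        double ax = b[0] - a[0], ay = b[1] - a[1];
        double bx = c[0] - a[0], by = c[1] - a[1];
        return bx * ay - ax * by;
    }

    static boolean vEqual(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy < 1e-12;
    }

    public static List<double[]> stringPull(double[][] left, double[][] right) {
        List<double[]> pts = new ArrayList<>();
        double[] apex = left[0], portalLeft = left[0], portalRight = right[0];
        int apexIndex = 0, leftIndex = 0, rightIndex = 0;
        pts.add(apex);
        for (int i = 1; i < left.length; i++) {
            double[] l = left[i], r = right[i];
            // Update the right side of the funnel.
            if (triArea2(apex, portalRight, r) <= 0.0) {
                if (vEqual(apex, portalRight) || triArea2(apex, portalLeft, r) > 0.0) {
                    portalRight = r; rightIndex = i;          // tighten the funnel
                } else {
                    pts.add(portalLeft);                      // right crossed over left:
                    apex = portalLeft; apexIndex = leftIndex; // left apex becomes the apex
                    portalLeft = apex; portalRight = apex;    // reset the funnel
                    leftIndex = apexIndex; rightIndex = apexIndex;
                    i = apexIndex;                            // restart the scan
                    continue;
                }
            }
            // Update the left side of the funnel (mirror of the above).
            if (triArea2(apex, portalLeft, l) >= 0.0) {
                if (vEqual(apex, portalLeft) || triArea2(apex, portalRight, l) < 0.0) {
                    portalLeft = l; leftIndex = i;
                } else {
                    pts.add(portalRight);
                    apex = portalRight; apexIndex = rightIndex;
                    portalLeft = apex; portalRight = apex;
                    leftIndex = apexIndex; rightIndex = apexIndex;
                    i = apexIndex;
                    continue;
                }
            }
        }
        // Append the destination unless a corner test already added it
        // (the guard for the duplicated-last-point bug mentioned above).
        double[] end = left[left.length - 1];
        if (!vEqual(pts.get(pts.size() - 1), end)) pts.add(end);
        return pts;
    }
}
```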

In our case, the funnel algorithm gives this path:

### The Steering Algorithm

Now that we have a list of waypoints, we can finally just run our character at every point.

But if there were walls in our geometry, Buddy would run straight into a corner. He wouldn't be able to reach his destination because he isn't small enough to slip past the corner walls.

That's the role of the steering algorithm.

Our algorithm is still in heavy development, but the main gist is that we check whether the agent's next position is still inside the navigation mesh. If it isn't, we change the agent's direction so that he doesn't hit the wall like an idiot. There is also a path curving algorithm, but it's still too early to know if we'll use it at all...
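The idea above can be sketched roughly like this: seek the next waypoint, but if the predicted position leaves the walkable area, slide the velocity along instead of walking into the wall. The `isWalkable` predicate stands in for the navmesh query, and all the names here are assumptions for illustration, not our actual implementation:

```java
import java.util.function.BiPredicate;

public final class Steering {
    // Returns a velocity that seeks "target" but avoids leaving the walkable area.
    public static double[] steer(double[] pos, double[] target,
                                 double maxSpeed, double dt,
                                 BiPredicate<Double, Double> isWalkable) {
        // Desired velocity points straight at the waypoint at full speed (seek).
        double dx = target[0] - pos[0], dy = target[1] - pos[1];
        double len = Math.hypot(dx, dy);
        double[] desired = len > 1e-9
            ? new double[]{dx / len * maxSpeed, dy / len * maxSpeed}
            : new double[]{0, 0};
        // Predict the next position; if it falls off the navmesh, try sliding
        // along each axis separately instead of stopping dead at the wall.
        double nx = pos[0] + desired[0] * dt, ny = pos[1] + desired[1] * dt;
        if (isWalkable.test(nx, ny)) return desired;
        if (isWalkable.test(nx, pos[1])) return new double[]{desired[0], 0};
        if (isWalkable.test(pos[0], ny)) return new double[]{0, desired[1]};
        return new double[]{0, 0};
    }
}
```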

We relied on this document to program the steering algorithm. It's from 1999, but it's still interesting...

With the steering algorithm, we make sure that Buddy moves safely to his destination. (Look how proud he is!)

So, this is the navigation mesh algorithm.

I must say that, throughout my research, there wasn't much pseudocode or code describing the algorithm as a whole. Only then did we realize that what people call a "navmesh" is actually a collage of algorithms rather than a single monolithic one.

We also tried a cyclic grid with orthogonal cells (i.e. cells on the walls and ceiling), but it looked like A* wasn't meant to be used in a 3D environment with flat orthogonal cells. My hypothesis is that we would need 3D cells for that kind of navigation mesh; otherwise the heuristic of each cell varies with the actual 3D distance between the center of a flat cell and the destination point.

So we reduced the scope of our navigation meshes and we were able to move an AI agent in our organic dungeon. Here's a picture :

Each cyan cube is a final waypoint found by the string pulling, and the blue lines represent collision meshes. Our AI currently still walks into walls, but the steering is still being implemented.

Good article. As well as A*, also consider the Floyd-Warshall algorithm, which is very easy to implement and good if you want to pre-calculate all the paths from each cell to every other cell in the mesh.

Thumbs up for a very good article.

Nice article!

As I've been implementing NavMeshes myself - in most online resources I've missed some further description (majority of articles mention Funnel algorithm, but very few actually describe it in pseudo-code - that is the one thing I'm missing in this article, although from my own experience I know it takes a lot of time to write articles, and they simply can't contain everything).

Now, as for NavMesh itself (and my personal curiosity!) - how do you generate navmeshes? Do you take agent sizes into account (and how?)?

Just now, Vilem Otte said:

Nice article!

As I've been implementing NavMeshes myself - in most online resources I've missed some further description (majority of articles mention Funnel algorithm, but very few actually describe it in pseudo-code - that is the one thing I'm missing in this article, although from my own experience I know it takes a lot of time to write articles, and they simply can't contain everything).

Now, as for NavMesh itself (and my personal curiosity!) - how do you generate navmeshes? Do you take agent sizes into account (and how?)?

Now THAT'S the question!

It really depends on the type of terrain used. If the terrain geometry is quite simple then maybe just build the navmesh by using the terrain directly.

Another idea: if everything is static, we could simply use a static navmesh and create a cell for each triangle...

In our case, we had a simple room-corridor layout so most of our navmesh was rectangular, so it was quite simple.

While I was building the graph traversal algorithm @thecheeselover was doing all of the navmesh baking so really it's up to you to find the best approach for the job.


36 minutes ago, Vilem Otte said:

Do you take agent sizes into account (and how?)?

26 minutes ago, jb-dev said:

While I was building the graph traversal algorithm @thecheeselover was doing all of the navmesh baking so really it's up to you to find the best approach for the job.

You can either adapt the pathfinding algorithm to suit the agent sizes or you can adapt the navmesh generation to do so. From what I've seen, there are more articles about adapting the navmesh rather than the pathfinding algorithm. However, I did write a blog entry about how I tackled that problem by taking into account agent sizes during the pathfinding algorithm pass. Here's the link.

Edited by thecheeselover

2 minutes ago, thecheeselover said:

You can either adapt the path-finding algorithm to suit the agent sizes or you can adapt the navmesh generation to do so. From what I've seen, there are more articles about adapting the navmesh rather than the path finding algorithm. However, I did write a blog entry about how tackled that problem by taking into account agent sizes during the path finding algorithm pass. Here's the link.

Thumbs up, and I double-confirm the post. The problem is not really (only) in how famous and/or hard the technique you use is (even though coding difficulty can matter a lot), but that there are more articles about certain pathfinding techniques than actual code and/or pseudocode on how to implement them. @thecheeselover I will read the article later, and will comment on it if need be.

@jb-dev and @thecheeselover I see thanks.

In terms of navmesh generation, I have tried:

• navmeshes directly created by artists (these tend to be bad, as artists generally don't understand navmeshes that well - but they asked whether they could have control ... this didn't work in the end)
• navmeshes generated by artists by marking surfaces walkable/obstacle - the geometry then gets voxelized (with varying voxel size - this can be used to simulate navmesh agent size ... first voxelize walkables, then 'subtract' obstacles) and then triangulated; this works quite well (although it is computationally quite heavy - but can be preprocessed)
• the same as the second, but the geometry doesn't get voxelized; instead it gets clipped (supports just bounding boxes for non-walkables for now) - generally, clip your walkable geometry with non-walkables, and then triangulate the result. Varying agent size could then be solved similarly to what you have described.

Currently I'm still sticking with the middle one.

Quote

I must say that, throughout my research, there weren't much pseudocode or code that described the algorithm as a whole. Only then did we realize that what people called "Navmesh" was actually a collage of algorithms rather than a single monolithic one.

Technically, only your spatial data structure, i.e. the replacement for waypoints, is the navmesh. Ultimately most navigation/pathfinding systems have 3 components which you tweak to work well together:

• spatial graph (to store the information about the world as a graph)
• graph search (to find a logical route through the graph)
• path post-processing (to translate the route into an effective set of instructions for an agent).

@Kylotan I was more pointing directly to the Funnel algorithm in case of Navmesh pathfinding (which is technically post processing step).

My problem at the time was - that I had a planet with features on it (in reality it could really be any geometry - toroids, various space stations, etc.). By voxelization (voxelize all walkables, and then subtract all non-walkables) and then applying marching cubes algorithm I built a navigational mesh - which I also used to create a graph (i.e. triangles being nodes and edges of graph were connecting neighboring triangles). Finding the list of nodes to traverse when going from point A to point B was done with A* - so again straightforward.

The problem was path optimization - making the path look nice - i.e. a string pulling algorithm. The pseudocode in articles for those is mostly done in 2D, assuming a planar map with a single upwards direction. There were multiple solutions I tried back then:

1. treat it as spherical geometry (which works very well for such planet-like shapes - as long as you move on the surface); spherical geometry is not that hard once you get used to it
2. another option is to use a common 'upward direction' for each 2 adjacent triangles you walk over, e.g. normalize(N1 + N2) where N1 and N2 are the triangles' normals. Project both triangles along this direction and you have your 2D coordinates for the current step of the Funnel algorithm
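A small sketch of that second option: build a shared "up" from the two adjacent triangle normals, then project 3D points into the plane perpendicular to it to get 2D coordinates for one funnel step. The tangent basis is chosen arbitrarily, and the helper names are mine, not from the post:

```java
public final class PortalProjection {
    static double[] normalize(double[] v) {
        double l = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new double[]{v[0] / l, v[1] / l, v[2] / l};
    }

    static double[] cross(double[] a, double[] b) {
        return new double[]{a[1] * b[2] - a[2] * b[1],
                            a[2] * b[0] - a[0] * b[2],
                            a[0] * b[1] - a[1] * b[0]};
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Projects p to 2D using up = normalize(n1 + n2) and an arbitrary tangent frame.
    public static double[] project(double[] p, double[] n1, double[] n2) {
        double[] up = normalize(new double[]{n1[0] + n2[0], n1[1] + n2[1], n1[2] + n2[2]});
        // Pick any helper vector not parallel to "up" to build the tangent frame.
        double[] helper = Math.abs(up[0]) < 0.9 ? new double[]{1, 0, 0} : new double[]{0, 1, 0};
        double[] tangent = normalize(cross(up, helper));
        double[] bitangent = cross(up, tangent);
        return new double[]{dot(p, tangent), dot(p, bitangent)};
    }
}
```

For flat ground (both normals pointing up) this reduces to dropping the height axis, which is why the usual 2D funnel pseudocode is a special case of it.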

I had already finished it back then and was quite satisfied with the results. While that personal project never got finished, I still use the pathfinding algorithm and navmesh generation in my source code.

Note: If I'm not mistaken (correct me if I'm wrong) - most commercial engines do not support navmeshes like these (and I know this is a bit of a corner-case scenario). I.e. try generating a navmesh, in the engine of your choice, for a sphere on which characters can walk as on a planet (i.e. gravity points toward its center) - I don't think it is supported at all (at least it wasn't last time I checked in Unity).

In theory a navmesh can cover any surface, but yeah, in practice, it's common to 'simplify' the process to assume you're going to only walk across the upper edge of a surface facing up - this allows the algorithm to also make assumptions about slopes that are too steep to traverse, etc.
