[LOSING MY MIND] How to create a navmesh from scratch?

Started by GoldSpark
9 comments, last by GoldSpark 1 month ago

So I have been focusing on AI programming, and so far I have learned A*, state machines, behavior trees, steering behaviors, and how to implement them all.

I have come across the fact that navmeshes are now widely used, but I can't find a single tutorial on how to create one in my own game engine. How do you bake it from scratch, with no libraries like Recast?

What do I need to do to make those blue walkable areas like in Unity and connect them so they become a graph? I know about triangles, but is there a tutorial on how to do this?

It was very fun using everything I have learned from AI programming but I got stuck here on this and now I have lost all motivation for AI programming…

For example, how can I do it so those blue polygons (like in Unity) go around obstacles in my map?





Building a navmesh can be difficult if it needs to handle arbitrary 3D geometry. If your environment is 2.5D then it could be much more straightforward. The main requirement is to have some definition of “walkable” areas (e.g. slope not too steep, no obstacles), and then to represent those walkable areas with a set of convex 2D polygons connected in a graph data structure. I'd first try something like this:

  • Generate a bunch of points throughout the environment (random or uniformly spaced).
  • Determine for each point if it is “walkable”, e.g. use a capsule intersection test to see if there are obstacles.
  • Somehow cluster the walkable points into convex regions. You might cluster points based on slope and/or height, then decompose those clusters into convex parts using convex decomposition algorithms.
  • Build connectivity graph of convex regions.

However this doesn't seem guaranteed to work in all cases. The 3rd and 4th steps could be difficult to get right.
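Here is a minimal sketch of the first two steps from the list above (the sampling and the walkability test), assuming a hypothetical Scene interface that wraps your engine's collision queries; all names are made up for illustration:

```cpp
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical scene interface; plug in your engine's collision queries here.
struct Scene {
    // True if a vertical capsule of the agent's size placed at 'base' overlaps any obstacle.
    bool CapsuleOverlaps(const Vec3& base, float radius, float height) const;
    // Casts a ray straight down from 'from'; returns true on a hit and writes
    // the hit position and surface normal.
    bool RaycastDown(const Vec3& from, Vec3& hitPos, Vec3& hitNormal) const;
};

struct WalkableSample { Vec3 position; Vec3 normal; };

// Steps 1 and 2: scatter points over the level bounds and keep those that are walkable.
std::vector<WalkableSample> SampleWalkablePoints(const Scene& scene,
                                                 Vec3 boundsMin, Vec3 boundsMax,
                                                 int count, float agentRadius,
                                                 float agentHeight, float maxSlopeCos)
{
    std::vector<WalkableSample> samples;
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> ux(boundsMin.x, boundsMax.x);
    std::uniform_real_distribution<float> uz(boundsMin.z, boundsMax.z);

    for (int i = 0; i < count; ++i)
    {
        // Drop a point from the top of the level and find the ground below it.
        Vec3 start{ ux(rng), boundsMax.y, uz(rng) };
        Vec3 ground, normal;
        if (!scene.RaycastDown(start, ground, normal))
            continue;                                    // no ground here (hole, outside level)
        if (normal.y < maxSlopeCos)
            continue;                                    // slope too steep to walk on
        if (scene.CapsuleOverlaps(ground, agentRadius, agentHeight))
            continue;                                    // an obstacle occupies the agent's space
        samples.push_back({ ground, normal });
    }
    return samples;
}
```

The hard part (clustering those samples into convex regions and connecting them) is what the caveat above is about.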

Instead, I'd suggest trying a different data structure entirely. A “probabilistic roadmap” is another way to build a graph compatible with the A* algorithm, and it is much easier to construct (it can even be built dynamically at run time). The roadmap data structure is often used in the robotics and motion-planning fields. Essentially it is just a graph of connected points in the environment, with a few requirements: points should be placed no closer to obstacles than some minimum distance (i.e. the agent's radius), and connected points should be visible to each other (using a swept-sphere intersection test, not just a single ray). It might be constructed using an approach like this:

  • Generate random points in the environment's empty space using some approach (rapidly exploring random tree, random ray tracing, recursive octree subdivision, etc.). Discard points that are too close to obstacles, or high in the air.
  • Build an acceleration structure (BVH, KD tree) for the points to speed up neighbor queries.
  • For each point, determine potential nearby neighbors within some distance (e.g. twice the average point spacing).
  • Evaluate each neighbor to see if it can be connected with the current point. This could be done with a swept-sphere intersection test, or approximated by tracing many random rays between random points on the surfaces of the two spheres (a rough sketch follows below).
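Putting those construction steps into a rough sketch (a brute-force O(n²) neighbor search stands in for the BVH/k-d tree, and SweptSphereClear is a hypothetical wrapper around your engine's sweep test):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

inline float Dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

struct RoadmapNode {
    Vec3 position;
    std::vector<int> neighbors;   // indices of connected nodes
};

// Hypothetical scene query: true if a sphere of the agent's radius can be swept
// from 'a' to 'b' without hitting anything. Replace with your engine's sweep test.
bool SweptSphereClear(const Vec3& a, const Vec3& b, float radius);

// Build the roadmap from points that have already been sampled and filtered
// (i.e. they are at least agentRadius away from obstacles).
std::vector<RoadmapNode> BuildRoadmap(const std::vector<Vec3>& points,
                                      float connectRadius, float agentRadius)
{
    std::vector<RoadmapNode> graph(points.size());
    for (size_t i = 0; i < points.size(); ++i)
        graph[i].position = points[i];

    for (size_t i = 0; i < points.size(); ++i)
    {
        for (size_t j = i + 1; j < points.size(); ++j)
        {
            if (Dist(points[i], points[j]) > connectRadius)
                continue;                                   // too far apart
            if (!SweptSphereClear(points[i], points[j], agentRadius))
                continue;                                   // an obstacle blocks the edge
            graph[i].neighbors.push_back((int)j);           // undirected edge
            graph[j].neighbors.push_back((int)i);
        }
    }
    return graph;
}
```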

Then you can use this roadmap data structure with A* for planning paths through the environment. Given start and goal positions, first find the graph nodes nearest to the start and the goal. Then use A* to plan a path through the graph between those two nodes. The agent should keep track of its current intermediate goal node along the path and always move toward that intermediate goal. At every time step, the agent should check the next node in the path for visibility to see if the intermediate goal should be advanced. If the next node is visible, it should become the intermediate goal. Eventually the next goal will be the destination, and the agent can move toward that to reach the final goal. The roadmap data structure ensures that this produces collision-free paths, because it is constructed with sufficient offset from nearby obstacles.
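A minimal sketch of that path-following part (the A* search itself is the standard graph search you already know; SweptSphereClear is the same hypothetical visibility test as in the previous sketch):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
bool SweptSphereClear(const Vec3& a, const Vec3& b, float radius); // same hypothetical test as above

// Tracks the agent's progress along a path of waypoints produced by A*.
struct PathFollower {
    std::vector<Vec3> waypoints;   // graph node positions from start to goal
    size_t current = 0;            // index of the current intermediate goal

    // Call once per frame; returns the point the agent should steer toward.
    Vec3 Update(const Vec3& agentPos, float agentRadius)
    {
        // If the next waypoint is directly reachable, skip ahead to it.
        // Repeating this each frame lets the agent cut corners safely.
        while (current + 1 < waypoints.size() &&
               SweptSphereClear(agentPos, waypoints[current + 1], agentRadius))
        {
            ++current;
        }
        return waypoints[current];
    }

    bool ReachedEnd() const { return current + 1 >= waypoints.size(); }
};
```

Each frame the agent simply steers toward the point returned by Update() using whatever steering behaviors you already have.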

I would voxelize the scene, make an isosurface from the resulting density volume, and use mesh reduction on the result.

This is robust, but only if the initial voxelization has high enough resolution to capture the thinnest walls.
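In outline (every function below is a hypothetical placeholder for a real implementation: conservative voxelization, marching cubes, a slope filter, and mesh decimation), that pipeline might look like:

```cpp
struct Mesh { /* vertex and triangle arrays */ };
struct VoxelGrid { /* density values on a regular 3D grid */ };

// Hypothetical stage functions; each hides a substantial algorithm.
VoxelGrid Voxelize(const Mesh& level, float voxelSize);
Mesh ExtractIsosurface(const VoxelGrid& grid, float isoValue);
Mesh FilterBySlope(const Mesh& surface, float maxSlopeCos);
Mesh Decimate(const Mesh& surface, float targetError);

Mesh BuildNavSurface(const Mesh& level, float voxelSize, float maxSlopeCos)
{
    // 1. Rasterize the level geometry into a density volume.
    //    The voxel size must be small enough to capture the thinnest walls.
    VoxelGrid grid = Voxelize(level, voxelSize);

    // 2. Extract an isosurface (e.g. marching cubes) from the density volume.
    Mesh surface = ExtractIsosurface(grid, 0.5f);

    // 3. Keep only up-facing triangles the agent can actually stand on.
    Mesh walkable = FilterBySlope(surface, maxSlopeCos);

    // 4. Reduce the triangle count so the navmesh stays small.
    return Decimate(walkable, voxelSize);
}
```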

But maybe the method proposed above (generating random samples on the surface, then tracing rays to find adjacent samples) is less work / more efficient.

GoldSpark said:
It was very fun using everything I have learned from AI programming but I got stuck here on this and now I have lost all motivation for AI programming…

I have worked on such geometry-processing things for many years now, and it's difficult, frustrating, and often just boring and tedious.

Look for libraries. This part has nothing to do with AI, but it will cost you a lot of time if you do it all yourself.

I wonder how Valve did it for CS 1.6, and how people did it for Halo: Combat Evolved…

CS 1.6 did it with squares, not triangles, and I am not sure if Halo CE used some library for its navmesh.


GoldSpark said:
I wonder how Valve did it for CS 1.6, and how people did it for Halo: Combat Evolved… CS 1.6 did it with squares, not triangles, and I am not sure if Halo CE used some library for its navmesh.

For small-ish games like those, they probably just had the level designer place a navmesh by hand, without automatic tools (e.g. you might create a set of connected convex polygons in 3DS Max that represent the navmesh, then export those as a mesh). It's quicker/easier to pay a level designer to do that for games of limited scope than to pay a programmer to develop an automatic system that works well in all the problematic cases.

@Aressera Ohhh awesome. Thank you very much. Makes sense.

So can I still focus on the decision-making and steering parts of AI programming without having to worry about implementing an automatic navmesh system?

But CS 1.6 has an automatic tool that generates the navmesh for a map, and it generates it as squares.


GoldSpark said:
I wonder how Valve did it for CS 1.6

Pretty sure those are BSP levels. That makes the extraction of walkable polygons and their connectivity very easy, so robust automation is easy too.

In contrast, if we compose the level from arbitrary models, the problem becomes very hard. Models are often not manifold, so it is not well defined which space is solid and which is empty. The most robust method is to integrate the whole surface against a query point to obtain a winding number, but the basic implementation is very slow.
The problem becomes worse due to the overlap of the many models composing a complex scene.
Automatic solutions will likely have errors that require some manual work to fix.
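For reference, a brute-force sketch of that winding-number test: the generalized winding number is the sum of the signed solid angles of all triangles as seen from the query point, and a value near 1 means the point is inside solid space. A practical implementation needs a hierarchical acceleration structure on top of this:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
inline Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
inline float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
inline Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
inline float Length(const Vec3& a) { return std::sqrt(Dot(a, a)); }

struct Triangle { Vec3 a, b, c; };

// Generalized winding number of a triangle soup at point p: the sum of the signed
// solid angles of all triangles, divided by 4*pi. A result near 1 means p is inside
// solid space, near 0 means outside; non-manifold or overlapping geometry gives
// fractional values. Brute force, O(#triangles) per query.
float WindingNumber(const std::vector<Triangle>& mesh, const Vec3& p)
{
    const double fourPi = 4.0 * 3.14159265358979323846;
    double total = 0.0;
    for (const Triangle& t : mesh)
    {
        Vec3 a = t.a - p, b = t.b - p, c = t.c - p;
        float la = Length(a), lb = Length(b), lc = Length(c);
        // Signed solid angle of one triangle as seen from p (Van Oosterom-Strackee formula).
        float num = Dot(a, Cross(b, c));
        float den = la*lb*lc + Dot(a, b)*lc + Dot(b, c)*la + Dot(c, a)*lb;
        total += 2.0 * std::atan2(num, den);
    }
    return (float)(total / fourPi);
}
```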

@JoeJ So BSP levels have information on what is already walkable? And then all the algorithm does is create nodes out of those polygons?


GoldSpark said:
So BSP levels have information on what is already walkable? And then all the algorithm does is create nodes out of those polygons?

BSP trees define a notion of solid and empty space (their cells, which are convex polyhedra, are either solid or empty).
Thus it's trivial to extract the surface (any renderer does this), and you can, for example, filter by normal dot upVector to find the walkable surface. (Ray tracing the BSP to check height is also possible.)
Finding adjacent surface polygons across an edge is possible as well.
Finally, each surface polygon becomes a node, and adjacent polygons form edges in a graph data structure, which you can use for pathfinding algorithms.
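A rough sketch of that last step, assuming the surface polygons have already been extracted from the BSP (the types and the maxSlopeCos parameter are just for illustration):

```cpp
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

struct SurfacePolygon {
    std::vector<int> vertexIndices;  // indices into a shared vertex array
    Vec3 normal;
};

struct NavNode {
    int polygonIndex;                // which surface polygon this node represents
    std::vector<int> neighbors;      // nav nodes sharing an edge with this one
};

// Build a walkable-polygon graph. maxSlopeCos is cos(max walkable slope); up is +Y.
std::vector<NavNode> BuildNavGraph(const std::vector<SurfacePolygon>& polys, float maxSlopeCos)
{
    std::vector<NavNode> nodes;
    std::vector<int> polyToNode(polys.size(), -1);

    // Keep only polygons facing "up enough" (the normal dot upVector test).
    for (size_t i = 0; i < polys.size(); ++i)
    {
        if (polys[i].normal.y >= maxSlopeCos)
        {
            polyToNode[i] = (int)nodes.size();
            nodes.push_back({ (int)i, {} });
        }
    }

    // Two walkable polygons that share an edge (a pair of vertex indices)
    // become neighbors in the graph.
    std::map<std::pair<int, int>, int> edgeToNode;
    for (size_t i = 0; i < polys.size(); ++i)
    {
        if (polyToNode[i] < 0) continue;
        const auto& v = polys[i].vertexIndices;
        for (size_t k = 0; k < v.size(); ++k)
        {
            int v0 = v[k], v1 = v[(k + 1) % v.size()];
            auto edge = std::make_pair(std::min(v0, v1), std::max(v0, v1));
            auto it = edgeToNode.find(edge);
            if (it == edgeToNode.end()) {
                edgeToNode[edge] = polyToNode[i];        // first polygon seen on this edge
            } else {
                nodes[it->second].neighbors.push_back(polyToNode[i]);
                nodes[polyToNode[i]].neighbors.push_back(it->second);
            }
        }
    }
    return nodes;
}
```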

So yes, older game levels built from BSP would make your AI research a lot easier.

Thank you very much.


