# Pathfinding Navigation Mesh : Wall Collision Avoidance



# Introduction

In our 3D game (miimii1205), we use a dynamically created navigation mesh to navigate on procedurally generated terrain. Until our last agile sprint, we relied specifically on the A* and string pulling algorithms. We recently added two new behaviors to the pathfinding: steering and wall collision avoidance. In this article, I will describe a simple way I found to keep agents from walking into walls.

# Configuration

• 3D or 2D navigation mesh, as long as the 3D one is not cyclic.
• Navigation cells and their polygonal edges, connections (to other cells), shared edges (the line segment where two connected cells meet), centroids and normals.
• A path generated by A* and string pulling (not tested without string pulling), consisting of waypoints on the navigation mesh.

# The Algorithm

The agent is the pink low-poly humanoid and the final destination is the flag.

The ideal (yet unoptimized) algorithm would be to cast an oriented rectangle between each pair of consecutive waypoints, with a width of two times the agent's radius; think of the agent's center position as being in the middle of that width. However, that algorithm is too complicated, too long to develop for our game, and overkill for our needs, so I thought of another one, which admittedly has its drawbacks but is better suited to our game.

The algorithm is the following:

1. For each waypoint, take the current one and the next one, stopping once the next one is the last.
2. Iterate over the edges of the current navigation cell, which is determined by the agent's 3D position. To do that, you need a fast spatial way to determine the closest cell to a 3D point. Our way is to first use an octree to partition the navigation mesh, then gather all the cells that are close to the point, plus an extra buffer. To find the cell closest to the point, project the position onto each gathered cell; this can be done using the cells' centroids and normals. Don't forget to snap the projected position onto the cell. After that, compute the length of the resulting vector and pick the cell with the smallest one.
3. Convert each edge to a 2D line by discarding the Y component (the UP vector).
4. For each side, left and right (defined by the agent's position and its direction towards the next waypoint), cast a 2D line that starts from the agent's position, goes towards one of the two directions perpendicular to the direction to the next waypoint, and has a length equal to the agent's radius.
5. If there is an intersection on a connection, and it lies on the shared part of that connection, continue with the connected cell's edges.
6. If there is any intersection other than the one in #5, create a new waypoint before the next waypoint. The new waypoint's position is the intersection's position translated by two times the radius along the agent's current direction, treated as a 2D line. Apply the same translation to the next waypoint.
7. Cast two 2D lines, one on each side of the agent as described before, starting from the sides, going in the same direction as the agent, with a length equal to the distance between the current and next waypoint. Apply #5. If there is an intersection on a connection and it lies on the unshared part of the connection, do #6. If there is an intersection on a simple edge, translate the next waypoint as described in #6.
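The closest-cell search in step 2 can be sketched as follows. This is a minimal illustration, not our actual code: the `Vec3` and `Cell` classes are hypothetical stand-ins, the octree query is assumed to have already produced the candidate list, and the snapping of the projected point onto the cell polygon is omitted for brevity.

```java
import java.util.List;

public class ClosestCell {
    public static class Vec3 {
        public final double x, y, z;
        public Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        public Vec3 sub(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
        public double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
    }

    public static class Cell {
        public final Vec3 centroid, normal; // normal assumed unit-length
        public Cell(Vec3 centroid, Vec3 normal) { this.centroid = centroid; this.normal = normal; }
    }

    // Distance from the point to the cell's plane: |(p - centroid) . n|
    public static double planeDistance(Vec3 p, Cell c) {
        return Math.abs(p.sub(c.centroid).dot(c.normal));
    }

    // Among the candidates returned by the octree query, keep the cell
    // whose plane is closest to the point.
    public static Cell closest(Vec3 p, List<Cell> candidates) {
        Cell best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Cell c : candidates) {
            double d = planeDistance(p, c);
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }
}
```

In the real thing, the projected point would also be clamped to the cell's polygon before measuring the distance, as the article notes.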

Here's a video of the algorithm in action:

I feel like there has to be a simpler way to accomplish this. Can you make the corners have non-zero radius during string pulling? Or treat the corners as circles, and move the string-pulled lines to be tangent to the circles (plus an additional connecting chamfer)?


13 hours ago, swiftcoder said:

I feel like there has to be a simpler way to accomplish this. Can you make the corners have non-zero radius during string pulling? Or treat the corners as circles, and move the string-pulled lines to be tangent to the circles (plus an additional connecting chamfer)?

@miimii1205  did the string pulling algorithm, so I'll tag him. As far as I know, the second option seems to be really complex. The easiest one would actually be the first option you proposed.

I'm sure there are static approaches better than what I just described, static in the sense that they are computed before the path generation algorithm runs. My algorithm, however, is dynamic: it runs at runtime and can be limited to the next two waypoints.

Also, the algorithm that I use to intersect two 2D lines is performant.
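For reference, a common way to intersect two 2D segments is the parametric cross-product form sketched below. This is a standard technique and only an assumption about what such a routine might look like, not the exact code used in the game.

```java
public class Segment2D {
    // Returns {t, u} if segments AB and CD intersect, else null.
    // The intersection point is A + t * (B - A), with 0 <= t <= 1 and 0 <= u <= 1.
    public static double[] intersect(double ax, double ay, double bx, double by,
                                     double cx, double cy, double dx, double dy) {
        double rX = bx - ax, rY = by - ay;   // direction of AB
        double sX = dx - cx, sY = dy - cy;   // direction of CD
        double denom = rX * sY - rY * sX;    // 2D cross product r x s
        if (denom == 0) return null;         // parallel or collinear: no single crossing
        double t = ((cx - ax) * sY - (cy - ay) * sX) / denom;
        double u = ((cx - ax) * rY - (cy - ay) * rX) / denom;
        if (t < 0 || t > 1 || u < 0 || u > 1) return null;
        return new double[] { t, u };
    }
}
```

Only a handful of multiplications per test, which is why this kind of check stays cheap even when it runs every frame.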

Edited by thecheeselover

Curious why you didn't use the typical nav mesh implementation, where the edges of the navmesh are retracted from the walls by the agent radius, so you don't have to do runtime radius compensation? It also nicely handles a set of other problems: for example, areas too small for the agent to fit are culled away from the resulting navigation.

3 hours ago, DrEvil said:

Curious why you didn't use the typical nav mesh implementation, where the edges of the navmesh are retracted from the walls by the agent radius, so you don't have to do runtime radius compensation? It also nicely handles a set of other problems: for example, areas too small for the agent to fit are culled away from the resulting navigation.

There are two reasons [that I can remember]:

1. Our navigation meshes are generated in-house, and originally we weren't supposed to add a wall collision avoidance module. We don't want the navigation meshes to depend on each agent's radius. There is actually a way to do what you're describing with multilayered navigation meshes, though.
2. Each agent can have a different radius.


I understand, but typically that means layers are built for each agent radius. It's a trade-off of course, but it solves enough of the issues that it is usually worth it for most games out there. Especially when there are freely usable navigation mesh libraries out there like recast.

I'm mainly curious what your reasoning is, though. I spent considerable time in the past on my hobby project experimenting with a manually built, un-retracted navigation mesh, but it was annoying enough to handle the agent radius at runtime that I shelved it. That was many years ago, though. For various reasons I've felt like trying to revive it recently for my particular use case (a bot in older Quake-based FPS engines). Although I am parsing the collision data of the maps to generate the navigation, there is a lot of noise and trash in the output that takes considerable effort to trim out, and still more manual effort to provide contextual information (team-specific markup, etc.).

Here's a really old video showing the tile-based, Recast-driven traditional navigation mesh.

Problem is, although I can trim some amount of data from these maps (like brushes marked as sky boxes, etc.), the resulting data set still contains enough collision data that I get, for instance, a lot of navigation on "top" of the world, where the brush flags can't automatically be considered relevant or not. Even though the navigation inside the world is reasonable, there is a lot of garbage navigation as well.

https://imgur.com/a/r3BbBxn (etf_2fort Recast auto navmesh; without hiding a lot of the top brush faces you don't even really see the important navigation of the world)

This is another old video of the non-retracted navmesh I was experimenting with way back, once I got dynamic obstacle support working. There was a lot to like about where this was heading, even though the navmesh was manually built, but I ultimately burned out on compensating for the radius stuff at runtime. There are a lot of edge cases in a real-world 3D game map.

I've recently gotten re-interested in my bot and am thinking about giving this approach another go. After spending a lot of time on the automatic Recast-driven building, I'm finding that I still have to spend considerable time fixing up the data: trimming out navigation where I don't want it, marking team-specific regions, and so on, so the automated approach doesn't actually save much time overall. That's especially true since the manual navmesh creation process is not only very fast with the way I have it set up, but also very precise, and it results in far fewer regions in the resulting data set. Plus I have various shortcuts that take advantage of my problem space, such as the tendency for symmetry in the game maps the bot supports, so I can lay out half the map and then mirror all the sectors over to the other half.

Here's a map where only the green areas have been laid out; the cyan sectors have been mirrored for the other half of the map.

Edited by DrEvil


There is something super appealing about the manual approach. The quality of the output is way more controllable and optimal, in terms of the number of regions, their size, etc., and it even includes a number of manual "special" sectors, like floating "ledge" sectors that I use to cast downwards to make drop connections and such.

Just gotta solve that pesky radius stuff.

Edited by DrEvil

On 5/12/2018 at 12:17 PM, DrEvil said:

I understand, but typically that means layers are built for each agent radius. It's a trade-off of course, but it solves enough of the issues that it is usually worth it for most games out there. Especially when there are freely usable navigation mesh libraries out there like Recast.

Initially, I tried what is described in this tutorial, because our project is in Java and uses jMonkeyEngine 3.1. It was such a hassle just to configure the pathfinding libraries, because CritterAI is so obscure. I had to modify the source code.

What is CritterAI? According to the mentioned tutorial:

Quote
Parameter Insight

The jme3AI system uses CritterAI, which is based off Recast and Detour navigation. The author of Recast lays out a few specific rules for NavMesh creation in this blog post, which logically apply to jme3AI. Below is a translation of this post as it pertains to jme3AI.

Also, the navmesh generator just didn't work and is extremely old. Every time I fixed something related to it, something else would break. So I just rage quit and decided that we would make our own navmesh module. That allows us to have more control over the AI.

Another thing to note is that our game has different worlds made of randomly generated levels, and each world's geometry is quite different from the others. For example, we use a navigation mesh for the jungle level because of its organic nature, but we can use A* directly in the savannah because it is made of voxels meshed with the marching cubes algorithm.

Making our own library comes with downsides too. We cannot directly use the awesome features of the libgdx ai library for example.

Furthermore, we have a lot of units with different radii: humanoids, snakes, bigger snakes, tigers, mini-bosses, bosses, legendary bosses and so on. That would make a lot of layers for the navigation mesh.

Now, for your problem and hobby project, more specifically the navigation mesh cells on top of buildings: what you can do is either check that each cell is directly or indirectly connected to a root cell, or check that each cell has at least one connection.

My second proposal is easier but will likely still sometimes output garbage. My first proposal is more expensive in terms of performance and also requires a human to flag the root cell, or a root position if you can query the closest cell to a point.
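The first proposal above can be sketched as a flood fill from the root cell: everything the search never reaches is garbage. The `Cell` type with its connection list is a hypothetical stand-in, not code from either project.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ReachableCells {
    public static class Cell {
        public final List<Cell> connections = new ArrayList<>();
    }

    // Breadth-first search over cell connections starting at the root.
    public static Set<Cell> reachableFrom(Cell root) {
        Set<Cell> visited = new HashSet<>();
        Deque<Cell> queue = new ArrayDeque<>();
        visited.add(root);
        queue.add(root);
        while (!queue.isEmpty()) {
            Cell cell = queue.poll();
            for (Cell next : cell.connections) {
                if (visited.add(next)) queue.add(next); // add() is false if already seen
            }
        }
        return visited;
    }

    // Cells not reachable from the root are candidates for removal,
    // e.g. navigation generated on "top" of the world geometry.
    public static List<Cell> unreachable(List<Cell> all, Cell root) {
        Set<Cell> ok = reachableFrom(root);
        List<Cell> garbage = new ArrayList<>();
        for (Cell c : all) if (!ok.contains(c)) garbage.add(c);
        return garbage;
    }
}
```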

That's cool to have obstacles!

Our needs and problems are actually different: you work on prebuilt levels (meshes) while we work with completely generated levels. The best designer for our generated world is actually the generator itself; that's why it is the generator that creates the navigation mesh for our game. The only things that are not in the generator are the octree, the automatic connections (which are a lot of work, by the way) and the cell metadata.

What do you want to do with your navmesh generator? Is its purpose only to generate the navmesh of several prebuilt maps or is it to work for a community of modders? Do you plan to add a lot of new maps?

My day job is on http://www.creativersegame.com/ so I'm also familiar with navigating procedural data. Given the voxel nature of our game, we haven't had the need or desire to generate any navmesh data, and instead just path on the grid, since much of the pathing looks at metadata associated with the specific block types being pathed through, and that would be lost by abstracting it up to a navmesh. Some blocks allow a mob to move over them, others don't, etc. I recently added support for pathfinding for larger-than-1-block units. It ended up being relatively simple to get good results while still ultimately pathfinding at the grid level: the core changes involved a loop-based check along the horizontal axes to confirm spacing for the desired radius, both for clearance and for whether the node is "grounded". It's relatively simple and allows me to maintain the per-block logic we use for 1-block pathing. What is the scale of your game that you need to build nav meshes? Even pathfinding within the visible chunk range, we haven't been tempted to generate any higher-level pathing abstraction on top of the voxel data. If we did, I would probably start with a higher-level chunk-based graph, but still keep the path following and such at the block level.
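A rough sketch of the loop-based clearance check described above, for a unit of block radius `r`: scan the square of columns around the node and require every block to be passable, with at least one solid block below for grounding. The `Grid` interface and its method names are hypothetical stand-ins for the game's actual block-metadata queries.

```java
public class GridClearance {
    public interface Grid {
        boolean isPassable(int x, int y, int z); // can the unit occupy this block?
        boolean isSolid(int x, int y, int z);    // can the unit stand on this block?
    }

    // A node (x, y, z) fits a unit of block radius `radius` if the whole
    // (2r+1) x (2r+1) footprint is passable and at least one block directly
    // below the footprint is solid (the unit is grounded).
    public static boolean fits(Grid g, int x, int y, int z, int radius) {
        boolean grounded = false;
        for (int dx = -radius; dx <= radius; dx++) {
            for (int dz = -radius; dz <= radius; dz++) {
                if (!g.isPassable(x + dx, y, z + dz)) return false; // blocked
                if (g.isSolid(x + dx, y - 1, z + dz)) grounded = true; // support below
            }
        }
        return grounded;
    }
}
```

A real version would likely also check vertical clearance for the unit's height, but the footprint loop is the core of the idea.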

12 hours ago, DrEvil said:

My day job is on http://www.creativersegame.com/ so I'm also familiar with navigating procedural data. Given the voxel nature of our game, we haven't had the need or desire to generate any navmesh data, and instead just path on the grid, since much of the pathing looks at metadata associated with the specific block types being pathed through, and that would be lost by abstracting it up to a navmesh. Some blocks allow a mob to move over them, others don't, etc. I recently added support for pathfinding for larger-than-1-block units. It ended up being relatively simple to get good results while still ultimately pathfinding at the grid level: the core changes involved a loop-based check along the horizontal axes to confirm spacing for the desired radius, both for clearance and for whether the node is "grounded". It's relatively simple and allows me to maintain the per-block logic we use for 1-block pathing. What is the scale of your game that you need to build nav meshes? Even pathfinding within the visible chunk range, we haven't been tempted to generate any higher-level pathing abstraction on top of the voxel data. If we did, I would probably start with a higher-level chunk-based graph, but still keep the path following and such at the block level.

We have different worlds for our game, and everything is still a work in progress and can change. Until yesterday, these were the planned worlds:

1. Savannah - Zones made up of marching cubes voxels where they are applied like a heightmap
2. Jungle - Organic brute force dungeon generation
3. Goblin cave - Marching cubes voxels with a lot of tunnels and with layers
4. Sky - Extremely organic and relying a lot on assets

The world we are currently working on is the jungle, which needs generated navigation meshes.

We put a lot of effort into the navmesh, and yesterday we changed the design of the jungle level generator to a classic 2D array dungeon digger. Reading your comment made me think that maybe we wasted our time on the navmesh, because we could have just used the newly designed 2D array for the A* and string pulling. We could still use our work for the sky world, but we will probably work on that in about 6 months... Damn it! We try to follow Agile methodologies and still fail to avoid wasting our time. It just proves that I'm more of a bottom-up, backend / engine programmer than what I should be for our game.

Nice game by the way

I definitely understand what it's like to follow rabbit holes. They can feel like the right decision in the moment (not saying it isn't in your case). My hobby AI bot represents over a decade of hobby experimentation: multiple restarts on various systems and ~4 different navigation systems. Bots are one of the few areas of modding where an aspiring A.I. developer can do solo work to get noticed in the industry, and it got me my first few jobs at Pandemic and id Software. It can be counterproductive when an actual product is your end goal, though. Sounds like you've got a pretty big scope planned out. Not sure what your team size is and such, but best of luck with it.

1 hour ago, DrEvil said:

I definitely understand what it's like to follow rabbit holes. They can feel like the right decision in the moment (not saying it isn't in your case). My hobby AI bot represents over a decade of hobby experimentation: multiple restarts on various systems and ~4 different navigation systems. Bots are one of the few areas of modding where an aspiring A.I. developer can do solo work to get noticed in the industry, and it got me my first few jobs at Pandemic and id Software. It can be counterproductive when an actual product is your end goal, though. Sounds like you've got a pretty big scope planned out. Not sure what your team size is and such, but best of luck with it.

We are a small team of two. I'm the "lead" programmer and my friend is the lead designer. While I believe I'm a talented junior programmer, the word "lead" really only means that I do most of the programming. I still have a lot of work to do on the maturity / professionalism side, however.

Congrats on those jobs! My experiments with voxels and procedural generation got me all the internships I really wanted in university; I chose one at Ludia, which is, naturally, a video game development company.

It's been an extremely constructive discussion, so thank you very much. I hope everything you have planned goes well for you too!

There is a much simpler solution for humanoids: just apply a Minkowski sum to the nav mesh while creating it.
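For a convex walkable polygon, the Minkowski-sum idea amounts to pushing each edge inward by the agent's radius and re-intersecting adjacent edges, so the retracted mesh can be pathed with a point agent. A minimal sketch, assuming a convex, counter-clockwise polygon in 2D (concave polygons and edges that collapse away are not handled here):

```java
public class PolygonInset {
    // Shrink a convex, counter-clockwise 2D polygon by `radius`.
    // pts holds the vertices as {x0, y0, x1, y1, ...}; returns the inset vertices.
    public static double[] inset(double[] pts, double radius) {
        int n = pts.length / 2;
        double[] out = new double[pts.length];
        for (int i = 0; i < n; i++) {
            // The two edges meeting at vertex i: (i-1 -> i) and (i -> i+1).
            int prev = (i + n - 1) % n, next = (i + 1) % n;
            double[] a = offsetEdge(pts, prev, i, radius);  // {px, py, dx, dy}
            double[] b = offsetEdge(pts, i, next, radius);
            // Intersect the two offset lines: a.p + t * a.d = b.p + u * b.d
            double denom = a[2] * b[3] - a[3] * b[2];
            double t = ((b[0] - a[0]) * b[3] - (b[1] - a[1]) * b[2]) / denom;
            out[2 * i] = a[0] + t * a[2];
            out[2 * i + 1] = a[1] + t * a[3];
        }
        return out;
    }

    // Edge from vertex i to vertex j, translated inward by `radius`.
    private static double[] offsetEdge(double[] pts, int i, int j, double radius) {
        double dx = pts[2 * j] - pts[2 * i], dy = pts[2 * j + 1] - pts[2 * i + 1];
        double len = Math.hypot(dx, dy);
        double nx = -dy / len, ny = dx / len; // inward normal for CCW winding
        return new double[] { pts[2 * i] + nx * radius, pts[2 * i + 1] + ny * radius, dx, dy };
    }
}
```

This is the per-polygon erosion step; a production retraction (as in Recast) also merges cells and culls regions that become too small for the agent.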

• ### Similar Content

• By Ey-Lord
Hello everyone

I am here to gather your opinion, remarks, ideas or any constructive criticism you may have about what I am going to present. Don’t be shy!

A bit of background:

I am working alone on an indy web-based game, a simulation of RPG (idle game) where the player controls a group of 4 characters that he can sent into battle and that will fight automatically based on some AI preference that are similar to the FF 12 system (but more complex / powerful). He then earns some experience and resources that he can use to improve his unit’s gear, talents and skills. He has a lot of control on what skills his characters will use and how/when.

What brings me here today:

The AI of Monsters. I have the AI settings for players covered (basically a bunch of if/then/and/or/else settings that he can combine and order so that his units will act as he intends in battle). I’ve been working on the AI of monsters for quite some time, made a long break and came back recently to it.

Short description of the battle system:

No movement involved. Battle is fully automated. Players setup its units AI settings before battle and monsters are controlled by a separate AI. This is a 4v4 battle, like FF7 with some kind of ATB and any time a unit fill its ATB, it can play and the then the next unit who will fill it will play, etc. The player is completely free of his playstyle and may create very offensive group or very defensive ones. 4 healers or 4 tanks is completely possible.

The battle system is very complex and allows for very varied and sometimes unusual strategies, like killing your own allies to proc an “on death buff” that will be devastating for the opponent.

What I want for my AI?

It needs to be fun to fight against and challenging. Ideally, I would like an AI as smart as possible (not omniscient but thinking as a human would). I know that a super-smart AI is not always the best way to make a game fun or challenging but in the context of my game, this is the result I want to achieve. It may seem unfair to have the AI try to kill your squishy while your tank is standing right there but my class design gives the tools to players to counter that so it’s not an issue (tanks are not purely aggro based for example). I want players to always be challenged by AI moves and that they must carefully think about their strategy because if they leave a big hole in it, I want the AI to exploit it.

In practice, it means a few requirements:

No dumb decision / do not fall into obvious player’s traps Exploit obvious flaws of the opponent Act in coordination when appropriate with other units Able to find who should be their focus in the player’s team (some notion of threat) Find the best move to use and if there is some kind of combo possible, use it

These requirements are harder to meet than it looks. The issue is the sheer number of different mechanisms and strategies available to players and to monsters as well. For example, there are many cases where killing or attacking a player unit might be detrimental (units that return damages or that gain power when you hit then for example).

What I have tried before?

I have tried or at least reviewed many different AI concepts so far.

-          A simple copy of my player’s AI system (hierarchical if/then/else). It was easy to script as I already have the UI in place for players so I can quickly define a basic AI for any new monster’s group. The main drawbacks are that it needs to be written for every monster group, it does not allow smart targeting and cannot find the best target or the best skill to use. It will also make dumbs decision as the targeting options cannot assess threats at all.

I’ve rules out planners since for purely selecting the best pair of (skill, target), they do not seem to match my needs.           (H)FSM or BT does not seems to match my needs as monsters do not have states / transition condition that can lead to something useful for me.        I’ve ruled out aNNs as they might, with proper training, be able to find the best action at a given time but it’s very tedious to implement and will not solve my need of finding combo or coordinating with other units very well. (plus, let’s be honest, I’d be a bit out of my depth to program them)           I have spent an extensive period of time trying with tree searches. Mainly: monte-carlo with random sampling and came to the conclusion that due to the complexity of my battle system, it is excessively costly to compute any kind of reliable data this way.
-        My current AI system is a version of my first one (the same as the players) but with access to some “smarter” targeting function that in theory allow to choose the best target. These functions work by gathering data for thousands of simulated fights during the AI time to play (1 second). It’s a first step to find the best target but not very accurate (lots of big flaws that can be exploited by players) and it is very time consuming and that is something I’m trying to get away from. I do not want to use 100% of the players CPU as I do now.

What is my latest idea?

I started to study more in-depth the Utility theory as described by Dave Marks (I read his book and watched his GDC AI lectures as well). I liked the idea. I like that I can start on something relatively simple and add more considerations as things progress to handle more and more situations. While my work began as something very close to utility theory, it evolved a bit afterward. Here is what I plan on doing to compute a unit’s best course of action:

A – Score every of its move (each move is a pair [skill, target]).

B – Chose the move according to a selection strategy (highest score, weighted random, random amongst the top scores… lots of different selection algorithm can be used there).

So far, easy, right? Let’s dig deeper into our first phase of scoring (A), which is the hard part. For all the damage or healing skills:

Step 1: The final scoring of the move [skill,target] will be function of the a “Survival” scoring for the player team and for the enemy team. An example of this relationship could be: Adding all the survival scores of each unit in Team A and divide the result by the addition of all the survival scores for each unit in team B.

Step 2: The survival score of each unit will be its Health after the move we are evaluating, divided by the total damage per turn that we estimate other units can deal to her (minus the total heal it ca receive). [This a step where we can process damage and heal over time as well]

Step 3: This damage per turn estimation will be, initially, the sum for every unit in battle of the damage or heal per second it can deal to that unit. For example: If I’m alone vs 2 bad guy that can deal 1 dmg/turn and if I can deal 1 heal/turn, the damage per turn estimation against me will be 2-1 = 1. [This is not optimal since we are counting the damage of each unit once per enemy unit but it’s a start]

Step 4: To compute the DPS or HPS of each unit, we review the unit’s skills and compute their output against the unit we want to evaluate it against. From that, we construct a skill sequence to maximize the damage output and once we got the optimal skill sequence, we can compute its DPS or HPS output and pass it along for Step 3.

It might seem like a lot of work, since, in a world with only damage or healing skills, the DPS or HPS sequence of each unit will be the same in every situation and as such only the damage done or healing done by the skill evaluated would be enough. But…

The tricky part comes from buffs and debuffs. If we use the above algorithm, (de)buffs that changes the damage or healing someone does or receive will be evaluated correctly as it will change the damage or heal per second output of units and it would affect the survival score and the final scoring. That is why I chose to include DPS and HPS computations for each unit for each move.

This is all fine until we consider (de)buffs that changes the power of other (de)buffs. Like: I cast a buff that double the length of all my future buffs. My algorithm can’t evaluate it correctly. It’s a situation that will be common enough in my game and I want my AI to deal with it. Note: there are more complex situations where a unit could buff a buff that buffs a buff that buff a buff [….] that will end-up buffing a damage or healing skills, but those cases will not be addressed as they will hopefully be rare and too cumbersome to compute anyway.

So, my goal is to score properly buffs that:

Buffs the damage or healing output of someone           Buffs that buffs a skill that does the above

L    Long story short of how I am doing that. I’m using my initial algorithm but while also estimating damage or healing per second change for each dps or hps sequence.To do that I’m evaluating every move of the unit (or every unit in case of AoE but lets keep it simple with single target) that is targeted by the buff. So, we are switching PoV here compared to the initial unit we are evaluating (unless the move evaluated is buffing itself)

-          I’m doing the above in 2 situations:

o   A : After a cast of the buff skill I’m evaluating

o   B : Without the cast of the buff, just like if it was that unit’s turn to play

-          Using a sort of min/max approach: if the unit targeted by the buff is an ally, we will take the best branch of our tree in A and compare it with the same branch (pair [skill,target]) in B. If the unit targeted by the buff is an enemy, we want to lower their maximum score and will select the tree branch that does that in A to also compare it with the same branch in B.

-          The information we extract here are DPS or HPS delta for each sequence of DPS/HPS for each unit vs each other unit.

-          Then, we go back to our steps 1 to 5 and compute our scoring for the move (buff) while using our new dps/hps deltas to get better and more accurate dps/hps sequence for units affected by the buff.

This is basically it. I’ve ran a manual version of the algorithm in 2 different battle settings to test it and see if it gave good results. It worked. Not flawlessly but it worked. Lots of cases will still require tweak and additions to the basic idea but I think its promising. (taunts and CCs are not easy to deal with but it’s manageable)

What I like is that I can add more considerations later (as in the utility theory) like: resource cost, general unit strategy (cleave or focus), behavior (careful, lunatic, reckless). While this will still be a bit time consuming it should be a good order of magnitude faster than my current AI. It also does not prevent me from adding hardcoded AI move if I want to “script” more some monsters. Debugging and tweaking might be a bit painful though, especially when fights will involve lots of skills & stats but that’s an issue that most AI for my game would likely have anyway.

To come back with my initial goals:

No dumb decision / do not fall into obvious player’s traps
o   Not perfect but it should choose the best target whenever possible

Exploit obvious flaws of the opponent
o   Same as above

Act in coordination when appropriate with other units
o   This can be done simply by adding weight to some targets or computing moves for all units of a group before deciding which one to take (for example to take the best move vs a specific unit, on average)

Able to find who should be their focus in the player’s team (some notion of threat)
o   It will naturally focus the unit who is the easiest to kill and debuff or CC the ones that deal the more heal/damage. But, to better solve this, we will need to add other considerations to the AI scoring process, It should not be *too* hard

Find the best move to use and if there is some kind of combo possible, use it
o   Combo are very often in the form of buff/debuff setup before an actual damaging or healing skills and my AI can compute up to a 3 moves combo (buff > buff > skill that dmg or heal) which should cover most cases.

I’m quite happy with my initial tests. I’m not going to be coding it now. My goal was to reflect on the subject on paper and try to see if designing my AI would be a roadblock or not for my project. There are a few other area I want to design and take time to really think about before getting back to my project full time. I’d love to hear your toughs and feedbacks about my AI ideas. Do you see huge roadblocks I’m missing? Does it sound ok to you?

If you read that far…. thank you and I can"t wait to hear from you guys😊

• By Sword7
I googled for terrain collisions but found only some sources about flat worlds.   I want collision detection for spherical terrain worlds.   I have "3D Engine Design for Virtual Globe" book but it did not mention any algorithms about collision detection for landing, rolling, walking, crashing, etc...   Does anyone have any good sources for spherical terrain collision detection?
• By bzt
Hi guys,

I've an OpenGL question, which is quite math ad linear algebra related.
Let's assume we have two coordinate systems, S (scene) and O (object). I'd like to place O inside S, so I need O' (in S coordinates). Using the following transformation matrices I can do that: rotation, scale, displacement. So far so good. I have two questions though:

1) Assuming the place of O' is specified with 4 points (the origin, plus the end point of each axis unit vector), how can I calculate the required transformation matrices?
It's a "simple" case: let's say the points are P0, P1, P2, P3 and x = P0->P1, y = P0->P2, z = P0->P3. Also |x| = |y| = |z| (all have the same length) and they enclose 90 degrees with each other. This can surely be solved easily using standard GL transformations; I just need an algorithm to calculate the matrices from P0, P1, P2, P3.

2) The more difficult question: how can I do the same if O' can be distorted, so |x| != |y| != |z| and their angles are not necessarily 90 degrees? (Imagine that you "sit" on O, so O' becomes stretched and its top becomes larger and moves to the side, so that angle(x, y) = 80 degrees, for example.) How do I get the required transformation matrices in this general case, when I only know P0, P1, P2, P3?

Hope it's clear what I'm asking. I need an algorithm to generate a transformation matrix that I can then use to transform all points in O into O'.
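For what it's worth, both cases reduce to the same construction: the columns of the object-to-scene matrix are simply the three axis vectors (orthonormal or not) plus the translation, so no decomposition into separate rotate/scale matrices is needed. A minimal sketch (function name is mine, assuming column-vector convention as in OpenGL):

```python
import numpy as np

def object_to_scene_matrix(p0, p1, p2, p3):
    """Build the 4x4 matrix mapping O coordinates to S coordinates.
    Columns 0..2 are the axis vectors x, y, z; column 3 is the translation.
    Works unchanged for the distorted case: non-unit lengths encode scale,
    non-90-degree angles encode shear."""
    p0 = np.asarray(p0, dtype=float)
    x = np.asarray(p1, dtype=float) - p0
    y = np.asarray(p2, dtype=float) - p0
    z = np.asarray(p3, dtype=float) - p0
    m = np.identity(4)
    m[:3, 0] = x
    m[:3, 1] = y
    m[:3, 2] = z
    m[:3, 3] = p0   # O's origin lands on P0
    return m
```

In the orthonormal case this matrix factors into rotation times translation; in the general case it additionally carries the scale/shear, and a point `v` in O maps to S as `m @ [v, 1]`.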
bzt

• I'm working on a step-time strategy game (not sure if that's the proper term, but it moves in discrete time steps, about 4 frames, before pausing for input) with positively monstrous pathfinding calculations. I've solved the biggest of my problems already, but there is one algorithm that causes significant slowdown, and I'm sure it could be optimized, but I'm not sure how best to do that because my case has a unique property that isn't covered in any A* optimizations I've researched. (Or perhaps I'm just not thinking about it the right way.)
My game allows the creation of formations, which are defined as positional groupings of units relative to a leader. It's extremely important to the gameplay that these formations keep their shape as best as possible while balancing the need to move smoothly through any narrow spaces that constrict their shape. The algorithm I'm asking about now is responsible for choosing a path for the leader of the formation that (1) avoids unnecessarily traveling into narrow spaces and (2) always allows a certain amount of clearance between itself and obstacles, whenever those things are possible. Where I have found typical clearance-based approaches to A* fall short in addressing my problem is, ultimately, the leader will choose a path that goes through areas even just 1 space wide if there is no other option. In other words, I can't have the algorithm disregard those, but I can't have it explore those nodes either until it has explored the entirety of the rest of the map.
Here is my basic test map for this. The 5 green symbols at the top-left represent a formation led by the V in the center. The yellow circle at the center of the map is the goal.

Here is the solution that a regular A* would give, finding the shortest path:

Here is the solution I want (and the algorithm already does this, just slowly)

Here is the Python code that achieves this -- I know there are going to be some questions about what the vectors are and all of that, but basically, this is just a typical A* setup with the addition of a "c_cost," which is 0 if the given node has all the space that it wants (as defined by optimal_area), or 2 ^ the number of spaces missing from the amount of space we want.
```python
def leader_pathfind(leader, going_to, treat_passable = [], treat_impassable = []):
    """ Seek the shortest path that attempts to avoid obstacles at a distance
        equal to that of the furthest member in the formation """
    global game_width
    global game_height
    global diagonal_movement_cost
    global formation_dict
    at_now = leader['position']
    treat_passable.append(going_to) # Prevents indefinite hanging while rerouting if an obstacle has moved to the destination
    if at_now == going_to:
        return []
    formation_obj = formation_dict[leader['id']]
    vectors = formation_obj.get_formation_vectors()
    max_dimension = 0
    for vector in vectors:
        for axis in vector:
            if abs(axis) > max_dimension:
                max_dimension = abs(axis)
    optimal_area = basic_pow(max_dimension * 2 + 1, 2)

    def clear_area(position):
        """ Check in square rings around the position for obstacles/edges of map,
            then return the total area that is passable within the square that
            first hits an obstacle/edge of map """
        if not position_is_passable(position, treat_passable = treat_passable, treat_impassable = treat_impassable):
            return sys.maxint
        distance = 0
        hit_obstacle = False
        while not hit_obstacle:
            distance += 1
            corner_a = (position[0] - distance, position[1] - distance)
            corner_b = (position[0] + distance, position[1] + distance)
            outline = rectline(corner_a, corner_b)
            for point in outline:
                if not within_game_bounds(point):
                    hit_obstacle = True
                    break
                elif not position_is_passable(point, treat_passable = treat_passable, treat_impassable = treat_impassable):
                    hit_obstacle = True
                    break
                elif distance == max_dimension: # We have all the space we want, no need to compute further
                    hit_obstacle = True
                    break
        tent_area = rectfill(corner_a, corner_b)
        area = 0
        for point in tent_area:
            if within_game_bounds(point):
                if position_is_passable(point, treat_passable = treat_passable, treat_impassable = treat_impassable):
                    # Some stray diagonals may get counted here occasionally, but that should be acceptable for this purpose
                    area += 1
        return area

    def c(position):
        # c_cost is in terms of clear area relative to the optimal clearance
        area = clear_area(position)
        c_cost = 0 if area >= optimal_area else basic_pow(2, abs(area - optimal_area)) # Cost grows exponentially by powers of 2 for each point less than the optimal area
        return c_cost

    # Traditional A* #
    def h(position):
        return basic_distance(position, going_to) / 100.0

    f_dict = {at_now: h(at_now)}
    g_dict = {at_now: 0}
    open_set = [at_now]
    closed_set = []
    get_last = {}
    found_path = False
    current = at_now
    while len(open_set) > 0:
        current = open_set[0]
        for op in open_set:
            if op in f_dict.keys():
                if f_dict[op] < f_dict[current]:
                    current = op
        open_set.remove(current)
        closed_set.append(current)
        if current == going_to:
            found_path = True
            break
        for neighbor in neighbors(current, treat_passable = treat_passable, treat_impassable = treat_impassable):
            if neighbor in closed_set:
                continue
            if not neighbor in open_set:
                open_set.append(neighbor)
            movement_cost = diagonal_movement_cost if diagonally_adjacent(current, neighbor) else 1
            tent_g = g_dict[current] + movement_cost
            tent_f = tent_g + c(neighbor) + h(neighbor)
            if not neighbor in f_dict.keys():
                g_dict[neighbor] = tent_g
                f_dict[neighbor] = tent_f
                get_last[neighbor] = current
            else:
                if tent_f < f_dict[neighbor]:
                    g_dict[neighbor] = tent_g
                    f_dict[neighbor] = tent_f
                    get_last[neighbor] = current
    if found_path:
        path = [going_to]
        current = going_to
        while current != at_now:
            current = get_last[current]
            path.append(current)
        path.reverse()
        path.pop(0)
        return path
    else:
        write_log('leader_pathfind: could not find a path from ' + str(at_now) + ' to ' + str(going_to))
        return False
```

Jump Point Search, which I've used elsewhere, relies on the assumption of a uniform-cost grid, so I believe I need to rely on abstracting my map down to a smaller number of abstract nodes, and I'm at a loss for how I would do that for this case.
What I've read on using abstract nodes rely on things like identifying entrances/exits from abstract nodes, but that won't be useful if the algorithm chooses a node with acceptable entrance/exit points but lots of obstacles in the middle. Here is an arbitrarily chosen slice of the map with the correct path:

Broken up into abstract 5x5 nodes, I am failing to see a pattern I could rely on to give the algorithm the information it needs, especially since a crucial part of the path rests right on the right edge of the two leftmost abstract nodes. To know that this is the correct path, i.e. has sufficient clearance, it would need to calculate the clearance from that edge, dipping into the nodes in the middle. So I can't choose nodes that will be helpful without already having the information that I need the nodes to speed up the calculation of.
That's as far as I've gotten with my thought process. Can anyone make any suggestions, general or specific? Thanks in advance
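One direction that might help even without abstraction (a rough sketch under assumed grid conventions, not a full answer to the abstract-node question): precompute a per-tile clearance value once per map change with a multi-source BFS from every obstacle and map edge, so that a clearance cost becomes a table lookup instead of a ring scan on every A* expansion:

```python
from collections import deque

def clearance_map(width, height, passable):
    """dist[x][y] = grid distance (8-connected) from tile (x, y) to the
    nearest obstacle; out-of-map neighbors count as obstacles, so border
    tiles start at distance 1. passable(p) is an assumed predicate."""
    INF = width * height
    dist = [[INF] * height for _ in range(width)]
    queue = deque()
    for x in range(width):
        for y in range(height):
            if not passable((x, y)):
                dist[x][y] = 0          # obstacle tiles seed the BFS
                queue.append((x, y))
            elif x in (0, width - 1) or y in (0, height - 1):
                dist[x][y] = 1          # one step from the virtual off-map ring
                queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height and dist[nx][ny] > dist[x][y] + 1:
                    dist[nx][ny] = dist[x][y] + 1   # relax and re-queue
                    queue.append((nx, ny))
    return dist
```

This is O(width * height) per rebuild, and a soft cost analogous to c_cost could then be derived from `dist` in O(1) per node; it does not by itself solve the abstraction problem, but it removes the most expensive inner loop.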