

A* question


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

35 replies to this topic

#21 DeathCarrot   Members   -  Reputation: 188


Posted 19 August 2007 - 10:27 AM

That makes sense. I understand how smoothing like that is applied to triangulated graphs, but not to fully connected ones, since, well, they're already fully connected =)

But yeah, I see now how I could work with triangulated graphs; I might make a simple test app tomorrow to see how it works in practice if I'm not too busy.

Thanks.


#22 Symphonic   Members   -  Reputation: 313


Posted 19 August 2007 - 11:31 AM

Trying to follow the AI side of this...

I was hoping that the secondary edges (the ones that are not in the triangulation) would actually be detected and considered at search time, not as a post-processing step on the already created path.

So instead of doing something like this:

path = A*(to, from) // returns 'ABCDEF'
find_and_remove_redundant_edges(path) // returns 'ACEF'

the idea is that A* itself perceives the existence of the extra edges without the need to put them in the original triangulation; that way, a truly optimal path would be found.

But I really don't know A*, so I'm gonna go read, and then come back to this :)

OK, and the other thing I said, about not taking the optimal path to begin with: I guess in a field with some rocks in it, it shouldn't be difficult to find the optimal path, but in a large building complex the optimal path really may not be obvious.
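[Editor's note: the post-processing step sketched in the pseudocode above (A* followed by removing redundant edges) is often implemented as a greedy line-of-sight "string pulling" pass. A minimal sketch, assuming a hypothetical visible(a, b) predicate that reports whether the straight segment between a and b crosses no obstacle:]

```python
def smooth_path(path, visible):
    """Greedy string pulling: drop intermediate nodes that can be
    skipped by a direct line of sight.  visible(a, b) is assumed to
    report whether the straight segment a-b crosses no obstacle."""
    if len(path) < 3:
        return list(path)
    result = [path[0]]
    i = 0
    while i < len(path) - 1:
        # From node i, jump as far ahead as line of sight allows;
        # adjacent nodes are always reachable (they share a graph edge).
        j = len(path) - 1
        while j > i + 1 and not visible(path[i], path[j]):
            j -= 1
        result.append(path[j])
        i = j
    return result

# The example from the post: 'ABCDEF' smooths to 'ACEF' when the
# shortcuts A-C and C-E are unobstructed.
vis_pairs = {('A', 'C'), ('C', 'E')}
print(smooth_path(list('ABCDEF'), lambda a, b: (a, b) in vis_pairs))
```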

#23 Steadtler   Members   -  Reputation: 220


Posted 19 August 2007 - 01:12 PM

Quote:
Original post by Symphonic
Trying to follow the AI side of this...

I was hoping that the secondary edges (the ones that are not in the triangulation) would actually be detected and considered at search time, not as a post-processing step on the already created path.

So instead of doing something like this:

path = A*(to, from) // returns 'ABCDEF'
find_and_remove_redundant_edges(path) // returns 'ACEF'

the idea is that A* itself perceives the existence of the extra edges without the need to put them in the original triangulation; that way, a truly optimal path would be found.

But I really don't know A*, so I'm gonna go read, and then come back to this :).


A* is a graph algorithm; it works with edges and nodes, don't forget that.

Computing and using the fully connected graph will work for very small projects, but it will become terribly inefficient as the size and complexity of your levels increases, as others have already explained.

Take a 30-node example. With a full-connectivity graph, we are talking up to 870 (directed) edges. With a correct triangulation, the maximum number of edges is about 6*n (a simple planar graph has at most 3n - 6 undirected edges, hence fewer than 6n directed ones). So 180 edges at most, less if you use a polygonal navmesh.

Now take 1000 nodes, which isn't that much, really. Triangular navmesh: 6000 edges at most. Full connectivity: up to almost 1 million edges. *whistle*
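[Editor's note: the counts above can be sanity-checked directly. A complete graph on n nodes has n(n-1) directed edges, while a simple planar graph (which any triangulation is) has at most 3n - 6 undirected edges, i.e. 6n - 12 directed ones:]

```python
def full_graph_edges(n):
    # directed edges in a complete graph on n nodes
    return n * (n - 1)

def planar_edge_bound(n):
    # a simple planar graph has at most 3n - 6 undirected edges (n >= 3),
    # i.e. 6n - 12 directed ones
    return 2 * (3 * n - 6)

for n in (30, 1000):
    print(n, full_graph_edges(n), planar_edge_bound(n))
# n=30:   870 full-graph edges vs. a planar bound of 168
# n=1000: 999,000 full-graph edges vs. a planar bound of 5,988
```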

#24 Vorpy   Members   -  Reputation: 869


Posted 19 August 2007 - 02:23 PM

It's not necessary to know every possible edge when finding an optimal path. Even if the complete graph were used, only the edges that are locally tangent to both of their obstacles need to be stored; edges that are not locally tangent to an obstacle will never be used. A line is locally tangent to a polygonal obstacle if the points on either side of the point that the line intersects are both on the same side of the line.

Between each pair of convex shapes there are 4 shared tangent lines, so the number of edges is at most quadratic in the number of obstacles, not the number of nodes. Under most common circumstances, most of these edges will be obstructed by another obstacle anyway.
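[Editor's note: the same-side test described above can be sketched with a cross-product sign check. This is an illustrative sketch, not code from the thread; the function names and the choice to treat collinear neighbours as tangent are assumptions:]

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); its sign says which side of
    # the directed line o->a the point b lies on (no sqrt, no trig)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def locally_tangent(p, q, polygon, i):
    """True if the line p-q, touching polygon at vertex i, keeps both
    neighbouring vertices on the same side of the line (a collinear
    neighbour counts as tangent here)."""
    prev = polygon[i - 1]                    # wraps via negative index
    nxt = polygon[(i + 1) % len(polygon)]
    return cross(p, q, prev) * cross(p, q, nxt) >= 0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(locally_tangent((-1, 0), (2, 0), square, 0))   # grazes the bottom edge
print(locally_tangent((-1, -1), (2, 2), square, 0))  # diagonal cuts through
```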

#25 Symphonic   Members   -  Reputation: 313


Posted 19 August 2007 - 07:51 PM

Quote:
Original post by Vorpy
A line is locally tangent to a polygonal obstacle if the points on either side of the point that the line intersects are both on the same side of the line.


Yes, and a tangent to a polygon is a line which intersects the boundary of the polygon, and for which the entire polygon lies to one side.

My trouble with using the tangent map is that you can create problems for which the tangent map's size is exponential in the number of obstacles.

Not to mention issues like local tangent maps for simple polygons, which have O(n)-bounded size.

I'm working on a proof that A* applied to a triangulation, whose path is subsequently optimized for blank spaces, will yield an optimal result. I expect the trick to lie in constructing the path using A* and then using A* again to choose the optimal path across all your potential optimizations (this second search tree is linear in the size of the original path).

What concerns me now is that if the triangulation is sufficiently 'bad', you might mistakenly choose a path that is longer than the optimal one.




#26 Symphonic   Members   -  Reputation: 313


Posted 19 August 2007 - 08:42 PM

OK, the news is good and bad:

Bad news first, look at this:


So: the blue circles are the start/stop points, the blue lines are the (relevant portion of the) triangulation, the green line is the optimal path, and the light blue line is the path chosen by A* over the triangulation, because the red path, which if optimized would yield the green path, is actually longer.

So, no proof :(

Good news next:

A Delaunay triangulation cannot have this bad case.

More good news?

I conjecture that with some spiffy coding you can make an A* variant that allows discovery of the shorter secondary edges at search time. Going on the Wikipedia description of A*:

function A*(start, goal)
    var closed := the empty set
    var q := make_queue(path(start))
    while q is not empty
        var p := remove_first(q)
        var x := the last node of p
        if x in closed
            continue
        if x = goal
            return p
        add x to closed
        foreach y in successors(x)
            enqueue(q, p, y) // magic goes here
    return failure

At the line I have marked "magic goes here" you would add discovery code that checks, for each successor, whether it can form a secondary edge with a node earlier in the path; that edge then provides a new, cheaper path to the successor.
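[Editor's note: in runnable form, the conjectured variant might look like the sketch below. This is an illustration of the idea, not a vetted algorithm; it assumes nodes are 2D points, a successors(x) adjacency function over the triangulation, a visible(a, b) line-of-sight predicate, and Euclidean edge costs — none of which are specified in the original post:]

```python
import heapq
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def a_star_with_discovery(start, goal, successors, visible):
    """A* over the triangulation; at expansion time it also tries to
    splice each successor onto an earlier node of the current path when
    a line of sight exists (the 'magic goes here' step)."""
    # queue entries: (f = g + heuristic, g, path)
    q = [(dist(start, goal), 0.0, [start])]
    closed = set()
    while q:
        f, g, path = heapq.heappop(q)
        x = path[-1]
        if x in closed:
            continue
        if x == goal:
            return path
        closed.add(x)
        for y in successors(x):
            if y in closed:
                continue
            # default: extend the path along the triangulation edge x-y
            best_g, best_path = g + dist(x, y), path + [y]
            # magic goes here: look for a cheaper secondary edge from an
            # earlier node of the path straight to y
            for i, node in enumerate(path[:-1]):
                if visible(node, y):
                    g_i = sum(dist(path[k], path[k + 1]) for k in range(i))
                    alt = g_i + dist(node, y)
                    if alt < best_g:
                        best_g, best_path = alt, path[:i + 1] + [y]
            heapq.heappush(q, (best_g + dist(y, goal), best_g, best_path))
    return None
```

For example, on a tiny "tent" triangulation (0,0)-(1,1)-(2,0) where the base (0,0)-(2,0) is unobstructed but not a triangulation edge, the search discovers the direct base path instead of going over the apex.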

Will this work?

#27 Vorpy   Members   -  Reputation: 869


Posted 19 August 2007 - 10:00 PM

I see no reason why that wouldn't work.

If you are correct that the optimal path in the constrained Delaunay triangulation always contains the nodes of the true optimal path, it may still be more efficient to use an unmodified A* and then smooth the resulting path. Adding edges during the A* search can waste time adding edges for nodes that won't be on the optimal path.

#28 Steadtler   Members   -  Reputation: 220


Posted 20 August 2007 - 07:17 AM

Or merge your triangles into convex polygons and just use a full navmesh.

And/or pathfind using the mid-points of the edges or the centers of the triangles/polygons, THEN constrain your navigation to the corners if that makes the path shorter.

Or a dozen variations in between...

But yeah, a Delaunay triangulation, which tries to get close to equilateral triangles, would help. There are some pretty good implementations floating around the web.

#29 Symphonic   Members   -  Reputation: 313


Posted 20 August 2007 - 09:37 AM

Quote:
Original post by Vorpy
Adding edges during the A* search can waste time adding edges for nodes that won't be on the optimal path.


Excellent point. Well, now I've worked through some Delaunay examples and I can't find a counterexample, but a proof that A* with the pruning step will yield the optimal path eludes me...

Reading about navmeshes, more to come ;)

#30 Vorpy   Members   -  Reputation: 869


Posted 20 August 2007 - 11:40 AM

Bad news: I found a counterexample.


I drew it the same way as the image above. In this case it is a Delaunay triangulation, and the path found does not go through the nodes of the shortest path. The red arc makes the upper path longer than the lower path when following the triangulation, but taking the upper path and going straight across is the shortest route. Time to try proving bounds on how far from optimal the path can be?

Edit: Actually, it looks like I made a mistake: surprisingly, in this picture the green-and-red path is actually shorter than the light blue path. The counterexample still works, though; some stuff needs to be nudged around a little.

#31 Steadtler   Members   -  Reputation: 220


Posted 20 August 2007 - 03:03 PM

Quote:
Original post by Vorpy
Bad news: I found a counterexample.


I drew it the same way as the image above. In this case it is a Delaunay triangulation, and the path found does not go through the nodes of the shortest path. The red arc makes the upper path longer than the lower path when following the triangulation, but taking the upper path and going straight across is the shortest route. Time to try proving bounds on how far from optimal the path can be?

Edit: Actually it looks like I made a mistake and surprisingly in this picture the green and red path is actually shorter than the light blue path. The counterexample still works though, some stuff needs to be nudged around a little.


This bad case happens because your discretisation of the continuous search space is too sparse. In other words, your triangles are too big for the precision you want in finding the minimum path.

#32 Vorpy   Members   -  Reputation: 869


Posted 20 August 2007 - 03:23 PM

It's a counterexample to Symphonic's conjecture that the optimal path in a Delaunay triangulation will contain a superset of the nodes of the true optimal path. The path it finds is still pretty good, though: within sqrt(2) of the length of the optimal path, at least for this counterexample.

Even if the discretization is denser, if the obstacles have the shape of those triangles the bad case will still exist; it will just be much more limited in how much error it can produce. There's an idea: adding extra points to the empty space could limit the error in pathfinding around polygonal obstacles when using a triangulation as the navigation graph.

#33 Symphonic   Members   -  Reputation: 313


Posted 23 August 2007 - 12:09 AM

Fresh thoughts:

I read up on navmeshes and other fun things like that. Our (DeathCarrot's) problem can be described like this:

We have a space with some first-order explicit geometric obstacles (points and straight lines forming polygons), and we want the geometrically shortest path, which means that the nodes in the search tree MUST be the points of the geometry, and not some secondary things (like the centroids of convex portions of the non-obstacle space).

We can construct a complete graph and cull the edges that cross obstacles, but in the worst case this means storing O(n^2) edges, and an exponentially larger path space for searches than with triangulations.

There is no constrained triangulation that will add the necessary Steiner points to provide an equivalent planar triangulation without exploding the search space. This is actually trivial to show: suppose there are O(n) secondary lines, each of which crosses O(n) triangulation lines; then for a constrained triangulation to provide these secondary lines, O(n^2) Steiner points must be added to the plane.

Sooooo what do we do?

I think the best approach is my original idea :P Create a simple triangulation, and add some mechanism to the search to detect simpler paths at search time. Strictly speaking you can't escape the complexity of the search you're trying to do, but at least this way you're not storing a lot of data that's redundant anyway.

Steiner point: a non-data point added to the space by an algorithm (in this case, by the constrained triangulation).

#34 Jotaf   Members   -  Reputation: 280


Posted 23 August 2007 - 03:03 PM

This might seem a bit out of context, I didn't read the *whole* discussion. But judging from the example map you showed, I'd use simple steering behaviors for outside areas, since they seem pretty sparse (cast ray to target, for the first object it hits, find an intermediate point to go around it), and A* only inside buildings. Since the buildings seem pretty simple I'd just figure out every possible edge and eliminate the ones that intersect with the walls... A bit brute-force, really, but from my experience it would look good and get the job done!

#35 DeathCarrot   Members   -  Reputation: 188


Posted 23 August 2007 - 07:22 PM

Quote:
Original post by Jotaf
This might seem a bit out of context, I didn't read the *whole* discussion. But judging from the example map you showed, I'd use simple steering behaviors for outside areas, since they seem pretty sparse (cast ray to target, for the first object it hits, find an intermediate point to go around it), and A* only inside buildings. Since the buildings seem pretty simple I'd just figure out every possible edge and eliminate the ones that intersect with the walls... A bit brute-force, really, but from my experience it would look good and get the job done!

Yup, that's the approach I'm most seriously considering at the moment, although there will be much larger indoor areas as well that encompass several cells, so I'll need to think up a proper indoor navigation system too (navmeshes sound promising for the moment).

As for the triangulation discussion (taking simple collision avoidance out of the equation for the sake of argument): I'm still not convinced, with the numbers we're talking about here, that it's worth sacrificing run-time performance for a decrease in complexity at the node-generation stage.
Memory consumption certainly shouldn't be an issue: each edge would just be 2 pointers (one at each vertex pointing to the other), so 200-300 vertices' worth of edges per 3x3 segment still wouldn't be much of a problem, a few hundred KB at most.
Also, say we have a worst case of ~100,000 edges to check per segment; this is still a one-time calculation, and I can't see it taking long, given that a line-intersection test is fairly quick (no (maybe one?) sqrt, no trig), and there are quite a few optimisations, like bounding-circle checks, that could be done to reduce the number of candidates.
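[Editor's note: the "no sqrt, no trig" intersection test mentioned above is typically done with two cross-product orientation checks. A sketch, which deliberately ignores collinear/touching edge cases for brevity:]

```python
def orient(a, b, c):
    # sign of the cross product (b - a) x (c - a): which side of the
    # directed line a->b does c lie on?  No sqrt or trig needed.
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly intersect: each
    segment's endpoints straddle the other segment's line.
    Collinear and endpoint-touching cases are ignored here."""
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

print(segments_cross((0, 0), (2, 2), (0, 2), (2, 0)))  # crossing diagonals
print(segments_cross((0, 0), (1, 0), (0, 1), (1, 1)))  # parallel segments
```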

Thanks for the input everyone =D

#36 Symphonic   Members   -  Reputation: 313


Posted 24 August 2007 - 07:46 AM

Quote:
Original post by DeathCarrot
Thanks for the input everyone =D
You're welcome :) I guess this is why I'm not an AI guy [rolleyes]





