
## Penalized/Constrained Distance Function

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

10 replies to this topic

### #1 ater1980  Members

Posted 09 April 2012 - 01:18 AM

Assume a character is located on an n-by-n grid and has to reach a certain entry on that grid. Its current position is (x1, y1). Also on the same grid is an enemy with coordinates (x2, y2). At each step the algorithm randomly generates new candidate locations for the hero (if there are k candidates, there is a k×2 matrix of potential new locations).

What I need is some distance-based objective function to compare the candidates. I'm currently using d1 - c * d2, where d1 is the distance to the objective (measured in pixels along each axis), d2 is the distance to the enemy, and c is some coefficient (this is very much like a Lagrangian set-up). It's not working very well, though. I'd be quite keen to learn what constrained distance functions are used in similar cases.
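For concreteness, here is a minimal Python sketch of that scoring scheme; the function names and the value of c are illustrative, not from any particular implementation:

```python
import math

def score(candidate, goal, enemy, c):
    """Penalized distance: lower is better."""
    d1 = math.dist(candidate, goal)    # distance to the objective
    d2 = math.dist(candidate, enemy)   # distance to the enemy
    return d1 - c * d2                 # small d1 and large d2 both help

def best_candidate(candidates, goal, enemy, c=0.3):
    """Pick the candidate with the lowest penalized distance."""
    return min(candidates, key=lambda p: score(p, goal, enemy, c))
```

Note the penalty term is unbounded: a candidate far from every enemy can outscore one near the goal, which is the tuning problem discussed below.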

Any suggestions or references to articles are very much appreciated.

### #2 jefferytitan  Members

Posted 09 April 2012 - 01:44 AM

I have to ask... why generate candidates randomly? It's an unusual approach to pathfinding.

### #3 ApochPiQ  Moderators

Posted 09 April 2012 - 10:26 AM

There are a couple of options that come to mind right off the bat.

One is to separate pathfinding from local navigation, and use something like a steering system with an avoidance force to accomplish your enemy dodging.

The other would be to generate an influence map of the enemy's location and do a gentle falloff of his influence value in the surrounding grid squares, probably using a 1/distance-squared basis function. Then add each tile's influence-map value to that tile's pathfinding cost during A*, and you'll automatically dodge the areas where the enemy is located.

Best part of the second approach is that you can add dozens of enemies to dodge into a single influence map and avoid all of them for the same computational cost during pathing.
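A rough Python sketch of such an influence map (the grid layout, the strength constant, and the +1 in the denominator are illustrative choices, not from the post):

```python
def influence_map(n, enemies, strength=9.0):
    """n x n grid of enemy influence with inverse-square falloff.

    Multiple enemies simply sum into the same grid, so the cost of
    one map covers any number of enemies.
    """
    grid = [[0.0] * n for _ in range(n)]
    for ex, ey in enemies:
        for x in range(n):
            for y in range(n):
                d2 = (x - ex) ** 2 + (y - ey) ** 2
                grid[x][y] += strength / (d2 + 1)  # +1 avoids division by zero
    return grid

# During A*, the step cost into tile (x, y) would become:
#   base_cost + grid[x][y]
```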
Wielder of the Sacred Wands

### #4 jefferytitan  Members

Posted 09 April 2012 - 10:34 PM

@Apoch: I thought of mentioning influence maps, but they seem much less useful when using the "pick x random locations" approach to navigation as opposed to A*. The cost of calculating the influence map would dwarf the cost of selecting a location if you only consider a handful of locations each frame, and enemies can move.

I do definitely agree that enemy influence needs to drop off. Many functions that are asymptotic to zero would be fine, because otherwise the weights of a few enemies anywhere on the map could totally overwhelm the goal-seeking behaviour.

Lastly, I would suggest switching to a better pathfinding method unless there's a good reason not to. Imagine this scenario:
- Goal G is at (0,0).
- Enemy E1 is at (50,0).
- Enemy E2 is at (0,50).
- Player P is at (50,50).

For values of c >= 0.5, P will almost never reach G, instead being repelled by E1 and E2. In fact the same applies if P starts anywhere on the map outside the triangle G, E1, E2. You may think that tweaking the value of c will fix the problem, but if you keep adding enemies it will break again.

### #5 ater1980  Members

Posted 10 April 2012 - 03:44 AM

> I have to ask... why generate candidates randomly? It's an unusual approach to pathfinding.

It's a simple evolutionary algorithm. A number of candidate solutions are generated based on the current one. The next step is to attach a fitness/objective function to each of them and then derive a probability distribution. The second step is the clincher for me, since I don't know how to find this fitness function. Currently I'm just using d1 - c*d2, which I guess is a pretty rough way of doing it.
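One common way to turn fitness scores into a selection distribution is a softmax over the negated scores; a small sketch, assuming lower scores are better (the shift by the minimum is just for numerical stability, and the whole shape is an illustrative choice):

```python
import math
import random

def selection_probs(scores):
    """Softmax over negated scores: lower score -> higher probability."""
    lo = min(scores)
    weights = [math.exp(-(s - lo)) for s in scores]
    total = sum(weights)
    return [w / total for w in weights]

def pick(candidates, scores):
    """Sample one candidate according to the derived distribution."""
    return random.choices(candidates, weights=selection_probs(scores), k=1)[0]
```

A temperature parameter (dividing the scores before exponentiating) would let you trade off greediness against exploration.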

### #6 ater1980  Members

Posted 10 April 2012 - 05:04 AM

> @Apoch: I thought of mentioning influence maps, but they seem much less useful when using the "pick x random locations" approach to navigation as opposed to A*. The cost of calculating the influence map would dwarf the cost of selecting a location if you only consider a handful of locations each frame, and enemies can move.
>
> I do definitely agree that enemy influence needs to drop off. Many functions that are asymptotic to zero would be fine, because otherwise the weights of a few enemies anywhere on the map could totally overwhelm the goal-seeking behaviour.
>
> Lastly, I would suggest switching to a better pathfinding method unless there's a good reason not to. Imagine this scenario:
> - Goal G is at (0,0).
> - Enemy E1 is at (50,0).
> - Enemy E2 is at (0,50).
> - Player P is at (50,50).
>
> For values of c >= 0.5, P will almost never reach G, instead being repelled by E1 and E2. In fact the same applies if P starts anywhere on the map outside the triangle G, E1, E2. You may think that tweaking the value of c will fix the problem, but if you keep adding enemies it will break again.

What you say makes sense, but honestly that's not my impression: for c < 1, P pretty much ignores enemies regardless of the distance to them (at least so far I haven't noticed much strategy in the trajectory), but for c > 1 it just stays in one spot for a long time, scared to death of them. I'm quite sure there must be some systematic way people design these distance functions.

### #7 ApochPiQ  Moderators

Posted 10 April 2012 - 11:47 AM

I'm really honestly puzzled as to why you're using an evolutionary algorithm for pathing, when pathing is a very well understood problem with a number of extremely good solutions.

It kind of strikes me as similar to using a sandwich to hammer in nails. If you're incredibly patient and have really cooperative nails, you might get somewhere before you die of old age... but it seems to me like you should trade the sandwich for a hammer and just get the job done.
Wielder of the Sacred Wands

### #8 ater1980  Members

Posted 11 April 2012 - 01:24 AM

> I'm really honestly puzzled as to why you're using an evolutionary algorithm for pathing, when pathing is a very well understood problem with a number of extremely good solutions.
>
> It kind of strikes me as similar to using a sandwich to hammer in nails. If you're incredibly patient and have really cooperative nails, you might get somewhere before you die of old age... but it seems to me like you should trade the sandwich for a hammer and just get the job done.

Can you suggest some articles or manuals on this topic? I really have never done this before.

### #9 ApochPiQ  Moderators

Posted 11 April 2012 - 01:55 AM

Wielder of the Sacred Wands

### #10 ater1980  Members

Posted 11 April 2012 - 04:02 AM

Ok, thanks. Does it work if enemies perform a random walk on the grid?

### #11 ApochPiQ  Moderators

Posted 11 April 2012 - 11:43 AM

If your obstacles are moving, you probably also want to look into steering systems, particularly the canonical "Boids" demo by Craig Reynolds.
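A minimal sketch of one such steering component, a repulsive avoidance force (the radius, strength, and linear falloff here are illustrative choices, not from Reynolds' work):

```python
import math

def avoidance_force(pos, enemies, radius=10.0, strength=5.0):
    """Sum of repulsive forces from nearby moving obstacles.

    Each enemy within `radius` pushes the agent directly away from it,
    scaled linearly by how close it is.
    """
    fx = fy = 0.0
    for ex, ey in enemies:
        dx, dy = pos[0] - ex, pos[1] - ey
        d = math.hypot(dx, dy)
        if 0 < d < radius:
            push = strength * (1.0 - d / radius)  # stronger when closer
            fx += push * dx / d
            fy += push * dy / d
    return fx, fy

# Each frame, something like:
#   velocity += seek_force_toward_goal + avoidance_force(pos, enemies)
```

Because the force is recomputed every frame from current enemy positions, it handles randomly walking enemies with no extra machinery.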
Wielder of the Sacred Wands
