ater1980

Members
  • Content count: 5
Community Reputation

100 Neutral

About ater1980

  • Rank: Newbie
  1. Penalized/Constrained Distance Function

    [quote name='ApochPiQ' timestamp='1334130956' post='4930168'] You should probably start with [url="http://en.wikipedia.org/wiki/A*_search_algorithm"]A*[/url]. [/quote] OK, thanks. Does it work if enemies perform a random walk on the grid? (A minimal A* sketch appears after this post list.)
  2. Penalized/Constrained Distance Function

    [quote name='ApochPiQ' timestamp='1334080058' post='4929954'] I'm really honestly puzzled as to why you're using an evolutionary algorithm for pathing, when pathing is a very well understood problem with a number of extremely good solutions. It kind of strikes me as similar to using a sandwich to hammer in nails. If you're incredibly patient and have really cooperative nails, you might get somewhere before you die of old age... but it seems to me like you should trade the sandwich for a hammer and just get the job done. [/quote] Can you suggest some articles or manuals on this topic? I really have never done this before.
  3. Penalized/Constrained Distance Function

    [quote name='jefferytitan' timestamp='1334032474' post='4929766'] @Apoch: I thought of mentioning influence maps, but they seem much less useful when using the "pick x random locations" approach to navigation as opposed to A*. Calculating the influence map would dwarf selecting a location if you only consider a handful of locations each frame and enemies can move. I do definitely agree that enemy influence needs to drop off. Many functions that are asymptotic to zero would be fine, because otherwise the weights of a few enemies [i]anywhere on the map[/i] could totally overwhelm the goal-seeking behaviour. Lastly I would suggest reconsidering using a better pathfinding method unless there's a good reason. Imagine this scenario: - Goal G is at (0,0). - Enemy E1 is at (50,0). - Enemy E2 is at (0,50). - Player P is at (50,50). For values of c >= 0.5, P will almost never reach G, instead being repelled by E1 and E2. In fact the same applies if P starts anywhere on the map outside the triangle G, E1, E2. You may think that tweaking the value of c will fix the problem, but if you keep adding enemies it will break again. [/quote] What you say makes sense, but honestly it's not my impression: for c<1, P pretty much ignores enemies regardless of the distance to them (or at least so far I haven't noticed much strategy in the trajectory), but for c>1 it just stays in one spot for a long time, scared to death by them. I'm quite sure there should be some systematic way people design these distance functions. (A small script that evaluates this scenario for several values of c appears after this post list.)
  4. Penalized/Constrained Distance Function

    [quote name='jefferytitan' timestamp='1333957488' post='4929482'] I have to ask... why generate candidates randomly? It's an unusual approach to pathfinding. [/quote] A simple evolutionary algorithm: a number of candidate solutions are generated based on the current one. The next step is to attach a fitness/objective value to each of them and then derive a probability distribution. That second step is the sticking point for me, since I don't know how to find this fitness function. Currently I'm just using d1 - c*d2, which I guess is a pretty rough way of doing it. (A sketch of this fitness-to-probability step appears after this post list.)
  5. Penalized/Constrained Distance Function

    Assume a character is located on an n by n grid and has to reach a certain cell on that grid. Its current position is (x1,y1). Also on the same grid is an enemy with coordinates (x2,y2). Each step the algorithm randomly generates new candidate locations for the hero (if there are k candidates, this gives a k x 2 matrix of potential new locations). What I need is some distance-based objective function to compare the candidates. I'm currently using d1 - c * d2, where d1 is the distance to the objective (measured in pixels along each axis), d2 is the distance to the enemy, and c is some coefficient (this is very much like a set-up for a Lagrangian). It's not working very well, though. I'd be quite keen to learn what constrained distance functions are used in similar cases. Any suggestions or references to articles are very much appreciated. (A minimal sketch of this d1 - c*d2 objective appears after this post list.)
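
The d1 - c*d2 objective from post 5 can be written down directly; the sketch below is a minimal reading of it, assuming Euclidean distance between grid cells and a single enemy (the function and variable names are illustrative, not taken from the thread).

[code]
import math

def penalized_distance(candidate, goal, enemy, c):
    # Score a candidate cell: distance to the goal minus a weighted
    # distance to the enemy, i.e. d1 - c * d2 (lower is better).
    d1 = math.dist(candidate, goal)   # d1: distance to the objective
    d2 = math.dist(candidate, enemy)  # d2: distance to the enemy
    return d1 - c * d2

# Example: compare two candidate moves for a hero at (10, 10).
goal, enemy = (0, 0), (12, 10)
for candidate in [(9, 10), (10, 9)]:
    print(candidate, penalized_distance(candidate, goal, enemy, c=0.5))
[/code]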
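
The step described in post 4 (attach a fitness value to each of the k candidates, derive a probability distribution, then sample the next move) could look roughly like the sketch below; the softmax-style weighting and the temperature parameter are one common choice, not something specified in the thread.

[code]
import math
import random

def select_candidate(candidates, goal, enemy, c=0.5, temperature=5.0):
    # Score each candidate with d1 - c*d2, convert the scores into
    # selection probabilities (lower score -> higher probability),
    # and sample one candidate from that distribution.
    scores = [math.dist(p, goal) - c * math.dist(p, enemy) for p in candidates]
    weights = [math.exp(-s / temperature) for s in scores]  # softmax over negated scores
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(candidates, weights=probs, k=1)[0]

# Example: pick among k = 4 randomly proposed moves around (10, 10).
candidates = [(9, 10), (11, 10), (10, 9), (10, 11)]
print(select_candidate(candidates, goal=(0, 0), enemy=(12, 10)))
[/code]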
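
To probe the c-sensitivity discussed in post 3, the script below scores each 4-neighbour of P in jefferytitan's scenario for several values of c, assuming d2 is taken as the distance to the nearest enemy (one possible way to extend d1 - c*d2 to two enemies; the thread does not pin this down).

[code]
import math

goal = (0, 0)
enemies = [(50, 0), (0, 50)]   # E1 and E2 from the scenario
player = (50, 50)              # P's starting position

def score(p, c):
    # d1 - c*d2 with d2 = distance to the nearest enemy.
    d1 = math.dist(p, goal)
    d2 = min(math.dist(p, e) for e in enemies)
    return d1 - c * d2

# Rank the 4-neighbours of P for a few values of c so the effect of the
# coefficient can be inspected directly.
moves = [(player[0] + dx, player[1] + dy)
         for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
for c in (0.25, 0.5, 1.0, 2.0):
    ranked = sorted(moves, key=lambda p: score(p, c))
    print(f"c={c}: best move {ranked[0]}, "
          f"scores {[round(score(p, c), 2) for p in ranked]}")
[/code]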
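
Finally, following up on the A* suggestion in post 1: below is a minimal A* sketch on a 4-connected n by n grid with unit step costs and a Manhattan heuristic (the grid size, blocked-cell set, and function names are illustrative). If enemies move around, e.g. perform a random walk, a common approach is simply to re-run A* every few steps rather than planning once.

[code]
import heapq

def a_star(start, goal, blocked, n):
    # Minimal A* on an n-by-n 4-connected grid. `blocked` is a set of
    # impassable cells. Returns a list of cells from start to goal, or None.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic

    open_heap = [(h(start), 0, start)]
    came_from = {start: None}
    g = {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        if cost > g[cur]:
            continue  # stale heap entry
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < n and 0 <= nxt[1] < n) or nxt in blocked:
                continue
            new_g = g[cur] + 1
            if new_g < g.get(nxt, float("inf")):
                g[nxt] = new_g
                came_from[nxt] = cur
                heapq.heappush(open_heap, (new_g + h(nxt), new_g, nxt))
    return None

# Example: path on a 10x10 grid with one blocked cell.
print(a_star((9, 9), (0, 0), blocked={(5, 5)}, n=10))
[/code]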