Path Finding Question

I am looking for advice on implementing a pathfinding algorithm. My game is a Red Alert 2 style game: an isometric RTS with movement constrained to the 8 cardinal directions, full obstacle detection, bridges, a maximum map size of 125,000 iso tiles with 9 * 125,000 possible unit destinations, and 400-500 moving units at most. I am using Dijkstra's algorithm.

My question is this: is it worthwhile to limit the number of points the algorithm considers when calculating a path, or is it faster to simply let the algorithm consider every point on the map as a possible point in the path?

In my implementation I create an initial "straight line(s)" path from start to finish, then recursively include detours around any obstacles found along that initial path. (Major obstacles like cliffs and walls are precalculated to avoid the recursive penalty.) It is this list of points that I feed to my path search algorithm. The path it produces is constrained by the list of input points, so it is neither optimal nor the shortest path. I then post-process the path using a heuristic that attempts to find unobstructed straight-line paths between pairs of corners in the current path. The unit can begin moving along the start of the unoptimized path while the simplification process runs on the remainder, which reduces perceived latency.

So I currently have a very robust path search algorithm that seems sufficiently fast (although I have not yet stress tested it with hundreds of units moving and colliding), but it has a lot of complexity that would be eliminated if I simply let the path search algorithm do a global optimization. Any comments or insights would be greatly appreciated. Thanks.
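To make the smoothing step concrete, here is a minimal sketch of one way to do such a corner-to-corner pass, assuming a tile grid exposed through a hypothetical blocked(x, y) predicate and a path given as a list of (x, y) tuples (this is an illustration, not necessarily the exact heuristic described above):

```python
def line_of_sight(blocked, a, b):
    """Walk a Bresenham line from tile a to tile b; fail on the first blocked tile."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    while True:
        if blocked(x0, y0):
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy

def smooth_path(blocked, path):
    """From each kept waypoint, jump to the farthest later waypoint
    reachable in a straight line, dropping the corners in between."""
    if len(path) < 3:
        return list(path)
    smoothed = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(blocked, path[i], path[j]):
            j -= 1
        smoothed.append(path[j])
        i = j
    return smoothed
```

Because the pass only ever inspects path[i:] onward, it can run on the tail of the path while the unit already follows the head, which is exactly the latency-hiding trick mentioned above.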

It's going to depend on the implementation. Investigating more options takes more time, unless the act of segregating the options into a group to be searched and a group not to be searched itself takes more time. But cutting down the search options increases the chance of finding a sub-optimal path. Just how much it does so will depend on the search space you're working with. There's no simple answer for you. What's most important to you in this algorithm?

Try precalculating some of the paths and storing them with the map. For example, divide the map into 20 * 20 zones, calculate the path from each zone to every other zone, and save the results in a file. When the player orders troops from one zone to another, use the saved data plus some additional fine-grained calculation.

How about my solution? I need suggestions too, since I have never implemented this myself. :P
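A minimal sketch of that idea, assuming the zones have already been extracted into an adjacency dict (zone_graph, a hypothetical name, mapping each zone id to its passable neighbours); the resulting first-hop table is what you would serialize alongside the map:

```python
from collections import deque

def build_zone_routing_table(zone_graph):
    """For every (src, dst) zone pair, record which neighbouring zone
    to enter next on a shortest zone-level route."""
    table = {}
    for src in zone_graph:
        # Breadth-first search outward from src, recording predecessors.
        parent = {src: None}
        queue = deque([src])
        while queue:
            z = queue.popleft()
            for n in zone_graph[z]:
                if n not in parent:
                    parent[n] = z
                    queue.append(n)
        # Walk each reachable destination back toward src to find the first hop.
        for dst in parent:
            if dst == src:
                continue
            hop = dst
            while parent[hop] != src:
                hop = parent[hop]
            table[(src, dst)] = hop
    return table
```

At runtime a troop movement then only needs a chain of table lookups from its current zone to the destination zone, plus a local tile-level search inside each zone it crosses.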

You may use the Dijkstra/A* algorithm because it's really fast. But even that algorithm will have trouble finding paths across so many tiles, so I advise you to divide the map into sections (groups of tiles), as Neon2302 said, and find the shortest path between sections.

I have previously outlined some of my requirements. Finding the absolute shortest path is not necessary. Giving players the perception of responsiveness under all game conditions is the most critical objective. (An algorithm that can quickly return a reasonable start of a path while filling in the remainder seems very advantageous in meeting this goal.)

Breaking the map into rectangular regions is not feasible. The regions must be based on map topology, since even a very small rectangular region could contain differing elevations, a small river, etc., requiring precalculation of multiple paths per region.

My map size for unit movement is 750 x 1500 tiles. Using 25x25 regions would require 1.6 million path calculations; 50x50 regions require about 100,000. Furthermore, bridges pose an additional problem: they can be blown up or repaired dynamically during gameplay, and the presence of a bridge drastically alters map topology. A map with 10 bridges (each one independently "on" or "off") multiplies the number of paths by another factor of 2^10.

It seems this approach would place restrictions on the complexity of the map topology and size of the map. Perhaps I am overlooking something? Maybe someone could provide a few details from an actual implementation of this type of path search algorithm?


thanks

Consider the solution that I have used:

Dynamically create sectors and their connections (and connection types) to neighboring sectors. Sectors should be natural, not rectangular:
- Each sector should consist of tiles that are grouped together and reachable using the same movement type.
- If needed, also create sector groups at a larger scale.
- Change sector data as needed during the game: new connections, lost connections, splitting sectors and merging them.

Pathfinding can start by searching the largest-scale sector graph, then the smaller scales (if they exist), and then the actual tiles between sectors.

This approach enables you to find paths quickly, and the generated sector connection and quality information is also very useful for the AI.
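For illustration, a minimal sketch of that coarse-to-fine query, assuming the sector graph and tile graph are plain adjacency dicts and tile_sector (all hypothetical names) maps each tile to the sector containing it:

```python
import heapq

def dijkstra(graph, start, goal):
    """Plain Dijkstra over a dict graph: node -> [(neighbour, cost), ...]."""
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

def hierarchical_path(sector_graph, tile_graph, tile_sector, start, goal):
    """Search the coarse sector graph first, then run the tile-level
    search restricted to tiles inside the chosen sector corridor."""
    corridor = dijkstra(sector_graph, tile_sector[start], tile_sector[goal])
    if corridor is None:
        return None
    allowed = set(corridor)
    restricted = {
        t: [(n, w) for n, w in nbrs if tile_sector[n] in allowed]
        for t, nbrs in tile_graph.items()
        if tile_sector[t] in allowed
    }
    return dijkstra(restricted, start, goal)
```

Splitting or merging sectors then only invalidates the affected entries of the sector graph, not the underlying tile data.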

-Osmo Suvisaari
waspgames.com

Why not set up a hierarchy traversal?
"Quadtree" the world and pathfind over that. Say you start with a map that is 512x512 game tiles.
Brick that into a 64x64 grid of cells that are each 8x8 game tiles, then divide that level into an 8x8 grid again.
Now you can pathfind at the highest level to select the most promising cells from the next level down to pathfind over.
That way you can quickly start the object moving on the right path while processing the rest of the pathing in the background.
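A minimal sketch of the coarsening step under those assumptions (a square boolean passability grid; a coarse cell is treated as open if any of its tiles is open, so a real implementation would also want to flag partially blocked cells):

```python
def coarsen(passable, size, factor):
    """Collapse a size x size boolean grid into (size // factor)^2
    coarse cells, e.g. 512x512 tiles -> 64x64 cells with factor=8."""
    coarse_size = size // factor
    return [
        [
            any(
                passable[cy * factor + y][cx * factor + x]
                for y in range(factor)
                for x in range(factor)
            )
            for cx in range(coarse_size)
        ]
        for cy in range(coarse_size)
    ]
```

You pathfind over the coarse grid first, then expand only the fine tiles inside the coarse cells that lie on (or near) the coarse route.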

Yes, a hierarchy will bring speed. But instead of using standard-sized subdivisions (64x64, 8x8, etc.), in many cases it is better to divide the areas along natural barriers.

Irregularly shaped sectors are more complex than standard-shaped ones, but if a wall, a river, or several such barriers run through a 64x64 tile area, the pathfinder must deal with them anyway. Creating irregularly shaped sectors is the lesser evil compared to the problem of dealing with barriers inside standard-shaped blocks.

-Osmo Suvisaari
waspgames.com

If you're looking for speed and you're already using A* or Dijkstra (I assume you're using one of these), try IDA*. The greatest overhead in A* is keeping the open list of nodes to visit in sorted order. IDA* eliminates the need for this and essentially does a depth-first search up to a specified cost bound. If the goal is not found, you raise the bound to the minimum value just above the current one, then start the search over at the start node and do another depth-first search up to the new bound. You can use the same heuristic and more or less the same structure for the search - just don't worry about the open/closed lists.

It seems counterintuitive that repeating the early part of the search won't slow you down, but with a fairly high branching factor the frontier nodes always greatly outnumber the previously searched nodes. In this case you have 8 movement directions, leading to a branching factor of 7 after the first move - that's big.

I've compared IDA* and A* on finding solutions to a 2x2x2 Rubik's cube and found that in the time it took A* to find an optimal solution for 1 cube, IDA* found solutions for over 100 cubes. For the 2x2x2 cube, the branching factor is 9 for the first move and 6 thereafter.
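A minimal sketch of IDA* itself, assuming a neighbours(node) callback yielding (successor, cost) pairs and an admissible heuristic h (both hypothetical names):

```python
def ida_star(neighbours, h, start, goal):
    """Iterative-deepening A*: depth-first searches bounded by f = g + h,
    raising the bound each round to the smallest f that exceeded it.
    No open or closed lists are kept, only the current path."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                 # report overshoot as a candidate next bound
        if node == goal:
            return path
        minimum = float("inf")
        for nxt, cost in neighbours(node):
            if nxt in path:          # avoid cycling along the current branch
                continue
            path.append(nxt)
            result = search(path, g + cost, bound)
            if isinstance(result, list):
                return result        # goal found: the path list is the answer
            minimum = min(minimum, result)
            path.pop()
        return minimum

    bound = h(start)
    path = [start]
    while True:
        result = search(path, 0, bound)
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None              # goal unreachable
        bound = result
```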

-Kirk

On the issue of obtaining irregularly shaped regions from a spatial decomposition... this is a clustering problem and as such carries with it the usual problems and complexities.

If you use a regular decomposition (quad/oct/k-d tree), then you must perform aggregation on the primitive regions (leaf nodes) to obtain irregular regions, where the aggregation strategy reflects your preferences for cluster classes (usually similarity of region features).

This aggregation can be viewed as a search through the space of possible sequences of pairwise combinations of leaf nodes. Because your decision to pair two neighbour nodes into a newer, larger node affects all subsequent decisions, this is a computationally intractable problem for decompositions of any reasonable size.

One way around this is to interleave decomposition and aggregation, and to write code to decompose arbitrarily shaped (and not necessarily convex) polygons using your decomposition strategy. Perform n steps of decomposition, then aggregate, and repeat until termination.

Having said that, there are alternative methods that might be more fruitful for you: contour/region growing, for example. Consider a random primitive element (for example, a point) in the region you wish to decompose. Now consider a small contour around that element (for example, a circle of small radius). Now imagine dilating that circle, but allowing it to deform as it hits the boundaries of the target cluster, until it coincides smoothly with the true boundary of the cluster.

You can achieve this in a discrete fashion by discretising your region into simple primitives (for example, in 2D: squares, triangles, hexagons, etc.), choosing one at random, and adding neighbours to the cluster containing the selected primitive if and only if their attributes are similar.

You'll discover problems when clusters grow to touch each other (or come close to touching). You'll need to implement a boundary 'clean up' routine that traverses all primitives on or near the boundaries of clusters and determines which cluster each should be in, based on the aggregate feature score of all other members of the cluster. A simple example score would be the mean squared deviation from the cluster mean for that attribute value. More complex scores look at the probability that the primitive belongs in each cluster and make assignments to maximise it (but again, you should be able to see that any one decision affects all others). The most complex strategies compute information measures on all clusters given an assignment and optimise this over all possible assignments.
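As a concrete (if much simplified) sketch of the growing phase, assuming a 2D grid of attribute values and a hypothetical similar(a, b) test:

```python
from collections import deque

def grow_region(grid, seed, similar):
    """Grow a cluster outward from the seed tile, absorbing 4-connected
    neighbours whose attribute values pass the similar() test."""
    height, width = len(grid), len(grid[0])
    cluster = {seed}
    frontier = deque([seed])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in cluster
                    and similar(grid[y][x], grid[ny][nx])):
                cluster.add((nx, ny))
                frontier.append((nx, ny))
    return cluster
```

The boundary clean-up pass described above would then run over the primitives left unassigned or claimed by competing clusters.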

It's not an easy problem to solve in constrained time/resources! ;)

Cheers,

Timkin
