Pathfinding made simple
Recently I have become really interested in AI for games, but I don't really know where to start.
I have read the AI section in Tricks of the Windows Game Programming Gurus over and over, as well as some other articles on the net, and I understand the basic stuff, but I can't seem to manage pathfinding.
I've read tutorials on the A* algorithm but don't really understand it, or how it can be used for pathfinding.
So if anyone can point me towards a good, simple tutorial on pathfinding which I can use as a starting point, or if anyone has any hints, tips, advice, etc., please let me know.
Thanks everyone.
I found this site to be very helpful in getting mine working.
http://www.geocities.com/jheyesjones/astar.html
If you want to see A* working, there is an engine called blackfish that has full source code. Hope this site helps.
Pathfinding is just one example of a wider field sometimes called "state-space search". Your position on a map is one example of a state. A position on a chessboard is another state. Search methods like A* take your current state (such as your current position) and the desired state (e.g. where you want to get to) and calculate all the intermediate states you would have to pass through in order to get there. In the case of pathfinding, this means A* will generate a list of positions that you would use to get from the start to the destination. This list is therefore your path. A* makes a guess at how "close" each intermediate state is to the destination state, which gives it a good chance of finding the path without checking more useless intermediate states than necessary.
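That description maps almost directly onto code. Below is a minimal sketch of A* in Python (the thread is C++-centric, but the idea carries over unchanged); the `maze` grid, the 4-connected moves, and the Manhattan-distance heuristic are my own illustrative choices, not taken from any engine mentioned in this thread.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid of 0 (free) / 1 (wall) cells.

    States are (row, col) positions; the heuristic is Manhattan
    distance, which never overestimates the true cost on this grid.
    """
    def h(pos):
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]      # entries are (f = g + h, g, state)
    came_from = {start: None}
    best_g = {start: 0}

    while open_heap:
        f, g, pos = heapq.heappop(open_heap)
        if pos == goal:                     # reconstruct the list of positions
            path = []
            while pos is not None:
                path.append(pos)
                pos = came_from[pos]
            return path[::-1]               # this list IS the path
        if g > best_g[pos]:                 # stale heap entry, skip it
            continue
        r, c = pos
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = pos
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                             # no path exists

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(maze, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The heuristic is what distinguishes this from plain Dijkstra: states that look closer to the goal get popped from the heap earlier, so fewer useless intermediate states are examined.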
[ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost ]
That was a beautiful description, Kylotan.
To the original poster... if you understood the algorithm, fire up the compiler, write the algorithm and a simple program that uses it inside a small labyrinth, and you'll see. Another option is to sit down and execute the algorithm using pen & paper and a 3x3 mini-maze... I heartily recommend doing this; you'd be surprised how much you can learn and really understand about the way the algorithm works.
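For the pen & paper exercise, it can help to have the machine print a trace to check your hand-worked one against. A small sketch, assuming a hypothetical 3x3 maze and plain breadth-first search (which is simpler to trace by hand than A*, since there is no heuristic to evaluate):

```python
from collections import deque

# Hypothetical 3x3 mini-maze for the pen-and-paper exercise:
# 0 = open cell, 1 = wall. Start top-left, goal bottom-right.
maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]

def bfs_trace(maze, start, goal):
    """Breadth-first search that records the order cells are expanded,
    so you can compare it against a trace done by hand."""
    frontier = deque([start])
    parent = {start: None}
    expanded = []
    while frontier:
        cell = frontier.popleft()
        expanded.append(cell)
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < 3 and 0 <= nc < 3 and maze[nr][nc] == 0
                    and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return expanded

print(bfs_trace(maze, (0, 0), (2, 2)))
# -> [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

If your paper trace expands the cells in a different order, check which cell you took off the frontier at each step; that is usually where hand traces diverge.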
Not to rain on Kylotan's parade - because in some respects he is correct - but pathfinding is a specialised subset of the general planning problem. Planning is the problem of finding a set of actions/operations that, when applied to an agent in a given state, will cause the transition to a goal state - or configuration of states - within the problem domain. In pathfinding, the goal is to be in a particular location state.
One method of performing planning is via a state space search and hence, a very common method of pathfinding is to apply state space search techniques to choosing paths between states. In reality, what you are doing is a search through the possible action set of the agent to define all states that can be transitioned to from any given state. There is a cost associated with this transition and the path planning problem usually seeks to minimise the cost of moving to the goal state. In domains where the action space is continuous and unbounded, discretisation is in order; be it a discretisation of the action space or of the resulting state space (which implies a discretisation of the action space).
For TeraByte:
Visualising a state space search is often easier if thought of in terms of expanding states based on the possible actions that can be taken from that state. If you have only 4 actions - move(north), move(south), move(east) and move(west) - then it is fairly easy to see how these actions cause a change in state and how a sequence of these moves corresponds to a plan to get from one location to another. Let's consider sequences of these actions of length 3. I'm going to abbreviate the actions to N,S,E,W, and expect that you will know what I mean. I'll also skip a few lines in the full set... you can fill them in yourself!
So, the sequences are:
N,N,N; N,N,E; N,N,S; N,N,W;
N,E,N; N,E,E; N,E,S; N,E,W;
N,S,N; N,S,E; N,S,S; N,S,W;
N,W,N; N,W,E; N,W,S; N,W,W;
E,N,N; E,N,E; E,N,S; E,N,W;
E,E,N; E,E,E; E,E,S; E,E,W;
E,S,N; E,S,E; E,S,S; E,S,W;
E,W,N; E,W,E; E,W,S; E,W,W;
S,N,N; S,N,E; S,N,S; S,N,W;
...
...
W,W,N; W,W,E; W,W,S; W,W,W;
So, now consider these as a tree. At the root of the tree we are in our starting state. There are 4 branches that leave this root node, corresponding to the 4 actions available in this node. In the list above, these four actions are each of the 4 different first letters in each plan. At the end of each of these first branches there is another node, corresponding to the state that we end up in after executing the first action. Each of these nodes has 4 branches leaving it, leading to new state nodes, which have branches... and so on. This tree can have as much depth as you like; however, every new layer added to the tree has 4 times as many nodes as the previous layer (for this problem). The nth layer has 4^n nodes in it (because the branching factor is 4), where the 0th layer is our root node. You can see that searching this state space can take a very long time; in fact, the search cost is exponential in the depth of the tree. Hence, various search algorithms have been designed to try to minimise the amount of computational effort required to return an optimal plan/path through the action space (represented by the search tree).
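The enumeration above can be generated mechanically, which also confirms the 4^n layer sizes. A short Python sketch (the action letters follow the post's abbreviations; `itertools.product` generates every sequence of a given length):

```python
from itertools import product

# The four actions from the post; layer n of the search tree
# holds 4**n action sequences (plans) of length n.
actions = ["N", "E", "S", "W"]

for depth in range(1, 4):
    sequences = list(product(actions, repeat=depth))
    print(depth, len(sequences))
# prints:
# 1 4
# 2 16
# 3 64
```

At depth 3 there are already 64 plans, matching the list written out above; at depth 10 there would be over a million, which is exactly why informed algorithms like A* exist.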
A* is a useful search algorithm for certain domains - in particular pathfinding in simple domains - because it is both optimal and optimally efficient. Optimal means it is guaranteed to return the lowest-cost path, if one exists, before returning any other path. Optimally efficient means that, given the same admissible heuristic, no other optimal algorithm is guaranteed to expand fewer nodes of the same search tree.
There is an immense amount of literature online about pathfinding, planning and state-space search. I highly recommend that you peruse as much of this literature as possible (until your brain overloads) and then try implementing an algorithm on a simple small problem.
Best of luck,
Timkin
Damn, Timkin. You know how many times Steve said "too bad we can't get Timkin up here" this past week at the GDC? Well done, friend.
Dave Mark
President and Lead Designer
Intrinsic Algorithm Development
"Reducing the world to mathematical equations!"