DaAnde

Path-Finding with Unknown Goal Positions

Recommended Posts

Hey, I am writing a paper for school on AI in video games. Part of my paper is devoted to path-finding. I would like to know what some of the best path-finding algorithms are when the AI does not know where its goal is located. I was thinking of using some sort of line-of-sight algorithm, but I cannot find any that are applicable to this situation. Here is an illustration to help show what I mean: [image hosted on ImageShack.us]

If you're playing an adventure game and they don't tell you where a key is, how do you go about searching for it? Break down your own exploration approach and make your program do that. Then refine it if possible.

IMO path-finding by definition requires a goal, so trying to find an object without knowing its location is not path-finding; it's exhaustive exploration. You may be able to just leave this situation out of your paper.

This isn't pathfinding; it is exploration. The way in which the agent senses the world is just one small detail of the actual problem. It could just as well sense the contents of certain tiles that are a knight's move away, or something similarly arbitrary. Assuming the goal could be anywhere in the world, the agent should choose its actions to minimize the expected time to find the object it is looking for. This can be a difficult optimization problem, and there could be other domain-specific details that make things more complicated.

I think this problem could be modeled as a POMDP (partially observable Markov decision process), because the agent does not know where its goal is, but you could give it a probability function that describes the chances of the goal being in any particular place. The goal is to reach the state in which the agent knows where the goal is. As the agent explores the world, the chance of the object being in the explored areas is reduced to zero, and the chance of it being in the unexplored areas correspondingly increases. This could even be used to bias the search, or possibly to represent additional domain information (maybe the item is more likely to be found in a forest or on a mountain).
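
A minimal sketch of that belief-update idea on a grid, assuming a uniform prior over cells; the function names here are hypothetical and just for illustration:

# Sketch: keep a probability ("belief") for each cell that it hides the goal,
# zero out cells the agent has seen to be empty, and renormalize the rest.

def init_belief(width, height):
    # Uniform prior: the goal is equally likely to be in any cell.
    p = 1.0 / (width * height)
    return {(x, y): p for x in range(width) for y in range(height)}

def observe_empty(belief, cell):
    # The agent has seen `cell` and the goal is not there.
    removed = belief.get(cell, 0.0)
    if removed == 0.0:
        return belief
    belief[cell] = 0.0
    for c in belief:
        belief[c] /= (1.0 - removed)
    return belief

# Usage: after each step, zero out every cell the agent can currently see.
belief = init_belief(10, 10)
for seen in [(0, 0), (0, 1), (1, 0)]:
    observe_empty(belief, seen)
# Each unseen cell now has probability 1/97 instead of 1/100.

The same dictionary could be seeded non-uniformly to encode the domain knowledge mentioned above, e.g. more prior mass on forest cells if the item is more likely to be found there.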

Once the goal is found, then it becomes a pathfinding problem.

There are also the related problems of pathfinding in an uncertain or unknown environment, exploring an unknown environment, and determining where you are in an environment given a map and local observations (localization). That last one is important in robotics but not in games, where the easiest solution is to cheat since there's no reason to model how the agents read a map...

In the paper I am going to discuss advanced techniques involved in creating realistic AI. The reason I am wondering about this is that with an algorithm such as A* there are certain situations where the result will seem unrealistic. For example, in an RTS where the computer has to find gold: since the player has to explore to find gold, I feel it would be most realistic if the computer also had to explore to find it, rather than already knowing where it is and automatically heading there.

I see what you guys are getting at. Instead of discussing how to go about exploring an environment, I will most likely discuss how to cheat while still making it seem realistic, in order to save processing power and memory.

Exploration is a perfectly valid problem; it just isn't pathfinding and doesn't belong in a section that is purely about pathfinding. A lot of stealth games don't let their agents cheat, and this allows the player to hide from them.

Exploration is an example of the sort of problem known as decision making, whereby an agent needs to choose between multiple (usually conflicting) options. In the exploration problem the options are typically movement directions (and maybe speeds); however, the value of a move is not a known function of position, since you don't have a fixed goal to move toward (and hence cannot use distance functionals as objective functions). Consequently, you need some other measure of the quality of a move. A common metric is expected utility, and one definition of a rational agent is one that seeks to maximise the expected utility associated with its actions.
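
As a rough illustration (the actions, probabilities, and utility numbers below are invented for the example), expected-utility action selection boils down to:

# Sketch: a rational agent picks the action whose expected utility
# (sum over possible outcomes of probability * utility) is highest.

def expected_utility(outcomes):
    # `outcomes` is a list of (probability, utility) pairs for one action.
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    # `actions` maps each candidate action to its list of (probability, utility) pairs.
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical example where utility stands for "ground likely to be revealed":
actions = {
    "move_north": [(0.7, 5.0), (0.3, 1.0)],   # EU = 3.8
    "move_east":  [(0.5, 8.0), (0.5, 0.0)],   # EU = 4.0
}
print(choose_action(actions))                  # -> move_east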

In the absence of such measures, a system would either perform a preset search pattern or a random walk.

Cheers,

Timkin

From the look of it you want a 'hungry' algorithm like breadth-first search. That is, you start at the current location, search each adjacent node (tile), then search each node adjacent to those, and so on until you find what you are looking for or all nodes have been searched. As for whether this is the best algorithm, well :)
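
A minimal sketch of that layer-by-layer expansion on a tile grid; the grid encoding and the assumption that tree tiles block movement are mine, not from the thread:

from collections import deque

# Sketch: expand outward from the start tile, visiting neighbours layer by
# layer, until a tile containing the sought item is found or the whole
# reachable map has been searched.

def bfs_find(grid, start, target):
    # `grid` maps (x, y) -> tile contents ('open', 'tree', 'tinder', ...).
    # Returns the first tile whose contents equal `target`, or None.
    frontier = deque([start])
    visited = {start}
    while frontier:
        cell = frontier.popleft()
        if grid.get(cell) == target:
            return cell
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and nxt not in visited and grid[nxt] != 'tree':
                visited.add(nxt)
                frontier.append(nxt)
    return None

# Usage on a tiny 3x3 map with a tree at (1, 1) and tinder at (2, 2):
grid = {(x, y): 'open' for x in range(3) for y in range(3)}
grid[(1, 1)] = 'tree'
grid[(2, 2)] = 'tinder'
print(bfs_find(grid, (0, 0), 'tinder'))   # -> (2, 2)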

I think you were on the right track when you were talking about line of sight. If you don't know where to go, then perhaps the best thing to do would be to use your line of sight to see if there is a decent vantage point, from which you can then look in several different directions, either to see the gold, or to find another vantage point, and so on. When no more vantage points (or gold) can be seen via the line-of-sight algorithm, you will probably want to employ some kind of walking algorithm which will take your agent into areas that cannot be seen.

I would certainly suggest against cheating in order to save processor power. Worst-case scenario, you can just do a random walk, and that takes up hardly any processing power. Cheating agents soon get found out, and this just becomes frustrating to the users of the game. Think 'Turing test'.

The whole reason I started this topic was to be pointed in some direction for the path-finding part of my paper, which is about creating realistic AI in video games. I feel that an AI should be the same as a regular player, so in my picture above the AI needs to find some tinder in order to start a fire.

Quote:
Original post by makar
I would certainly suggest against cheating in order to save processor power. Worst-case scenario, you can just do a random walk, and that takes up hardly any processing power. Cheating agents soon get found out, and this just becomes frustrating to the users of the game. Think 'Turing test'.


Every game AI cheats to some extent. Game AIs aren't expected to interpret a graphical display the way a human does. They also aren't usually expected to pass any Turing tests. If it comes down to it, it's usually better to cheat than to cut a feature out of a game because it couldn't be solved without cheating. A lot of games even have the opposite of cheating, where the AI is intentionally made unaware of information that a human would have available, since the goal isn't always to make the AI behave like a human.

Quote:
Every game AI cheats to some extent. Game AIs aren't expected to interpret a graphical display the way a human does.

I'm afraid I disagree quite heavily with this. In fact, I am currently working on a game sequel where my aim is to re-write the AI system so that it no longer cheats because it could be very frustrating for the user.

It does depend on what sort of AI system you're creating, though, because there are certainly times when you may want to cheat in order to add to the entertainment. But if you're writing an AI system that is meant to represent a player of some sort, I would strive to ensure that the AI can only 'see' what the human would see and can only perform operations the human can perform. Playing against a cheating AI can be extremely frustrating; no one likes competing against a cheat. Of course the overall aim for an AI system is to entertain the user, but in many cases, I would argue that an AI system that cheats will not provide good entertainment.

Quote:
Original post by DaAnde
I feel that an AI should be the same as a regular player so in my picture above the AI needs to find some tinder in order to start a fire.


Well, based on your example image I am going to assume that you have a simple grid-based map with three possible cell types: tree, tinder, and open space.

Typically, in exploration type problems you first want to identify elements of the map that are relevant to your particular search goal, and make logical assumptions based on these elements as humans tend to do in real life.

If finding tinder is the goal, then logically we would look for tinder in the vicinity of wooded areas rather than open areas. Given the one example map you created, this is the only assumption that can be derived from the map that could aid in finding the tinder more quickly.

To represent the assumption that tinder would more likely be near trees, you would weight the cells adjacent to tree cells in the grid with more favorable weightings, with these rewards diminishing the farther away from the tree you get. Your agent would then commence a walk through the grid, making greedy decisions and choosing successor cells with higher values where available. Doing this would make the agent prioritize searching wooded areas over searching open-space. Previously-visited cells would have their weights diminished to represent the fact that they have already been explored and are no longer of significant value to the search.

The proper term for these weighted grids would be "influence maps" I believe. There are plenty of papers available on the net discussing influence maps, so you should be able to devise a pretty workable solution to this problem using influence map techniques.
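
A rough sketch of that weighting scheme; the grid encoding, the decay factor, and the greedy step are assumptions made for the example rather than a canonical influence-map implementation:

# Sketch: weight every open cell by its distance to the nearest tree, so cells
# next to woods score highest and the reward falls off with distance. Visited
# cells are skipped (or could have their weight reduced) so the walk moves on.

def build_influence_map(grid, decay=0.5):
    # `grid` maps (x, y) -> 'tree' or 'open'. Returns (x, y) -> weight.
    trees = [c for c, t in grid.items() if t == 'tree']
    if not trees:
        return {c: 0.0 for c, t in grid.items() if t == 'open'}
    weights = {}
    for cell, tile in grid.items():
        if tile != 'open':
            continue
        # Manhattan distance to the nearest tree.
        d = min(abs(cell[0] - t[0]) + abs(cell[1] - t[1]) for t in trees)
        weights[cell] = decay ** (d - 1)   # 1.0 beside a tree, then 0.5, 0.25, ...
    return weights

def greedy_step(weights, pos, visited):
    # Move to the highest-weighted unvisited neighbour (falling back to any neighbour).
    x, y = pos
    neighbours = [n for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)) if n in weights]
    candidates = [n for n in neighbours if n not in visited] or neighbours
    return max(candidates, key=lambda n: weights[n])

Rather than tracking a visited set, you could simply lower weights[pos] after every visit, which is closer to the "diminish previously-visited cells" idea described above.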

Quote:
Original post by AngleWyrm
This looks like a Search problem. You have a clearly defined goal, but don't know where it is.


Which means it is not clearly defined... ;)

Instead of thinking about the problem as a path-finding problem, consider the problem to be a searching problem. What little remains of the path-finding part (it will be within line of sight as soon as it is found) can be done after the destination has been found.

In the illustration, there is a two-dimensional grid of locations, which can be categorized as Witnessed/UnWitnessed. First, look at all the visible locations and categorize them as Witnessed. The second part is to reduce the number of UnWitnessed locations in the most efficient manner, perhaps by moving in the direction that brings the most unwitnessed cells into line of sight. That would be a useful expected utility.

It might also be useful to consider a global perspective of the overall search: will the character run into edges of the map? A spiral generally covers the ground well. Are there locations of higher probability? Then start with the most likely of those, and so on.
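
A small sketch of that "move toward whatever reveals the most unwitnessed cells" rule; visible_from is a hypothetical helper standing in for whatever line-of-sight routine the game already has:

# Sketch: score each candidate move by how many currently unwitnessed cells
# it would bring into view, and take the best one.

def best_move(grid, moves, witnessed, visible_from):
    # `witnessed` is the set of cells already seen, `moves` the candidate
    # positions, and `visible_from(grid, cell)` returns the set of cells
    # visible from `cell`.
    def newly_revealed(move):
        return len(visible_from(grid, move) - witnessed)
    return max(moves, key=newly_revealed)

Ties could be broken by travel distance, or by the probability weighting mentioned earlier in the thread.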

[Edited by - AngleWyrm on February 11, 2008 1:17:48 AM]

I read the first post and did a word search for "Dijkstra", and it did not appear!

Dijkstra's algorithm isn't the fastest search algorithm, BUT it doesn't require a destination, so it is used for resource gathering in strategy games.
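
For reference, a sketch of that use of Dijkstra's algorithm: expand outward from the unit's position by movement cost with no fixed destination, and stop at the first resource tile settled, which is then the cheapest one to reach. The terrain-cost encoding is invented for the example:

import heapq

# Sketch: Dijkstra's algorithm with no fixed destination; the first tile
# holding the wanted resource to come off the priority queue is the
# cheapest one to reach.

def nearest_resource(cost, contents, start, resource):
    # `cost` maps (x, y) -> movement cost of entering the tile (None if impassable),
    # `contents` maps (x, y) -> what sits on the tile ('gold', 'open', ...).
    frontier = [(0, start)]
    settled = set()
    while frontier:
        dist, cell = heapq.heappop(frontier)
        if cell in settled:
            continue
        settled.add(cell)
        if contents.get(cell) == resource:
            return cell, dist
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in cost and cost[nxt] is not None and nxt not in settled:
                heapq.heappush(frontier, (dist + cost[nxt], nxt))
    return None, float('inf')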

Because the character in your diagram can't see the target, you will need to implement a simple line-of-sight algorithm that determines whether two tiles are visible from each other (no obstructions in a straight line).
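
One way to do that tile-to-tile visibility test is to rasterize the straight line between the two tiles with Bresenham's algorithm and check for blockers; treating only the strictly intermediate cells as potential obstructions is an assumption of this sketch:

# Sketch: two tiles see each other if no blocking tile lies on the straight
# line between them.

def line(a, b):
    # All grid cells on the line from a to b (inclusive), via Bresenham's algorithm.
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def in_sight(blocked, a, b):
    # True if no tile strictly between a and b is in the `blocked` set.
    return not any(c in blocked for c in line(a, b)[1:-1])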

Once the line-of-sight algorithm has compared the character's position with every other tile on the map, there will be two hidden tiles in sight: one leading to a dead end, another that leads to the goal.

So the character will need to make a choice between the dead-end route and the other one (just as one would in real life); the route could be chosen at random.

If the character chose to move to the hidden tile leading to the dead end, the line-of-sight algorithm would reveal no more hidden tiles to go to, so he would then choose to take the shortest path to the last hidden spot that was found (the one leading to the goal). Note: because this time the destination is known, a faster search algorithm such as A* may be used.

Image: http://i140.photobucket.com/albums/r22/Islip/ai_path.jpg

