DaAnde

Path-Finding with unknown goal Positions


Recommended Posts

Hey, I am writing a paper for school on AI in video games. Part of my paper is devoted to path-finding. I would like to know which are some of the best path-finding algorithms to use when the AI does not know where its goal is located. I was thinking of using some sort of line-of-sight algorithm, but I cannot find any that are applicable to this situation. Here is an illustration to help show what I mean: [image no longer available]

If you're playing an adventure game and they don't tell you where a key is, how do you go about searching for it? Break down your own exploration approach and make your program do that. Then refine it if possible.

IMO path-finding by definition requires a goal, so trying to find an object without knowing its location is not path-finding; it's exhaustive exploration. You may be able to just leave this situation out of your paper.

This isn't pathfinding; it is exploration. The way in which the agent senses the world is just one small detail of the actual problem. It could just as well sense the contents of certain tiles that are a knight's move away, or something similarly arbitrary. Assuming the goal could be anywhere in the world, the agent should choose its actions to minimize the expected time to find the object it is looking for. This can be a difficult optimization problem, and there could be other domain-specific details to make things more complicated.

I think this problem could be modeled as a POMDP (partially observable Markov decision process), because the agent does not know where its goal is, but you could give it a probability function that describes the chance of the goal being in any particular place. The aim is to reach the state in which the agent knows where the goal is. As the agent explores the world, the chance of the object being in the explored areas is reduced to 0, and the chance of it being in the unexplored areas correspondingly increases. This could even be used to bias the search, or possibly to represent additional domain information (maybe the item is more likely to be found in a forest or on a mountain).
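A minimal sketch of that belief update, assuming a simple grid world (all names here are hypothetical, not from any particular engine): a uniform prior over tiles, where each tile the agent observes to be empty is zeroed out and the remaining probability mass is renormalised, so unexplored areas become correspondingly more likely.

```python
def make_belief(width, height):
    """Uniform prior: the goal is equally likely to be in any tile."""
    p = 1.0 / (width * height)
    return [[p for _ in range(width)] for _ in range(height)]

def observe_empty(belief, x, y):
    """The agent has seen tile (x, y) and the goal is not there:
    zero that tile and renormalise so the grid still sums to 1."""
    belief[y][x] = 0.0
    total = sum(sum(row) for row in belief)
    if total > 0:
        for row in belief:
            for i in range(len(row)):
                row[i] /= total
    return belief
```

Domain knowledge (the forest/mountain bias mentioned above) would simply be a non-uniform prior instead of `make_belief`'s uniform one.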

Once the goal is found, then it becomes a pathfinding problem.

There are also the related problems of pathfinding in an uncertain or unknown environment, exploring an unknown environment, and determining where you are in an environment given a map and local observations (localization). That last one is important in robotics but not in games, where the easiest solution is to cheat since there's no reason to model how the agents read a map...

In the paper I am going to discuss advanced techniques involved in creating realistic AI. The reason I am wondering about this is that with an algorithm such as A* there are certain situations that will seem unrealistic, such as an RTS where the computer has to find gold. I feel that, as the player has to explore to find gold, it would be most realistic for the computer to also have to explore to find it, rather than already knowing where it is and automatically heading there.

I see what you guys are getting at. Instead of discussing how to go about exploring an environment, I will most likely discuss how to cheat, while making it seem realistic, in order to save processing power and memory.

Exploration is a perfectly valid problem; it just isn't pathfinding and doesn't belong in a section that is purely about pathfinding. A lot of stealth games don't let their agents cheat, and this allows the player to hide from them.

Exploration is an example of the sort of problem known as decision making, whereby an agent needs to choose between multiple (usually conflicting) options. In the exploration problem the options are typically movement directions (and maybe speeds); however, the value of a move is not a known function of position, since you don't have a fixed goal to move toward (and hence cannot use distance functionals as objective functions). Consequently you need some other measure of the quality of a move. A common metric is expected utility, and one definition of a rational agent is one that seeks to maximise the expected utility associated with its actions.

In the absence of such measures, a system would either perform a preset search pattern or a random walk.
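A minimal sketch of expected-utility action selection. The action names and the probability/utility numbers below are made up purely for illustration; in a real exploration problem they would come from the agent's belief about the world.

```python
def best_action(actions, outcomes):
    """outcomes maps each action to a list of (probability, utility)
    pairs; return the action with the highest expected utility."""
    def expected_utility(a):
        return sum(p * u for p, u in outcomes[a])
    return max(actions, key=expected_utility)

moves = ["north", "south"]
outcomes = {
    "north": [(0.8, 10.0), (0.2, -5.0)],  # likely payoff, small risk
    "south": [(0.5, 6.0), (0.5, 6.0)],    # certain modest payoff
}
# north: 0.8*10 + 0.2*(-5) = 7.0 beats south's 6.0
```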

Cheers,

Timkin

From the look of it you want an exhaustive algorithm like a breadth-first search. I.e. you start at the current location, search each adjacent node (tile), then each node adjacent to those, and so on until you find what you are looking for or all nodes have been searched. As for whether this is the best algorithm, well :)

I think you were on the right track when you were talking about line of sight. If you don't know where to go, then perhaps the best thing to do would be to use your line of sight to see if there is a decent vantage point, from which you can then look in several different directions, either to see the gold, or to find another vantage point, and so on. When no more vantage points (or gold) can be seen via the line-of-sight algorithm, then you will probably want to employ some kind of walking algorithm that takes your agent into areas that cannot be seen.

I would certainly advise against cheating in order to save processor power. Worst-case scenario, you can just do a random walk, and this takes up hardly any processing power. Cheating agents soon get found out, and this just becomes frustrating to the players of the game. Think 'Turing test'.
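That worst-case random walk could be sketched like this, assuming the same list-of-strings grid with `'#'` walls (the function name is my own):

```python
import random

def random_walk_step(pos, grid):
    """Move to a random passable neighbour; '#' tiles are walls.
    A cheap fallback when no better exploration signal exists."""
    x, y = pos
    h, w = len(grid), len(grid[0])
    options = [(x + dx, y + dy)
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if 0 <= x + dx < w and 0 <= y + dy < h
               and grid[y + dy][x + dx] != '#']
    # Stay put if boxed in on all four sides.
    return random.choice(options) if options else pos
```

An unbiased walk revisits tiles constantly, so in practice you would weight it toward unvisited tiles, but even this naive version never cheats.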

The whole reason I started this topic was to be pointed in some direction for the pathfinding part of my paper, which is about creating realistic AI in video games. I feel that an AI should behave the same as a regular player, so in my picture above the AI needs to find some tinder in order to start a fire.
