Archived

This topic is now archived and is closed to further replies.

path planning

Dovyman    277
Alright, so for my research project I'm going to do path planning, and I was wondering if you guys had any advice or cool articles. I know so far that I want to use a fuzzy system for the low-level reactive behaviors, and some type of NN for higher-level planning. I'm going to use the FEAR framework, which mods Q2. The part I need the most guidance on at the moment is how you put the two (I suppose they might be called "layers") together. Also, any fuzzy pathfinding tutorials that are out there would be useful, but creating a system on my own shouldn't prove terribly difficult. I've already found a number of resources; I'm just looking for any more input you guys have. Paul.
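For concreteness, a low-level fuzzy reactive rule of the kind described might look something like the sketch below. This is not FEAR code; the membership functions, thresholds, and the single avoid rule are all invented for illustration.

```python
# Hypothetical fuzzy reactive rule: two fuzzy sets over obstacle distance,
# defuzzified by weighted average into a turn rate in [0, 1].

def mu_near(d, lo=0.0, hi=2.0):
    """Membership in 'obstacle is near': 1 at lo, 0 at hi, linear between."""
    if d <= lo:
        return 1.0
    if d >= hi:
        return 0.0
    return (hi - d) / (hi - lo)

def mu_far(d, lo=0.0, hi=2.0):
    """Membership in 'obstacle is far' (complement of near)."""
    return 1.0 - mu_near(d, lo, hi)

def turn_rate(obstacle_dist):
    """Weighted-average defuzzification of two rules:
       IF near THEN turn hard (1.0); IF far THEN go straight (0.0)."""
    w_near = mu_near(obstacle_dist)
    w_far = mu_far(obstacle_dist)
    return (w_near * 1.0 + w_far * 0.0) / (w_near + w_far)

print(turn_rate(0.5))  # mostly 'near': strong turn
print(turn_rate(1.9))  # mostly 'far': slight turn
```

A real controller would have more input variables (heading to goal, distance to several obstacles) and a rule table rather than two rules, but the structure is the same: fuzzify inputs, fire rules, defuzzify to one crisp output.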

Dovyman    277
Ok, I suppose it would. I'm going to use a fuzzy system for the actual pathfinding. The neural network would be used to reinforce desired behaviors, like picking up powerups, or going via a route that doesn't get the bot killed. At least that's my understanding of a way it could work. I'm still reading through Alex C.'s dissertation concerning a similar system; however, I'd like mine to be much simpler, since I'm going into a science fair, not a master's degree. Does what I am saying make sense?

Timkin    864
It sounds like what you're attempting to do is use a fuzzy neural model to make decisions about branches to take in a tree structure (defined by action choices), based on inputs that rate the quality of each branch (local powerups, number of enemies in the vicinity, etc.). Is this correct?

If it is, then my personal opinion is that there are far more reliable, far easier to implement, and better-established methods for performing such planning that offer provable and measurable results.

Regards,

Timkin

Dovyman    277
Alright, well I guess I'm not heading in a wonderfully productive direction... so what kind of stuff in the area of subjective path planning would make for good research? You're talking about lots of different methods, but I've found only a few papers dealing with the topic. Could you give me some insight?

Edit: perhaps some more information is in order, now that I think about it. I want to do a project in the area of finding a path through a dynamic world, when the robot has no preconceived map of the area.

[edited by - dovyman on September 5, 2003 3:54:34 PM]

Dovyman    277
Yeah, I'm in the process of reading it.

I was throwing around some ideas with a CS guy tonight, and he mentioned a project they had done which took readings on chemicals and returned a confidence level for its predictions of which chemical it was.

So we started talking about applying something like that to a project. It seems like there's almost a missing link: most autonomous agents that navigate through unknown environments do so very reflexively, for example via a subsumption architecture. And on the other side of things, there are obviously algorithms like A* that deal with pathfinding when you have an internal representation. Now, applying the confidence-level idea, it would seem interesting if algorithms could be found that could "mix" these two areas.
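The subsumption style mentioned above can be sketched as a priority stack of behaviors, where the first layer that has something to say suppresses everything below it. All the behavior names and thresholds here are made up for illustration.

```python
# Toy subsumption-style arbiter: behaviors are tried in priority order,
# and the first one that produces an action suppresses the layers below it.

def avoid(percepts):
    """Highest priority: swerve away from a close obstacle."""
    if percepts.get("obstacle_dist", 99.0) < 1.0:
        return "swerve"
    return None  # defer to lower layers

def seek_goal(percepts):
    """Middle priority: head toward the goal if one is visible."""
    if percepts.get("goal_visible"):
        return "head_to_goal"
    return None

def wander(percepts):
    """Lowest priority: default exploratory behavior, always fires."""
    return "wander"

LAYERS = [avoid, seek_goal, wander]  # highest priority first

def arbitrate(percepts):
    for behaviour in LAYERS:
        action = behaviour(percepts)
        if action is not None:
            return action

print(arbitrate({"obstacle_dist": 0.5, "goal_visible": True}))  # avoid wins
print(arbitrate({"goal_visible": True}))
print(arbitrate({}))
```

Note that nothing here consults a map; that is exactly the "missing link" being discussed, since the arbiter reacts only to the current percepts.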

The human brain must maintain internal representations of some sort, because we can find our way easily around familiar places, like our houses; yet it is not limited to this, because obviously if someone moved your chair, you're unlikely to then be unable to navigate the room. (We have the ability to reason about our path, yet still implement reactive responses.) Now of course this brings up the issue that if you have an internal representation, you have to check it for accuracy. I think you might work around this problem by blurring the precision of the representation. For example, you could tell me the basic layout of your house, but you are unlikely to be able to say, "I have two rooms, x feet by y feet, connected by a hallway z feet long."

In short, if you could quickly calculate the confidence of the bot's knowledge of an environment to some degree (fuzzy), for example knowing just the fundamentals of a room's layout, like its boundaries, then you could do this "mixing" with reactive behaviors to plan a path through the rooms to your destination while keeping an eye out for obstacles. The amount of confidence would determine the "mixing" of the methods: if you know nothing, you must rely on purely reflexive behaviour, but if you know the boundaries of the room, you are in considerably better shape.
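One naive way to picture this mixing: blend a heading suggested by the internal map with a heading suggested by the sensors, weighted by map confidence. The function and numbers below are purely illustrative, not a proposal of how the weighting should actually be computed.

```python
# Illustrative confidence blend: steer between a planned heading (from the
# internal map) and a reactive heading (from sensors), weighted by the
# agent's confidence in its map, a value in [0, 1].

def blend_heading(planned, reactive, confidence):
    """confidence=1 means trust the plan; confidence=0 means pure reflex."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return confidence * planned + (1.0 - confidence) * reactive

# Known room boundaries: lean on the plan.
print(blend_heading(planned=90.0, reactive=45.0, confidence=0.8))
# Unknown territory: rely on reflexes.
print(blend_heading(planned=90.0, reactive=45.0, confidence=0.1))
```

A real implementation would need to blend angles with wraparound handled (350 degrees and 10 degrees are close, but their naive average is not), and confidence would itself come from comparing sensor readings against the map.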

I hope someone actually reads this extremely long post. I'd like some feedback on what I've said; don't be too harsh, I'm just kind of spewing out some ideas that I've been mulling over.

Timkin    864
quote:
Original post by fup
Alex (FEAR) has written a paper about this:



I wrote a whole PhD thesis on this!

quote:
Original post by Dovyman
So we started talking about applying something like that to a project. It seems like there's almost a missing link: most autonomous agents that navigate through unknown environments do so very reflexively, for example via a subsumption architecture.



If you have no knowledge of your environment beyond simple sonar-style (or similar vision-based) sensor readings, then you can only react to what you detect with your sensors, implementing a reactive strategy like "identify objects, then avoid objects while moving north". Reactive plans offer no guarantees of global optimality. Agre & Chapman give a really good coverage of this problem (reactive planning) in their 1987 paper.

quote:
Original post by Dovyman
And on the other side of things, there are obviously algorithms like A* that deal with pathfinding when you have an internal representation. Now, applying the confidence-level idea, it would seem interesting if algorithms could be found that could "mix" these two areas.



They already exist. They involve learning an internal representation of the environment (for example, a map) and utilising this for path planning. In my PhD thesis, I extended this idea to learning in extremely dynamic environments (fully autonomous robotic aircraft flying in a cyclone) and to dealing with uncertainty in the internal model, both when planning and when deciding to throw the current plan away and find a new one. This uncertainty covered two aspects: 1) mismatch between the model (and model dynamics) and the real world; and 2) the uncertainty inherent in the evolution of the beliefs (because of initial uncertainty in the state of the domain).

In particular, I developed a robust algorithm for triggering replanning in dynamic environments that are subject to uncertainty; it's called Probabilistic Prediction-based Replanning (PPR). The general idea is that you start with a model of the environment (which could be complete ignorance) and come up with a plan. While you're executing that plan, you're learning a better model of the evolving environment and using this to re-evaluate your beliefs about the quality of your current plan. Given the agent's preferences and new beliefs about the evolving environment, there will be a point at which the agent would prefer to spend some time formulating a new plan rather than continue to execute its current plan to completion. This new plan can be computed while still executing the current steps of the current plan, and the method then guarantees a gradual decline in the perceived value of the current plan prior to plan failure. Plan failure is actually avoided, so the agent is always working to complete its goals as best it can. The algorithm also offers guarantees about the optimality of the plan it ultimately executes (composed of the parts of each of its successive plans that it executes), given the dynamic nature of the environment. That is, given the parameter choices, the algorithm guarantees that the agent follows the lowest-cost plan at all times, given its beliefs, which change throughout time.
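Since PPR itself is unpublished, its details can't be reproduced here, but the trigger described above can be caricatured: replan once the believed cost of the current plan exceeds the estimated cost of the best alternative plus the cost of planning itself. Everything below (the rule and the numbers) is an invented illustration, not Timkin's algorithm.

```python
# Caricature of a replanning trigger (NOT the actual PPR algorithm):
# abandon the current plan only when doing so is expected to pay off,
# i.e. when its believed cost exceeds the best alternative's cost plus
# the cost of the planning effort itself.

def should_replan(believed_cost, best_alt_cost, planning_cost):
    """True when formulating a new plan beats finishing the current one."""
    return believed_cost > best_alt_cost + planning_cost

# As the environment drifts, the believed cost of the current plan rises;
# at some step the agent prefers to spend time planning anew.
best_alt, plan_cost = 9.0, 2.0
for step, believed in enumerate([10.0, 10.5, 11.8, 13.2, 14.9]):
    if should_replan(believed, best_alt, plan_cost):
        print("replan at step", step)
        break
```

The hard parts that this sketch hides are exactly what the thesis addresses: estimating the believed cost from an uncertain, evolving model, and doing the new planning concurrently with execution.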

Unfortunately you won't find any publications on PPR at this time, as I've been too swamped in my current job to publish the couple of half-written papers on the subject. As for my PhD thesis, it's still being examined (yes, STILL... apparently there are issues with the appropriateness of the examiners), so I can't give you a copy yet. However, I'm happy to discuss general ideas with you and to help with analysing others' ideas.

Cheers,

Timkin

fup    463
I know that, Timkin, but your thesis is based in the real world, whereas the OP mentioned he is going to use FEAR... exactly the same environment Alex used for testing his design.

Dovyman    277
Dammit, I hate when I come up with cool ideas and then realize I wasn't the first to think of them. Well, this isn't exactly college research, so moving in a similar direction to other people isn't an issue. You mentioned that the mixing idea had already been done; do you know the title of the paper? Also, if you have any interesting ideas for new research in this area, please share them with me :-).

Timkin    864
quote:
Original post by Dovyman
You mentioned that the mixing idea had already been done, do you know the title of the paper?


Off the top of my head I cannot name a paper... but that's because there have been many on this issue. Most particularly, they crop up in the robotics community, so you should start there. If I can find some time I'll take a look through my bibliographic database and see if anything jumps out at me!

quote:
Original post by Dovyman
Also if you have any interesting ideas for new research in this area please share them with me :-).



I have plenty of ideas, actually... but then there's the issue of giving away my ideas to other people/other research institutions!

One interesting problem that still needs to be solved is how to represent an environment internally in an efficient manner that also lends itself to answering queries about the environment efficiently. So, for example, how does one represent the rooms inside a house and the items that are scattered around inside it? Typically people use geometric representations listing the location and extent of objects, and then create geometric paths around objects for robots/agents to move along. Is this efficient? It is certainly efficient if you want to visualise the room exactly as it looks to the eye... but it's not very efficient for finding paths, particularly if you have a robot/agent that can navigate around obstacles with reactive behaviours. Then what you really want is a semantic description of the environment, so that the robot/agent knows there is a coffee table behind the couch it is trying to avoid, and knows that it might find the car keys on the coffee table, without actually having a representation of the keys sitting on a representation of the table. Get it?
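A semantic description like the one above might be stored as a small set of relation facts, queried without any geometry at all. The facts, relation names, and query function below are invented purely to make the idea concrete.

```python
# Tiny semantic world model: facts are (subject, relation, object) triples,
# so the agent can answer "where might the keys be?" with no geometry.

facts = {
    ("coffee_table", "behind", "couch"),
    ("keys", "likely_on", "coffee_table"),
    ("couch", "in", "living_room"),
    ("coffee_table", "in", "living_room"),
}

def query(relation, subject=None, obj=None):
    """Return (subject, object) pairs matching a relation, optionally
    filtered by subject or object."""
    return [
        (s, o) for (s, r, o) in facts
        if r == relation
        and (subject is None or s == subject)
        and (obj is None or o == obj)
    ]

print(query("likely_on", subject="keys"))  # where to look for the keys
print(query("in", obj="living_room"))      # what the living room contains
```

The appeal for planning is that "go to the coffee table" becomes a graph query plus reactive navigation, instead of a geometric path threaded around exact object extents.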

In terms of replanning, are there better ways to estimate the value of a state than expected utility? Some people have looked at using risk to moderate the value of paths. A classic example is the cliff-side problem, where an agent must choose between two paths. One has a higher cost but lower risk of danger, while the other is a cheaper path but with a higher risk of danger. Are there other ideas obtained from analysing human behaviour that could help to quickly and efficiently estimate the value of a plan?
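The cliff-side example above can be worked through numerically. One common way to moderate value with risk is to add a penalty on top of the plain expected cost; the specific penalty form and all the numbers here are invented for illustration.

```python
# Cliff-side example: score a path by expected cost plus a risk penalty,
# so a risk-averse agent can prefer the longer, safer route.

def risk_adjusted_cost(base_cost, p_disaster, disaster_cost, risk_aversion):
    """Expected cost, plus an extra penalty on the risky part scaled by
    the agent's risk aversion (0 = risk neutral)."""
    expected = base_cost + p_disaster * disaster_cost
    return expected + risk_aversion * p_disaster * disaster_cost

# Short path along the cliff edge vs. a long detour inland.
cliff = risk_adjusted_cost(base_cost=10, p_disaster=0.2,
                           disaster_cost=100, risk_aversion=1.0)
detour = risk_adjusted_cost(base_cost=35, p_disaster=0.01,
                            disaster_cost=100, risk_aversion=1.0)
print(cliff, detour)  # the risk-averse agent takes the detour

# A risk-neutral agent (risk_aversion=0) scores only expected cost and
# prefers the cliff path instead.
```

The open question raised in the post is whether penalties like this one actually capture how humans trade cost against danger, or whether a better-motivated estimate exists.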

These are just two ideas. There are plenty more that you should easily come up with if you read the literature on planning for autonomous agents.

Good luck,

Timkin

Neoshaman    180
Argh... I'm shaking with fear. I'm building an AE and was a total noob at AI. I had a design issue and tried to resolve it without realizing it was about AI. When I asked for help, someone told me it was an issue from the AI field, so I started reading AI texts, but from the games field only. Now I'm starting to see that there is something outside this field that I should look into.

The fact is, in my AE I'm dealing with a mixing problem, and the model is (to my knowledge) something like a SFuWSNN (semantic fuzzy weighted state neural network; meaning the inputs are semantic, each is assigned a dynamic weight, which is appraised by an FSM-like neuron in a network, and there is a loopback from the output to the input of the network which changes the weight of the semantic appraisal). That's terribly annoying, because it may be something else that I don't know about, and I may be reinventing the wheel.

Actually, the engine isn't fully implemented yet, so I can't see its flaws (and I'm not a real programmer), but it works with an internal representation of the world divided into two layers: objects, which are vectors of weighted attributes (attributes are things which come in through sensors and are sent by the objects); and relations between objects, which are also weighted.

I think one problem AI has to deal with is information discrimination. As far as I can tell, people overestimate the brain and underestimate some of its abilities. Emotion (as I've read) is often seen as stimulus-response, but the neurological research I've read presents emotion as an appraisal and discrimination of information, and also as a supervising system for learning, which both competes with logic and appraises logic as well. Emotions are led by primary needs and goals, and they inhibit or reinforce the discrimination of information and of sub-goals/needs.

See, for example, Maslow's hierarchy of needs.

I think every intelligent system has to deal with some kind of 'emotion' (meaning appraisal and supervision of a task), though it can be far from human emotion.

I made this post to get more information on what I'm doing: its illusions and strengths, related work, new terms that I don't know, etc. I began the work one month ago.

Well, I'm sorry if this is confusing; expressing a subject I don't have much knowledge about in English is hard, because I'm a French native speaker.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

Neoshaman    180
Well, when I got into the AI problem and world representation, I was guided by the Hindu concepts of leela and maya (which led me to memetics via MGS2) and by quantum physics (useful for problems about dynamic world representation). Rather than taking a specific task focus, I thought of the whole as a system, where an agent can be defined only at one layer of that system (it is more diffuse, just as in quantum physics a fish in a pond is dissolved anisotropically throughout the whole pond).

Well, I never realized when I began that this was AI.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>
