What is Reasoning?

Recommended Posts

Hey guys, I've been doing a lot of research on reasoning mechanisms, but I have yet to find a hard, refined definition of reasoning in terms of AI. So far my idea of reasoning is the constraining of a problem space using rules and special operators on its states. If anyone can help me out with this topic, it would be appreciated beyond the bounds of this universe! This may be a bit of a n00b question and may be a bit inappropriate for this forum... :P Sorry :D Cheers, Leo

I am not really sure so please don't flame me if I am wrong but...

It seems that reasoning is a special way of coding AI where you have a set of possible actions, for example Kill(), Rob(), Suffocate() (cruel...). So how does your AI 'guy' know what to do? That's where reasoning comes in. You scan the environment, whatever it is, and add points to specific actions accordingly. For example, you check if that guy is rich. If he is, add points to Rob(). Then check if he is your evil nemesis. If so, add points to Kill(). Then check if people are around. If they are, add points to Suffocate(). Check your money balance. If it's low, add points to Rob(). Check if you have any weapons. If you have none, add points to Suffocate(), and so on.

And then, after all the checking is done, the action with the most points is performed.
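
In code it might look roughly like this (just a sketch; the condition flags, action names and point values are placeholders I've made up):

#include <iostream>
#include <map>
#include <string>

// Rough sketch of the "add points to actions" idea above.
// The condition flags and weights are invented for illustration.
int main()
{
    bool targetIsRich    = true;
    bool targetIsNemesis = false;
    bool peopleAround    = true;
    bool lowOnMoney      = true;
    bool hasWeapon       = false;

    std::map<std::string, int> score;

    if (targetIsRich)    score["Rob"]       += 2;
    if (targetIsNemesis) score["Kill"]      += 3;
    if (peopleAround)    score["Suffocate"] += 1;
    if (lowOnMoney)      score["Rob"]       += 2;
    if (!hasWeapon)      score["Suffocate"] += 2;

    // After all the checking, perform the action with the most points.
    std::string best;
    int bestScore = -1;
    for (std::map<std::string, int>::const_iterator it = score.begin(); it != score.end(); ++it)
    {
        if (it->second > bestScore)
        {
            bestScore = it->second;
            best      = it->first;
        }
    }

    std::cout << "Chosen action: " << best << std::endl;
    return 0;
}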

Again, I might be totally wrong, so sorry :(

Hope that helps anyway.

Quote:
Original post by Zodiak
I am not really sure so please don't flame me if I am wrong but... [...]


Hey dude, any feedback is appreciated, mate :D ...

I'm just trying to solidify the idea in my own mind, which is proving a little troublesome, so any thoughts on the topic are welcome. Thanks, mate :D

Reasoning is simply making the "right" or "proper" choice based on the information given.

So, for example, when trying to cross a busy street, you know that you have to look both ways to see that there are no cars coming before you cross. So, when there are no cars around, you make the choice to cross, thinking that it is a "proper" choice. However, the fact that you get killed halfway across by a crate falling out of a cargo plane 3000 feet up doesn't negate the fact that you made a "proper" choice. This is because, given the information and experience you had, you made a decision that seemed "proper."

So, to make reasoning actually possible in AI in gaming, you must have two things: a memory to store things that are learned, and a mechanism to analyze that memory, together with the current situation, to make a "proper" choice.
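
In very rough C++ terms, that split might look something like this (purely a sketch; the Agent class, the Observation struct and the situation strings are all made up to illustrate the memory/mechanism idea, not taken from any real engine):

#include <string>
#include <vector>

// Sketch only: an agent that remembers observations and decides from them.
struct Observation
{
    std::string what;   // e.g. "car approaching"
    bool        wasBad; // did acting in this situation lead to a bad outcome?
};

class Agent
{
public:
    // 1. Memory: store what has been observed/learned.
    void Remember(const Observation& obs) { memory.push_back(obs); }

    // 2. Mechanism: analyse the memory plus the current situation
    //    and pick what currently looks like the "proper" choice.
    std::string Decide(const std::string& situation) const
    {
        for (size_t i = 0; i < memory.size(); ++i)
        {
            if (memory[i].what == situation && memory[i].wasBad)
                return "wait";   // past experience says this is risky
        }
        return "act";            // nothing bad remembered, go ahead
    }

private:
    std::vector<Observation> memory;
};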

That's my take on reasoning, which is actually like a page out of texts on agents and their interaction with environments.

The definition provided by WeirdoFu is pretty good -- it's almost straight out of the agent/reasoning description in Russell and Norvig, I think :). Just one minor point, particularly applicable in the context of games:

While learning is ideal, it is not necessary for reasoning. Instead of generating the AI's knowledge at runtime, you can use some sort of offline knowledge base generated by hand. An expert system using production rules, frequently written by a knowledge engineer working with an expert in the domain, is one example.
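
For example, a tiny hand-authored production-rule knowledge base might look like this (only a sketch; the rule contents here are invented, whereas in a real system they would come from the knowledge engineer and the domain expert):

#include <iostream>
#include <set>
#include <string>
#include <vector>

// Sketch of an offline, hand-written knowledge base: production rules
// of the form "if all these facts hold, assert this conclusion".
struct Rule
{
    std::vector<std::string> conditions;
    std::string              conclusion;
};

int main()
{
    // Knowledge authored by hand, before the game runs.
    std::vector<Rule> rules;
    Rule retreat;
    retreat.conditions.push_back("enemy visible");
    retreat.conditions.push_back("low health");
    retreat.conclusion = "should retreat";
    rules.push_back(retreat);

    // Facts observed at runtime.
    std::set<std::string> facts;
    facts.insert("enemy visible");
    facts.insert("low health");

    // Fire any rule whose conditions are all present.
    for (size_t i = 0; i < rules.size(); ++i)
    {
        bool allHold = true;
        for (size_t c = 0; c < rules[i].conditions.size(); ++c)
        {
            if (facts.count(rules[i].conditions[c]) == 0) { allHold = false; break; }
        }
        if (allHold)
            std::cout << "Conclusion: " << rules[i].conclusion << std::endl;
    }
    return 0;
}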

You usually have rules.

You then use them to 'prove' an action before taking it.

For example:

Question: Should I kill X?

Rules/data:
People should not be killed.
Criminals should be killed.
Nemeses should be killed.
Anybody who attacks my friends is a nemesis.
X attacked Freddy.
Freddy is my friend.
Therefore X is a nemesis.
And therefore X should be killed.

Very logical.
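
A rough sketch of that kind of chaining in code (the rules are cut down to single conditions to keep it short, and a real system would also have to resolve the clash with "people should not be killed"):

#include <iostream>
#include <set>
#include <string>
#include <vector>

// Sketch of forward chaining: keep deriving new facts from rules
// until nothing more can be added, then check the question.
struct Rule
{
    std::string ifFact;
    std::string thenFact;
};

Rule MakeRule(const std::string& ifFact, const std::string& thenFact)
{
    Rule r;
    r.ifFact   = ifFact;
    r.thenFact = thenFact;
    return r;
}

int main()
{
    std::vector<Rule> rules;
    rules.push_back(MakeRule("X attacked a friend", "X is a nemesis"));
    rules.push_back(MakeRule("X is a nemesis",      "X should be killed"));

    std::set<std::string> facts;
    facts.insert("X attacked Freddy");
    facts.insert("Freddy is my friend");
    // A rule with two conditions would normally combine the two facts above;
    // it is hard-wired here to keep the sketch to one-condition rules.
    facts.insert("X attacked a friend");

    bool derivedSomething = true;
    while (derivedSomething)
    {
        derivedSomething = false;
        for (size_t i = 0; i < rules.size(); ++i)
        {
            if (facts.count(rules[i].ifFact) && !facts.count(rules[i].thenFact))
            {
                facts.insert(rules[i].thenFact);
                derivedSomething = true;
            }
        }
    }

    std::cout << (facts.count("X should be killed") ? "Kill X" : "Don't kill X") << std::endl;
    return 0;
}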

From,
Nice coder

Quote:
Original post by ballist1c
Thanks guys, really helpful :D

now to start the nitty gritty :D


Cheers


We're allowed to do that?

Ok.

You have your rules, which reference attributes.
You check all of the rules once at the start; then, as things change, you only update the rules that use the attribute that changed.

A really easy way to do this is a set of linked lists, one per attribute.
Now, when you make a new rule, you add an entry, with a pointer to the rule, to the linked list of each attribute it uses.

When one of the attributes changes, go through the corresponding linked list and update all of those rules.
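
Roughly like this (a sketch only; I've used std::map and std::list instead of hand-rolled linked lists, and the rule and attribute names are invented):

#include <iostream>
#include <list>
#include <map>
#include <string>
#include <vector>

// Sketch: one list of rule pointers per attribute, so that when an
// attribute changes we only re-evaluate the rules that actually use it.
struct Rule
{
    std::string name;
    void Update() { std::cout << "re-evaluating " << name << std::endl; }
};

class RuleIndex
{
public:
    // When a rule is created, register it under every attribute it reads.
    void AddRule(Rule* rule, const std::vector<std::string>& attributes)
    {
        for (size_t i = 0; i < attributes.size(); ++i)
            index[attributes[i]].push_back(rule);
    }

    // When an attribute changes, walk only its list and update those rules.
    void AttributeChanged(const std::string& attribute)
    {
        std::list<Rule*>& rules = index[attribute];
        for (std::list<Rule*>::iterator it = rules.begin(); it != rules.end(); ++it)
            (*it)->Update();
    }

private:
    std::map<std::string, std::list<Rule*> > index;
};

int main()
{
    Rule robRule;  robRule.name  = "Rob";
    Rule killRule; killRule.name = "Kill";

    RuleIndex index;
    std::vector<std::string> robAttrs;
    robAttrs.push_back("money");
    robAttrs.push_back("targetWealth");
    std::vector<std::string> killAttrs;
    killAttrs.push_back("targetIsNemesis");
    index.AddRule(&robRule,  robAttrs);
    index.AddRule(&killRule, killAttrs);

    index.AttributeChanged("money");  // only the Rob rule is re-evaluated
    return 0;
}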

From,
Nice coder

IMO, reasoning is just chaining rules and facts together to decide on an action, as Nice Coder has shown.

I want to go to _place_.
On my path to _place_, there is a road.
The road is busy with cars.
Crossing a road while it is busy with cars leads to death.
Death would prevent me from getting to _place_ (or cause me to take longer, depending on the game).
Thus, I must find an alternative to crossing the busy road.

There is another path that does not require walking across this road.
The alternate path is 30 seconds longer than this path.

Roads that are busy become no longer busy after some time.
Roads often stay busy from 4 PM to 5 PM.
It is 4:30 PM; thus the road will be busy for another 30 minutes.

The alternate path will take less time than waiting for the road to no longer be busy.

-> Take alternate path
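
Once the facts are chained together, the final choice is just a comparison of two estimates; as a tiny sketch (numbers taken from the chain above):

#include <iostream>

// Sketch of the final decision step from the chain above.
int main()
{
    const int detourExtraSeconds  = 30;        // alternate path is 30 seconds longer
    const int expectedWaitSeconds = 30 * 60;   // road stays busy for ~30 minutes

    if (detourExtraSeconds < expectedWaitSeconds)
        std::cout << "Take alternate path" << std::endl;
    else
        std::cout << "Wait for the road to clear" << std::endl;
    return 0;
}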

I think reasoning is thinking through your problems and coming up with reasons why you might want to go this way or that way. Subconscious reasoning is different in that you need to come up with a decision quickly, and perhaps feelings and past experiences play a part in that process. Relying on your gut feelings comes to mind.

Maybe:

if (I'm on fire)
    if (water is nearby)
        if (I'm mobile)
            go get water and pour it over my body

and/or

if (I'm on fire)
    if (I can lie on the ground)
        if (the ground is unobstructed)
            roll on the ground

Now, the most important thing to take from the example is that when you're inexperienced you don't know which one to take. Perhaps try the water first and then the rolling-on-the-ground method, or do it in reverse. You can't trust your reasoning without past experience, so you take a gamble.

Notice something else: there is a rule for fire here. If you know that fire needs oxygen to sustain itself (along with fuel), then to cut off this oxygen you can apply water (or another liquid, or sand) or roll on the ground. But you need to know that about fire first. You might make a logical jump by saying that rolling on the ground doesn't let oxygen into the fire, which would be like pouring water over yourself. Thus, even if you have the facts, you can combine various facts to come up with new ways to solve a problem. I think these ideas are spur-of-the-moment type things, so your AI would have to create these associations in real time. No wonder this processing takes time when you're under fire. Reminds me of a deer-in-headlights situation.
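
As a very rough sketch of that "gamble first, then trust experience" idea (the strategies and the success bookkeeping are invented purely for illustration):

#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

// Sketch: with no experience, pick a strategy at random; afterwards,
// remember how well each one worked and prefer the best one next time.
struct Strategy
{
    std::string name;
    int successes;
    int attempts;
};

std::string PickStrategy(const std::vector<Strategy>& strategies)
{
    int bestIndex = -1;
    double bestRate = -1.0;
    for (size_t i = 0; i < strategies.size(); ++i)
    {
        if (strategies[i].attempts == 0)
            continue;                        // no experience with this one yet
        double rate = double(strategies[i].successes) / strategies[i].attempts;
        if (rate > bestRate) { bestRate = rate; bestIndex = int(i); }
    }
    if (bestIndex < 0)                       // never tried anything: gamble
        bestIndex = std::rand() % int(strategies.size());
    return strategies[bestIndex].name;
}

int main()
{
    std::vector<Strategy> onFire;
    Strategy water = { "pour water", 0, 0 };
    Strategy roll  = { "roll on ground", 0, 0 };
    onFire.push_back(water);
    onFire.push_back(roll);

    std::cout << "First time on fire, try: " << PickStrategy(onFire) << std::endl;

    onFire[1].attempts = 1; onFire[1].successes = 1;   // rolling worked last time
    std::cout << "Next time, prefer: " << PickStrategy(onFire) << std::endl;
    return 0;
}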

Within AI at least, there are actually several different sorts of reasoning. Philosophically speaking, there is inductive reasoning and deductive reasoning. The over-arching theme is the production of new information, or the revision of belief in a piece of information, held by an agent/entity/thing. This may include the assimilation of observations from a domain, or it may be completely internal, relying on the hypothesising of new information ("What would be the result of *this* happening?"). Typically such reasoning is assumed to rely on a Logic, which provides rules for manipulating information.

There's plenty of literature out there, from methods for computational reasoning (AI) to philosophical works on the nature of reasoning (and whether it's even possible; cf. Hume on the problem of induction).

Enjoy!

Cheers,

Timkin
