Archived

This topic is now archived and is closed to further replies.

For the real AI gurus: Timkin, and so forth


Ponder this. I am not going to elaborate on it entirely in this first post, but if discussion follows, then I'll say more about it. Program knowledge into your agents via some dialect of predicate calculus. Parse the statements into clause normal form. Produce a graph from these statements, connecting those which share common predicates or atoms. So far, this is not a major programming undertaking. Programming the knowledge itself requires time and careful thought, but the mechanism to accept that knowledge and build the graph does not.

Now, if you apply the standard resolution refutation method to the knowledgebase, you can extract everything which is both explicit and implicit in the knowledgebase. You can accept or reject new incoming knowledge based on what the existing knowledgebase knows. The problem is the resolution process and how it breaks down with large knowledgebases.

OK, enter the vertex min cut algorithm. I don't entirely understand it, but Diodor, another poster on GDNet, does. If you apply this algorithm to the graph, you can compartmentalize the graph into subgraphs connected by minimal 'pipes' between the subgraphs. Basically, the resolution search space is reduced dramatically: the algorithm only needs to search into another subgraph if that atom / predicate combination was not eliminated while in the current subgraph. Further, research seems to say that common sense knowledgebases don't suffer from the same intense search spaces that most theorem provers are put through.

Bottom line: current research in the area may lead to tractable planners and reasoners which could be put into game environments.
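For concreteness, the resolution refutation core might be sketched like this. This is a minimal propositional version only; a real system would parse the predicate-calculus dialect into clause normal form and unify variables, which is omitted here, and the example clauses (`rains`, `wet`) are mine, not from the thread:

```python
# Minimal propositional resolution refutation sketch.
# Clauses are frozensets of literals; "~p" is the negation of "p".
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield every resolvent of two clauses (one complementary pair each)."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def entails(kb, query):
    """True if kb entails the literal `query`: add the negated query
    and search for the empty clause (a contradiction)."""
    clauses = set(kb) | {frozenset([negate(query)])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:            # derived the empty clause
                    return True
                new.add(r)
        if new <= clauses:           # fixed point: no refutation exists
            return False
        clauses |= new

kb = [frozenset(["~rains", "wet"]),  # rains -> wet, in clause form
      frozenset(["rains"])]
```

With this toy KB, `entails(kb, "wet")` succeeds because the implicit fact is extracted by refutation, while `entails(kb, "~wet")` reaches a fixed point and fails.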

I'm flattered by the label 'guru', although I would consider that an exaggeration of my knowledge!

Anyway...

I believe we've discussed this idea of yours in the past, b_p. Some new general comments would be along the lines of:

1: The only conclusions your knowledge base could draw are any and all entailed by the knowledge base.

2: The biggest concern (and I don't know WHY it ended up at number 2!) is that if your KB receives contradictory evidence at a later time, then ANYTHING can be inferred from the KB.

3: There is a striking similarity between your idea and the application of the minimum cutset algorithm to Bayesian networks. Each of your subgraphs would, I suspect, represent a clique. However, the Bayesian network implements a non-monotonic logic, meaning that it is not subject to (2).

4: Given that Bayesian networks implement a non-monotonic logic, and given that they can easily be augmented to include decisions and utilities, wouldn't it be better to utilise them for planning (as is a current thrust of research atm)?

However, throwing aside my doubts... I'd be very interested to see this in action, even if only on paper and only on a small problem.

Cheers,

Timkin

quote:
Original post by Timkin
1: The only conclusions your knowledge base could draw are any and all entailed by the knowledge base.


Not true, because sensory information brings in new knowledge. For example, another agent tells our agent that the revolver is in the drawer. This information, if not contradicted by the agent's existing knowledge, is assimilated into the knowledgebase, thus enabling new conclusions to be drawn, such as: the gun is in the desk, the gun is in Bill's study, the gun is in Bill's house, and so on.
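As a toy sketch of the drawer example (the container names are my own flattening of the located-in facts): one new fact from a sense report, plus transitive closure over the located-in relation, yields the whole chain of further conclusions.

```python
# Transitive closure over (thing, container) pairs: if a is in b and
# b is in c, then a is in c.
def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Existing knowledge: the drawer is in the desk, in the study, in the house.
facts = {("drawer", "desk"), ("desk", "study"), ("study", "house")}
facts.add(("revolver", "drawer"))        # new sensory information
inferred = transitive_closure(facts)
```

After the assertion, `("revolver", "desk")`, `("revolver", "study")`, and `("revolver", "house")` are all derivable, mirroring the conclusions listed above.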

quote:
Original post by Timkin
2: The biggest concern (and I don't know WHY it ended up at number 2!) is that if your KB receives contradictory evidence at a later time, then ANYTHING can be inferred from the KB.


No, because new information is not allowed to be assimilated if it contradicts the existing knowledgebase. For example:

Mary is Bill's parent
Mary is the sister of Alice
Alice is the sister of Mary
If somebody is a sister, then they are female
if somebody is female, they are not male
if you are a parent, then you are an animal
if you are an animal, then you are a female or a male
everyone has only one mother
if you are a parent and female, you are a mother
if you are a parent and male, you are a father

Now, assert that Alice is Bill's parent.

Then, logically, through resolution refutation,

Alice is an animal
Alice is male or female
Alice is a mother
Bill has two mothers!!!! contradiction

reject initial assertion: Alice is Bill's parent.
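A toy version of this rejection mechanism might look like the following sketch. The rules are hand-flattened ground stand-ins for the clauses above (no variables or unification), and the atom names are my own encoding, purely for illustration:

```python
# Ground rules: (set of premises, conclusion). "~p" negates "p".
RULES = [
    ({"sister_alice"}, "female_alice"),
    ({"parent_alice_bill", "female_alice"}, "mother_alice_bill"),
    ({"mother_alice_bill", "mother_mary_bill"}, "two_mothers_bill"),
    ({"two_mothers_bill"}, "~one_mother_bill"),
]

def consequences(facts):
    """Forward-chain the rules to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def try_assert(kb, new_fact):
    """Return the updated KB, or the unchanged KB if the new fact
    derives both some atom p and its negation ~p."""
    candidate = consequences(kb | {new_fact})
    if any(("~" + f) in candidate for f in candidate if not f.startswith("~")):
        return kb                    # contradiction: reject the assertion
    return candidate

kb = consequences({"mother_mary_bill", "sister_alice", "one_mother_bill"})
kb2 = try_assert(kb, "parent_alice_bill")
```

Asserting that Alice is Bill's parent chains through to "Bill has two mothers", which contradicts the one-mother fact, so the assertion is rejected and the KB is left untouched.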

quote:
Original post by Timkin
3: There is a striking similarity between your idea and the application of the minimum cutset algorithm to Bayesian networks. Each of your subgraphs would, I suspect, represent a clique. However, the Bayesian network implements a non-monotonic logic, meaning that it is not subject to (2).

4: Given that Bayesian Networks implement a non-monotonic logic and given that they can easily be augmented to include decisions and utilities, wouldn't it be better to utilise them for planning (as is a current thrust of research atm)?


Can you give an example of how you program knowledge into a Bayesian network? And can they maintain logical consistency? I simply don't know that much about them.

Edited by - bishop_pass on October 23, 2001 11:53:48 PM

Sorry to jump in here, just my 2p, but... surely:


Mary is Bill's parent
Mary is the sister of Alice
Alice is the sister of Mary
somebody is a sister, if and only if they are female
if somebody is female, they are not male
if you are a parent, then you are an animal
if you are an animal, then you are a female or a male
everyone has only one mother
if you are a parent and female, you are a mother
if you are a parent and male, you are a father


fine, but surely if at a later date the kb receives the information that

Bill is the father of Alice
and Alice is the mother of Mary

then adding the condition that
X is a grandparent of Y if and only if X is a Parent of Z and Z is a Parent of Y

then we have Bill is the grandparent of Mary
AND Mary is the parent of Bill

Agreed, it's logically contradictory data that causes this, but new rules are going to be needed to prevent that.

It's a very trite example, I know (and I apologise in advance), but the problem as I see it is that the number of rules needed to keep the integrity of the knowledge database expands exponentially, due to the thousands of little cases that occur for what we laughingly refer to as "common sense".

just a thought, hope it helps.

No, in the case of family relationships, there is a finite number of rules. Figuring them out requires a little bit of thinking and testing, but it isn't that hard to do.

So, effectively, one programs knowledge modules for different domains of knowledge. The beauty is, though, that the knowledge modules merge seamlessly. AND, the addition of a new rule which contradicts an existing fact is also caught and rejected by the same means! Therefore, invalid rules cannot be entered.

An example of family relation rules:

Father(x, y) -> Parent(x, y)
Mother(x, y) -> Parent(x, y)

Parent(x, y) -> Ancestor(x, y)
Ancestor(x, y) & Ancestor(y, z) -> Ancestor(x, z)
Ancestor(x, y) -> ~Ancestor(y, x)

Parent(x, y) & Parent(y, z) -> Grandparent(x, z)

Brother(x, y) -> Sibling(x, y)
Sister(x, y) -> Sibling(x, y)
Sibling(x, y) -> Sibling(y, x)

Sibling(x, y) & Female(x) -> Sister(x, y)
Sibling(x, y) & Male(x) -> Brother(x, y)

Sibling(x, y) & Father(x, z) -> Father(y, z)
Sibling(x, y) & Mother(x, z) -> Mother(y, z)

Mother(x, y) -> Female(y)
Father(x, y) -> Male(y)

Male(x) -> ~Female(x)
Female(x) -> ~Male(x)

Parent(x, y) -> Animal(y)

Animal(x) -> Male(x) v Female(x)

Things get categorized in increasingly general form, and those rules flag the impossible things. Look closely at the Ancestor rules.

The point is, I wrote those in about five minutes. It is easy and quick to develop domains of knowledge. Other areas are harder, but the results are powerful. No doubt, with some thought, there are other family relationships to add through debugging and testing.
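As a sketch of how the Ancestor rules above behave (the names and the tiny driver are illustrative; I am reading the post's convention as Parent(x, y) meaning "y is x's parent", which matches the later Father(Mary, Bill) example):

```python
def close(parent):
    """Parent(x,y) -> Ancestor(x,y), plus the transitivity rule
    Ancestor(x,y) & Ancestor(y,z) -> Ancestor(x,z), to a fixed point."""
    ancestor = set(parent)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(ancestor):
            for (y2, z) in list(ancestor):
                if y == y2 and (x, z) not in ancestor:
                    ancestor.add((x, z))
                    changed = True
    return ancestor

def consistent(ancestor):
    """Ancestor(x,y) -> ~Ancestor(y,x): flag any ancestry cycle."""
    return all((y, x) not in ancestor for (x, y) in ancestor)

# Bill's parent is Mary; Mary's parent is Sue (illustrative facts).
parent = {("bill", "mary"), ("mary", "sue")}
anc = close(parent)
```

Here Sue is inferred to be Bill's ancestor, and the anti-symmetry rule is exactly what catches the grandparent/parent cycle raised in the previous post: feeding `close` a cyclic parent relation makes `consistent` fail.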

Other interesting areas of common sense to explore are:
ownership and property
physical space
time: before and after and during
danger and survival
taxonomy of things: living things, nonphysical things, etc.
etc.

Edited by - bishop_pass on October 24, 2001 11:00:47 AM

I think that there's a problem with just rejecting knowledge that doesn't match the current KB. It may be that it is not the knowledge, but the KB, that is wrong.

Lets say, for example, the following rules are set up:

1. You can only wear gloves on hands.
2. You can only wear one glove on each hand.
3. People have up to two hands.

Given the following fact:

> I am wearing two purple gloves.

The KB doesn't find fault with that: I can wear two gloves quite nicely.

However, what if I lose a hand, and then tell the KB:

> I have one hand.

The KB will determine that this is not possible, because I am wearing two gloves, and thus must have two hands. A human would assume that you were either lying about the gloves, lying about the hand, or had genuinely lost a hand. If the KB simply rejects your assertion out of hand (pun intentional), then it can only represent one of these choices.

Similarly, you could say:

> I am wearing my green pair of gloves.

The KB would reject this, because you can only wear one pair of gloves at a time, and you're already wearing your purple gloves. A human would presume that you were lying about the purple gloves, lying about the green gloves, or had removed your purple gloves.

This KB is extremely naive. In reality, we don't consider facts to be immutable; a particular fact may be more or less certain than another.

If I tell you, 'I ate the moon for breakfast this morning', you would rightly consider that to be a falsehood. The KB also would, assuming it knew what the moon and breakfast were. But if I told you, 'I ate toast for breakfast this morning', you wouldn't automatically consider that to be a lie. If you had seen me eating corn flakes, you would consider it a lie; if you knew there was no bread and that I had not gone to a shop, you would consider it a lie. However, if there was no bread last time you checked, but you don't know whether I've been to the shop, you might assume I was telling the truth, and had bought some bread and toasted it. The KB, faced with the knowledge that there is no bread, would deny that I had toast for breakfast, if it knew about the relationship between bread and toast.

The KB can't afford to be this stupid. For one thing, I can't ever correct anything that it has wrong. If I tell it 'I have no bread', for example, and later tell it 'I have bread', it would reject that on the basis that it knows I don't have bread. The KB needs to modify its existing knowledge to make 'I have bread' fit, but it also needs to know that it mustn't modify its existing knowledge to make 'I ate the moon' a valid assertion - it must always reject that.


Uuuuuulrika-ka-ka-ka-ka-ka

Those are valid issues, and they fall into the realm of belief systems. However, the underlying logic and mechanism which drives a system is not necessarily what is at fault.

There are feasible modifications and extensions. I don't think the underlying mechanism should be rejected out of hand, but rather modified or extended to handle things such as those you discussed.

As it stands, it encodes more intelligence and reasoning power than, say, a C++ program which hardcodes such rules. It benefits from maintaining consistency among its rulesets, something standard programming paradigms don't offer.

The concept of truth maintenance covers such ideas as rejection, assertion, contradiction, and belief revision. These are big issues in AI. The above system, in its simple form, handles assertion, unassertion, and contradiction. Belief revision requires more work.

As for non-monotonic reasoning, look at the following methodology. Apply the following truth values to each rule and fact:

T = true absolutely
DT = by default true
U = unknown (absent fact or implied fact)
DF = by default false
F = false absolutely

T overrides DT and U and DF
F overrides DF and U and DT

F and T clashing produce a contradiction, which requires either a rejection of the new fact or a reevaluation of the KB's existing knowledge. Metarules, essentially more knowledge in the KB, could deal with this.

If a rule with T acts on a fact with DT the resulting fact is DT.
If a rule with DT acts on a fact with T the resulting fact is DT.
If a rule with T acts on a fact with T the resulting fact is T.

If a rule with F acts on a fact with DF the resulting fact is DF.
If a rule with DF acts on a fact with F the resulting fact is DF.
If a rule with F acts on a fact with F the resulting fact is F.

The resulting fact is tested against any contradictions per the truth value merging method stated above.
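The merging and rule-application tables above can be sketched as two small functions. One gap in the tables is what happens when a DT meets a DF; treating that clash as U is my own assumption, and the strength encoding is likewise just one way to realize the scheme:

```python
# Strength of each truth value: T/F are absolute, DT/DF are defaults,
# U is unknown.
STRENGTH = {"T": 2, "DT": 1, "U": 0, "DF": 1, "F": 2}

def merge(a, b):
    """Combine two truth values for the same fact: the stronger value
    overrides the weaker, T vs F is a contradiction, and (by
    assumption) DT vs DF cancels to U."""
    if {a, b} == {"T", "F"}:
        raise ValueError("contradiction: T clashes with F")
    if {a, b} == {"DT", "DF"}:
        return "U"   # assumption: conflicting defaults go to unknown
    return a if STRENGTH[a] >= STRENGTH[b] else b

def apply_rule(rule_tv, fact_tv):
    """Per the tables above, a derived fact takes the weaker of the
    rule's and the fact's certainty (a T rule on a DT fact gives DT)."""
    return rule_tv if STRENGTH[rule_tv] <= STRENGTH[fact_tv] else fact_tv
```

Mixed-sign rule application (say, an F rule acting on a T fact) is not specified in the tables, so this sketch only claims to cover the listed same-sign cases.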

Here is an example of non-monotonic reasoning:

T: Father(Mary, Bill)

T: Father(x, y) -> Parent(x, y)

T: Spouse(Bill, Alice)

DT: Spouse(x, y) & Parent(z, x) -> Parent(z, y)

Now, let's ask a question.

Question: Parent(Mary, Alice)?
Answer: DT (meaning: yes, Alice is Mary's parent by default)

Now,

Assert: T: ~Parent(Mary, Alice)
We are saying with absolute certainty that Alice is NOT Mary's parent. This is exactly the same as:
Assert: F: Parent(Mary, Alice)

No contradiction is created, because the new assertion overrides the existing one: the fact that Alice is Mary's parent had only a DT truth value, since the rule is a decent default rule but cannot be depended on for absolute inferences.

Now, let's ask a question again.

Question: Parent(Mary, Alice)?
Answer: F (meaning: with absolute certainty, Alice is NOT Mary's parent)
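The sequence above can be mimicked with a tiny assertion routine. This is a sketch only; the fact string and the strength table are my own encoding of the five truth values:

```python
# T/F are absolute, DT/DF are defaults, U is unknown.
STRENGTH = {"T": 2, "DT": 1, "U": 0, "DF": 1, "F": 2}

def assert_fact(kb, fact, tv):
    """Record tv for fact unless it clashes absolutely (T vs F);
    a stronger value overrides a weaker one, so F overrides DT."""
    old = kb.get(fact, "U")
    if {old, tv} == {"T", "F"}:
        raise ValueError("contradiction on " + fact)
    kb[fact] = tv if STRENGTH[tv] >= STRENGTH[old] else old

kb = {}
assert_fact(kb, "Parent(Mary, Alice)", "DT")  # the default rule fired
assert_fact(kb, "Parent(Mary, Alice)", "F")   # absolute denial overrides
```

After both assertions the query answer is F, with no contradiction raised, exactly because the earlier value was only a default.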

quote:
Original post by Timkin
1: The only conclusions your knowledge base could draw are any and all entailed by the knowledge base.


quote:
Original post by bishop_pass
Not true, because sensory information brings in new knowledge.



Okay, I wasn't precluding the addition of new knowledge to the knowledge base... at any stage (even after new knowledge has been added), a truth maintenance system can only infer what is entailed by the knowledge base... and that's all it will ever be able to infer.

quote:
Original post by Timkin
2: The biggest concern (and I don't know WHY it ended up at number 2!) is that if your KB receives contradictory evidence at a later time, then ANYTHING can be inferred from the KB.


quote:
Original post by bishop_pass
No, because new information is not allowed to be assimilated if it contradicts the existing knowledgebase. For example:



Okay, I'll have to come up with an example for this one. I know I have one around here somewhere...

Timkin

quote:
Original post by bishop_pass
No, in the case of family relationships, there is a finite number of rules.

An example of family relation rules:

Father(x, y) -> Parent(x, y)
Mother(x, y) -> Parent(x, y)

Parent(x, y) -> Ancestor(x, y)
Ancestor(x, y) & Ancestor(y, z) -> Ancestor(x, z)
Ancestor(x, y) -> ~Ancestor(y, x)

Parent(x, y) & Parent(y, z) -> Grandparent(x, z)

Mother(x, y) -> Female(y)
Father(x, y) -> Male(y)


Things get categorized in increasingly more general form and those rules flag the impossible things. Look closely at the Ancestor rules.



Yes, I see that, but my point wasn't that Grandparent(x, z) cannot be implemented safely (that was just an example), simply that you've had to add 3 more rules in order to do it, and that is a relatively simple predicate.

In game AI you'll have about 5%-10% of CPU time, and most of the memory will be taken up with gfx and non-AI data. My point is simply: is it worth it?

The only language I've ever used that uses this type of logic is Prolog, and I seem to remember that on large KBs it got quite big and slow.

There are other methods that will work just as well and just as quickly in the tight worlds of game AI. The only place I can see it being especially useful is in an MMORPG-type game, where you might want a set of rules that determines the growth of the species on a planet (for example), or perhaps to define an underlying set of magic rules.

You see, I'm looking at this from a game point of view and playing devil's advocate (I'm certainly not some kind of guru!).
What you've been saying so far is entirely valid and reasonable. It's certainly a well-researched field of AI; I know for a fact that it's been used to produce complex expert systems (for example, medical KBs for doctors), and it has been used to understand grammar in NLP, but I so far fail to see its use in game AI.

However, in your first post you say that the vertex min cut algorithm could be used to increase search speed, so I guess you're looking at ways around this.

I'd certainly be interested in hearing your plans regarding this and possibly seeing a demo!

Edited by - Nutter2000 on October 25, 2001 4:47:51 AM
