What is really AI?

Quote:Original post by Hinkar
As I said, it's like a river, blind and stupid, and it'll find a solution to a problem, but it'll use the same amount of intelligence as a river finding its way to the sea.
A person walking down the street is equally blind and stupid: why did he lift his foot two inches when he clearly wanted to travel forward? There were no obstacles in that particular stretch of sidewalk. He should have lifted his foot only a fraction of an inch, just enough to break friction.
--"I'm not at home right now, but" = lights on, but no ones home
(City-dwelling) people walking down a sidewalk do only lift their foot a fraction of an inch. This is why you're more likely to trip over a sidewalk tile jutting out by half an inch than a curb. The latter is recognized as an obstacle, and you plan your motion accordingly. Not that this has much to do with the topic, or the point the other person made (comparing river pathfinding to human pathfinding). :)
Quote:Original post by Rixter
AI is search.


Ignorance is bliss
Quote:Original post by Fingers_
(City-dwelling) people walking down a sidewalk do only lift their foot a fraction of an inch. This is why you're more likely to trip over a sidewalk tile jutting out by half an inch than a curb. The latter is recognized as an obstacle, and you plan your motion accordingly. Not that this has much to do with the topic, or the point the other person made (comparing river pathfinding to human pathfinding). :)
And if we call the bump in the road something like a local minimum? What about avoiding a child's tricycle: Move it or walk around?

Avoiding obstacles such as bumpy tiles in the sidewalk is a form of pathfinding, and people routinely perform sub-optimally at the task, a condition we refer to as tripping or stumbling. The river pathfinding algorithm was used to illustrate a stumbling condition for the river.

But the river pathfinding algorithm was somewhat incomplete, because a river also cuts a path and alters its course according to how soft the soil is, as well as how steep it is.
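To make the river analogy concrete, here is a minimal, hypothetical sketch (the terrain and the step rule are my own assumptions, not anything posted above): a greedy "water drop" that only ever moves to the lowest neighbouring cell stalls in a shallow pit (a local minimum), which is roughly the stumbling condition described above.

# Hypothetical illustration: a greedy "river" pathfinder on a 1-D height map.
# It always steps to the lowest adjacent cell, so a shallow pit traps it even
# though the sea (the global minimum) lies further down the slope.
def river_path(heights, start):
    pos = start
    path = [pos]
    while True:
        neighbours = [p for p in (pos - 1, pos + 1) if 0 <= p < len(heights)]
        best = min(neighbours, key=lambda p: heights[p])
        if heights[best] >= heights[pos]:   # nowhere lower to go: local minimum
            return path
        pos = best
        path.append(pos)

terrain = [9, 7, 5, 2, 4, 3, 1, 0]          # the sea is at index 7, a pit at index 3
print(river_path(terrain, start=0))         # -> [0, 1, 2, 3]: the drop never reaches the sea

A real river, as noted above, also erodes its bed, so a fuller sketch would lower heights[pos] a little on each visit and eventually cut through the pit.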

[Edited by - AngleWyrm on March 26, 2008 2:23:20 AM]
--"I'm not at home right now, but" = lights on, but no ones home
Quote:Original post by Timkin
Quote:Original post by Rixter
AI is search.


Ignorance is bliss


I figure while we're assigning arbitrary definitions to an apparently ill-defined concept, why not take the simplest? Isn't that what most philosophers of thought try to do? :)
Here's my own checklist of an Intelligence (a rough code sketch of it follows the list):

-Senses: Can receive information about its environment
-Knowledge: Has ideas about how its environment works
-Reasoning: Can infer new information from senses and knowledge
-Behavior: Can affect its environment
-Memory: Can remember past sensations and deductions
-Planning: Can reason about time
-Learning: Can adapt its behavior to its environment
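Here is a rough sketch of that checklist as code. The class and method names are my own invention, not any standard API; a concrete agent would fill in the bodies and hold its knowledge as internal state.

# Hypothetical agent interface mirroring the checklist above (names are illustrative only).
from abc import ABC, abstractmethod

class IntelligentAgent(ABC):
    @abstractmethod
    def sense(self, environment):    # Senses: receive information about the environment
        ...

    @abstractmethod
    def infer(self, percepts):       # Reasoning: derive new information from senses and knowledge
        ...

    @abstractmethod
    def act(self, environment):      # Behavior: affect the environment
        ...

    @abstractmethod
    def remember(self, episode):     # Memory: store past sensations and deductions
        ...

    @abstractmethod
    def plan(self, goal, horizon):   # Planning: reason about time
        ...

    @abstractmethod
    def learn(self, feedback):       # Learning: adapt behavior to the environment
        ...

    # Knowledge ("ideas about how its environment works") would live in the
    # concrete subclass's internal state rather than as a separate method.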

However, my most honest answer to the question "What is really AI" is "I don't really care. Check this out...".
To answer the question of what AI is, one must first define intelligence. Unfortunately, intelligence, like the universe and God, is one of those things which cannot be defined, only filtered. That is, we can only say what intelligence is not, not what it is.

Nonetheless, there is a checklist, a list of axioms I keep for myself, that I feel serves as a fair model capturing much of the essentials of what people mentally invoke when they say intelligence.

Axiom 1: Extreme High Adaptability

The entity is capable of adapting to all sorts of environments - both abstract (internal, communicable and shareable) and physical (external, experienced). While the entity may have a set of built-in automatic responses to certain stimuli, it is also capable of new behaviours that are not built in. These may arise from structures, connections and networks that have been built up and are then leveraged to create unique or emergent behaviours. One may say that the entity is capable of learning, or of building on its set of behaviours.

Corollary - Framing is important

A side-effect of this is that how such an entity chooses to frame a series of inputs and variables, or filter and structure its network of associations, will affect how it perceives a problem and thus how it solves it. An entity that has learned, and has altered its internal representational structures, can now handle more variables of the problem and deal with it more deftly.

Axiom 2: Observably Intelligent

This one is tricky and you may disagree with it. In essence it states that For All Entities there Exists some Entity which may observe its behaviours and state that this entity satisfies its criteria for intelligent and self-driven behaviour. If no such entity exists for some entity E, then this entity is not intelligent.

Suppose there is some entity of type E. Its behaviours are so complex that humans cannot perceive it as intelligent. But other entities of type E, and a meta-entity of type F, can attest to E's intelligence. It does not matter that humans cannot. This type of reasoning is best placed in a modal type of logic, where for any entity the question "is this entity intelligent?" can take [local] values beyond true and false, but universal values of only T and F.
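One possible first-order rendering of this axiom (my own notation; the predicates Obs and Attests are assumptions, not the poster's):

\forall e \, \big( \mathrm{Intelligent}(e) \rightarrow \exists o \, ( \mathrm{Obs}(o, e) \wedge \mathrm{Attests}(o, \mathrm{Intelligent}(e)) ) \big)

The contrapositive gives the second sentence: if no such observer o exists for E, then E is not intelligent.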

Theorem 1: Self Direction and Choice

I feel self direction is important. Here the notion of direction is weak. What is meant by this is that the entity in question believes it can direct its actions. It does not matter whether it can or not, simply that it feels it can, and that there is some other entity that can agree with it. This belief must be observable (per Axiom 2) and must satisfy Axiom 1, in that the belief is emergent and not built in. This entity is capable of thinking on the free will problem and may make choices that attempt to maximize some abstract concept of utility. In essence the entity is capable of treating itself as an environment to be built on or learned from.


Theorem 2: Entity Can Communicate

An entity may perceive itself to be intelligent and capable of self direction and choice, but if its actions have no effect on the external world then it may as well not be intelligent. It is not intelligent because it is not observably so. Thus some hypothetical rock might perceive itself to have free will, but truly it does not, and neither is it intelligent.

Thus for all observably intelligent entities there exists some entity for which some method of interaction can serve as communication between them. And all such entities can solve the communication problem in ways that draw on the concepts sketched (roughly) in Axiom 1.
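Rendered the same way as Axiom 2 (again, the predicate names are my own):

\forall e \, \big( \mathrm{ObservablyIntelligent}(e) \rightarrow \exists o \, \mathrm{CanCommunicate}(e, o) \big)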

Theorem 3: True AI

True AI is an intelligent entity that did not evolve or come about by natural means, by accident, or by some act of god, but rather was willfully created by some entity for which the proposition I(x) = "is x intelligent" returns a value of true. And this new entity, the AI, also satisfies the proposition I(x).
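Using the I(x) notation already introduced, one way to write this down (the Created and Natural predicates are my additions):

\mathrm{TrueAI}(y) \;\equiv\; I(y) \,\wedge\, \neg\mathrm{Natural}(y) \,\wedge\, \exists x \, \big( I(x) \wedge \mathrm{Created}(x, y) \big)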

[Edited by - Daerax on March 28, 2008 1:10:32 PM]
I want to recommend Asimov's essay More Thinking About Thinking, which talks about the subject, and it's pretty good.
I like the Walrus best.
Sorry this is gonna meander a bit, but I don't know how else to put it.

Theorem 1, on Self Direction and Choice, may be over-rated. I offer two examples to point to extremes:

1). Ants gathering food follow paths drawn by other ants. They don't have a choice, they are programmed to do so; and yet it seems an intelligent behavior (see the sketch after this list).

2). People often try to lose weight/quit smoking/drugs/gambling. They say "for real, this time". Again. Like they were kidding last time. Like maybe they weren't sincere 'enough' last time. And even prayer and tears don't help. The suffering person wished to be free, and decided to do something about it. Yet they aren't.
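As a hedged sketch of example 1 (nothing here is taken from a real ant study; the deposit and evaporation numbers are invented), each ant simply picks a branch with probability proportional to its pheromone level, yet the colony as a whole settles on the shorter path:

import random

# Two branches between nest and food; both start with the same pheromone level.
pheromone = {"short_path": 1.0, "long_path": 1.0}

def choose_branch():
    """Pick a branch with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for branch, level in pheromone.items():
        r -= level
        if r <= 0:
            return branch
    return branch

for _ in range(1000):
    branch = choose_branch()
    # Ants on the shorter branch complete trips faster, so it is reinforced
    # more strongly per time step (approximated here by a larger deposit).
    pheromone[branch] += 1.0 if branch == "short_path" else 0.5
    for b in pheromone:              # pheromone slowly evaporates
        pheromone[b] *= 0.99

print(pheromone)                     # the short path ends up with far more pheromone

No individual ant chooses anything; the apparent intelligence is in the feedback loop.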

Hm. Predictable, and also somewhat short of reasonable. There must be something wrong with them, because it doesn't match the preachings. As for me, I'm much different: I wouldn't for instance repeatedly promise to take better care of the [whatever] next time.

[Edited by - AngleWyrm on March 29, 2008 12:37:19 PM]
--"I'm not at home right now, but" = lights on, but no ones home
Quote:Original post by AngleWyrm

Theorem 1, on Self Direction and Choice, may be over-rated. I offer two examples to point to extremes:

1). Ants gathering food follow paths drawn by other ants. They don't have a choice, they are programmed to do so; and yet it seems an intelligent behavior.


You might want to review this experiment I performed some time ago.
I like the Walrus best.

