
What is really AI?

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

81 replies to this topic

#21 AngleWyrm  Members

Posted 23 March 2008 - 11:48 AM

Quote:
 Original post by Hinkar: As I said, it's like a river, blind and stupid, and it'll find a solution to a problem, but it'll use the same amount of intelligence as a river finding its way to the sea.
A person walking down the street is equally blind and stupid: why did he lift his foot two inches up when he clearly wanted to travel forward? There were no obstacles in that particular stretch of sidewalk. He should have lifted his foot only a fraction of an inch to break the friction.

#22 Fingers_  Members

Posted 25 March 2008 - 07:40 AM

(City-dwelling) people walking down a sidewalk do only lift their foot a fraction of an inch. This is why you're more likely to trip over a sidewalk tile jutting out by half an inch than a curb. The latter is recognized as an obstacle, and you plan your motion accordingly. Not that this has much to do with the topic, or the point the other person made (comparing river pathfinding to human pathfinding). :)

#23 Timkin  Members

Posted 25 March 2008 - 02:17 PM

Quote:
 Original post by Rixter: AI is search.

Ignorance is bliss

#24 AngleWyrm  Members

Posted 25 March 2008 - 07:23 PM

Quote:
 Original post by Fingers_: (City-dwelling) people walking down a sidewalk do only lift their foot a fraction of an inch. This is why you're more likely to trip over a sidewalk tile jutting out by half an inch than a curb. The latter is recognized as an obstacle, and you plan your motion accordingly. Not that this has much to do with the topic, or the point the other person made (comparing river pathfinding to human pathfinding). :)
And what if we call the bump in the road something like a local minimum? What about avoiding a child's tricycle: move it, or walk around it?

Avoiding obstacles such as bumpy tiles in the sidewalk is a form of pathfinding, and people routinely perform sub-optimally at the task; a condition we refer to as tripping or stumbling. The river pathfinding algorithm was used to illustrate a stumbling condition for the river.

But the river pathfinding algorithm was somewhat incomplete, because a river also cuts a path and alters its course according to how soft the soil is, as well as how steep it is.
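The river-and-stumbling analogy maps neatly onto greedy descent: a search that only ever moves downhill stalls in the first basin it finds, just as water pools in a depression instead of reaching the sea. A minimal sketch in Python; the terrain function, step size, and starting points are invented purely for illustration:

```python
def greedy_descent(height, x, step=0.1, iterations=1000):
    """Always step to the lowest neighbour, like water flowing downhill."""
    for _ in range(iterations):
        candidates = (x - step, x + step, x)
        best = min(candidates, key=height)
        if best == x:  # no lower neighbour: stuck in a basin (a local minimum)
            return x
        x = best
    return x

# Invented terrain: a shallow basin near x = 1.2 and a deeper valley at x = 4.
def terrain(x):
    return (x - 1) ** 2 * (x - 4) ** 2 + 0.5 * (x - 4) ** 2

print(greedy_descent(terrain, 0.0))  # stalls in the shallow basin near x = 1.2
print(greedy_descent(terrain, 5.0))  # reaches the deeper valley near x = 4
```

Which basin the walker ends up in depends entirely on where it starts, which is the "stumbling condition" in miniature: the algorithm is blind to anything beyond its immediate neighbourhood.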

[Edited by - AngleWyrm on March 26, 2008 2:23:20 AM]

#25 Rixter  Members

Posted 27 March 2008 - 01:31 PM

Quote:
Original post by Timkin
Quote:
 Original post by Rixter: AI is search.

Ignorance is bliss

I figure, while we're assigning arbitrary definitions to an apparently ill-defined concept, why not take the simplest? Isn't that what most philosophers of thought try to do? :)

Posted 27 March 2008 - 02:18 PM

Here's my own checklist of an Intelligence:

-Knowledge: Has ideas about how its environment works
-Reasoning: Can infer new information from senses and knowledge
-Behavior: Can affect its environment
-Memory: Can remember past sensations and deductions
-Learning: Can adapt its behavior to its environment

However, my most honest answer to the question "What is really AI?" is "I don't really care. Check this out...".
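That checklist reads almost like an interface definition. A toy sketch of how the five faculties might map onto a single class; every name here, and the door-and-key example, is invented for illustration rather than taken from any real framework:

```python
class Agent:
    """A toy agent with the five faculties from the checklist above."""

    def __init__(self):
        self.knowledge = {"door_opens_with": "key"}  # ideas about the environment
        self.memory = []                             # past sensations and deductions

    def reason(self, percept):
        """Infer new information from a sense percept plus stored knowledge."""
        deduction = (percept, self.knowledge.get(percept))
        self.memory.append(deduction)                # remember what was deduced
        return deduction

    def act(self, environment):
        """Affect the environment, guided by remembered deductions."""
        if ("door_opens_with", "key") in self.memory:
            environment["door"] = "open"
        return environment

    def learn(self, percept, outcome):
        """Adapt: fold a new experience back into knowledge."""
        self.knowledge[percept] = outcome
```

The point of the sketch is only that each checklist item becomes a distinct, testable responsibility; nothing about it claims to be intelligent.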

#27 Daerax  Members

Posted 28 March 2008 - 07:10 AM

To answer the question "what is AI?" one must first define intelligence. Unfortunately, intelligence, like the universe and God, is one of those things which cannot be defined, only filtered. That is, we can only say what intelligence is not, and not what it is.

Nonetheless, I have a checklist, a list of axioms, that I feel serves as a fair model capturing much of the essentials of what people mentally invoke when they say intelligence.

Axiom 1: Adaptation and Learning

The entity is capable of adapting to all sorts of environments, both abstract (internal, communicable and shareable) and physical (external, experienced). While the entity may have a set of built-in automatic responses to certain stimuli, the entity is capable of new behaviours that are not built in. These may be due to structures, connections and networks that have been built and are then leveraged to create unique or emergent behaviours. One may say that the entity is capable of learning, or of building on its set of behaviours.

Corollary - Framing is important

A side-effect of this is that how such an entity chooses to frame a series of inputs and variables, or to filter and structure its network of associations, will affect how it perceives a problem and thus how it solves it. An entity that has learned, and altered its internal representational structures, can now handle more variables of the problem and deal with it more deftly.

Axiom 2: Observably Intelligent

This one is tricky and you may disagree with it. In essence it states that For All Entities there Exists some Entity which may observe its behaviours and state that this entity satisfies its criteria for intelligent and self-driven behaviour. If no such entity exists for some entity E, then this entity is not intelligent.

Suppose there is some entity of type E. Its behaviours are so complex that humans cannot perceive it as intelligent. But other entities of type E, and a meta-entity of type F, can attest to E's intelligence. It does not matter that humans cannot. This type of reasoning is best placed in a modal type of logic, where for any entity the proposition "is this entity intelligent?" can have [local] values beyond true and false, but universal values of T and F.

Theorem 1: Self Direction and Choice

I feel self-direction is important. Here the notion of direction is weak. What is meant by this is that the entity in question believes it can direct its actions. It does not matter whether it can or not, simply that it feels that it can and that there is some other entity that can agree with it. This belief must be observable (per axiom 2) and must satisfy axiom 1, in that it is emergent and not built in. This entity is capable of thinking on the free will problem and may make choices that attempt to maximize some abstract concept of utility. In essence, the entity is capable of treating itself as an environment to be built on or learned from.

Theorem 2: Entity Can Communicate

An entity may perceive itself to be intelligent and capable of self-direction and choice, but if its actions have no effect on the external world then it may as well not be intelligent. It is not intelligent because it is not observably so. Thus some hypothetical rock might perceive itself to have free will, but truly it does not, nor is it intelligent.

Thus for all observably intelligent entities there exists some entity for which some method of interaction can serve as communication between them. And all such entities can solve the communication problem in ways which draw from the concepts that were poorly sketched in axiom 1.

Theorem 3: True AI

True AI is an intelligent entity that did not evolve or come about by natural means, accidentally or by some act of god, but rather was willfully created by some entity for which the proposition I(x) = "is x intelligent?" returns a value of true. And this new entity, the AI, also satisfies the proposition I(x).

[Edited by - Daerax on March 28, 2008 1:10:32 PM]

#28 owl  Banned

Posted 28 March 2008 - 07:37 AM

I want to recommend Asimov's essay More Thinking About Thinking, which discusses the subject; it's pretty good.

#29 AngleWyrm  Members

Posted 29 March 2008 - 05:37 AM

Sorry this is gonna meander a bit, but I don't know how else to put it.

Theorem 1, on Self Direction and Choice, may be over-rated. I offer two examples to point to extremes:

1). Ants gathering food follow paths drawn by other ants. They don't have a choice, they are programmed to do so; and yet it seems an intelligent behavior.

2). People often try to lose weight/quit smoking/drugs/gambling. They say "for real, this time". Again. Like they were kidding last time. Like maybe they weren't sincere 'enough' last time. And even prayer and tears don't help. The suffering person wished to be free, and decided to do something about it. Yet they aren't.

Hm. Predictable, and also somewhat short of reasonable. There must be something wrong with them, because it doesn't match the preachings. As for me, I'm much different: I wouldn't for instance repeatedly promise to take better care of the [whatever] next time.

[Edited by - AngleWyrm on March 29, 2008 12:37:19 PM]

#30 owl  Banned

Posted 29 March 2008 - 09:30 AM

Quote:
 Original post by AngleWyrm: Theorem 1, on Self Direction and Choice, may be over-rated. I offer two examples to point to extremes: 1). Ants gathering food follow paths drawn by other ants. They don't have a choice, they are programmed to do so; and yet it seems an intelligent behavior.

You might want to review this experiment I performed some time ago.

#31 AngleWyrm  Members

Posted 30 March 2008 - 05:44 AM

Interesting experiment: The program of following the trail left by other ants seems to be intelligent behavior, but only if we consider the colony's survival, or possibly the species of ant. From one ant's perspective, it might not be so intelligent.

#32 Daerax  Members

Posted 30 March 2008 - 05:54 AM

Quote:
 Original post by AngleWyrm: Sorry this is gonna meander a bit, but I don't know how else to put it. Theorem 1, on Self Direction and Choice, may be over-rated. I offer two examples to point to extremes: 1). Ants gathering food follow paths drawn by other ants. They don't have a choice, they are programmed to do so; and yet it seems an intelligent behavior. 2). People often try to lose weight/quit smoking/drugs/gambling. They say "for real, this time". Again. Like they were kidding last time. Like maybe they weren't sincere 'enough' last time. And even prayer and tears don't help. The suffering person wished to be free, and decided to do something about it. Yet they aren't. Hm. Predictable, and also somewhat short of reasonable. There must be something different: I wouldn't for instance repeatedly promise to take better care of the [whatever] next time.

I do not quite understand what you are trying to say but the framework I gave can handle your ants extreme. I stated:

Here the notion of direction is weak. What is meant by this is that the entity in question believes it can direct its actions. It does not matter whether it can or not, simply that it feels that it can and there is some other entity that can agree with it. This belief must be observable (per axiom 2) and must satisfy axiom 1, in that it is emergent and not built in.

#33 Daerax  Members

Posted 30 March 2008 - 06:01 AM

Quote:
Original post by owl
Quote:
 Original post by AngleWyrm: Theorem 1, on Self Direction and Choice, may be over-rated. I offer two examples to point to extremes: 1). Ants gathering food follow paths drawn by other ants. They don't have a choice, they are programmed to do so; and yet it seems an intelligent behavior.

You might want to review this experiment I performed some time ago.

hehe, I like how the story ends as kind of a fable. The moral of the story is...

#34 AngleWyrm  Members

Posted 30 March 2008 - 09:42 AM

...Self direction is a belief system?

Part two of my extremes example above illustrates cases where people believe that they have free will, that they direct their actions. Actions their friends and society also believe they have control of, and even hold them accountable for. Actions that directly impact their personal health and social status. And yet even in spite of their own desires on the matter, they still fail to accomplish inaction -- just NOT doing something.

As for the ant: What happens if instead of shampoo, we draw a circle of ant-scent?

[Edited by - AngleWyrm on March 30, 2008 4:42:49 PM]

#35 Daerax  Members

Posted 31 March 2008 - 04:15 AM

Quote:
 Original post by AngleWyrm: ...Self direction is a belief system? Part two of my extremes example above illustrates cases where people believe that they have free will, that they direct their actions. Actions their friends and society also believe they have control of, and even hold them accountable for. Actions that directly impact their personal health and social status. And yet even in spite of their own desires on the matter, they still fail to accomplish inaction -- just NOT doing something. As for the ant: What happens if instead of shampoo, we draw a circle of ant-scent?

No, an "intelligent" (sentient, sapient, cogent, fat, etc.) self-directed entity in my system must have a set of beliefs. This is because the entity cannot know everything, due to physical limits. Now, within this set of beliefs must be an emergent belief by which the entity can think that it has the ability to make free choices, and there must also be some entity with which it can communicate this. Borrowing from modal logic again, each set of entities encompasses a local world. Thus, for example, with respect to any given entity the proposition "this entity is intelligent" is a contingent one.

#36 Timkin  Members

Posted 31 March 2008 - 12:45 PM

Quote:
Original post by Rixter
Quote:
Original post by Timkin
Quote:
 Original post by Rixter: AI is search.

Ignorance is bliss

I figure while we're assigning arbitrary definitions to an apparently ill defined concept, why not take the simplest?

Except that saying that "AI is search" is like saying that a house is a hammer. Search is a tool that can be used to create the end result (along with other tools, skill and creativity), but that doesn't magically transform it into the final product.

...and having said that we must accept that a 'so-called AI' that uses only search to solve a problem is not 'AI', but rather just an intelligently designed implementation of a solution to a computational problem.

This is the most common objection raised about AI: that it's just intelligent design, rather than an embodiment of intelligence... but then, are we any more? (And here I state that I believe in 'design by evolution' rather than 'design by God', just to make my position unequivocally clear.) So where do we draw the line?
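For readers wondering what "search" means in the "AI is search" claim being debated: it is systematic exploration of a state space. A minimal breadth-first search over an invented toy graph shows the kind of tool, rather than finished house, that Timkin has in mind:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: returns a shortest path as a list of nodes, or None."""
    frontier = deque([[start]])   # queue of partial paths to extend
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # goal unreachable from start

# A toy state space: rooms in a house.
rooms = {"hall": ["kitchen", "study"], "kitchen": ["pantry"], "study": ["pantry"]}
print(bfs_path(rooms, "hall", "pantry"))  # ['hall', 'kitchen', 'pantry']
```

Whether mechanically enumerating states like this deserves the word "intelligence" is exactly the disagreement in this thread.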

#37 AngleWyrm  Members

Posted 01 April 2008 - 04:48 PM

Quote:
 Original post by Daerax: No, an "intelligent" (sentient, sapient, cogent, fat, etc.) self directed entity in my system must have a set of beliefs. This is because this entity cannot know everything due to physical limits.
Beliefs as a behavior/knowledge heuristic?

Quote:
 Original post by Timkin: ...and having said that we must accept that a 'so-called AI' that uses only search to solve a problem is not 'AI', but rather just an intelligently designed implementation of a solution to a computational problem. This is the most common objection raised about AI: that it's just intelligent design, rather than an embodiment of intelligence...

and
Quote:
 Original post by Daerax: Now within these set of beliefs must be an emergent belief in which this entity can think that it has the ability to make free choices and also there must be some entity with which it can communicate such.

This brings up an interesting point: What exactly is a free choice? Is selecting the best option a free choice, or is it simply an optimized relationship to the environment? Is choosing randomly from a probability distribution of personal biases over the options a free choice?
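The last of those three questions, "choosing randomly from a probability distribution of personal biases", can at least be made concrete. A small sketch using Python's standard library; the options and the bias weights are entirely made up for illustration:

```python
import random

options = ["stay home", "go out", "work late"]
biases = [0.6, 0.3, 0.1]  # invented personal biases over the options

# "Choosing randomly from a probability distribution of personal biases":
choice = random.choices(options, weights=biases, k=1)[0]
print(choice)
```

The sketch says nothing about whether such a draw counts as a free choice; it only shows that the mechanism itself is trivial to implement, which is arguably the point of the question.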

Something I've noticed: When people are presented with a set of alternatives, they often express their intellect by attempting to step 'out' of the alternatives, and view the problem from a more global perspective. Searching for a solution on a higher plane, or composing other alternatives from past experience with similar problems.

Also, sometimes people will choose an alternative that is clearly suboptimal by my implied score system, but could be ranked as superior by their scoring system. For instance, by dispensing with some presumed code of conduct. When done well, it comes across as victorious, proving that my scoring system could be better if it were unburdened by needless rules.

[Edited by - AngleWyrm on April 2, 2008 6:48:17 AM]

#38 Álvaro  Members

Posted 02 April 2008 - 04:15 AM

Quote:
 Original post by AngleWyrm: This brings up an interesting point: What exactly is a free choice? Is selecting the best option a free choice, or is it simply an optimized relationship to the environment? Is choosing randomly from a probability distribution of personal biases over the options a free choice?

We have a pretty hardwired dualistic view of the world, where all objects obey the laws of physics, but some seem to have "souls", or "behaviour". This gives us an illusion of free will that probably has nothing to do with how the world really works, but it's a powerful metaphor that helps us understand and predict events around us. I don't think this illusion has to necessarily be present in an agent to be able to call it intelligent. It's just a byproduct of the way we are implemented.

#39 AngleWyrm  Members

Posted 02 April 2008 - 01:23 PM

A tall cool glass of beer calling out "drink me, driiiinnnk meeee" -- an anthropomorphism in jest.
Sometimes my computer doesn't want to cooperate -- an implied metaphor used to simplify what is likely a tangle of dreary detail.
Long ago, the word 'angel' meant 'messenger' -- an artistic license that should have been revoked for malpractice.

#40 kiwibonga  Members

Posted 03 April 2008 - 01:41 AM

What is AI, really? A misnomer!

Intelligence doesn't exist. It is an abstract human concept that tries to poetically add some mystery to the idea that the world is a series of chemical reactions dictated by the laws of physics (and maybe some mystical forces from another dimension).

The definition of intelligence is as flimsy as the definition of life. Anything that grows, including a crystal, can be considered alive to some; others will tell you it has to have DNA and reproduce, or that it has to have at least one cell...

So that's the problem: there's no set definition; you have to pick a side.

I would say you can't create an artificial version of what doesn't exist. Call me a nihilist if you will :P

But in practice... AI is a set of patterns that attempt to emulate behaviors. Those behaviors can be predictable or not. Their purpose is simply to allow non-software things, like humans, to interact with a machine in a certain context.

The start menu and the office paperclip are in fact artificially intelligent. In their own way.
