
What is really AI?


81 replies to this topic

#61 Daerax   Members   -  Reputation: 1207


Posted 03 April 2008 - 03:23 PM

Quote:
Original post by alvaro
Quote:
Original post by AngleWyrm
This brings up an interesting point: What exactly is a free choice? Is selecting the best option a free choice, or is it simply an optimized relationship to the environment? Is choosing randomly from a probability distribution of personal biases over the options a free choice?

We have a pretty hardwired dualistic view of the world, where all objects obey the laws of physics, but some seem to have "souls", or "behaviour". This gives us an illusion of free will that probably has nothing to do with how the world really works, but it's a powerful metaphor that helps us understand and predict events around us. I don't think this illusion has to necessarily be present in an agent to be able to call it intelligent. It's just a byproduct of the way we are implemented.


No one really knows what free choice is. However, alvaro is what a philosopher would call a Hard Determinist, which may or may not be a correct stance, but in my opinion is not very likely. Although I doubt he would agree with the other baggage a typical hard determinist carries, such as a lack of belief in the notion of moral responsibility.


#62 Daerax   Members   -  Reputation: 1207


Posted 03 April 2008 - 03:44 PM

Quote:
Original post by Sneftel
This is why coming up with a definition of "intelligence" is useless unless you (a) have a need to objectively define a metric of intelligence, (b) have an objective test to determine whether a given metric is accurate, and (c) are willing to have that metric disagree with you and tell you that you're wrong about something being intelligent or not intelligent. Under any other set of circumstances, it's all just semantic flailing.


I feel that, at least for humans, only (c) will ever be possible, so such a model/metric would have to acknowledge this and work with a system that has notions beyond true and false (e.g. by appending modifiers like possible and necessary).

#63 Hodgman   Moderators   -  Reputation: 27774


Posted 03 April 2008 - 04:10 PM

IMO, defining intelligence as a measure relative to "human intelligence" is just a cop-out showing that we don't know what intelligence really is.

I have a feeling that there are a lot of people who, if given such a test, would receive a sub-human score!

Quote:
Original post by owl
* Is social?
* Has language?
* Uses tools?
* Builds tools?
* Is adaptable?

Sorry for going off on a tangent, but I find your metrics interesting food-for-thought:

* There are certain people in the world who, for whatever reason (injury, mutation, trauma), are completely lacking in social skills and/or instincts; however, they're still intelligent (and human!). In fact, many religious figures throughout history have been known to spend long periods of time in complete isolation before coming back to society and teaching great wisdom...

* Humans raised alone would not have language, so the important thing is the ability to develop language - a capacity which many animals also have.

* Some animals can use and make tools (and can adapt), but is this a problem with the definition, or are these animals also intelligent? (see the argument that 'human rights' should extend to all great apes...)

* Any living creature can adapt - just not in a single life-time. Does this mean that if we look at a species of virus (instead of an individual virus organism) that the species as a whole has some intelligence?

[Edited by - Hodgman on April 3, 2008 10:10:32 PM]

#64 owl   Banned   -  Reputation: 364


Posted 03 April 2008 - 05:04 PM

As I've been saying to Sneftel, I do believe that there are levels of (individual) intelligence in nature, and that those levels are measurable in relation to the quantity and the quality of the information (read information as "data" and also "matter") a certain being is able to process/organize.

Of course there are exceptions everywhere, but the test I imagined would be applied to the observation of a species as a whole and not to one individual.

I noticed we usually (always?) use the term intelligence to mean "intelligent as a human is", and if a creature lacks a certain ability we have, we say it's not intelligent. I find that kind of unfair.

I also think that if a computer program that works with a certain set of rules and data can make useful/meaningful decisions to achieve expected results, we could say that it shows some degree of intelligence in its domain. And if it performs better than another program in the same domain, it can be said to be "more intelligent" than that other program.

That an entity is not aware of the choices it makes, or lacks the capacity to recognize itself as an individual, doesn't turn that entity into something non-intelligent. But it is evident, at least for humans, that having that capacity helps with processing information in a deeper way.

I find myself to be kind of verbose today; I apologise for that.

#65 AngleWyrm   Members   -  Reputation: 554


Posted 03 April 2008 - 06:41 PM

Quote:
Original post by Timkin
Quote:
Original post by Rixter
Except isn't that like saying an encyclopedia is intelligence?

No, an encyclopedia is a collection of information (pages in a book, or in the digital age usually an electronic database + interface).
There was an anti-B. F. Skinner argument called the Chinese Room:

Inside the room is a person who does not know how to read Chinese. Outside, a man who does know how writes a note and slips it under the door. Inside, the illiterate man has a warehouse of symbol look-up tables that show response strings for various sequences. The illiterate person simply compares symbols, transcribes the response sequences, and passes a note back. So even though the guy on the outside is having a conversation, the guy on the inside is totally oblivious.
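The mechanics being described amount to pure table lookup; a minimal Python sketch of the idea (the table entries are hypothetical placeholders, romanized to keep the sketch simple, not a real phrasebook):

# The room's rulebook: input symbol strings map to canned responses.
# Nothing in this loop understands anything.
RULEBOOK = {
    "ni hao ma": "wo hen hao, xiexie",        # "how are you?" -> "fine, thanks"
    "jintian tianqi ruhe": "tianqi hen hao",  # "how's the weather?" -> "very nice"
}

def room_reply(note):
    # The person inside only matches shapes against the tables;
    # unmatched notes get a stock "please say that again" reply.
    return RULEBOOK.get(note, "qing zai shuo yibian")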

If it walks like a duck, and it quacks like a duck... it might be just the thing to take duck hunting.

But why stop at low-flying duck resolution: What if it looks, sounds, smells, feels, and tastes like a duck to the limits of the human senses? I might have bought a package of that at the store the other day.

#66 Álvaro   Crossbones+   -  Reputation: 11906


Posted 04 April 2008 - 01:25 AM

Quote:
Original post by Daerax
No one really knows what free choice is. However, alvaro is what a philosopher would call a Hard Determinist, which may or may not be a correct stance, but in my opinion is not very likely. Although I doubt he would agree with the other baggage a typical hard determinist carries, such as a lack of belief in the notion of moral responsibility.

Hmmm... I am not so sure about determinism: There could be randomness involved. What I do believe strongly is that there is nothing special in physics for brains. I don't believe in moral responsibility, but only in the sense that I don't think it is part of the laws of physics. Of course, it is an important concept that allows us humans to develop working societies, and I feel as much of it as anyone else. I think Richard Dawkins said it best in this TED talk (the whole thing is good, but the relevant part for this discussion starts around minute 19).

[Edited by - alvaro on April 4, 2008 7:25:46 AM]

#67 speciesUnknown   Members   -  Reputation: 527


Posted 04 April 2008 - 02:51 AM

Simple answer: Something that looks like it's intelligent.
Complicated answer: A highly emotive acronym applied to a number of different methods of making man-made objects appear to be intelligent.

#68 Rixter   Members   -  Reputation: 785


Posted 04 April 2008 - 03:33 AM

Quote:
Original post by AngleWyrm
Quote:
Original post by Timkin
Quote:
Original post by Rixter
Except isn't that like saying an encyclopedia is intelligence?

No, an encyclopedia is a collection of information (pages in a book, or in the digital age usually an electronic database + interface).
There was an anti-B. F. Skinner argument called the Chinese Room:

Inside the room is a person who does not know how to read Chinese. Outside, a man who does know how writes a note and slips it under the door. Inside, the illiterate man has a warehouse of symbol look-up tables that show response strings for various sequences. The illiterate person simply compares symbols, transcribes the response sequences, and passes a note back. So even though the guy on the outside is having a conversation, the guy on the inside is totally oblivious.

If it walks like a duck, and it quacks like a duck... it might be just the thing to take duck hunting.

But why stop at low-flying duck resolution: What if it looks, sounds, smells, feels, and tastes like a duck to the limits of the human senses? I might have bought a package of that at the store the other day.


I've heard the Chinese Room argument before, but is it the room + data that's intelligent? Or is it the room + data + the guy using the data that's intelligent? I think data (knowledge), representation, and solutions are great and all, but I believe it's the construction and use of these that is the intelligence. This does bring up an interesting point, though (one that's more or less been said here a few times): things are 'intelligent' until we know how they work; then it's like 'oh, that's not magic', and we want to know which part is "intelligent". Is an ant intelligent, or the colony? Is a brain cell intelligent, or a billion cells linked together? Is a thermostat intelligent? What about a billion of them?


#69 Prozak   Members   -  Reputation: 865


Posted 04 April 2008 - 03:53 AM

My Definition:

* A system that demonstrates behavioral evolution and the required abilities for survival in its native environment.

So, from my own definition, a neural network that reads cheques and translates handwriting to ASCII isn't AI, because even though it was initially based on neurons, the evolution of those neurons has probably been "locked" once the application was deployed, inhibiting any ability for the system to demonstrate evolution of behavior.

#70 Sneftel   Senior Moderators   -  Reputation: 1776


Posted 04 April 2008 - 04:48 AM

Quote:
Original post by Rixter
I've heard the Chinese Room argument before, but is it the room + data that's intelligent? Or is it the room + data + the guy using the data that's intelligent? I think data (knowledge), representation, and solutions are great and all, but I believe it's the construction and use of these that is the intelligence.

This is what's been coined (by Daniel Dennett, IIRC) as the "systems reply", and it's the conclusion of most who don't agree with Searle's argument. Searle has a rebuttal to this reply, consisting of a gymnasium (and rather stretching the analogy), which doesn't really address the issue. More memorable and entertaining is his ad hominem towards those who would espouse this theory: "It is not at all easy to see how someone who was not in the grip of an ideology would find that idea at all plausible."

#71 AngleWyrm   Members   -  Reputation: 554


Posted 04 April 2008 - 05:56 AM

A BBS sysop once sicced his pet software 'bot on me, masquerading as a person. It lasted for about half a dozen exchanges before it became clear there was little comprehension of what was being said. Thereafter, about three or four more exchanges exposed it for what it really was -- a simulator with a stash of canned responses.

[Edited by - AngleWyrm on April 4, 2008 12:56:52 PM]

#72 animator   Members   -  Reputation: 115


Posted 04 April 2008 - 07:43 AM

I think we will only have true AI when a computer has the same number of processors (CPUs) as there are neurons in the human brain.

Since a brain has about 100,000,000,000 neurons, and right now computers have about 2-4 CPUs, then according to Moore's law (the number of processors should double every two years) we should have true artificial intelligence by...

April 2008 + 2 * log_2(100,000,000,000/4) years = May 2077

By which time I will be into my 90s.

But seeing as neurons are much slower than CPUs, it might be sooner. For example, eye neurons work at about 100 frames a second, which is 100Hz. So a 1GHz CPU can model about 10,000,000 neurons. Then we only need 10,000 CPUs, and this will take:

April 2008 + 2 * log_2(10,000/4) years = Nov 2030

where I will be about 50, so that's not too bad.
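Both dates drop out of the same doubling formula; a quick Python sketch of the arithmetic, under the post's own assumptions (4 CPUs today, a two-year doubling period):

import math
from datetime import date, timedelta

START = date(2008, 4, 1)   # roughly the date of this post
DOUBLING_YEARS = 2         # assumed Moore's-law doubling period

def true_ai_eta(target_cpus, current_cpus=4):
    # Solve current * 2**(t / DOUBLING_YEARS) = target for t, in years.
    years = DOUBLING_YEARS * math.log2(target_cpus / current_cpus)
    return START + timedelta(days=years * 365.25)

print(true_ai_eta(100_000_000_000))  # one CPU per neuron -> ~May 2077
print(true_ai_eta(10_000))           # 10,000 fast CPUs   -> ~Nov 2030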


#73 Hnefi   Members   -  Reputation: 386


Posted 06 April 2008 - 05:56 AM

Quote:
Original post by animator
I think we will only have true AI when a computer has the same number of processors (CPUs) as there are neurons in the human brain.

There are no CPUs in the human brain. A CPU is a centralized computational structure; the brain is a decentralized computational network. Big difference.
Quote:
Since a brain has about 100,000,000,000 neurons, and right now computers have about 2-4 CPUs, then according to Moore's law (the number of processors should double every two years) we should have true artificial intelligence by...

April 2008 + 2 * log_2(100,000,000,000/4) years = May 2077

By which time I will be into my 90s.

A neuron is definitely not equivalent to a CPU. A neuron is a very primitive unit in a very large distributed structure, whereas a CPU is a very complex unit at the heart of a comparatively simple structure.
Quote:
But seeing as neurons are much slower than CPUs, it might be sooner. For example, eye neurons work at about 100 frames a second, which is 100Hz. So a 1GHz CPU can model about 10,000,000 neurons. Then we only need 10,000 CPUs, and this will take:

April 2008 + 2 * log_2(10,000/4) years = Nov 2030

where I will be about 50, so that's not too bad.

Computers containing 10,000 CPUs exist today. It's estimated that in terms of primitive instructions per second, the human brain is outclassed by our fastest computers in operation today by a factor of about 5. Edited to add: IBM's upcoming Blue Gene/P architecture can be configured for use with 884,736 processors.

#74 AngleWyrm   Members   -  Reputation: 554


Posted 06 April 2008 - 11:07 AM

Some interesting linguistic observations:
Quote:
The Stuff of Thought, page 6, by Steven Pinker
"...language is saturated with implicit metaphors like EVENTS ARE OBJECTS and TIME IS SPACE. Indeed, space turns out to be a conceptual vehicle not just for time but for many kinds of states and circumstances. Just as a meeting can be moved from 3:00 to 4:00, a traffic light can go from green to red, a person can go from flipping burgers to running a corporation, and the economy can go form bad to worse. Metaphor is so widespread in language that it's hard to find expressions for abstract ideas that are not metaphorical. What does the concreteness of language say about human thought? Does it imply that even our wispiest concepts are represented in the mind as hunks of matter that we move around on a mental stage? Does it say that rival claims about the world can never be true or false but can only be alternative metaphors that frame a situation in different ways?"

Here's his TED talk discussing the material of this book.

#75 BreathOfLife   Members   -  Reputation: 188


Posted 10 April 2008 - 06:40 PM

I think a fairly accurate definition of intelligence is the ability to learn.

It is something that living things just seem to have, and something that is very difficult to re-create outside of a very, very narrow scope (e.g. computers that learn to play chess and backgammon well). Even then, the only real way that they can "learn" is because they have a near-flawless memory and razor-sharp math skills.

Furthermore, they can only really "learn how to play well" after we tell them all the rules and then explain what "playing well" is.

Computers and programs need everything defined for them... variable names, types, values. Learning would be like creating a new variable type during runtime, generating a bunch of operators to manipulate the data, and then implementing them.

Basically, in reference to the OP's original question, my 2 cents is that the difference between intelligence and artificial intelligence is that one exists, and the other is something countless people are trying to reproduce as best they can, against a completely impossible goal.

#76 Hnefi   Members   -  Reputation: 386


Posted 10 April 2008 - 11:05 PM

Quote:
Original post by BreathOfLife
It is something that living things just seem to have, and something that is very difficult to re-create outside of a very, very narrow scope (e.g. computers that learn to play chess and backgammon well). Even then, the only real way that they can "learn" is because they have a near-flawless memory and razor-sharp math skills.

Not at all. It's true that it is difficult to implement learning in a large domain, but that is only the case because the complexity of learning increases very quickly with the size of the domain (probably exponentially). Remember that it takes the most intelligent creatures we know of about a year simply to learn how to walk. Most AIs aren't given that kind of timeframe to learn.

I should also point out that "flawless memory" has nothing to do with it. In fact, most machine learning techniques forgo that advantage and use various types of heuristics and approximations. I don't know of any learning technique, except the most trivial, that actually relies on perfect memory.

Quote:
Furthermore, they can only really "learn how to play well" after we tell them all the rules and then explain what "playing well" is.

And this is different from humans how? If you don't tell a person the rules or goal of chess, they will never be able to play it. They may be able to infer the rules and goal by observing several games being played, but so could a computer.
Quote:
Computers and programs need everything defined for them... variable names, types, values. Learning would be like creating a new variable type during runtime, generating a bunch of operators to manipulate the data, and then implementing them.

I used to think so as well, but it's completely wrong. You assume that the data structures in machine code correspond directly to the objects and features observed by the program. This is wrong. Some types of machine learning do not even have discrete variables at all for the things observed, yet they manage to be very efficient learners anyway. Other strategies use a generalistic approach, where the observed objects are first classified with some very general technique (such as self-organizing maps) and then instanced as collections of classifications - features. Such techniques can be applied to any reasonable domain, though they may not be the most efficient.
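As one concrete flavour of that generalistic approach, a toy one-dimensional self-organizing map fits in a few lines (a sketch with arbitrary parameters, numpy assumed; a real SOM would decay the learning rate and neighbourhood radius over time):

import numpy as np

class ToySOM:
    # A tiny 1-D self-organizing map: it forms its own classes for
    # observations, with no predefined variables for the things seen.
    def __init__(self, n_units, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((n_units, dim))   # unit prototypes

    def observe(self, x, lr=0.1, radius=1.0):
        # Best-matching unit: the prototype closest to the input.
        bmu = int(np.argmin(np.linalg.norm(self.w - x, axis=1)))
        # Pull the BMU and its neighbours toward the input.
        dist = np.abs(np.arange(len(self.w)) - bmu)
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))
        self.w += lr * h[:, None] * (x - self.w)
        return bmu   # the map's "classification" of this observation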

Quote:
Basically, in reference the the OPs origonal post reference, my 2 cents is that the difference between intelligence, and artificial intelligence, is that one exists, and the other countless people are trying to reproduce the best they can against a completely impossible goal.

You seem to have fallen for the same unrealistic expectations that the early promoters of the field did - that it's just a matter of computing power. But unlike them, you realize that we can never have enough computing power to solve the problem of AI with trivial methods.

But the field of AI is much bigger than that. Sure, we are far from implementing sentience. That's not exactly surprising. But the efforts made to implement it have yielded a lot of valuable knowledge when it comes to machine learning, planning, searching, knowledge representation, etc. that is used a lot in software today.

Maybe we'll never implement a "real" AI, even if we manage to define the term. But to say that it's impossible, given what we know today, is overly pessimistic - and the knowledge gained by trying sure makes it worth the effort.

#77 BreathOfLife   Members   -  Reputation: 188


Posted 11 April 2008 - 09:15 AM

Heuristics. That's my new stance. It's the heuristic that is my "overly pessimistic" point of origin. I do not believe that a computer could ever implement a heuristic unless at some point we give it one, or explain to it somehow what a heuristic is.

Truly intelligent AI seems totally unreal; that doesn't mean that the AI we have come up with thus far isn't really, really close. In some cases, it is. But they all need a heuristic from us to start the process.


This might come out sounding entirely ignorant, and I'm not entirely sure that it's true, but I bet that most if not all AI development spends a lot of time tweaking heuristics.

Say we make a machine capable of feeling pain. It would not be able to avoid situations that cause pain unless we tell it pain is "bad" somehow. It would know pain, but wouldn't even bother trying to avoid it.

You could counter with a "happy meter" style stance: it's not "bad", it is just less "happy". But that only works if we first explain to it that "happy" is what it wants to be.

If we could avoid having to do so, we'd have a machine that we could teach how to play backgammon, and that midway through might refuse to continue because it doesn't want to play.

#78 Hnefi   Members   -  Reputation: 386


Posted 11 April 2008 - 09:45 AM

Quote:
Original post by BreathOfLife
Heuristics. That's my new stance. It's the heuristic that is my "overly pessimistic" point of origin. I do not believe that a computer could ever implement a heuristic unless at some point we give it one, or explain to it somehow what a heuristic is.

Truly intelligent AI seems totally unreal; that doesn't mean that the AI we have come up with thus far isn't really, really close. In some cases, it is. But they all need a heuristic from us to start the process.

I think you need to look a bit closer at the current state of the field. A heuristic is just a strategy for approximation, and heuristics are, at this point, pretty trivial to implement in such a way that the computer can construct and fine-tune a heuristic from scratch for any purpose, without any help except being told what the goals corresponding to the input are. The most common technique for this is probably a classical backpropagating neural network. I believe ANNs are probably the most promising area of research for those looking to create "real" AI; these people are, however, in the minority among the research community. Most researchers are building automated cars and bombers, not Johnny 5.
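To make "construct and fine-tune from scratch" concrete, here is a minimal backprop sketch (Python/numpy, the classic XOR demo; the layer sizes, learning rate and iteration count are arbitrary choices, and convergence depends on the random initial weights):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def with_bias(a):
    # Append a constant-1 column so each layer learns its own bias.
    return np.hstack([a, np.ones((len(a), 1))])

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets
W1 = rng.normal(size=(3, 4))   # 2 inputs + bias -> 4 hidden units
W2 = rng.normal(size=(5, 1))   # 4 hidden + bias -> 1 output

for _ in range(20_000):
    h = sigmoid(with_bias(X) @ W1)              # forward pass
    out = sigmoid(with_bias(h) @ W2)
    d_out = (out - y) * out * (1 - out)         # output-layer delta
    d_h = (d_out @ W2[:-1].T) * h * (1 - h)     # backpropagated delta
    W2 -= 0.5 * with_bias(h).T @ d_out          # gradient steps
    W1 -= 0.5 * with_bias(X).T @ d_h

print(out.round(2))   # approaches [[0], [1], [1], [0]]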
Quote:
This might come out sounding entirly ignorant, but I'm not entirely sure that its true, but I bet that most if not all AI development spends alot of time tweaking heuristics.

Any implementation work is primarily tweaking. The actual theory is a relatively minor part of the total work hours put into most fields of research in computer science.
Quote:
Say we make a machine capable of feeling pain. It would not be able to avoid situations that cause pain unless we tell it pain is "bad" somehow. It would know pain, but wouldnt even bother trying to avoid it.

You could counter with a "happy meter" style stance, that its not "bad" it is just less "happy". But that only works if we first explain to it that "happy" is what it wants to be.

What you are basically saying here is that the metric of success - knowing whether what one does is good or bad - is difficult to implement. This is sometimes true, sometimes not. In a game, it is trivial to implement (the closer you are to winning, the happier you are). In an agent that is supposed to emulate human behavior, it is much more difficult. I'm not sure whether any research is actually being done in that area though; it seems pretty premature. Check back in a decade or two.
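That trivial in-game metric might look something like this (a hypothetical sketch; the state fields and helper functions are invented for illustration):

def happiness(state):
    # The closer to winning, the "happier" the agent: here, the
    # fraction of pieces already borne off, backgammon-style.
    return state.pieces_home / state.total_pieces

def pick_move(state, legal_moves, apply_move):
    # Greedy play: take whichever legal move maximizes happiness.
    return max(legal_moves(state),
               key=lambda m: happiness(apply_move(state, m)))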
Quote:
If we could avoid having to do so, we'd have a machine that we could teach how to play backgammon, and midway through have it refuse to do so because it doesnt want to play.

If we trained a computer only to be as good a backgammon player as possible, it would be "happier" the better it played. The option of refusal would not even be available to it, because it does nothing to enhance its playing abilities and is not part of the relevant domain anyway.

If we built instead a computer that operated in a larger domain (say, an entire set of different boardgames) and allowed it a say in which game to play, then a generalistic approach would most likely result in a computer favouring the games it is best at, resulting in a refusal to play the games it plays poorly.
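That kind of refusal-by-preference is only a few lines on top of some win-rate bookkeeping (a hypothetical sketch; the numbers are invented):

# An agent "refuses" the games it plays poorly simply by
# preferring the games with the best track record.
win_rate = {"backgammon": 0.71, "chess": 0.34, "checkers": 0.55}

def choose_game(offered):
    return max(offered, key=lambda g: win_rate.get(g, 0.0))

print(choose_game(["chess", "backgammon"]))   # -> backgammon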

But you must remember that whatever we implement is limited to the domain in which we construct it to operate. A game AI, built for playing only one game, should not be able to refuse to play, so we never give it that option. A car AI, built for transporting people to different places, should not be able to erase its own hard drive, so we never give it that option. All agents, artificial or otherwise (and that includes us), are limited by the choices available to them. We will never see an entirely "free" AI, able to breach the domain in which it operates (such as refusing to play a game it is built to play), because such a concept is as ridiculous as humans willing themselves to be able to fly or move objects with their minds. We can't break the limits built into us.

#79 BreathOfLife   Members   -  Reputation: 188


Posted 11 April 2008 - 07:08 PM

Us having to give an "AI" a domain is much like telling it to try some sort of approximation; it's just not going to happen on its own.

By this token, we could differentiate between ersatz and artificial intelligence. One is a very close re-creation of a system learning to do something; the other is a reproduction of learning at a base level, in accordance with my theory that "intelligence is the ability to learn".

EI might come very, very close to AI, and AI might come very, very close to intelligence, but AI will still never quite match intelligence.

Call it pessimistic, but I just believe that I know our bounds. Currently I'm quite happy with them, and I love the process of creating AI (specifically within a game system). I code an engine, I code the rules of, say, RPG-style combat, I code the AI to go through the motions of said combat and have it learn from the outcome. If I do a bang-up job, I've managed a very nice example of EI, but not AI as far as I'm concerned.

EI deals with a scope we ourselves can understand. AI works on an entirely different plane: it requires the ability to generate said understanding.

#80 Hnefi   Members   -  Reputation: 386


Posted 11 April 2008 - 08:38 PM

Quote:
Original post by BreathOfLife
Us having to give an "AI" a domain is much like telling it to try some sort of approximation; it's just not going to happen on its own.

Of course it won't "happen on its own". No entity can change the domain in which it operates - an AI won't be able to perform actions unavailable to it any more than we can.
Quote:
By this token, we could differentiate between ersatz and artificial intelligence. One is a very close re-creation of a system learning to do something; the other is a reproduction of learning at a base level, in accordance with my theory that "intelligence is the ability to learn".

I don't see how that's useful for differentiating between true AI and weak imitations. What, exactly, is the metric and methodology used? Given perfect knowledge of a system, how would you go about determining whether it was intelligent?



