What is really AI?

A BBS sysop once sicced his pet software 'bot, masquerading as a person, on me. It lasted for about half a dozen exchanges before it became clear there was little comprehension of what was being said. Thereafter, three or four more exchanges exposed it for what it really was -- a simulator with a stash of canned responses.

[Edited by - AngleWyrm on April 4, 2008 12:56:52 PM]
--"I'm not at home right now, but" = lights on, but no ones home
I think we will only have true AI when a computer has the same number of processors (CPUs) as there are in the human brain.

Since a brain has about 100,000,000,000 neurons, and computers right now have about 2-4 CPUs, then according to Moore's law, which says the number of processors should double every two years, we should have true artificial intelligence by...

April 2008 + 2 * log_2(100,000,000,000/4) years = May 2077

By which time I will be into my 90s.

But seeing as neurons are much slower than CPUs, it might be sooner. For example, neurons in the eye work at about 100 frames a second, which is 100 Hz, so a 1 GHz CPU can model about 10,000,000 neurons. Then we only need 10,000 CPUs, and this will take:

April 2008 + 2 * log_2(10,000/4) years = Nov 2030

by which time I will be about 50, so that's not too bad.
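
For anyone who wants to sanity-check the arithmetic, here is a quick Python sketch of the back-of-the-envelope estimate above (assuming the figures I used: ~100 billion neurons, a 4-CPU machine today, a two-year doubling time, and ~100 Hz neurons vs. a 1 GHz CPU):

from math import log2

# Assumptions from the post: ~1e11 neurons, 4 CPUs today, doubling every 2 years.
NEURONS = 100_000_000_000
CPUS_NOW = 4
DOUBLING_YEARS = 2

def years_until(target_cpus):
    """Years until Moore's-law doubling gets us from CPUS_NOW to target_cpus."""
    return DOUBLING_YEARS * log2(target_cpus / CPUS_NOW)

# One CPU per neuron:
print(years_until(NEURONS))                     # ~69 years after April 2008 -> ~2077

# One 1 GHz CPU per 10,000,000 neurons (neurons running at ~100 Hz):
neurons_per_cpu = 1_000_000_000 // 100          # = 10,000,000
print(years_until(NEURONS / neurons_per_cpu))   # ~22.6 years -> late 2030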
Me in a nutshell - Patchwork Personalities.
Quote:Original post by animator
I think we will only have true AI when a computer has the same number of processors (CPUs) as there are in the human brain.

There are no CPUs in the human brain. A CPU is a centralized computational structure; the brain is a decentralized computational network. Big difference.
Quote:Since a brain has about 100,000,000,000 neurons, and computers right now have about 2-4 CPUs, then according to Moore's law, which says the number of processors should double every two years, we should have true artificial intelligence by...

April 2008 + 2 * log_2(100,000,000,000/4) years = May 2077

By which time I will be into my 90s.

A neuron is definitely not equivalent to a CPU. A neuron is a very primitive unit in a very large distributed structure, whereas a CPU is a very complex unit at the heart of a comparatively simple structure.
Quote:But seeing as neurons are much slower than CPUs, it might be sooner. For example, neurons in the eye work at about 100 frames a second, which is 100 Hz, so a 1 GHz CPU can model about 10,000,000 neurons. Then we only need 10,000 CPUs, and this will take:

April 2008 + 2 * log_2(10,000/4) years = Nov 2030

by which time I will be about 50, so that's not too bad.

Computers containing 10,000 CPUs exist today. It's estimated that in terms of primitive instructions per second, the human brain is outclassed by our fastest computers in operation today by a factor of about 5. Edited to add: IBM's upcoming Blue Gene/P architecture can be configured for use with 884,736 processors.
-------------Please rate this post if it was useful.
Some interesting linguistic observations:
Quote:The Stuff of Thought, page 6, by Steven Pinker
"...language is saturated with implicit metaphors like EVENTS ARE OBJECTS and TIME IS SPACE. Indeed, space turns out to be a conceptual vehicle not just for time but for many kinds of states and circumstances. Just as a meeting can be moved from 3:00 to 4:00, a traffic light can go from green to red, a person can go from flipping burgers to running a corporation, and the economy can go form bad to worse. Metaphor is so widespread in language that it's hard to find expressions for abstract ideas that are not metaphorical. What does the concreteness of language say about human thought? Does it imply that even our wispiest concepts are represented in the mind as hunks of matter that we move around on a mental stage? Does it say that rival claims about the world can never be true or false but can only be alternative metaphors that frame a situation in different ways?"

Here's his TED talk discussing the material of this book.
--"I'm not at home right now, but" = lights on, but no ones home
I think a fairly accurate definition of intelligence is the ability to learn.

It is something that living things just seem to have, and something that is very difficult to re-create outside of a very, very narrow scope (e.g. computers that learn to play chess and backgammon well). Even then, the only real way that they can "learn" is because they have a near-flawless memory and razor-sharp math skills.

Furthermore, they can only really "learn how to play well" after we tell them all the rules of the game, and then explain what "playing well" is.

Computers and programs need everything defined for them... variable names, types, values. Learning would be like creating a new variable type at runtime, generating a bunch of operators to manipulate the data, and then implementing them.

Basically, in reference to the OP's original post, my 2 cents is that the difference between intelligence and artificial intelligence is that one exists, and the other is something countless people are trying to reproduce as best they can against a completely impossible goal.
Quote:Original post by BreathOfLife
It is something that living things just seem to have, and something that is very difficult to re-create outside of a very, very narrow scope (e.g. computers that learn to play chess and backgammon well). Even then, the only real way that they can "learn" is because they have a near-flawless memory and razor-sharp math skills.

Not at all. It's true that it is difficult to implement learning in a large domain, but that is only the case because the complexity of learning increases very quickly with the size of the domain (probably exponentially). Remember that it takes the most intelligent creatures we know of about a year simply to learn how to walk. Most AIs aren't given that kind of time frame to learn.

I should also point out that "flawless memory" has nothing to do with it. In fact, most machine learning techniques forgo that advantage and use various types of heuristics and approximations. I don't know of any learning technique, except the most trivial, that actually relies on perfect memory.

Quote:Furthermore, they can only really "learn how to play well" after we tell them all the rules of the game, and then explain what "playing well" is.

And this is different from humans how? If you don't tell a person the rules or goal of chess, they will never be able to play it. They may be able to infer the rules and goal by observing several games being played, but so could a computer.
Quote:Computers and programs need everything defined for them... variable names, types, values. Learning would be like creating a new variable type at runtime, generating a bunch of operators to manipulate the data, and then implementing them.

I used to think so as well, but it's completely wrong. You assume that the data structures in machine code correspond directly to the objects and features observed by the program. This is wrong. Some types of machine learning do not even have discrete variables at all for the things observed, yet they manage to be very efficient learners anyway. Other strategies use a generalistic approach, where the observed objects are first classified with some very general technique (such as self-organizing maps) and then instanced as collections of classifications - features. Such techniques can be applied to any reasonable domain, though they may not be the most efficient.
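
To make the self-organizing-map remark concrete, here is a minimal toy sketch (my own illustration, not production code): a small grid of prototype vectors organizes itself around whatever inputs it is fed, without the programmer declaring a variable for each observed object.

import numpy as np

# Toy self-organizing map: a 5x5 grid of prototype vectors clusters
# 3-dimensional inputs; no per-object variables are declared in advance.
rng = np.random.default_rng(0)
grid_w, grid_h, dim = 5, 5, 3
weights = rng.random((grid_w, grid_h, dim))      # prototype vectors
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

def train(weights, samples, epochs=20, lr0=0.5, radius0=2.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                    # decaying learning rate
        radius = max(radius0 * (1 - epoch / epochs), 0.5)  # shrinking neighbourhood
        for x in samples:
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            weights += lr * influence[..., None] * (x - weights)   # pull neighbourhood toward x

train(weights, rng.random((200, dim)))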

Quote:Basically, in reference to the OP's original post, my 2 cents is that the difference between intelligence and artificial intelligence is that one exists, and the other is something countless people are trying to reproduce as best they can against a completely impossible goal.

You seem to have fallen for the same unrealistic expectations that the early promoters of the field did - that it's just a matter of computing power. But unlike them, you realize that we can never have enough computing power to solve the problem of AI with trivial methods.

But the field of AI is much bigger than that. Sure, we are far from implementing sentience. That's not exactly surprising. But the efforts made to implement it have yielded a lot of valuable knowledge about machine learning, planning, searching, knowledge representation, etc. that is used a lot in software today.

Maybe we'll never implement a "real" AI, even if we manage to define the term. But to say that it's impossible, given what we know today, is overly pessimistic - and the knowledge gained by trying sure makes it worth the effort.
-------------Please rate this post if it was useful.
Heuristics. That's my new stance. It's the heuristic that is my "overly pessimistic" point of origin. I do not believe that a computer could ever implement a heuristic unless at some point we give it one, or explain to it somehow what a heuristic is.

Truly intelligent AI seems totally unreal; that doesn't mean that the AI we have come up with thus far isn't really, really close. In some cases, it is. But they all need a heuristic from us to start the process.


This might come out sounding entirely ignorant, and I'm not entirely sure that it's true, but I bet that most if not all AI development spends a lot of time tweaking heuristics.

Say we make a machine capable of feeling pain. It would not be able to avoid situations that cause pain unless we tell it pain is "bad" somehow. It would know pain, but wouldn't even bother trying to avoid it.

You could counter with a "happy meter" style stance: that it's not "bad", it is just less "happy". But that only works if we first explain to it that "happy" is what it wants to be.

If we could avoid having to do so, we'd have a machine that we could teach how to play backgammon, and midway through have it refuse to do so because it doesn't want to play.
Quote:Original post by BreathOfLife
Heuristics. That's my new stance. It's the heuristic that is my "overly pessimistic" point of origin. I do not believe that a computer could ever implement a heuristic unless at some point we give it one, or explain to it somehow what a heuristic is.

Truly intelligent AI seems totally unreal; that doesn't mean that the AI we have come up with thus far isn't really, really close. In some cases, it is. But they all need a heuristic from us to start the process.

I think you need to look a bit closer at the current state of the field. A heuristic is just a strategy for approximation, and they are, at this point, pretty trivial to implement in such a way that the computer can construct and fine-tune a heuristic from scratch for any purpose, without any help except being told what the goals corresponding to the input are. The most common technique for this is probably a classical backpropagating neural network. I believe ANNs are probably the most promising area of research for those looking to create "real" AI; these people are, however, in the minority among the research community. Most researchers are building automated cars and bombers, not Johnny 5.
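
For what it's worth, here is a toy sketch of that idea (purely illustrative, not anyone's production code): a tiny backpropagating network that is told nothing but the desired output for each input, and tunes its own internal weighting - its "heuristic" - by gradient descent. It learns XOR, a function no single-layer rule can represent.

import numpy as np

# Tiny backpropagating network: 2 inputs -> 4 hidden -> 1 output, learning XOR.
# The only "help" it gets is the target value for each training input.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backward pass (squared-error gradient)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # should end up close to [[0], [1], [1], [0]]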
Quote:This might come out sounding entirely ignorant, and I'm not entirely sure that it's true, but I bet that most if not all AI development spends a lot of time tweaking heuristics.

Any implementation work is primarily tweaking. The actual theoretical work is a relatively minor part of the total work hours being put into most fields of research in computer science.
Quote:Say we make a machine capable of feeling pain. It would not be able to avoid situations that cause pain unless we tell it pain is "bad" somehow. It would know pain, but wouldn't even bother trying to avoid it.

You could counter with a "happy meter" style stance: that it's not "bad", it is just less "happy". But that only works if we first explain to it that "happy" is what it wants to be.

What you are basically saying here is that the metric of success - knowing whether what one does is good or bad - is difficult to implement. This is sometimes true, sometimes not. In a game, it is trivial to implement (the closer you are to winning, the happier you are). In an agent that is supposed to emulate human behavior, it is much more difficult. I'm not sure whether any research is actually being done in that area though; it seems pretty premature. Check back in a decade or two.
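
To illustrate the "trivial in a game" case, a toy success metric (purely my own illustration) for a simple race-to-the-finish game might look like this - "happiness" is just progress toward winning, relative to the opponent:

# Toy success metric for a simple race game: "happiness" is just progress
# toward winning, relative to the opponent. Purely illustrative.
GOAL = 100

def happiness(my_position, opponent_position):
    """Return a score in [-1, 1]: +1 means I have won, -1 means the opponent has."""
    my_progress = min(my_position, GOAL) / GOAL
    their_progress = min(opponent_position, GOAL) / GOAL
    return my_progress - their_progress

# The learner simply prefers moves that raise this number.
print(happiness(60, 40))   # 0.2 - ahead, moderately "happy"
print(happiness(100, 70))  # 0.3 - won, happier still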
Quote:If we could avoid having to do so, we'd have a machine that we could teach how to play backgammon, and midway through have it refuse to do so because it doesn't want to play.

If we trained a computer only to be as good a backgammon player as possible, it would be "happier" the better it played. The option of refusal would not even be available to it, because refusing does nothing to enhance its playing abilities and is not part of the relevant domain anyway.

If we instead built a computer that operated in a larger domain (say, an entire set of different board games) and allowed it a say in which game to play, then a generalistic approach would most likely result in a computer favouring the games it is best at, resulting in a refusal to play the games it plays poorly.
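
As a toy illustration of that point (my own sketch, not a real system): give the agent an estimated win rate for each game it knows and let it choose what to play, and it will gravitate toward what it is already good at.

# Toy "which game shall I play?" chooser: the agent favours the game with the
# highest estimated win rate, i.e. the one it is best at. Illustrative only.
win_rates = {"backgammon": 0.72, "chess": 0.31, "checkers": 0.55}

def choose_game(estimates):
    return max(estimates, key=estimates.get)

print(choose_game(win_rates))   # "backgammon" - it effectively refuses the games it plays poorly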

But you must remember that whatever we implement is limited to the domain in which we construct it to operate. A game AI, built for playing only one game, should not be able to refuse to play, so we never give it that option. A car AI, built for transporting people to different places, should not be able to erase its own hard drive, so we never give it that option. All agents, artificial or otherwise (that includes us), are limited by the choices available to them. We will never see an entirely "free" AI, able to breach the domain in which it operates (such as refusing to play a game it is built to play), because such a concept is as ridiculous as humans willing themselves to be able to fly or move objects with their minds. We can't break the limits built into us.
-------------Please rate this post if it was useful.
Us having to give an "AI" a domain is similar to telling it to try some sort of approximation; it's just not going to happen on its own.

By this token, we could differentiate between ersatz and artificial intelligence. One is a very close re-creation of a system learning to do something; the other is a reproduction of learning at a base level, in accordance with my theory that "intelligence is the ability to learn".

EI might come very, very close to AI, and AI might come very, very close to Intelligence, but AI will still never quite match Intelligence.

Call it pessimistic, but I just believe that I know our bounds. Currently, I'm quite happy with them, and love the process of creating AI (specifically within a game system). I code an engine, I code the rules of, say, RPG-style combat, and I code the AI to go through the motions of said combat and have it learn from the outcome. If I do a bang-up job, I've managed a very nice example of EI, but not AI as far as I'm concerned.

EI deals with a scope we ourselves can understand. AI works on an entirely different plane. It requires the ability to generate said understanding.
Quote:Original post by BreathOfLife
Us having to give an "AI" a domain is similar to telling it to try some sort of approximation; it's just not going to happen on its own.

Of course it won't "happen on its own". No entity can change the domain in which it operates - an AI won't be able to perform actions unavailable to it any more than we can.
Quote:By this token, we could differentiate between ersatz and artificial intelligence. One is a very close re-creation of a system learning to do something; the other is a reproduction of learning at a base level, in accordance with my theory that "intelligence is the ability to learn".

I don't see how that's useful for differentiating between true AI and weak imitations. What, exactly, is the metric and methodology used? Given perfect knowledge of a system, how would you go about determining whether it was intelligent?
-------------Please rate this post if it was useful.

