How close are we?

Started by
33 comments, last by Madster 18 years, 4 months ago
How close are we to having "real" AI? Have any "intelligent" computer systems been implemented yet? And have they acted in unpredicted ways (have they become more than the substance of their programming)? I am hinting that true intelligence would act in unpredictable ways. Maybe predictability is not a good or reliable indicator of intelligence. I think what I really meant was: has demonstrable learning taken place in an AI system, and has that learning then been applied to change behavior?

Also, I have been wondering if trying to start by mimicking the human mind is too much to bite off at once. Have any AI researchers started by trying to mimic the intelligence and learning of, say, a house cat, or a fly? I would want to take the simplest observable "mind" that can still demonstrate actual learning, and try to duplicate it. I would not start with a human mind. Am I far off base?

Also, would it be easier to observe learning in a CLOSED system versus an OPEN system? For example, if I wanted to give an AI system an opportunity to "learn", should I let it roam the planet, or keep it in a laboratory?
Quote:Original post by Tom Knowlton
How close are we to having "real" AI?

Have any "intelligent" computer systems been implemented yet? And have they acted in unpredicted ways (have they become more than the substance of their programming?)

See this thread: What is considered an advanced AI?

The other phenomenon is called emergent behavior, and it happens all the time: the Black & White avatar looking for food and trying to eat itself, or MASSIVE, developed to model the battles in LOTR, where some of the humans and orcs decided the best action was to run away from the battlefield, etc.
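To make "emergent behavior" concrete, here is a minimal, purely illustrative sketch; none of this code is from Black & White or MASSIVE, and the names (`best_meal`, `edible`, the utility formula) are invented for the example. A utility-based agent ranks every entity by a single "edibility over distance" score, and because nothing in the rule excludes the agent itself, "eat yourself" can emerge without ever being programmed:

```python
def best_meal(agent, entities):
    """Pick the entity with the highest hunger utility: edibility / (1 + distance)."""
    def utility(e):
        dist = abs(e["pos"] - agent["pos"])
        return e["edible"] / (1.0 + dist)
    return max(entities, key=utility)

# The avatar is itself an entity in the world, and creatures are made of meat,
# so it carries a nonzero edibility score like everything else.
avatar = {"name": "avatar", "pos": 0, "edible": 0.3}
world = [
    avatar,
    {"name": "grain", "pos": 5, "edible": 1.0},  # tasty but far away
    {"name": "rock", "pos": 1, "edible": 0.0},   # close but inedible
]

# Distance to itself is zero, so the avatar's own score (0.3) beats the
# distant grain (1.0 / 6) -- the "bug" emerges from perfectly sensible rules.
print(best_meal(avatar, world)["name"])
```

The point is that no line of code says "try to eat yourself"; the behavior falls out of the interaction between a simple rule and the world it runs in.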

Quote:Original post by Tom Knowlton
I am hinting that true intelligence would act in unpredictable ways. Maybe that is not a good or reliable indication of intelligence (predictability).

I think maybe what I meant was...has demonstrable learning taken place in an AI system and then has the learning been applied to change behavior?

Most of the original bars for "artificial intelligence" were passed long ago. The bar was raised, and then met again. Whatever we consider "good AI" this year will be considered just a normal adaptive program a few years from now.

Quote:Original post by Tom Knowlton
Also, I have been wondering if trying to start with mimicking the human mind is a bit too much to bite-off at once. Have some AI researchers started off with trying to mimic the intelligence and learning of, let's say, a house cat, or a fly?

Why would you want to? Some argue that it is impossible to create true intelligence at all, whether for religious reasons, because of random probabilities, or because of the open debate over whether the Universe is nondeterministic.

It *is* possible (though not practical) to record all the actions, nuances, and other behaviors of a human and, with technology and algorithms already developed, make a replica that would respond the same way. If nothing else, it could memorize responses and use Markov chains to work out what to do next.
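A minimal sketch of the Markov-chain idea: train a first-order chain on a log of observed "action words", then pick a plausible next action from what historically followed the current one. The toy log and all names here are invented for illustration:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def next_word(chain, word):
    """Pick a continuation seen after `word`, weighted by observed frequency."""
    followers = chain.get(word)
    if not followers:
        return None  # never seen anything follow this word
    return random.choice(followers)

# Train on a toy log of recorded behavior.
log = "open door enter room close door open window".split()
chain = build_chain(log)
print(next_word(chain, "open"))  # either "door" or "window"
```

Real systems would use longer histories (higher-order chains) and far more data, but the principle is the same: replay statistics of recorded behavior rather than reason about it.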


Quote:Original post by Tom Knowlton
I would want to take the simplest observable "mind"...but one that can demonstrate actual learning...and try to duplicate it. I would not start with a human mind. Am I far off base?

Also...would it be easier to observe learning in a CLOSED system versus and OPEN system?

For example, if I wanted to allow an AI system to have an opportunity to "learn"...should I let it roam the planet...or keep it in a laboratory?

Those are all fairly advanced AI topics. You might look into two of the ACM's special interest groups: SIGART (Artificial Intelligence) and SIGCHI (Computer-Human Interaction). The AAAI (American Association for Artificial Intelligence) has also run challenges for AI robots to attend, and minimally participate in, an academic conference.

All that and more would be appropriate for graduate research and machine learning classes.

frob.
Quote:Original post by Tom Knowlton
How close are we to having "real" AI?


Don't misunderstand the term. "Artificial Intelligence" is the study and application of different methodologies to solve problems usually handled by humans. Such methods already exist, so it is real.
I like the Walrus best.
Quote:Original post by owl
Quote:Original post by Tom Knowlton
How close are we to having "real" AI?


Don't misunderstand the term. "Artificial Intelligence" is the study and application of different methodologies to solve problems usually handled by humans. Such methods already exist, so it is real.


Based on his questions about modeling the human mind, and having the model be externally inspected, I'm pretty sure he's not talking about AI in the computer theory sense. [grin]

Back in grad school, a similar question was put to our machine learning professor.

His response was basically "Why would you want to exactly match human behavior? Humans are lazy, slow to think, slow to learn, have suboptimal skills on thinking, don't work well when tired or otherwise impaired, have terrible memory, and there are lots of stupid people that are basically mental rejects. We use computers and machine learning so that we don't have to deal with all that."

frob.
Quote:Original post by owl
Quote:Original post by Tom Knowlton
How close are we to having "real" AI?


Don't misunderstand the term. "Artificial Intelligence" is the study and application of different methodologies to solve problems usually handled by humans. Such methods already exist, so it is real.


So there are computer programs that can learn and apply what they learn?
Here's some food for thought.

Hubert L. Dreyfus Interview: Artificial Intelligence

Quote:
...
Who had taken over philosophy?

The people in the AI lab, with their "mental representations," had taken over Descartes and Hume and Kant, who said concepts were rules, and so forth. And far from teaching us how it should be done, they had taken over what we had just recently learned in philosophy, which was the wrong way to do it. The irony is that 1957, when AI, artificial intelligence, was named by John McCarthy, was the very year that Wittgenstein's Philosophical Investigations came out against mental representations, and Heidegger already in 1927 -- that's Being and Time -- wrote a whole book against mental representations. So, they had inherited a lemon. They had taken over a loser philosophy. If they had known philosophy, they could've predicted, like me, that it was a research program. They took Cartesian modern philosophy and turned it into a research program, and anybody who knew enough philosophy could've predicted it was going to fail. But nobody else paid any attention. That's why I got this prize. I saw what they did and I predicted it, and that's the end of them.

You write -- I think it's in the Internet book -- that "in cyberspace, then, without our embodied ability to grasp meaning, relevance slips through our non-existent fingers." And you go on to say, "The world is a field of significance organized by and for beings like us with our bodies, desire, interest and purpose."

Right. That's where Merleau-Ponty comes in. None of that would be said by Heidegger. Heidegger was just interested in the way we could disclose the world without mental representation. But Merleau-Ponty sees that there isn't anything mental about it. It's the basic level. Our body and its skills for dealing with things and getting an optimal grip on things is what we need to understand, and then it becomes clear that computers just haven't got it. They haven't got bodies and they haven't got skills.
...


"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote:Original post by Tom Knowlton
Quote:Original post by owl
Quote:Original post by Tom Knowlton
How close are we to having "real" AI?


Don't misunderstand the term. "Artificial Intelligence" is the study and application of different methodologies to solve problems usually handled by humans. Such methods already exist, so it is real.


So there are computer programs that can learn and apply what they learn?


If by "learn and apply" you mean "catching the rule" of a problem, then yes: neural networks can do that to some extent by function approximation.
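A minimal sketch of what "catching the rule" can look like: a single artificial neuron (the building block of a neural network) trained by gradient descent learns the OR function from examples alone. This is a deliberately tiny illustration, not a full network; nothing about OR is hard-coded, the rule is inferred from the four training examples:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the OR function, given only as input/output examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 0.5         # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target        # gradient of the cross-entropy loss
        w[0] -= lr * err * x1     # nudge each weight against the error
        w[1] -= lr * err * x2
        b -= lr * err

# After training, the neuron reproduces the rule it was never told.
for (x1, x2), target in data:
    out = sigmoid(w[0] * x1 + w[1] * x2 + b)
    print((x1, x2), "->", round(out), "target", target)
```

A single neuron can only capture linearly separable rules (it would fail on XOR, for instance); stacking layers of such units is what gives full networks their broader approximation power.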

I pointed out the specific meaning of the term because it is sometimes thought that the only objective of AI is to re-create an entire human inside a computer, and that until this is achieved everything else is a failure or isn't worthwhile. That's the notion I felt the OP had about it.
I like the Walrus best.
We are nowhere close to a "real" AI. We do not even have a reliable FRAMEWORK to base study on, nor a complete idea of what QUALITIES an intelligent system should have. Several frameworks have been proposed, each claiming to be THE one: Minsky proposed a logical framework, Grossberg proposed one based on neural networks, and just recently Jeff Hawkins' On Intelligence proposed a "memory prediction framework" (which, incidentally, repeats earlier cognitive psychology research). All of these have, to varying degrees, provided more or less only dead ends.

The field is mired in the simple question of defining "intelligence". If we can't characterize intelligent systems well, how do we know whether we're moving toward a promising goal? Further, the biological data for intelligence just isn't there, and especially not the data needed for accurate mathematical models. Of central importance to the simulation community is determining how much detail is needed to elicit complex behavior, but the mathematical simulations are so complex that little beyond hope drives any effort past predicting biological results. (Neural networks are an attempt at formalizing neural theory, but their ability to learn is limited.)

If you clarify what you mean by "true intelligence", though, you will likely find some neural models out there that meet your criteria. The problem, again, is that we don't really know what makes something intelligent!
h20, member of WFG 0 A.D.
AI and monsters have a lot in common: once you know what they are, they stop being that thing.

A sea-monster is only a sea-monster until you know what it is; then it's just another animal.

A chess AI is only 'intelligent' until you understand how it works; after that, it's just an algorithm.
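To underline how unmysterious "just an algorithm" can be, here is a complete minimax search for a toy game rather than chess (the game, `best_move`, and all names are invented for illustration): two players alternately take 1 or 2 objects from a pile, and whoever takes the last object wins.

```python
def minimax(pile, maximizing):
    """Exhaustively score a position: +1 if the maximizing player can force
    a win from here, -1 if the minimizing player can. Players alternately
    take 1 or 2 objects; taking the last object wins."""
    if pile == 0:
        # The previous player just took the last object and won,
        # so the side now to move has lost.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Choose the take (1 or 2) that leads to the best outcome for the mover."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, False))

# With 4 objects, taking 1 leaves the opponent a losing pile of 3.
print(best_move(4))
```

Chess engines dress this same loop up with pruning, heuristics, and enormous lookup tables, but the core really is this mechanical: enumerate moves, score the resulting positions, pick the best.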

I think the big question is how close we are to a sentient, conscious entity. To know that, we have to know what a sentient, conscious entity is, and WHY it is. If we can answer that, then we ourselves are no longer 'sea monsters' and can fairly compare ourselves to an artificial creation.

I think there is no answer to that question.

We don't even know what it means to be a conscious, sentient being ourselves. The only truth, in this regard, is 'I think, therefore I am'. Of course the questions 'What is I?' and 'What is think?' still go unanswered.

Don't think about it too hard. These questions have driven people to depression and madness. :)

Will




------------------http://www.nentari.com
Quote:Original post by RPGeezus
AI and monsters have a lot in common: once you know what they are, they stop being that thing.

A sea-monster is only a sea-monster until you know what it is; then it's just another animal.

A chess AI is only 'intelligent' until you understand how it works; after that, it's just an algorithm.

I think the big question is how close we are to a sentient, conscious entity. To know that, we have to know what a sentient, conscious entity is, and WHY it is. If we can answer that, then we ourselves are no longer 'sea monsters' and can fairly compare ourselves to an artificial creation.

I think there is no answer to that question.

We don't even know what it means to be a conscious, sentient being ourselves. The only truth, in this regard, is 'I think, therefore I am'. Of course the questions 'What is I?' and 'What is think?' still go unanswered.

Don't think about it too hard. These questions have driven people to depression and madness. :)

Will




Oh, don't worry...I don't stay up late thinking about it.

BUT....I am curious about it, to be sure.

That really was my question: how close are we to a sentient, conscious entity?

That is my question.



The appeal for me, or one reason it appeals to me, is the raw computational power and storage capability of computers. If you could harness THAT with some form of real intelligence, it would open up new worlds for us (it would seem).

This topic is closed to new replies.
