[Slightly Offtopic] John Searle and computational intelligence

85 comments, last by MDI 20 years, 2 months ago
quote:Original post by Timkin
Many people feel Searle's argument is wrong; however, few can tell you why it is wrong (if indeed it is).

I think the reason is that most people who wish to counter Searle's argument think that strong AI should be provable.

This is based upon the not unreasonable assumption that true and false form a true dichotomy. However, in logic, an assertion is sometimes neither true nor false. Edit: Actually, sometimes an assertion is both true and false, rather than neither.

In formal systems, such an assertion is a Gödel sentence. I believe that 'X possesses understanding' is a real-world Gödel sentence.
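For reference, the usual shape of a Gödel sentence, from the standard fixed-point construction (just a sketch, not tied to any particular formal system):

G ↔ ¬Prov(⌜G⌝)

That is, G asserts its own unprovability. If the system is consistent, G is unprovable, and under a slightly stronger assumption (or Rosser's refinement) its negation is unprovable too, so within the system G is neither provable nor refutable.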

CoV

[edited by - Mayrel on January 21, 2004 12:38:01 AM]
I'll build machines with souls. Then I'll program them all to worship my God. Then I'll go to heaven for saving so many souls!
The person in the Chinese room doesn't understand Chinese, but the program he is executing (acting as a computer) can be said to understand Chinese. Thus, the computer isn't intelligent, but the program running on the computer is.

The brain isn't intelligent, but the synaptic responses trained during the last X years (you choose X) are.
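Here's a minimal sketch of that "systems reply" in Python. The two rules are hypothetical placeholders (not a real Chinese rule book), and the operator function stands in for the person in the room: it blindly pattern-matches symbols, yet the operator-plus-rule-book system produces competent answers.

# Toy "Chinese room": the operator blindly matches incoming symbols
# against a rule book and copies out the listed response.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",   # "How are you?" -> "I'm fine, thanks"
    "你是谁": "我是一个程序",   # "Who are you?" -> "I am a program"
}

def operator(symbols):
    """Follows the rule book without understanding a single symbol."""
    return RULE_BOOK.get(symbols, "请再说一遍")  # fallback: "please say that again"

print(operator("你好吗"))  # the *system* answers competently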
enum Bool { True, False, FileNotFound };
quote:Original post by Anonymous Poster
I'll build machines with souls. Then I'll program them all to worship my God. Then I'll go to heaven for saving so many souls!

Unless, of course, it turns out your God was the wrong one.

CoV
Thanks for that perspective on Searle's argument Mayrel... most enjoyable!
quote:Original post by Timkin
Thanks for that perspective on Searle's argument Mayrel... most enjoyable!

Hmm. I'm hoping you mean that in a good way. :/

CoV
Was the Chinese room more about spoken Chinese, or about written Chinese? (Traditional, or simplified?)


BTW, what is the best applicable definition of intelligence?

I used:
Intelligence is an ability, or a measure of an ability, to react properly to (or solve) unknown things (situations).

It worked well, but I'm curious if someone did better. (More usable in game development, not more studious-sounding.)
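For what it's worth, here is one way to turn that definition into a number, as a rough Python sketch. Everything here (measure_intelligence, the toy agent, the is_proper checker) is a made-up stand-in for illustration; the point is only that the score is computed over situations the agent was never prepared for.

def measure_intelligence(agent, training_set, test_set, is_proper):
    """Score = fraction of *unseen* situations the agent handles properly."""
    novel = [s for s in test_set if s not in training_set]
    if not novel:
        return 0.0
    return sum(is_proper(s, agent(s)) for s in novel) / len(novel)

# Toy usage: an "agent" deciding how to handle an obstacle by its speed.
# The checker is deliberately trivial here, just to show the shape of the test.
agent = lambda speed: "dodge" if speed > 1.0 else "block"
is_proper = lambda speed, action: action == ("dodge" if speed > 1.0 else "block")
print(measure_intelligence(agent, [0.5, 2.0], [0.5, 1.5, 3.0], is_proper))  # -> 1.0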
quote:Original post by Mayrel
quote:Original post by Timkin
Thanks for that perspective on Searle's argument Mayrel... most enjoyable!

Hmm. I'm hoping you mean that in a good way. :/


Yes, I did... unlike some around here, I'm not usually sarcastic!

Timkin
Wow, lots of great insights here!

quote:
Timkin: Will embodiment/situatedness deal with the SGP? I honestly don't know that one.


That's the beauty of it; grounding is not a problem with embodiment. Everything you deal with comes in via sensors and goes out via effectors, so your AI is designed according to this principle from the start. Historically, the symbol grounding problem was only a problem because the algorithms in the 1970s weren't dealing with anything real, just hypothetical world models!

Besides, consider grounding as a skill with different levels. A search-based AI from the 70s would have poor grounding and need hard-coded interfaces to interact with anything; robots have much better grounding, as they can deal with arbitrary obstacles and simple objects; human intelligence is capable of keeping higher-level abstractions grounded, to some extent... though RPGeezus seems to be having trouble with that.
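To illustrate, a rough sense-act loop in Python. The range sensor is a random stand-in for real hardware and the 1.0 m threshold is arbitrary; the point is that every symbol the agent uses ("obstacle", "clear") is defined by the measurement that produced it, so it never needs grounding after the fact.

import random

def read_range_sensor():
    """Stand-in for real hardware: distance to the nearest obstacle, in metres."""
    return random.uniform(0.0, 5.0)

def perceive(distance):
    # The symbol is *defined by* the measurement that produced it.
    return "obstacle" if distance < 1.0 else "clear"

def act(symbol):
    return "turn" if symbol == "obstacle" else "forward"

for _ in range(3):
    d = read_range_sensor()
    print(round(d, 2), perceive(d), act(perceive(d)))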


quote:
RPGeezus:
No one can say, without facing arguments from different directions, that a cat, a dog, an ant, or even a simple bacterium is capable of consciousness or intelligence. If we cannot agree on these terms within the realm of the living, then to me it seems pointless to argue about higher-order intelligence amongst the non-living.


I have issues with your notions of how intelligence or consciousness should be defined. Intelligence is not a boolean thing (I say this every time; you were right, Timkin). Both cats and dogs are intelligent (and conscious) to certain degrees, but you have to test them. Both intelligence and consciousness are concepts, so you need empirical measurements to establish to what degree. (I'll stop there before I type a few pages.)


Anyway, I'm not sure how this prevents the discussion. Strong AI is defined as recreating human-level intelligence using a method that mimics the human brain (inspired by cognitive science). However you define it, you can test it.

To me, Searle's Chinese room argument does not show it's not possible. The fact that the human inside does not understand what's going on doesn't preclude the whole system being intelligent (i.e. capable of generating Chinese, as defined for this problem). A single neuron does not understand the entire problem either!
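That last point can be made concrete with a small sketch (Python, with hand-picked weights assumed purely for illustration). Each unit only thresholds a weighted sum; none of them "knows" XOR, yet the wired-up network computes it.

# Each neuron just thresholds a weighted sum; no single unit "knows" XOR.
def neuron(inputs, weights, bias):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor(a, b):
    h_or   = neuron([a, b], [1, 1], -0.5)        # fires if a OR b
    h_nand = neuron([a, b], [-1, -1], 1.5)       # fires unless a AND b
    return neuron([h_or, h_nand], [1, 1], -1.5)  # AND of the two

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0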

Alex


AiGameDev.com


quote:Original post by alexjc
Anyway, I'm not sure how this prevents the discussion. Strong AI is defined as recreating human-level intelligence using a method that mimics the human brain (inspired by cognitive science). However you define it, you can test it.

That's not the usual definition.

A "weak AI" is an AI that behaves at least as intelligently as a human, including simulating what the IEP calls cognitive mental states .

A "strong AI" is an AI that actually has "cognitive mental states" -- an AI that is aware of the processing it performs.
quote:
To me, Searle's Chinese room argument does not show it's not possible.

No, it doesn't. My opinion is that it only shows that you can't prove strong AI either way. At best, it shows you can't prove it with that kind of experiment.
quote:
The fact that the human inside does not understand what's going on doesn't preclude the whole system being intelligent (i.e. capable of generating Chinese, as defined for this problem). A single neuron does not understand the entire problem either!

Searle isn't arguing against the notion of intelligent machines, he's arguing against the notion of machines that somehow have an 'awareness' of the processing that they perform.

Edit: Searle is proposing an argument from ignorance: "Since I don't understand how the parts of a computer could be aware, the parts of a computer cannot be aware."

CoV

[edited by - Mayrel on January 23, 2004 6:52:12 AM]

This topic is closed to new replies.
