Archived

This topic is now archived and is closed to further replies.

[Slightly Offtopic] John Searle and computational intelligence

Recommended Posts

MDI    266
I'm doing a joint degree in CS and AI, and have to take a module in philosophy for AI. We studied John Searle's arguments against the validity of the Turing and Total Turing tests, via his Chinese Room argument. His arguments seem pretty strong, and from them it appears we'll never actually be able to create a real intelligent agent with computational methods (am I right?). This is seriously disheartening, as it would then appear I'm doing a degree in a "waste of time". I'm sure someone came up with arguments against Searle, but I can't really think of any. Are there any? Are there serious objections to Searle's arguments? [This isn't a homework question; I'm just trying to work out how we could possibly create an intelligent agent if Searle is correct.]

alexjc    457
I have serious reservations about Searle's argument... although very eloquent, it makes many dubious assumptions. We discuss it regularly in #ai on freenode. I'll see if I can dig up the major counter-arguments.

By the way, there are many good web pages on the subject. Google it.

Alex

Edit: Start here -- Searle's Chinese Box: Debunking the Chinese Room Argument


alexjc    457
Well, almost an MSc from Edinburgh -- 18 months after completion. I'll spare you the stories about Scottish bureaucracy!

Still, I'm now back in Edinburgh for more.

TerranFury    142
The point of AI isn't to create HAL. It's to teach computers to autonomously solve problems. You don't need consciousness, just a good algorithm. People don't want sentient machines anyway, just better automatic dishwashers.

GameCat    292
Suppose it was possible to replace single neurons in the brain with an electronic implant that functioned identically to the original neuron. Given the same input, the implant would provide the same output as the original neuron would have. Now, assume this procedure was simple, risk-free and painless. You get one of your neurons replaced as a guinea pig for AI research. Did you just lose consciousness? No? So we replace another neuron. And another, and another...

Obviously, at some point all of your neurons will have been replaced and you'll be living proof that machines CAN think. Or is that a different kind of consciousness? Maybe you believe that consciousness resides somewhere other than in the neurons, like the soul, but then you don't need Searle's fancy arguments to debunk AI.
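
To put the premise in programmer's terms, here is a minimal sketch (Python, with made-up names): the only assumption the thought experiment needs is that two implementations behind the same interface, returning identical outputs for identical inputs, are interchangeable.

```python
from typing import Protocol, Sequence

class Neuron(Protocol):
    def fire(self, inputs: Sequence[float]) -> float:
        """Map incoming signals to an output signal."""
        ...

class BiologicalNeuron:
    def __init__(self, weights: Sequence[float], threshold: float):
        self.weights, self.threshold = list(weights), threshold

    def fire(self, inputs: Sequence[float]) -> float:
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1.0 if total >= self.threshold else 0.0

class SiliconImplant(BiologicalNeuron):
    """Same transfer function, different substrate."""
    pass

def functionally_identical(a: Neuron, b: Neuron, probes) -> bool:
    # The thought experiment hinges on this holding for every possible input.
    return all(a.fire(x) == b.fire(x) for x in probes)

original = BiologicalNeuron([0.5, 0.5], 0.7)
implant = SiliconImplant([0.5, 0.5], 0.7)
print(functionally_identical(original, implant, [[1.0, 0.0], [1.0, 1.0]]))  # True
```

Whether real neurons can actually be captured by any such fixed input/output function is, of course, exactly what the replies below question.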

Lorenzo    122
quote:
Original post by alexjc
I have serious reservations about Searle's argument... although very eloquent, it makes many dubious assumptions. We discuss it regularly in #ai on freenode. I'll see if I can dig up the major counter-arguments.

By the way, there are many good web pages on the subject. Google it.

Alex

Edit: Start here -- Searle's Chinese Box: Debunking the Chinese Room Argument



The paper starts with:
John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence.

I think Searle's Chinese Room argument is NOT against artificial intelligence, but against STRONG artificial intelligence.

Personally, I don't believe the human mind is Turing-computable at all; however, the point is that even if the behaviour of the human mind can be mimicked by a Turing machine, it will not be the same: I can't imagine a machine with consciousness. Are feelings a by-product of some function of the brain?

This is Searle's point, IMO, and I find it quite reassuring... no Matrix, for now.

Lorenzo    122
quote:
Original post by GameCat
Suppose it was possible to replace single neurons in the brain with an electronic implant that functioned identically to the original neuron. Given the same input, the implant would provide the same output as the original neuron would have.

It's not known exactly how neurons work... it's not even known whether a machine that "functions identically to the original neuron" can be constructed (this is what "computable" means).


MDI    266
quote:
Original post by TerranFury
The point of AI isn't to create HAL. It's to teach computers to autonomously solve problems. You don't need consciousness, just a good algorithm. People don't want sentient machines anyway, just better automatic dishwashers.


Isn't that weak AI, i.e. simulating human behaviour?

Ferretman    276
quote:
Original post by GameCat
Suppose it was possible to replace single neurons in the brain with an electronic implant that functioned identically to the original neuron. Given the same input, the implant would provide the same output as the original neuron would have. Now, assume this procedure was simple, risk-free and painless. You get one of your neurons replaced as a guinea pig for AI research. Did you just lose consciousness? No? So we replace another neuron. And another, and another...

Obviously, at some point all of your neurons will have been replaced and you'll be living proof that machines CAN think. Or is that a different kind of consciousness? Maybe you believe that consciousness resides somewhere other than in the neurons, like the soul, but then you don't need Searle's fancy arguments to debunk AI.


Well, truth be told, we don't actually know that what you suggest is true (though personally I believe it is). This was more or less the premise that Asimov used for his story "The Bicentennial Man" decades ago (forget the atrocious Robin Williams movie, please).

We will be faced with very interesting court cases in the coming years regarding computer sentience, and they'll get worse when we begin procedures that do as you're suggesting above. A method that can replace failing neurons with computer chips might be a boon to Alzheimer's patients, for example, but sooner or later somebody is gonna try to claim that the patient is no longer "human". That'll be fun.

I think I concur with a previous poster, however. People are way more interested in having smarter devices than in having intelligent devices per se.




Ferretman

ferretman@gameai.com
GameAI.Com is Changing Providers--details on the site!

From the High Mountains of Colorado




MikeD    158
Personally, I find Rodney Brooks' approach to AI via embodiment and situatedness tends to overcome the problems of John Searle's Chinese Room argument against strong AI.

Essentially, by being a body in the world and adapting through experience of that world, you become structurally coupled to that world, and your intelligence is an offshoot of that coupling.

Try reading Brooks' papers "Intelligence Without Reason" and "Intelligence Without Representation"; they're his seminal papers on the topic.

I can go into more detail about why I think his ideas solve the Chinese Room argument if you like. Preferably when I'm less hungover...

Mike

Timkin    864
Having studied (formerly) and read a lot on the philosophy of mind and AI, I can add a few useful thoughts to the discussion (I hope!).

First, the general response to Searle's Chinese Room experiment (gedankenexperiment) is the systems reply: that the whole system (inputs, outputs, journal/lookup table, letter checker, etc.) understands. This is supposed to answer the problem that Searle is really trying to address in his thought experiment: the symbol grounding problem. This is still an unresolved problem in AI, and while there are some interesting theories around, no one yet has an adequate explanation (at least IMHO) of how internal symbols are grounded in external representations.

Many people feel Searle's argument is wrong; however, few can tell you why it is wrong (if indeed it is). One author you should definitely read, who provides an excellent analysis of Searle's ideas and excellent ideas to counter them, is Stevan Harnad. In particular, he offers a graded hierarchy of Turing Tests, ranging from T1 to T5, with Turing's original test as T2. T5, he explains, is really a test of intelligence comparable to human intelligence, whereas T2 is really just a test of symbol processing.

Personally, I tend to agree with a lot of Stevan's ideas; in particular, that to pass T5, your agent has to live, learn and interact with the real world in order to understand it. I personally think that such agents are possible, but not likely in my lifetime.

Cheers,

Timkin

Guest Anonymous Poster
Not only does he try to argue against strong AI, doesn't he also argue against evolution? Conscious animals have come into existence through a long process that started from elements so simple that they can't be said to be intelligent or conscious on their own. How could a conscious being develop from a non-conscious origin, just by multiplying and organizing into a larger whole? The whole is a lot more than its parts.

I have no idea how his arguments could be taken seriously by any atheist. That the brain, and thus what we know as consciousness, could well be created within a computer is the only logical option for a world without fictional entities like God or the soul. But maybe religious programmers have some fun arguing against him, or converting to his faith... Have fun!

RolandofGilead    100
quote:
Original post by Timkin
...
symbol grounding problem. This is still an unresolved problem in AI, and while there are some interesting theories around, no one yet has an adequate explanation (at least IMHO) of how internal symbols are grounded in external representations


I'm very curious: has anyone considered that it doesn't matter? How far did they get?

quote:

...
to pass T5, your agent has to live, learn and interact with the real world in order to understand it. I personally think that such agents are possible, but not likely in my lifetime.

Cheers,

Timkin

I'm also very curious as to how real this real world needs to be.

MikeD    158
quote:
Original post by Timkin
...
symbol grounding problem. This is still an unresolved problem in AI, and while there are some interesting theories around, no one yet has an adequate explanation (at least IMHO) of how internal symbols are grounded in external representations


If the definition of a thing is your every interaction with that thing, then symbol grounding occurs through experience in the world and the taxonomy of that experience into entities delineated by various symbols.

Your definition of "tree" is a result of your every interaction with entities of type "tree". Of course, various delineations have various levels of arbitrariness in their construction (and from an objective perspective all delineations are arbitrary). Our definitions of "tree" don't have to match perfectly, which is good, because they can't. Unless you were me, you could not possibly have the same experience grounding (or, more accurately, creating) the symbol "tree". Communication between two people is only as relevant as their shared mapping between experience and symbols.

So we ground symbols by interaction (or rather, symbols are the abstraction of interaction), hence the Brooksian concepts of embodiment and situatedness are integral to any system that is to have "understanding" of its world.
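
To make that concrete, here is a minimal sketch (Python, with hypothetical names) of symbols as abstractions of interaction: a label means nothing except the history of experiences the agent has filed under it.

```python
from collections import defaultdict
from statistics import mean

class GroundedLexicon:
    """Symbols as abstractions of interaction: a label is defined only by
    the experiences (feature vectors) that were tagged with it."""

    def __init__(self):
        self.experiences = defaultdict(list)

    def interact(self, label: str, features: list[float]) -> None:
        # Every encounter with a "tree" adds to what "tree" means for this agent.
        self.experiences[label].append(features)

    def prototype(self, label: str) -> list[float]:
        # The agent's working definition: an abstraction over its own history.
        return [mean(dim) for dim in zip(*self.experiences[label])]

# Two agents with different histories end up with different (but hopefully
# overlapping) groundings for "tree"; communication works only to the extent
# that their prototypes line up.
a, b = GroundedLexicon(), GroundedLexicon()
a.interact("tree", [10.0, 0.9]); a.interact("tree", [14.0, 0.8])  # e.g. height, leafiness
b.interact("tree", [3.0, 0.7])
print(a.prototype("tree"), b.prototype("tree"))
```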

Mike

RPGeezus    216
IMHO it seems like the cart has been put before the horse.

Putting my own personal opinion aside, we have yet to reach some form of agreement on even animal intelligence, let alone human intelligence.

No one can say, without facing arguments from different directions, that a cat, a dog, an ant, or even a simple bacterium is capable of consciousness or intelligence. If we cannot agree on these terms within the realm of the living, then to me it seems pointless to argue about higher-order intelligence amongst the non-living.

At what stage is a fetus intelligent? What about a baby with brain damage? To what extent must the brain operate for it to be deemed human? Would a living brain in a jar be considered human? What if the brain were chemically assembled in a manufacturing facility using someone's DNA as a blueprint?

I don't think it's even worth discussing, to be quite honest. It has no bearing on the world at large, and will not propel human understanding in any new direction. So what if nobody thinks HAL is intelligent? He'll still refuse to open the pod bay doors, regardless of what you think!

Will

KrishnaPG    122
Dear,

Don't worry much about such nonsense as that Chinese Room or any other rooms, because:

The man in the Chinese Room (looking up the question and returning the corresponding answer) would never pass the Turing Test.

Trust Turing, or at least trust me: if you give the room the same question twice, you get the same reply twice... but a machine that successfully PASSES the Turing Test wouldn't be so foolish as to send you the same answer twice.

That means this "wise" machine has "something" that differentiates it from the mere "lookup man".
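
To see the distinction being drawn here, a minimal sketch (Python, entirely made-up names and table): a stateless lookup room versus an agent that remembers what it has already been asked.

```python
class LookupRoom:
    """Searle's man with a table: stateless, so identical questions
    always get identical answers."""
    def __init__(self, table: dict[str, str]):
        self.table = table

    def reply(self, question: str) -> str:
        return self.table.get(question, "...")

class ConversationalAgent:
    """A toy agent that keeps conversational state, so repeating a
    question changes the reply."""
    def __init__(self, table: dict[str, str]):
        self.table = table
        self.history: list[str] = []

    def reply(self, question: str) -> str:
        answer = self.table.get(question, "...")
        if question in self.history:
            answer = "You already asked me that. " + answer
        self.history.append(question)
        return answer

table = {"How are you?": "Fine, thanks."}
room, agent = LookupRoom(table), ConversationalAgent(table)
print(room.reply("How are you?"), room.reply("How are you?"))    # identical replies
print(agent.reply("How are you?"), agent.reply("How are you?"))  # second reply differs
```

(Of course, a defender of Searle can reply that the table could just as well be indexed on the whole conversation so far; the sketch only illustrates the difference this post is pointing at.)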

So don't worry about such lookups... Of course, I can show you many such holes in this Chinese Room theory, but let's not waste our time on it.

Hope I have not wasted your time.

Thanking you,
Yours,
P.GopalaKrishna.

PS: However, if you wish, take this clue and think it through; you can discover all the holes yourself. All the best.




Timkin    864
Just a few thoughts/replies...

First, let me make it clear what I'm talking about here... the "symbol grounding problem", as defined by Harnad, is:

quote:
from Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?



'Can we just ignore it?'... well, not really... not if we're trying to manipulate symbols in a computer that is supposed to understand what it is the symbols represent (as opposed to just having another lookup value for the 'meaning' of the symbol).

Will embodiment/situatedness deal with the SGP? I honestly don't know that one. I think it goes a long way towards answering some questions, simply because we humans are a perfect example of embodiment and situatedness (unless you postulate a soul). However, if we take Brooks' approach, we then have to wonder about 'what is intelligence', or perhaps, more reasonably, 'at what point does intelligence arise', as RPGeezus points out.

As humans, we expect that we can identify intelligence by comparing it to ourselves; however, that's not a true measure of intelligence... that's only a true measure of human intelligence. This is why a lot of people try to claim that a cat or a dog is not intelligent... they are, they just don't have human intelligence. For that matter, a human doesn't have simian intelligence, even though we're closely related. If we talked about mammalian intelligence, then perhaps we could begin to talk about degrees of intelligence...

Anyway, I digress...

I think the symbol grounding problem is important in that it relates to issues of computation as much as it does to issues of intelligence. What is 4x4? Is it 16, or a type of recreational vehicle? If it's 16, why? Is it possible that symbols can have meanings all of their own and aren't just parasitic on the meanings in our heads? Are symbols always intrinsic to the system that utilises them (as Mike is suggesting), because we learn them in such a way that they are a fundamental part of our intelligence, or can we actually replace them and still be intelligent (as many might intuitively believe we can)?

Personally, I think that intelligence is inherently bound up in the relationships between symbols (as opposed to the symbols themselves) and is reflected in the ability to use and extend such relationships... which fits in with my theory of consciousness... but that's another story for another day! Thus an agent's intelligence is limited by its ability to store, utilise and extend relationships between the symbols it knows, and the perception of its intelligence is bounded by the commonality of symbols that the perceiver and the agent share.


Cheers,

Timkin

Timkin    864
Here's a quick thought for people to ponder, which came to me as I was checking over my last post...

If a human baby were born into this world and all other humans were removed from it, along with all references to the human race, would this baby grow up to be intelligent (ignore the obvious issue of its safety and ability to survive... just assume that it can)? If it were still intelligent, would that be because of some innate, hard-wired predisposition to learn and apply symbols, or for some other reason? If an intelligent human were suddenly placed on the planet when the baby had grown to age 5, would they perceive it as intelligent (or merely expect it to be because it is human)? What about at age 20?

What if the baby were raised by apes: would it have simian intelligence, human intelligence, or no intelligence at all?

Do we perceive intelligence merely because we expect it, and therefore overlook intelligence where we don't expect it (i.e. in anything not human)?

Cheers,

Timkin

Mayrel    348
quote:

I'm sure someone came up with arguments against Searle, but I can't really think of any. Are there any? Are there serious objections to Searle's arguments?



The Chinese Room is empty rhetoric.

Those of you reading are probably familiar with it, but I'll briefly summarise it in case you are not:

1. Suppose there are two rooms, each with a slit through which slips of paper can be passed back and forth.

2. Outside are a bunch of Chinese experimenters.

3. Inside one room is a computer which has been programmed to read Chinese questions from slips of paper, and print out the correct answers on slips of paper that are sent back.

4. Inside the other is a man who does not speak Chinese, but he does have a large table of all questions he might be asked and the correct responses to them. The table is set up so that he will always produce the same responses as the computer.

5. The experimenters post their questions to each room, and receive their answers.

6. It is impossible to distinguish one room from another from their external behaviour.

7. The man who does not speak Chinese does not understand the questions or their responses, despite the fact that, to the experimenters, he appears to be making perfect sense.

8. Because the external behaviours of the man and the computer are identical, we can substitute the identities and conclude that the computer cannot understand the questions or their responses.

Generally, this is followed by subjective assertions of meaning and understanding, and by begging the question by blandly stating that a machine couldn't understand things.

There are two fallacies in concluding 8.

A. Suppose the man didn't have a table and had instead memorised all the possible questions and their responses. Such a man could not be distinguished from a man who understood Chinese. We must therefore conclude that "understanding" is a purely subjective, internal notion, and that it cannot be proved or disproved that someone "understands" what he or she is saying.

B. Since understanding has no external proof of its presence, we must conclude that step 8 is in error: Searle has incorrectly substituted non-identities; he states that because the external behaviours of two things are indistinguishable, so must the internal behaviours be.

From A and B, we must conclude that the experiment has nothing to say about internal, subjective notions such as understanding.

If you're not convinced, consider a slight modification to the experiment:

4. Inside the other is a man who speaks Chinese. The computer has been programmed with his answers to the questions, so we know they will provide the same answers.

7. The man who speaks Chinese can understand the questions and their responses.

8. Because the external behaviours of the man and the computer are identical, we can substitute the identities and conclude that the computer can understand the questions and their responses.

This, then, is the fallacy in Searle's argument. He presents only half of a dichotomy: by setting up an experiment in which a computer is shown to be indistinguishable from a man who doesn't understand Chinese, he fails to also assert that a computer can, in exactly the same way, be shown to be indistinguishable from a man who does understand Chinese.

Ultimately, Searle proves that strong AI is undecidable.

CoV

Mayrel    348
quote:
Original post by Timkin
Many people feel Searle's argument is wrong; however, few can tell you why it is wrong (if indeed it is).


I think the reason is that most people who wish to counter Searle's argument think that strong AI should be provable.

This is based upon the not unreasonable assumption that true and false are a true dichotomy. However, in logic, sometimes an assertion is neither true nor false. Edit: Actually, sometimes an assertion is both true and false, rather than neither.

In formal systems, such an assertion (one that can be neither proved nor refuted) is a Gödel sentence. I believe that 'X possesses understanding' is a real-world Gödel sentence.

CoV


Guest Anonymous Poster
I'll build machines with souls. Then I'll program them all to worship my God. Then I'll go to heaven for saving so many souls!

hplus0603    11356
The person in the Chinese Room doesn't understand Chinese, but the program he is executing (he is acting as a computer) can be said to understand Chinese. Thus, the computer isn't intelligent, but the program running on the computer is.

The brain isn't intelligent, but the pattern of synaptic responses trained over the last X years (you choose X) is.
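
As a minimal sketch of that distinction (Python, with entirely hypothetical names and rule tables): the executor below is a generic loop that knows nothing about Chinese; whatever "knowledge" there is lives in the rule table it happens to be handed.

```python
# A "program" here is just data: a rule table mapping inputs to outputs.
Program = dict[str, str]

def execute(program: Program, message: str) -> str:
    """The executor (Searle's man, or the CPU): it mechanically applies
    whatever rules it is given and has no idea what the symbols mean."""
    return program.get(message, "?")

chinese_qa: Program = {
    "你好吗？": "我很好，谢谢。",  # "How are you?" -> "I'm fine, thanks."
}
english_qa: Program = {
    "How are you?": "I'm fine, thanks.",
}

# The same executor runs either table just as happily; nothing about the
# loop changes, only the program it is given.
print(execute(chinese_qa, "你好吗？"))
print(execute(english_qa, "How are you?"))
```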

Mayrel    348
quote:
Original post by Anonymous Poster
I'll build machines with souls. Then I'll program them all to worship my God. Then I'll go to heaven for saving so many souls!

Unless, of course, it turns out your God was the wrong one.

CoV
