Archived

This topic is now archived and is closed to further replies.

MDI

[Slightly Offtopic] John Searle and computational intelligence


Recommended Posts

I'm doing a joint degree in CS and AI, and have to take a philosophy module for AI. We studied John Searle's arguments against the validity of the Turing and Total Turing tests, via his Chinese room argument. His arguments seem pretty strong, and it appears from them that we'll never actually create a genuinely intelligent agent with computational methods (am I right?). This is seriously disheartening, since it would then appear I'm doing a degree in a "waste of time". I'm sure someone has come up with arguments against Searle, but I can't really think of any. Are there any? Are there serious objections to Searle's arguments? [This isn't a homework question; I'm just trying to work out how we could possibly create an intelligent agent if Searle is correct.]
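For anyone who hasn't seen the argument spelled out: the Chinese room imagines someone answering Chinese questions purely by following a rulebook, with no understanding of the symbols. A toy sketch of that rule-following (the rulebook entries here are made-up placeholders, not a real conversation system):

```python
# Toy sketch of the Chinese room: the "operator" answers purely by
# rulebook lookup, manipulating symbols it does not understand.
# The RULEBOOK entries are hypothetical examples, not real rules.

RULEBOOK = {
    "你好": "你好！",          # a greeting maps to a greeting
    "你会思考吗？": "当然。",    # "can you think?" maps to "of course"
}

def chinese_room(symbols: str) -> str:
    """Return the rulebook's response, or a default 'please repeat'."""
    return RULEBOOK.get(symbols, "请再说一遍。")

print(chinese_room("你好"))  # a fluent-looking answer, zero understanding
```

Searle's point is that passing a Turing test this way shows nothing about understanding; the counter-arguments (e.g. the systems reply) say the understanding belongs to the whole room, not the operator.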

I have serious reservations about Searle's argument... although very eloquent, it makes many dubious assumptions. We discuss it regularly in #ai on freenode. I'll see if I can dig up the major counter-arguments.

By the way, there are many good web pages on the subject. Google it.

Alex

Edit: Start here -- Searle's Chinese Box: Debunking the Chinese Room Argument

[edited by - alexjc on January 16, 2004 8:01:49 AM]

Well, almost an MSc from Edinburgh -- 18 months after completion. I'll spare you the stories about Scottish bureaucracy!

Still, I'm now back in Edinburgh for more

The point of AI isn't to create HAL. It's to teach computers to autonomously solve problems. You don't need consciousness, just a good algorithm. People don't want sentient machines anyway, just better automatic dishwashers.

Suppose it were possible to replace single neurons in the brain with an electronic implant that functioned identically to the original neuron: given the same input, the implant would produce the same output the original neuron would have. Now assume this procedure were simple, risk-free and painless. You get one of your neurons replaced as a guinea pig for AI research. Did you just lose consciousness? No? So we replace another neuron. And another, and another...

Obviously, at some point all of your neurons will have been replaced and you'll be living proof that machines CAN think. Or is that a different kind of consciousness? Maybe you believe that consciousness resides somewhere other than in the neurons, like the soul, but then you don't need Searle's fancy arguments to debunk AI.
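The replacement argument leans on one formal fact: if each implant computes the same input-output function as the neuron it replaces, the system's overall behaviour cannot change at any step. A minimal model of that invariant (the "neurons" here are just stand-in threshold functions, an assumption for illustration):

```python
# Toy model of gradual neuron replacement: a "brain" is a list of
# neuron functions; swapping each for a functionally identical implant
# never changes the system's input->output behaviour.

def biological_neuron(x: float) -> float:
    # stand-in activation: fire (1.0) if the input exceeds zero
    return 1.0 if x > 0 else 0.0

def implant(x: float) -> float:
    # physically different, functionally identical to the neuron above
    return 1.0 if x > 0 else 0.0

def run(brain, inputs):
    # apply each neuron to its input and collect the outputs
    return [neuron(x) for neuron, x in zip(brain, inputs)]

brain = [biological_neuron] * 5
inputs = [-1.0, 0.5, 2.0, -0.3, 0.0]
before = run(brain, inputs)

for i in range(len(brain)):          # replace one neuron at a time
    brain[i] = implant
    assert run(brain, inputs) == before  # behaviour is unchanged at every step

print(before)  # -> [0.0, 1.0, 1.0, 0.0, 0.0]
```

Whether functional identity of this kind preserves consciousness, rather than just behaviour, is exactly the question Searle and his critics dispute.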

quote:
Original post by alexjc
I have serious reservations about Searle's argument... although very eloquent, it makes many dubious assumptions. We discuss it regularly in #ai on freenode. I'll see if I can dig up the major counter-arguments.

By the way, there are many good web pages on the subject. Google it.

Alex

Edit: Start here -- Searle's Chinese Box: Debunking the Chinese Room Argument

[edited by - alexjc on January 16, 2004 8:01:49 AM]


The paper starts with:
John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence.

I think Searle's Chinese room argument is NOT against artificial intelligence, but against STRONG artificial intelligence.

Personally, I don't believe the human mind to be T-computable at all; however, the point is that even if the human mind's behaviour can be mimicked by a Turing machine, it will not be the same: I can't imagine a machine with consciousness. Are feelings a by-product of a function of the brain?

This is Searle's point, IMO, and I find it quite reassuring... no Matrix, for now.
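For readers unfamiliar with the term: "T-computable" means computable by a Turing machine, i.e. by a finite rule table driving a read/write head over a tape. A minimal simulator makes the idea concrete (the bit-flipping machine below is just an illustrative example):

```python
# Minimal Turing machine simulator: a function is "T-computable" if some
# rule table like the one below computes it. Rules map (state, symbol)
# to (symbol to write, head move, next state).

def run_tm(rules, tape, state="start", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip every bit, halt at the first blank cell.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(FLIP, "1011"))  # -> 0100
```

The strong-AI claim under dispute is that running the right rule table would not merely mimic a mind but BE one; Searle's room grants the mimicry and denies the rest.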

quote:
Original post by GameCat
Suppose it was possible to replace single neurons in the brain with an electronic implant that functioned identically to the original neuron. Given the same input, the implant would provide the same output as the original neuron would have.

It's not known exactly how neurons work... it's not even known whether a machine that "functioned identically to the original neuron" can be constructed (that is what "computable" means).


quote:
Original post by TerranFury
The point of AI isn't to create HAL. It's to teach computers to autonomously solve problems. You don't need consciousness, just a good algorithm. People don't want sentient machines anyway, just better automatic dishwashers.


Isn't that weak AI, i.e. simulating human behaviour?

quote:
Original post by GameCat
Suppose it was possible to replace single neurons in the brain with an electronic implant that functioned identically to the original neuron. Given the same input, the implant would provide the same output as the original neuron would have. Now, assume this procedure was simple, risk-free and painless. You get one of your neurons replaced as a guinea pig for AI research. Did you just lose consciousness? No? So we replace another neuron. And another, and another...

Obviously, at some point all of your neurons will have been replaced and you'll be living proof of the fact that machines CAN think. Or is that a different kind of consciousness? Maybe you believe that consciousness resides someplace else than in the neurons, like the soul, but then you don't need Searle's fancy arguments to debunk AI.


Well, truth be told, we don't actually know that what you suggest is true (though personally I believe it is). This was more or less the premise Asimov used for his story "The Bicentennial Man" decades ago (forget the atrocious Robin Williams movie, please).

We will be faced with very interesting court cases in the coming years regarding computer sentience, and they'll get worse when we begin procedures like the one you're suggesting above. A method that can replace failing neurons with computer chips might be a boon to Alzheimer's patients, for example, but sooner or later somebody is going to try to claim that the patient is no longer "human". That'll be fun.

I think I concur with a previous poster, however. People are way more interested in having smarter devices than in having intelligent devices per se.




Ferretman

ferretman@gameai.com
GameAI.Com is Changing Providers--details on the site!

From the High Mountains of Colorado



[edited by - Ferretman on January 16, 2004 4:00:47 PM]
