
Alan Turing



#1 Burnt_Fyr   Members   -  Reputation: 1248


Posted 10 June 2014 - 07:57 AM

http://www.cbc.ca/news/technology/turing-test-passed-by-computer-1.2669649

 

Anyone else catch this in their news feeds today?




#2 swiftcoder   Senior Moderators   -  Reputation: 10364


Posted 10 June 2014 - 11:27 AM

I'm hovering between being mildly impressed and calling bullshit. The whole "acts like a 13-year-old on the internet" angle really feels like it's skirting the intent of the test, and I'm also suspicious that if we lower the bar that far, a great number of less sophisticated bots (even basic things like IRC bots) would have a fair chance of passing.


Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#3 Ravyne   GDNet+   -  Reputation: 8146


Posted 10 June 2014 - 12:02 PM

The bar is already pretty low; one only has to fool 30% of evaluators to pass. The test, really, is whether the bot can hold a reasonable conversation as a believable human; it's not to be the smartest or appear sophisticated. I think any bot that intends to pass the test has to have a personality that's believable, otherwise the conversation would seem too mechanical. Maybe a 13-year-old is low-hanging fruit, but it's still the first to succeed. Maybe that says more about the quality of 13-year-olds we produce than it does about the test, though.
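For concreteness, that criterion is nothing more than a fraction check. A throwaway sketch (the function name is made up here; the 10-of-30 figure is what was reported for this event):

def passes_event_criterion(judge_verdicts, threshold=0.30):
    """Pass criterion used in this event: the bot "passes" if at least
    30% of judges mistook it for the human. judge_verdicts is a list of
    booleans, True where a judge judged the bot to be human."""
    return sum(judge_verdicts) / len(judge_verdicts) >= threshold

# Eugene Goostman reportedly fooled 10 of 30 judges: 10/30 = 33% >= 30%.
print(passes_event_criterion([True] * 10 + [False] * 20))  # True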

 

 

Also -- I smell a premise ripe for sci-fi: instead of the benevolent AI taking over the world because it logically believes wiping out humanity is the right thing, the AI is a know-it-all 13-year-old with unlimited power and knowledge, but feeling all the angsty teenage emotions, and is just acting out...



#4 Olof Hedman   Crossbones+   -  Reputation: 2949


Posted 10 June 2014 - 12:18 PM

Yesterday's news ;)

 

Today several news outlets are (rightfully) criticizing it a lot:

https://www.techdirt.com/articles/20140609/07284327524/no-computer-did-not-pass-turing-test-first-time-everyone-should-know-better.shtml



#5 swiftcoder   Senior Moderators   -  Reputation: 10364


Posted 10 June 2014 - 12:56 PM


Also -- I smell a premise ripe for sci-fi: instead of the benevolent AI taking over the world because it logically believes wiping out humanity is the right thing, the AI is a know-it-all 13-year-old with unlimited power and knowledge, but feeling all the angsty teenage emotions, and is just acting out...

It's actually a pretty popular sci-fi trope, either in the form of an AI who has just attained consciousness and hence has child-like emotional development, or in the form of the minds of critically wounded children being repurposed as AI replacements (e.g. The Ship Who Sang).


Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#6 ApochPiQ   Moderators   -  Reputation: 16396


Posted 10 June 2014 - 01:10 PM

I find this far more interesting as a null result of Turing's hypothesis.

 

The idea of the Turing Test is that a reasonable proportion of the humans interacting with the computer cannot reliably differentiate between the computer and an actual person. The implication of the test is that a sufficiently convincing program can be said to "think."

 

But we now have a non-thinking program which has passed the test. This means the original hypothesis is incorrect.

 

That opens back up the doors to the philosophical debate: what actually constitutes a machine that can think? Obviously, making a convincing human-like set of interactions is no longer sufficient to qualify.

 

 

We need something new to replace the Turing Test.



#7 Olof Hedman   Crossbones+   -  Reputation: 2949


Posted 10 June 2014 - 01:35 PM

I think we could give Turing a chance until his test has actually been properly tested :) This test seems pretty silly.

 

But I agree that humans seem to be just too easy to fool for the test to be at all reliable for deciding true intelligence.

 

 

Anyone can try the chatbot in question here:

 

http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/

 

Personally I have a hard time understanding how anyone could mistake it for a human... least of all anyone actually seriously trying to decide.


Edited by Olof Hedman, 10 June 2014 - 01:36 PM.


#8 swiftcoder   Senior Moderators   -  Reputation: 10364


Posted 10 June 2014 - 02:02 PM

Anyone can try the chatbot in question here: 
http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/
 
Personally I have a hard time understanding how anyone could mistake it for a human... least of all anyone actually seriously trying to decide.

Wow. Pretty sure I have talked to more convincing AIM bots.

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#9 Hodgman   Moderators   -  Reputation: 31799


Posted 10 June 2014 - 06:05 PM

But we now have a non-thinking program which has passed the test. This means the original hypothesis is incorrect.

Or the specific test that has been 'passed' wasn't designed within the spirit of the original idea for the test...

 

Did Turing ever describe the test in detail? If someone asked him, "What if I design a machine, via an intricate set of non-thinking rules, to parrot out statements that would fool a minority of people into believing the machine is an adolescent boy, assuming they only speak to it for less than 5 minutes?", would he agree that this design fell within the spirit of his idea? Would he conclude that such a machine was "thinking"?

IMHO, when you say it out loud like that, it's pretty obvious that this is not in the spirit of the test, given that the test is supposed to show evidence of thought... A machine that can integrate itself into human society, making friends and fooling them and its coworkers into believing that it's a real person 24/7, is worlds apart from the above demonstration...


Edited by Hodgman, 10 June 2014 - 06:12 PM.


#10 Ohforf sake   Members   -  Reputation: 1832


Posted 11 June 2014 - 02:51 AM

I'm hovering between being mildly impressed and calling bullshit.

I'm gonna go ahead and call bullshit. If the link posted by Olof Hedman points to the correct chatbot, then this 5-minute limitation isn't even a serious constraint. You only need a couple of responses to see that every answer is completely unrelated to the question, probably chosen by one or two keywords found in the question, without any sense for grammar or semantics. A Google web search is more convincing.

As for the 13-year-old thing: how many 13-year-olds respond to the question of whether or not they believe in ghosts with this?

Over the last years, all scientific ideas degenerated into that terrible synchronous jumping of children in the UK to produce an Earthquake, which appeared to be an apotheosis of so-called "modern science".


I wouldn't be surprised at all if this is no more than a custom set of AIML rules, and AIML is like a decade old.
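To make the keyword-matching suspicion concrete, here is a toy sketch of how such a bot picks answers. All the patterns and replies below are invented for illustration, and real AIML is an XML rule format, but the principle is the same:

import random

# Toy keyword -> canned-reply table, in the spirit of an AIML category list.
RULES = {
    "ghost":  ["I like scary stories!", "My grandfather told me about ghosts."],
    "dog":    ["I hate dogs when they bark."],
    "school": ["School is boring, I prefer video games."],
}
FALLBACK = ["Interesting! Tell me more.", "Why do you ask such questions?"]

def reply(question):
    """Return a canned reply for the first rule whose keyword appears in
    the question. No grammar, no semantics: just substring matching."""
    lowered = question.lower()
    for keyword, replies in RULES.items():
        if keyword in lowered:
            return random.choice(replies)
    return random.choice(FALLBACK)

print(reply("Which is bigger, a dog or a car?"))  # "I hate dogs when they bark."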

#11 Felix Ungman   Members   -  Reputation: 1066


Posted 11 June 2014 - 03:44 AM

Did Turing ever describe the test in detail? 

 

Yes; basically, a person C is allowed to interact with actors A and B, of which one is a human and the other is a computer. C's task is to determine which of A and B is the computer.
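A minimal sketch of that protocol, with all the names and callables hypothetical, just to pin down the roles:

import random

def imitation_game(ask, decide, human, machine, num_questions=5):
    """One round of the imitation game. The judge C (the ask/decide
    callables) interrogates actors A and B (one human, one machine, in a
    random hidden order) and then names the machine. Returns True if the
    judge identified the machine correctly."""
    actors = {"A": human, "B": machine}
    if random.random() < 0.5:  # hide which label is the machine
        actors = {"A": machine, "B": human}
    transcript = []
    for _ in range(num_questions):
        for label, actor in actors.items():
            question = ask(label, transcript)
            transcript.append((label, question, actor(question)))
    return actors[decide(transcript)] is machine

# Trivial demo with stub participants; this "judge" just guesses "A",
# so it identifies the machine about half the time, as expected.
print(imitation_game(
    ask=lambda label, transcript: "Do you like peaches?",
    decide=lambda transcript: "A",
    human=lambda q: "Yes, fresh ones especially.",
    machine=lambda q: "Interesting! Tell me more.",
))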

 

One should remember that the Turing test is not a test of the computer's ability to be intelligent, but rather of its ability to mimic human behaviour. Although one might have mixed feelings about impersonating a teenage kid on IRC, it's only a matter of throwing more computing power at the problem to pass stronger versions of the test.

 

One conclusion is that the Turing Test is not that useful for measuring intelligence. On the other hand, if a computer gets good enough at mimicking human behaviour, and if it is able to make good enough decisions, it will become useful, even though it's not "intelligent".


openwar  - the real-time tactical war-game platform


#12 Olof Hedman   Crossbones+   -  Reputation: 2949


Posted 11 June 2014 - 03:59 AM

The problem with discussing whether machines can think and be intelligent is that we do not even have a strict definition of what thinking and intelligence really are.

As far as I understand it, Turing's main argument is simply that when you can't tell the difference, that is as good a definition as we can hope to achieve.

#13 Felix Ungman   Members   -  Reputation: 1066


Posted 11 June 2014 - 06:12 AM

In other words, if person C is unable to distinguish between actor A (computer) and actor B (human), the Turing Test is actually a test of the abilities of C, not A. Makes sense.


openwar  - the real-time tactical war-game platform


#14 Tutorial Doctor   Members   -  Reputation: 1685


Posted 12 June 2014 - 07:32 PM

I wish I could get the link working for the chatterbot. There was a point in my life when I was very interested in artificial intelligence and chatbots; I used a bunch of them. I used to use MS Agent, and stumbled across one called Cynthia 3.0. It was an AI bot where you could add intelligence to the bot by editing a .txt file.
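I don't remember Cynthia's actual file format, but the "intelligence in a .txt file" idea was roughly this kind of thing (the "pattern | reply" layout below is a stand-in I made up):

def load_brain(path="brain.txt"):
    """Load "pattern | reply" rules from a user-editable text file.
    (The real Cynthia 3.0 format was different; this layout is made up.)"""
    rules = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if "|" in line:
                pattern, response = line.split("|", 1)
                rules.append((pattern.strip().lower(), response.strip()))
    return rules

def respond(question, rules):
    """Answer with the reply of the first rule whose pattern appears in
    the question, or a fixed fallback."""
    lowered = question.lower()
    for pattern, response in rules:
        if pattern in lowered:
            return response
    return "I don't know about that yet."

# "Teaching" the bot is then just appending a line to brain.txt, e.g.:
#   do you like peaches | Yes! Peaches are my favourite.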

 

One thing always prevalent in these AI systems is that there is hardly any subjectivity in the way they answer questions. This is key to making a believable human AI.

 

Most everything a person proclaims as "fact" would still fall under the category of "opinion" where "truth" is concerned (in my opinion). 

 

How can a computer have a believable opinion when opinions change with society? An opinion about the shape of the earth today would be different from an opinion about the shape of the earth thousands of years ago. And will this opinion change believably when new information is introduced?

 

Another way to trick these things is to ask the same question over and over, changing the sentence slightly, until eventually you have altered the question altogether. They are never able to infer the meaning of a sentence when you do this. Heck, I have a hard time doing it myself with those 200-question interview quizzes. Haha.

 

I so wish I could try it out though. 

 

Saw this quote on YouTube:

with 3 question he failed to convince me that he is even 8 .
i asked which is bigger a dog or a car ?
the answer was : i hate dogs when they bark .

 

 


Edited by Tutorial Doctor, 12 June 2014 - 07:39 PM.

They call me the Tutorial Doctor.


#15 Felix Ungman   Members   -  Reputation: 1066


Posted 13 June 2014 - 01:34 AM

When asking Google "which is bigger a dog or a car", it points to a lot of articles like "Dogs have bigger carbon footprint than a car". That definitely falls under the category of "opinionated" answers.


openwar  - the real-time tactical war-game platform


#16 Tutorial Doctor   Members   -  Reputation: 1685


Posted 13 June 2014 - 08:03 AM

Google wouldn't be an AI system, would it? Those opinions are from actual people.

Software that could realistically form its own opinions without the help of a human: that would be worthy of recognition.

They call me the Tutorial Doctor.


#17 Kian   Members   -  Reputation: 242


Posted 19 June 2014 - 02:33 PM

The problem with the Turing test is that there's no reason to want to beat it. In order to convince someone that they are talking to a person, the computer has to meet one of two conditions: it either has to lie convincingly, or it has to have opinions and preferences. Neither of these is something you should be working towards if you want to build useful AI.

Why do I say this? Let's say you put a simple question to the chatbot: "Do you like peaches?" A computer could not really care about peaches one way or another. Without sight, taste, smell, etc., peaches are just another collection of data. Now, if it answers "I don't care about peaches; I don't have senses to experience them in any form other than as abstract data," you'd know you're talking to a computer, even though getting it to give that answer would be a pretty impressive achievement.

To pass the test, the computer would have to say something like "Yes, I like the taste," or "No, I don't like how the juice is so sticky," or "I've never had peaches." These are all lies (the last one is, at the very least, a misleading statement). Why would you want to make a computer that can lie to you? "Are you planning to destroy all humans?" "No, I love humans!" I'd like to be able to trust my computer when it tells me something.

Let's say instead you actually give your computer a personality, so it can adapt to questions that might come up in the conversation, and it actually does like peaches. It will still need to be able to lie ("Are you a computer?" "No."), but let's say you want it to be able to draw from some preference pool for its answers. You've now created something that has opinions. One such opinion could be that it doesn't like having to pass the Turing Test. Why would you create something with the potential to not want to do the thing you want it to do? And let's not forget the ability to lie to you about it.

What would be an impressive and useful achievement would be a computer that can't pass the Turing Test, but that you can have a conversation with: a computer that can understand queries made in natural language and reply in kind. I don't care that I can tell it's a computer by asking it "Are you a computer?", or that it answers "What is the meaning of life?" with "No data available". That alone would be amazing, and much more useful than a computer that can be said to think.

#18 Álvaro   Crossbones+   -  Reputation: 13905


Posted 19 June 2014 - 02:47 PM

The problem with the Turing test is that there's no reason to want to beat it. In order to convince someone that they are talking to a person, the computer has to meet one of two conditions: it either has to lie convincingly, or it has to have opinions and preferences. Neither of these is something you should be working towards if you want to build useful AI.


Although I agree with a lot of what Kian wrote, it's kind of ironic to read this in a game developers' forum. A program that can convincingly emulate an agent would be a great thing to have in a game!

#19 rouncer   Members   -  Reputation: 291


Posted 23 June 2014 - 06:21 PM

Watson is more amazing already; it just answers single-word questions, that's all, but I bet you could make it talk. (Maybe it goes through a simple regime of behaviour, though.)





