Alan Turing

Started by
17 comments, last by rouncer 9 years, 10 months ago

Did Turing ever describe the test in detail?

Yes, basically a person C is allowed to interact with actors A and B of which one is a human and the other is a computer. C's task is to determine who of A and B is the computer.

One should remember that the turing test is not a test of the computer's ability to be intelligent, but rather the ability to mimic human behaviour. Although one might have mixed feelings about impersonating a teenage kid on IRC, it's only a matter of throwing more computing power to pass more stronger versions of the test.

One conclusion is that the Turing Test is not that useful for measuring intelligence. On the other hand, if a computer gets good enough with mimicking human behaviour, and if it is able to make good enough decisions, it's will become useful, even though it's not "intelligent".

openwar - the real-time tactical war-game platform

Advertisement
The problem with discussing if machines can think and be intelligent is that we do not even have a strict definition of what thinking and intelligence really is.

As far as I understand it, Turings main argument is simply that when you cant tell the difference, that is as good a definition that we can hope to achieve.

In other words, if person C is unable to distinguish between actor A (computer) and actor B (human), the Turing Test is actually a test about the abilities of C - not A. Makes sense.

openwar - the real-time tactical war-game platform

I wish I could get the link working for the chattertbot. There was a point in my life when I was very interested in artificial intelligence and chat-bots.Used a bunch of them. I used to use MS-Agent, and stumbled across one called Cynthia 3.0. It was an AI bot, where you could add intelligence to the bot via editing a .txt file.

One thing always prevalent in these AI systems is that there is hardly any subjectivity to the way they answer questions. This is key when making a believable human AI.

Most everything a person proclaims as "fact" would still fall under the category of "opinion" where "truth" is concerned (in my opinion).

How can a computer have a believable opinion when opinions change with the society? So, an opinion about the shape of the earth today would be different from an opinion about the shape of the earth, thousands of years ago. And will this opinion change believably when new information is introduced?

Another way to trick these things is to ask the same question over and over, changing the sentence slightly, where eventually you have altered the question altogether. They never are able to infer the meaning of a sentence when you do this. Heck, I have a hard time doing it myself with those 200 question interview quizzes. haha.

I so wish I could try it out though.

Saw this quote on youtube:

with 3 question he failed to convince me that he is even 8 .
i asked which is bigger a dog or a car ?
the answer was : i hate dogs when they bark .

They call me the Tutorial Doctor.

When asking Google "which is bigger a dog or a car" it points to a lot of articles like "Dogs have bigger carbon footprint than a car". Definitely falls under the category of "opinionated" answers.

openwar - the real-time tactical war-game platform

Google wouldn't be an AI system would it? Those opinions are from actual people.

A software that could realistically form it's own opinions without help of a human; that would be worthy of recognition.

They call me the Tutorial Doctor.

The problem with the Turing test is that there's no reason to want to beat it. In order to convince someone that they are talking to a person, the computer has to meet one of two conditions: it either has two lie convincingly, or it has to have opinions and preferences. Neither of which are things you should be working towards if you want to build useful AI.

Why do I say this? Let's say you make a simple question to the chat bot: "Do you like peaches?". A computer could not really care about peaches one way or another. Without sight, taste, smell, etc, peaches are just another collection of data. Now, if it answers "I don't care about peaches, I don't have senses to experience them in any form other than as abstract data," you'd know you're talking to a computer. So even though it would be a pretty impressive achievement to get the computer to answer that, you'd know you're talking to a computer.

To pass the test, the computer would have to say something like "Yes, I like the taste,", or "No, I don't like how the juice is so sticky," or "I've never had peaches." These are all lies (the last one is at the very least a misleading statement). Why would you want to make a computer that can lie to you? "Are you planning to destroy all humans?" "No, I love humans!". I'd like to be able to trust my computer when it tells me something.

Lets say instead you actually give your computer a personality, in order for it to adapt to questions that might come up in the conversation, and it actually does like peaches. It will still need to be able to lie ("Are you a computer?" "No."), but let's say you want it to be able to draw from some preference pool for the answers. You've now created something that has opinions. One such opinion could be that it doesn't like having to pass the Turing Test. Why would you create something with the potential to not want to do the thing you want it to do? And let's not forget, the ability to lie to you about it.

What would be an impressive and useful achievement would be to have a computer that can't pass the Turing Test, but that you can have a conversation with. Meaning a computer that can understand queries made in natural language and reply in kind. I don't care that I can tell it's a computer by asking it "Are you a computer", or that it answers "What is the meaning of life" with "No data available". That alone would be amazing, and much more useful than a computer that can be said to think.

The problem with the Turing test is that there's no reason to want to beat it. In order to convince someone that they are talking to a person, the computer has to meet one of two conditions: it either has two lie convincingly, or it has to have opinions and preferences. Neither of which are things you should be working towards if you want to build useful AI.


Although I agree with a lot of what Kian wrote, it's kind of ironic to read this in a game-developers' forum. An program that can convincingly emulate an agent would be a great thing to have in a game!

watson is more amazing already, it just answers single word questions thats all, but you could make it talk i bet. (maybe it goes through a simple resigme of behaviour tho)

This topic is closed to new replies.

Advertisement