How would we even recognise an AI anyway?

Started by
10 comments, last by GameDev.net 18 years, 10 months ago
Sorry if this is too off-topic, but I've seen other philosophical questions related to AI in here before. Suppose one day a computer sprouted a consciousness. How would we recognise it if it did? How do we know it hasn't already happened? It's my understanding that nobody can even define what consciousness is, never mind formulate a test to recognise it. Bearing this in mind, is the search for a strong AI not premature? Should it not be stopped, in case we are unwittingly committing virtual murder on these machines? (A bit facetious, I know.)
I periodically go around downtown, asking people unsolvable questions to see if they explode.

I haven't found one yet, so I think I'm safe over here.
It would be wearing a pink carnation in its left lapel and carrying a copy of the Financial Times under its right arm.

This type of thing would almost certainly be programmed in.
You can't be sure that any other being has a consciousness. The best reason to think that other human beings are conscious is that they resemble you in many other respects, and you are conscious, so you conclude that they probably are conscious too. But that's no real proof. I actually think that the question of whether something/somebody has a consciousness doesn't even make sense. It all depends on your view of the world. I'll explain.

Here is a primitive view of the world where the question makes sense. The world is populated by animate objects and by inanimate objects. The difference between them is that each animate object has a soul (in Latin, "anima"). The word "soul" here can be replaced by "spirit" or "consciousness". All objects are subject to the laws of physics, but animate objects also have behaviours that are controlled by their souls. This is a view of the world that humans find very natural. In primitive cultures, poorly understood phenomena (fire, wind, lightning...) are often assigned souls (spirits or gods) to explain their behaviour. Modern religions still keep souls attached to at least people, and often they have souls that are responsible for the behaviour of the world as a whole, or for some aspects of it (gods).

If you view the world using the animate/inanimate dichotomy, the question you are asking is "can we tell if a machine is animate?". My view of the world is that physics is all there is, and the distinction between animate and inanimate is artificial; souls are just a handy construct of the human mind that doesn't correspond to anything real (just as the spirit of fire doesn't correspond to anything real). In my view of the world, your question just doesn't make sense.
Will we ever find true AI? The answer will probably be yes and no. We'll find it eventually, all right, but we'll denounce the hell out of it before we truly accept it. We'll tear it apart, examine it, explain it, and once we can do all that, we'll just say it's a mechanical process and there's nothing special about it. And that's a fact.

For years AI researchers have torn at each other's ankles over what counts as intelligent. Someone comes up with something that is intelligent, or seems to be. Someone else tears it apart and says, "No, it's all based on this formula/equation/model, so it's mechanical." And the vicious cycle goes something like that. Currently, about the only thing left from classical AI that is still considered somewhat intelligent is neural nets. Of course, that's mainly because of the unpredictability of neural nets, even when given the same training data multiple times...
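That "same data, different result" effect is easy to demonstrate even without a full neural net. The sketch below uses a toy single-layer perceptron (a stand-in, much simpler than the nets the post is talking about): trained twice on identical data but from different random initial weights, both runs learn the task, yet they settle on different final weights.

```python
import random

# Truth table for logical AND: same training data for both runs.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train(seed, epochs=100, lr=0.1):
    """Train a 2-input perceptron with the classic update rule,
    starting from weights drawn with the given random seed."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), target in DATA:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge weights toward the target only when wrong.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

(w_a, b_a), (w_b, b_b) = train(seed=1), train(seed=2)
print(w_a, b_a)  # both solve AND, but the weights differ between runs
print(w_b, b_b)
```

Since AND is linearly separable, the perceptron convergence theorem guarantees both runs end up correct; the randomness only shows up in *which* separating weights they land on.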

Like my advisor once told me, AI can be a fairly self-destructive field of research, and it'll probably stay that way for quite a while.
The Turing Test is a classic: if you can put a human on one end of a teletype, and alternate between random people, and the AI, on the other end (with no visibility except for the textual communication), and the human can't say which is which, then you have a "true" artificial human intelligence.

Quote:My view of the world is that physics is all there is


If sub-nuclear particle interaction and random particle decay (alpha, beta, gamma, etc.) are all there is, then such a thing as "free choice" does not exist. In fact, nobody is responsible for, or has a choice about, anything. While things aren't exactly pre-determined (assuming decay is random), they're not "changeable", because there's nothing around to change them. If I knew for certain this was the truth, I would kill myself. However, I wouldn't, and couldn't, because, well, there would be no "me" there ;-)

I believe in free will, and I believe that free will will, in the end, be what enables us to tell apart other things with free will from things that lack it (i.e., follow the random principles). It's not like this matters in our day-to-day lives, anyway, so I choose (or was randomly affected to choose?) the option that seems less depressing.
enum Bool { True, False, FileNotFound };
Quote:Original post by hplus0603
The Turing Test is a classic: if you can put a human on one end of a teletype, and alternate between random people, and the AI, on the other end (with no visibility except for the textual communication), and the human can't say which is which, then you have a "true" artificial human intelligence.


You should check out the program ELIZA, written by Joseph Weizenbaum back in the 1960s. It was thought to be the holy grail. It was a text-based psychotherapist program, and when you talked to it, it really did feel like talking to a real psychiatrist. People were completely awe-struck. Until someone got the source and found that it was nothing more than a string parser that looked for specific nouns, adjectives and adverbs in your sentences and used those to ask you questions... just like a real psychiatrist.

Then after that, there was PARRY, a program like ELIZA that emulated a paranoid schizophrenic person. In most tests, real psychiatrists concluded that they were dealing with a genuine paranoid schizophrenic. Of course, the people being tested didn't know it was a program.

And of course, the next test was to put PARRY up against ELIZA and all hell broke loose and the result was utter gibberish. :p

So, the Turing test really isn't that accurate anymore, if you ask me. Sure, it used to be a great idea, but specialised expert systems that have no clue what they're really talking about, like ELIZA, could easily pass the test. Then again, these days the Turing test will probably be biased the other way, because people will be looking out for programs like ELIZA.
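The string-parsing trick described above can be sketched in a few lines. This rule set is a hypothetical illustration, far cruder than the real ELIZA (which ranked keywords and applied richer transformations), but it shows the mechanism: match a pattern, reflect the pronouns, echo a fragment back as a question.

```python
import re

# Ordered (pattern, response-template) rules; first match wins.
# A made-up toy rule set, not ELIZA's actual script.
RULES = [
    (r"i need (.+)", "Why do you need {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (\w+)", "Tell me more about your {0}."),
    (r"(.*)\?", "Why do you ask that?"),
]

def reflect(fragment):
    # Swap first/second person so echoed fragments read naturally.
    swaps = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}
    return " ".join(swaps.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower().strip())
        if match:
            return template.format(*[reflect(g) for g in match.groups()])
    return "Please go on."  # content-free default when nothing matches

print(respond("I need a break"))         # Why do you need a break?
print(respond("My compiler hates me"))   # Tell me more about your compiler.
```

There is no understanding anywhere in this loop, which is exactly the point being made: it can feel conversational while being a pure pattern matcher.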
Quote:Original post by hplus0603
I believe in free will, and I believe that free will will, in the end, be what enables us to tell apart other things with free will from things that lack it (i.e., follow the random principles). It's not like this matters in our day-to-day lives, anyway, so I choose (or was randomly affected to choose?) the option that seems less depressing.


I have a hard time fooling myself into believing things just because they are convenient. I know my view of the world makes it difficult to account for morals or even laws. But then again, I don't think morality or laws are an integral part of the world. Morality is the way in which evolution encourages collaboration (humans that care about each other survive better as a tribe than humans that don't). Law is an artificial modification of the utility function of the members of a society, made to tailor their behaviour. Depressing, I know, but probably true. I don't commit suicide because of a combination of survival instinct and lack of guts. I also happen to be pretty happy, which helps.

What you said about free will enabling you to tell apart other things with free will from things that lack it is starting to sound a little like a religious belief.
Quote:
The Turing Test is a classic: if you can put a human on one end of a teletype, and alternate between random people, and the AI, on the other end (with no visibility except for the textual communication), and the human can't say which is which, then you have a "true" artificial human intelligence.


To the best of my knowledge, Searle pretty much demolished this test with the Chinese Room argument, didn't he? Extended Turing tests have been proposed, but the ones I have read about all involve placing a robot in an environment and allowing it to evolve human-like behaviour. Not exactly relevant to many chatbots, etc.

Not only that, but it's not exactly clear to me that behaviour should be the metric for intelligence. Newton didn't exactly display typical human behaviour (he would sometimes concentrate on a problem so hard that he would forget he had visitors around), yet nobody would honestly claim he wasn't intelligent. So why is language, a behaviour, the central part of Turing's test?
Language is actually highly valued precisely because it's one of the key aspects of intelligence: a medium in which two or more entities can freely exchange thoughts and ideas.

It is arguable that most animals have some sort of "language" in their calls, growls, etc., but human language is considered more highly developed in the sense that it's not only spoken but also written.

Still, I feel that highly intelligent beings will eventually develop something similar to a language. However, the presence of a language does not imply the existence of intelligence. So it's more like: intelligence implies language, but language doesn't imply intelligence. That's the feeling I get from what I see and hear.

