Why A.I. is impossible

Started by
116 comments, last by Alexandra Grayson 6 years, 1 month ago
3 hours ago, SillyCow said:

At what point would you say it is intelligent but not conscious for certain? At what point would you consider it a "maybe"?

I would be interested to hear the reasoning behind your choices.

For certain? Anything below humans.

Reason being we simply can't know. 

I mean, it's likely that anything with a brain is conscious (probably somewhere between jellyfish and worms), but I'm not aware of any way we could say for certain.

But I'm not an expert on this... just my opinion.

if you think programming is like sex, you probably haven't done much of either.-------------- - capn_midnight
4 hours ago, ChaosEngine said:

For certain? Anything below humans.

Reason being we simply can't know. 

I mean, it's likely that anything with a brain is conscious (probably somewhere between jellyfish and worms), but I'm not aware of any way we could say for certain.

But I'm not an expert on this... just my opinion.

We have taught other species languages and conversed with them... which is the same yardstick you're using for humans.

Most tests for self awareness (which is easier to define than consciousness) involve mirrors, and some kind of challenge that requires realising that the image in the mirror is the self. 


6 hours ago, ChaosEngine said:

I mean, it's likely that anything with a brain is conscious (probably somewhere between jellyfish and worms), but I'm not aware of any way we could say for certain.

If we define the self-awareness level of a worm as "consciousness", then I would argue that even today we can create "conscious" AI.

1 hour ago, Hodgman said:

Most tests for self awareness (which is easier to define than consciousness) involve mirrors, and some kind of challenge that requires realising that the image in the mirror is the self. 

If we use this definition, then again, I would say that we can already create self-aware AI. If you reconstruct the "hang the apple over the gorilla's head" mirror experiment using a robot with a camera and a standard convolutional neural network, I bet you could make said robot:

1. Reach forward for the apple on the table.

2. Reach above its head when the apple is only reflected in the mirror.

3. I bet that with the proper training, you could even move the mirror.

This does not seem like a trivial research project, but I think you could definitely pull it off today. In fact, it would be a great publicity stunt in the vein of "Deep Blue vs Kasparov" or "Watson on Jeopardy". If I were IBM or Google I would definitely try to pull this off. I mean, wouldn't it be very entertaining to create a robot which would surpass animals in famous historical intelligence tests?
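Just to make the thought experiment concrete, here is a minimal sketch of what that robot's decision loop might look like. Everything in it is made up for illustration: the class names, the tiny PyTorch network, and the action strings are hypothetical, and an actual attempt would need a real arm and labelled mirror-scene training data.

```python
# A toy sketch of the mirror-test robot, assuming a hypothetical
# three-way image classifier. Nothing here is a real robotics stack.
import torch
import torch.nn as nn

class AppleLocator(nn.Module):
    """Toy CNN that classifies a camera frame into three situations."""
    CLASSES = ["apple_on_table", "apple_in_mirror_overhead", "no_apple"]

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, len(self.CLASSES))

    def forward(self, frame):  # frame: (batch, 3, H, W)
        x = self.features(frame).flatten(1)
        return self.head(x)

def choose_reach(model, frame):
    """Map the classifier's output to a reaching action."""
    with torch.no_grad():
        label = model.CLASSES[model(frame).argmax(dim=1).item()]
    if label == "apple_on_table":
        return "reach_forward"          # step 1 in the list above
    if label == "apple_in_mirror_overhead":
        return "reach_above_own_head"   # step 2: the "self-aware" inference
    return "do_nothing"

model = AppleLocator()  # untrained; a real attempt needs labelled mirror data
print(choose_reach(model, torch.rand(1, 3, 128, 128)))
```

Of course, the philosophical catch is visible right in the sketch: the mapping from "apple in mirror" to "reach above my own head" is supplied by the designer, so a sceptic could say the self-model lives in the programmer, not the robot.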

 

 

 

My Oculus Rift Game: RaiderV

My Android VR games: Time-Rider & Dozer Driver

My browser game: Vitrage - A game of stained glass

My Android games: Enemies of the Crown & Killer Bees

On 31/01/2018 at 7:49 PM, ChaosEngine said:

I mean, it's likely that anything with a brain is conscious (probably somewhere between jellyfish and worms), but I'm not aware of any way we could say for certain.

I agree with you. In fact, if subjective experience is a naturally emergent property of the physical universe, then who's to say it's isolated to biological systems? Who's to say that anything isn't capable of subjective experience on some level, however primitive or brief that experience may be? It's odd that one could, on the one hand, deny that anything but the most intelligent sentient beings are capable of subjective experience, and then, on the other hand, expect to create it artificially via a computer simulation. To be clear, where I stand on this is: if we don't know how subjective experience emerges within physical systems, how can we ever be sure that our artificial creations possess it?

https://en.wikipedia.org/wiki/Philosophical_zombie

I find this subject hugely interesting, because there is still no consensus on how to even frame the subject matter. We're all travellers walking through an ambiguous field of philosophies until some smart fellow comes up with a proof and a repeatable experiment identifying the dichotomy for future textbooks.

19 hours ago, Hodgman said:

We have taught other species languages and conversed with them...

I've heard this is true up to a point. We can teach animals to communicate with language, but they can only ask for things; no animal other than humans is able to use language for creative expression, which is what makes our use of language uniquely human. Wolves howl, birds chirp, and lions growl, but we think nothing of it. Noam Chomsky talks at great length about this. Have you heard something different?

 

18 hours ago, SillyCow said:

If we use this definition, then again, I would say that we can already create self-aware AI. If you reconstruct the "hang the apple over the gorilla's head" mirror experiment using a robot with a camera and a standard convolutional neural network, I bet you could make said robot:

1. Reach forward for the apple on the table.

2. Reach above its head when the apple is only reflected in the mirror.

3. I bet that with the proper training, you could even move the mirror.

This does not seem like a trivial research project, but I think you could definitely pull it off today. In fact, it would be a great publicity stunt in the vein of "Deep Blue vs Kasparov" or "Watson on Jeopardy". If I were IBM or Google I would definitely try to pull this off. I mean, wouldn't it be very entertaining to create a robot which would surpass animals in famous historical intelligence tests?

OK, so you could make a robot today that could do all that, yes, but would it actually have any subjective experiences all its own? I think this drives at the greater point ChaosEngine was getting at: that the mechanism which allows subjective experience to arise, what he calls consciousness, probably exists on primitive levels too.

I am just trying to ground the conversation...

I personally believe that there is nothing special about the human brain. One day we will be able to create an artificial brain that can imitate a human in every way possible (although why would you want to?).

"Consciousness" is an ambiguous word, so I am just trying to find a way to measure it. In the true methods of calculus, I am trying to define a range of said ambiguity. Obviously some people are claiming that there are some intelligent entities which do not posses "consciousness". I am trying to demonstrate that this has nothing to do with "artificial" intelligence, but rather with regular old biological organisms (which posses a nervous system).

23 minutes ago, Awoken said:

would it actually have any subject experiences all its own?

When you introduce the term "subjective experience" (I assume that's what you meant), you are again introducing a vague term. Just like "consciousness", there is no single definition of what "subjective experience" means in this debate. But I bet that if you try to define it according to your own views, we would be able to make a philosophical argument as to why it is possible. If we keep on introducing new tautological terms into the conversation, it will not be a meaningful one.

What would be an experience that you define as "subjective", versus one which you would define as "not subjective"? (Did you notice that we have also introduced the word "experience", which could be defined in so many ways...)

I guess what I am saying is this: the thread is labelled "Why A.I. is impossible". As such, I would expect there to be some well-defined tests that people with this view think an A.I. could never pass.

Don't get me wrong: no self-driving car today can also function as a substitute kindergarten teacher. Eventually, however, it should be possible.

 


47 minutes ago, SillyCow said:

When you introduce the term "subjective experience" (I assume that's what you meant), you are again introducing a vague term. Just like "consciousness", there is no single definition of what "subjective experience" means in this debate. But I bet that if you try to define it according to your own views, we would be able to make a philosophical argument as to why it is possible. If we keep on introducing new tautological terms into the conversation, it will not be a meaningful one.

"To make a philosophical argument as to why it is possible": I would like to hear one; I have yet to. Yes, I am introducing a vague term. To clarify, the term has already been thrown around in this topic, though perhaps only by myself. I've noticed it used interchangeably with consciousness by others, though that could just be my reading. I do think the discussion is entirely meaningful, because I don't think this is a topic that deserves a restricted framework for how the discussion should be framed.

How would I label or define subjective experience? It's really easy, actually, and I don't see it as being too vague. The color blue, the sound of a dog's bark, an itch on my leg. The taste of an apple, the smell of burnt toast. The feeling of loneliness, the feeling of happiness, the feeling of love. As most typical children wonder: do I see the color blue as you do? Maybe my blue is your red. All are subjective experiences; that is, they are all experiences I have a memory of. I can try hard and almost recall what most of them feel like from memory alone, though my recall will never be as real as the real thing. Those and all others are subjective experiences. A program executing code because some sensor is triggered is not a subjective experience.

[ edit: thanks for spotting my typo; I'll be going through and correcting all my "subject experiences" to "subjective experiences". ][ oh my, all were spelled "subject experiences" :/ ]

7 hours ago, Awoken said:

How would I label or define subjective experience? It's really easy, actually, and I don't see it as being too vague. The color blue, the sound of a dog's bark, an itch on my leg. The taste of an apple, the smell of burnt toast. The feeling of loneliness, the feeling of happiness, the feeling of love. As most typical children wonder: do I see the color blue as you do? Maybe my blue is your red. All are subjective experiences; that is, they are all experiences I have a memory of. I can try hard and almost recall what most of them feel like from memory alone, though my recall will never be as real as the real thing. Those and all others are subjective experiences. A program executing code because some sensor is triggered is not a subjective experience.

So are you saying that a machine that wonders what it is would meet your criteria? A machine trying to understand itself, and whether it was "real"?

If so, I am sure that that will happen eventually. One of the differences between current machine learning and true "human" intelligence is that machines don't learn how to learn. They still require humans to tweak them and feed them data. This is by far the most expensive part of training a machine-learning algorithm; it is what machine-learning engineers get paid big $$$ to do. In fact, I would argue that it is currently the highest-paid entry-level job in IT. Having an AI which could choose what data to feed a self-driving car would be extremely profitable (there's a rough sketch of the idea at the end of this post). It is already done to a very primitive extent with "reinforcement learning".

As AI becomes more advanced, I think it will be possible to have a machine that "learns how to learn". Once you have that, I think "introspection" will be a very desirable trait. From that point on, the questions "Do I think like others? Do I think like humans? Do I think like other AIs? What makes me special? What makes 'me'?" become sort of inevitable. And since machines will develop faster than our biological brains, they might reach another level of self-consciousness that we have yet to achieve.

This might even be desirable, if one were to create a machine psychologist to assist troubled humans, or a machine "philosopher" to assist academics.

I don't think this generation of machine learning is powerful enough to achieve such complex thought. But I can definitely see it happening, say within a century.
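To make "choosing what data to feed the learner" less hand-wavy, here is a toy uncertainty-sampling loop. The dataset, the model, and the selection rule are all placeholder choices, nowhere near real self-driving-car training; the point is only the shape of the idea, where the model itself picks which examples it wants labelled next.

```python
# A toy active-learning loop: the model selects its own training data
# by asking for the examples it is least confident about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 5))                     # unlabelled pool
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # oracle labels

# start with one example of each class so the first fit is well-posed
labelled = [int(np.argmax(y_pool == 0)), int(np.argmax(y_pool == 1))]
model = LogisticRegression()

for _ in range(30):
    model.fit(X_pool[labelled], y_pool[labelled])
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(probs - 0.5)        # closest to 0.5 = least sure
    uncertainty[labelled] = -np.inf           # don't re-pick known examples
    labelled.append(int(np.argmax(uncertainty)))  # model picks its own data

print("accuracy on the full pool:", model.score(X_pool, y_pool))
```

A machine that truly "learns how to learn" would go further and improve the selection rule itself, not just apply a fixed one; this sketch only shows the first rung of that ladder.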

 


11 hours ago, SillyCow said:

So are you saying that a machine that wonders what it is would meet your criteria? A machine trying to understand itself, and whether it was "real"?

If a machine had these "thoughts", and had an internal experience of what it "felt" like to wonder such things, then absolutely: that machine would be extremely advanced.

11 hours ago, SillyCow said:

And since machines will develop faster than our biological brains, they might reach another level of self-consciousness that we have yet to achieve.

That'd be cool. If something like that actually does happen, I'd be interested in sparking up a conversation with it.

5 hours ago, Awoken said:

That'd be cool. If something like that actually does happen, I'd be interested in sparking up a conversation with it.

The question is whether it would want to talk to you.

More specifically, whether it could even talk to you in a meaningful way about its experience, any more than you could explain philosophy to a chipmunk.


 

7 hours ago, ChaosEngine said:

The question is whether it would want to talk to you.

More specifically, whether it could even talk to you in a meaningful way about its experience, any more than you could explain philosophy to a chipmunk.

The current revolution in machine learning is that you don't need to write explicit instructions. You feed the machine some data, you give it some goals, and it figures out how to reach those goals. If you take an AI that's capable enough (say, in 50 years) and set its goal to "maximise interaction with humans", I think it's conceivable that it will learn how to hold a meaningful conversation. (It should be enough for it to learn your language on its own.)

Isn't that what human children's behaviour is all about? "Make your parents proud"? (After you get that pesky "don't die" stuff out of the way.)

So instead of teaching it how to drive a car, give it the same social goals you would give a baby: attract positive attention from your parents; try to raise your social status.

And then it won't just drive a car. It will invent a spaceship for you while telling you jokes. (And notice that none of these were part of its original goals.)
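For what it's worth, the "set a goal and let it figure out the rest" idea can be shown in miniature with a bandit-style learner. Everything below is invented purely for illustration (the action list, the fake "human interaction" reward); it is nowhere near a conversational AI, but it shows behaviour emerging from a reward signal alone rather than from explicit instructions.

```python
# A toy epsilon-greedy bandit whose only goal is a stand-in
# "maximise interaction with humans" reward.
import random

ACTIONS = ["tell_joke", "ask_question", "stay_silent", "recite_manual"]

def human_interaction(action):
    """Hypothetical environment: how much engagement each action earns."""
    payoff = {"tell_joke": 0.9, "ask_question": 0.7,
              "stay_silent": 0.1, "recite_manual": 0.2}
    return payoff[action] + random.gauss(0, 0.1)  # noisy feedback

value = {a: 0.0 for a in ACTIONS}   # estimated reward per action
count = {a: 0 for a in ACTIONS}

for step in range(5000):
    if random.random() < 0.1:                  # explore occasionally
        action = random.choice(ACTIONS)
    else:                                      # otherwise exploit the best guess
        action = max(ACTIONS, key=value.get)
    reward = human_interaction(action)
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]  # running mean

print(max(ACTIONS, key=value.get))  # converges on the most engaging action
```

Nobody told this learner to tell jokes; it was only told that engagement is good, and the joke-telling fell out of the optimisation. That's the shape of the argument, scaled down a few billion parameters.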


