When does an algorithm turn alive?

Started by
87 comments, last by Timkin 18 years, 2 months ago

EDIT: This first post is outdated. Please ignore this first post and read the corrected version at page 2. Follow this link.

Hey, I'm about to write an article which relates to AI, for a magazine. Before I do, I want to check my ideas with the people on this forum, to look for inconsistencies. You guys usually know what you're talking about. =) Before I start explaining the topic, some key information:
- I'm posting because I would like help checking for contradictions, faulty logic and similar errors.
- This is not the ACTUAL article, only a summary of the thinking I base it on.
- Enjoy!

When does an algorithm turn alive?
-----------------------------------

One reason why I like AI is that it forces us to face the really big questions regarding our existence. Say I'm working on creating a self-learning AI. I build a robot which learns about its environment and how to deal with certain problems. One generation after me, somebody else creates an even better one, and then somebody else an even better one still. We end up with an autonomous AI robot which is very hard to separate from a human. It looks like a human, it walks around like a human. When you talk to it, it responds like a human. In the end you can't distinguish it as non-human, because it's built to produce the exact same behaviour as real humans.

But it's still just a machine: an algorithm built to produce certain output given certain input. A very advanced one, but an algorithm nonetheless. This scenario has been proposed many times before, such as in the movie 'Blade Runner' from the 80s. In the movie, the main character must pick out AI machines hidden within the actual human population. He does this with psychology tests, since the AI is incapable of having feelings. As the famous thought experiment 'The Chinese Room' (Wikipedia) tells us, this would be true in real life as well. The AI could still look happy, smile and wave its hand, but it's not really happy. A smile, a frown, a shiver... to an algorithm they're just movements of face muscles. A means to produce a certain effect.
The effect could be to appear human, or to communicate, but the desired effect doesn't matter. In the end, the smile is just a movement of face muscles with nothing underneath. An algorithm's output to a given input.

It's obvious that an AI, even though it learns from its mistakes and displays intelligent behaviour, still lacks something compared to real biological brains. This 'something' is really hard to describe, perhaps even impossible. Call it life, consciousness, spirit, creativity, feelings, sapience or whatever. That is the realm of seriously deep philosophy, and I will avoid the problem altogether by just summing it up as 'what AI lacks compared to real humans/biological brains'. For the sake of simplicity in the following text, I will refer to this thing as a person's 'ghost'. This is a term I borrow from the Japanese manga 'Ghost in the Shell', which suits the topic quite well.

It is obvious that an AI is a created being. Whatever it does, it does because somebody designed it that way. Somebody programmed it just like that. However, both 'Blade Runner' and 'Ghost in the Shell' suggest the idea that... once a system with powerful enough capacities is created, a ghost spontaneously spawns within the system, whether you want it to or not. Perhaps not immediately, but after a while, like a child learning to read: suddenly the 'blockade' is no longer there, and she can no longer choose to NOT read something she's looking at.

'Blade Runner' suggests this could happen with autonomous robots. 'Ghost in the Shell' takes it one step further and suggests it could happen with... the Internet. Sounds quite improbable, right? But if you regard the Internet as a massive array of nodes, each with a vast amount of information, constantly expanding both its storage room and its contents, with each node communicating with all nearby nodes at blazing speed... then it suddenly seems less improbable.
If you're imagining the Internet 'entity' as computer code being run, then I have to correct you. That would require somebody to write the code, wouldn't it? Rather, the processing would be the actual traffic of the net: somebody requesting a page view, somebody opening a new server, switching IP addresses. This system takes on the shape of a massively parallel processor. It's processing completely random things, producing completely random outputs, in a massive environment.

Is it so unlikely that somewhere, sometime, a cluster of nodes is arranged with such properties that it exhibits the behaviour of self-preservance? If such a thing occurred once, then it would remain, wouldn't it? Couldn't you even say, if you sum up the entire Internet into one entity, that it shows the behaviour of self-preservance? If an ISP is knocked out, the little white blood cells (the network admins) work around the clock to get it back up and running. Without competition, it's spreading across the world, first settling on the most lush lands, the wealthy and highly developed countries. When that is done, it gradually spreads into Africa too, though that land is not quite as suitable a habitat, so spreading there goes slower. If there were a competitor to the Internet, they would try to defeat each other and claim each other's territory, like Macintosh and Windows. Either one would win, or they would settle into an eco-system.

Sounds like the behaviour of a living being, doesn't it? But that wouldn't necessarily mean the Internet has got a ghost, would it? Nope. Not at all. Just maybe. One can argue that other systems, such as the country France, if summed up into one entity, behave like a living being. If attacked, it will retaliate or request help from friends. The resources it depends upon are either self-produced or imported.
If it suffers from a lack of one resource, the hunt/desire for it grows higher and higher, possibly starting a frenzy or an attack on another country which has the resource.

This sounds really silly, right? Systems cannot have ghosts, can they? After all, why would they? But how do you explain that we have ghosts, then? Evolution is a heavily backed-up fact. Given that we originate from a bunch of goo in the ocean, at which point did we acquire these 'ghost' thingies? Of course, there are lots of ideas about what a ghost is and where it's from. I can't tell anybody which way it is. Nobody can, at least presently. All we can securely agree on is that we DO have it RIGHT NOW. Everybody agrees we have got ghosts (whatever that is). Evolution happened. We have ghosts.

This gives us three possibilities:
1. Our ghosts were given to us. (This implies an origin such as a higher being, God, or aliens if you believe in that.)
2. We acquired them ourselves. (This implies that ghosts appear spontaneously within something qualified to hold them.)
3. We had them all along. (This implies that even atoms have ghosts.)

Who am I to say that any of these is true? I can't. It would be based on my personal belief, and would thus be for me and me alone. But based on this, we can make some rather interesting conclusions. The list above would mean one of these things:
1. A higher being exists. An idea which in all its versions has countless and severe logical flaws. It would mean logic isn't worth much.
2. Ghosts are something which appears when a system is configured the right way. That means we can create AI which is just as alive as ourselves. It would also mean it's possible for other systems like the Internet, the stock market, families, and the country France to have ghosts. Which is straight-out silly.
3. This relates heavily to quantum physics, and would mean that things such as the pavement you walk on have a ghost (but perhaps no intelligence).
As would your pencil, your shoe and your toupee. It would also mean we can create AI which is just as alive as ourselves. Important too would be concepts such as 'Gaia' (the collective 'ghost' of planet Earth). Which is just weird.

Given this, no matter which option we choose, we can conclude that the universe is a far weirder place than it first seems. It is not as simple as 'a place where physics happens and stuff'. Don't think we know everything yet. There's major stuff still to be unveiled.

------------------------

End. It's meant to inspire the reader to further thinking, and to smash the belief that 'nothing matters anymore - all the exciting ages are already in the past, with frontiers, swords and heroes and stuff'. I want to make people see this age is no less exciting than the rest. There are still frontiers. Any comments? Merry Christmas! =) (or whatever's appropriate to say where you come from!)

[Edited by - NQ on December 29, 2005 10:06:28 AM]
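To make the 'certain output given certain input' framing concrete, here is a toy Chinese-Room-style responder. This is purely an illustrative sketch (the rule book and all names are invented for this example, not taken from the article): the output can look social, but the mechanism is nothing more than symbol lookup, which is exactly the intuition Searle's thought experiment trades on.

```python
# A Chinese-Room-style toy: the "AI" produces socially appropriate output
# purely by matching input symbols against a rule book. There is no
# understanding anywhere in the mechanism, only lookup.

RULE_BOOK = {
    "hello": "smile and wave",
    "good news": "smile",
    "bad news": "frown",
    "cold wind": "shiver",
}

def respond(stimulus):
    """Return the scripted reaction, or a default for unknown input."""
    return RULE_BOOK.get(stimulus, "blank stare")

print(respond("hello"))     # smile and wave
print(respond("bad news"))  # frown
```

Whether a vastly larger rule book would ever amount to a 'ghost' is, of course, the whole question of the article.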
----------------------~NQ - semi-pro graphical artist and hobbyist programmer
Quote:Everybody agrees we have got ghosts (whatever that is).

No, I don't. When you think about yourself, other people and animals with complex behaviours, you imagine a ghost that explains what makes their behaviour different from that of simpler physical things. But that doesn't mean that such ghosts exist. Once machines are complex enough, you may well imagine a ghost for them too.

Oh boy, where to begin...

First, I hope your article is for something like "Computers for stupid people", or for a magazine with a cover that says things like "Sex secrets your man will love" and "This month's clothes you can't live without".

I believe most of what you say is either flat-out wrong or not demonstrated.

Quote:Original post by NQ
When does an algorithm turn alive?
-----------------------------------

One reason to why I like AI is because it forces us to face the really big questions regarding our existence.

I disagree. Sure, it can lead to interesting discussion, but it doesn't force questions of our existence.
Quote:Original post by NQ
Say I'm working on creating a self-learning AI. I build a robot which learns about its environment and how to deal with certain problems. One generation after myself, somebody else creates an even better one. And then somebody else creates an even better one. We end up with an autonomous AI robot, which is very hard to separate from a human.

Attempting to recreate humans is an abandoned quest. We use AI to do things better than humans.

The military builds AI to listen to the water and determine if there are explosives. They listen to an engine and can tell what engine parts need maintenance. The goal of these is not to re-create humans. Similarly, AI in games is not an attempt to recreate humans. The attempt is to make a challenge for the human player to overcome.

Socially functional AI is a commonly sought goal, but that certainly is not recreating humans.
Quote:Original post by NQ
It looks like a human, it walks around like a human. When you talk to it, it responds like a human. In the end you can't distinguish it as non-human, because it's built to produce the exact same behaviour as real humans.

So?
Quote:Original post by NQ
But it's still just a machine. An algorithm which is built to produce certain output given certain input. A very advanced one, but an algorithm none the less.

The same arguments apply to people.
Prove (or actually demonstrate) the point.
Quote:Original post by NQ
This scenario has been proposed many times before, such as in the movie 'Blade Runner' from the 80s. In the movie, the main character must pick out AI machines within the actual human population. He does this with psychology tests, since the AI is incapable of having feelings.

So?

Your statement that AI is incapable of feelings is misleading: Prove it, or at least demonstrate it. Prove that we have feelings (see comment about people being programmed) or that a complete emulation of a human (not currently done) is missing the emotions felt by a human.
Quote:Original post by NQ
As the famous thought experiment 'The Chinese Room' (wikipedia) tells us, this would be true in real life as well.

It doesn't tell us that. It is an argument without demonstration. It suggests that a thing may be that way, but it doesn't say it would be true in real life.
Quote:Original post by NQ
The AI could still look happy, smile and wave its hand, but it's not really happy. A smile, a frown, a shiver... to an algorithm they're just movements of face muscles. A means to produce a certain effect. The effect could be to appear human, or to communicate, but the desired effect doesn't matter. In the end, the smile is just movement of face muscles, with nothing underneath. An algorithm's output to a given input.

Prove/demonstrate that we don't do the same thing.
Quote:Original post by NQ
It's obvious that an AI - even though it's learning from its mistakes and displays intelligent behaviour... still lacks something compared to real biological brains.

No, it is not obvious. Prove/demonstrate it.
Quote:Original post by NQ
This 'something' is really hard to describe, perhaps even impossible. Call it life, consciousness, spirit, creativity, feelings, sapience or whatever. That is the realm of seriously deep philosophy, and I will avoid the problem altogether by just summing it up as 'what AI lacks compared to real humans/biological brains'.

Prove/demonstrate that an implementation of future algorithms lack it.
Quote:Original post by NQ
For the sake of simplicity in the following text, I will refer to this thing as a person's 'ghost'.
This is a term I borrow from the Japanese manga 'Ghost in the Shell', which suits the topic quite well.

Okay.
Quote:Original post by NQ
It is obvious that an AI is a created being. Whatever it does, it does because somebody designed it that way. Somebody programmed it just like that.

However, both 'Blade Runner' and 'Ghost in the Shell' suggest the idea that... once a system with powerful enough capacities is created, a ghost spontaneously spawns within the system, whether you want it to or not.

Anecdotes aren't proof/demonstrations.
References to a fiction movie and a piece of philosophy aren't exactly a solid argument.
Quote:Original post by NQ
Perhaps not immediately, but after a while, like a child trying to learn to read. Suddenly the 'blockade' is no longer there, and she can no longer choose to NOT read something she's looking at.
'Blade Runner' suggests this could happen with autonomous robots. 'Ghost in the Shell' takes it one step further and suggests it could happen with... the Internet.

Again, fiction movie and philosophy, anecdotes aren't demonstration.
Quote:Original post by NQ
Sounds quite improbable, right?
If you regard the Internet as a massive array of nodes, each with a vast amount of information, constantly expanding both its storage room and its contents and each node is communicating with all nearby nodes with blazing speed... then it suddenly seems less improbable.

To any computer scientist, it doesn't sound improbable.

I suppose if your magazine article is directed to non-tech people, that statement might not be as bad.
Quote:Original post by NQ
If you're imagining the Internet 'entity' like computer code being run, then I have to correct you. That would take somebody to write the code, wouldn't it?
Rather, the processing would be the actual traffic of the net. Somebody requesting a pageview. Somebody opening a new server. Switching IP addresses.
This system takes on the shape of a massively parallel processor. It's processing completely random things, producing completely random outputs, in a massive environment.

Of course if your magazine article is directed to non-tech people, they won't understand a word of that.
Quote:Original post by NQ
Is it so unlikely that somewhere, sometime, a cluster of nodes is arranged with such properties that it exhibits the behaviour of self-preservance? If such a thing occurred once, then it would remain, wouldn't it?

Emergent behavior? We see it all the time. Also, what is "self-preservance"? Perhaps you meant self-preservation, attempting to maintain or keep oneself from harm?
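The "we see it all the time" point can be made concrete with a classic example (my own illustration, not something from the thread): Conway's Game of Life. Each cell follows two trivially simple local rules, yet a 'glider' pattern persists and travels across the grid - structure and apparent purpose emerging with nobody having programmed 'glider' anywhere.

```python
# Conway's Game of Life: emergent behaviour from two local rules.
# A live cell survives with 2 or 3 live neighbours; a dead cell is
# born with exactly 3. Nothing else - yet gliders "travel".
from collections import Counter

def step(cells):
    """One generation. `cells` is a set of live (x, y) coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# The classic glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = step(pattern)

# After 4 generations the glider reappears, shifted one cell diagonally.
assert pattern == {(x + 1, y + 1) for x, y in glider}
```

No rule mentions movement or shape, yet the five-cell cluster behaves like a coherent travelling object - which is what "emergent behaviour" means here.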
Quote:Original post by NQ
Couldn't you even say, if you sum up the entire Internet into One entity, that it shows the behaviour of self-preservance?
If an ISP is knocked out, the little white bloodcells (the network admins) work around the clock to get it back up running.

Um, no, because as far as I can tell, self-preservance is a term you just made up. And no, it isn't a self-preserving system since it is repaired by external sources (people). It does reroute around damage, but that's not preservation. Preservation implies taking action prior to damage, hence the "pre".
Quote:Original post by NQ
Without competition, it's spreading across the world. First settling on the most lush lands, the wealthy and highly developed countries. When that is done, it's gradually spreading into Africa too, though that land is not quite as suitable habitat, so spreading there goes slower.
If there were a competitor to the Internet, they would try to defeat each other and claim each other's territory, like Macintosh and Windows. Either one would win, or they would settle into an eco-system.

I'm not sure how the spread of the Internet applies to your argument.
How is competition an ecosystem, but lack of competition not? That's an interesting take on things. I always thought of an ecosystem as parts working together.
Quote:Original post by NQ
Sounds like the behaviour of a living being, doesn't it?
But that wouldn't necessarily mean Internet has got a ghost, would it? Nope. Not at all. Just maybe.

Given the warrantless assertions made so far, that speculation does not necessarily follow.
Quote:Original post by NQ
One can argue that other systems such as the country France, if summed up into one entity, behaves like a living being.
If attacked, it will retaliate or request help from friends. The resources it's dependent upon are either self-produced or imported. If suffering from a lack of one resource, the hunt/desire for it grows higher and higher, possibly starting a frenzy or an attack on another country which has the resource.

I've never heard anybody (successfully) argue that political organizational bodies aren't living sociological entities.
Quote:Original post by NQ
This sounds really silly, right?
Systems cannot have ghosts, can they?
After all, why would they?
But how do you explain that we have ghosts, then? Evolution is a heavily backed-up fact. Given that we originate from a bunch of goo in the ocean, at which point did we acquire these 'ghost' thingies?

How would I explain that we have something philosophy suggests we have, but which we have no actual proof of? How did we get something you can't even demonstrate we have? Classic non sequitur arguments.
Quote:Original post by NQ
Of course, there are lots of ideas about what a ghost is and where it's from. I can't tell anybody which way it is. Nobody can, at least presently.
All we can securely agree on is that we DO have it RIGHT NOW. Everybody agrees we have got ghosts (whatever that is).

Nope, we can't all agree that we have it right now. And not everybody agrees that we have these ghosts.
Quote:Original post by NQ
Evolution happened. We have ghosts.

The second statement is warrantless.
Quote:Original post by NQ
This gives us three possibilities:
1. Our ghosts were given to us. (This implies an origin such as a higher being, God, or aliens if you believe in that.)
2. We acquired them ourselves. (This implies that ghosts appear spontaneously within something qualified to hold them.)
3. We had them all along. (This implies that even atoms have ghosts.)

... warrantless.
Quote:Original post by NQ
Who am I to say that any of these is true? I can't. It would be based on my personal belief, and would thus be for me and me alone.

Amen!
Quote:Original post by NQ
But based on this, we can make some rather interesting conclusions. The list above would mean either of these things:
1. A higher being exists. An idea which in all its versions has countless and severe logical flaws. It would mean logic isn't worth much.
2. This would mean that ghosts are something which appears when a system is configured the right way. That means we can create AI which is just as alive as ourselves. It would also mean that it's possible for other systems like the Internet, the stock market, families, and the country France to have ghosts. Which is straight-out silly.
3. This relates heavily to quantum physics, and would mean things such as the pavement you walk on have a ghost (but perhaps no intelligence). As would your pencil, your shoe and your toupee. It would also mean we can create AI which is just as alive as ourselves. Important too would be concepts such as 'Gaia' (the collective 'ghost' of planet Earth). Which is just weird.

No, we can't make these conclusions until you prove your warrants first.
Quote:Original post by NQ
Given this, no matter what option we choose, we can conclude that the universe is a far more weird place than it first seems like. It is not as simple as 'a place where physics happen and stuff'.

...
Quote:Original post by NQ
Don't think we know everything yet. There's major stuff still to be unveiled.

It's quite stupid to assume that we do.

We've been wrong for as long as we have existed. Science is improving every day, disproving theories all the time. If somebody assumes we know everything, they are misinformed.
Quote:Original post by NQ
It's meant to inspire the reader to further thinking, and smash the belief that 'nothing matters anymore - all exciting ages have already been in the past, with frontiers, swords and heroes and stuff'. I want to make people see this age is not any less exciting than the rest. There's still frontiers.

Who has that belief, anyway?

Oh, and you can say Merry Christmas in the United States, since Christmas is a federally recognized holiday.

frob.
Yikes - the first two responses are quite vocal, hmm?

I enjoyed your article. I've spent a lot of time thinking about what makes me... me. I believe I am more than a machine, it feels like I am, but then again - is it just environmental programming?

One thing I've learned from my brother (argumentative cuss that he can be) is to refrain from the assumption that everyone is on board with something that seems obvious to me. The responses you've gotten so far are from those who don't necessarily agree with your premises or your conclusions.

You can bridge the gap a bit by being clearer that the conclusions you draw seem like the clear answer to you, but the reader's mileage may vary. Stating that a particular conclusion is obvious when the reader disagrees is likely to alienate them from the rest of what you have to say.

I.e., tell someone they're wrong and no evidence will change their mind; show an open mind and an alternate path, and you remove the pressure to defend what they currently believe/know.

Good luck!
Quote:Original post by Steven Hansen
The responses you've gotten so far are from those who don't necessarily agree with your premises or your conclusions.

I didn't say I disagree. I said the argument is flawed.

I agree with some of the points, but many are not demonstrated and may not be provable.

frob.
Quote:Original post by frob
Quote:Original post by Steven Hansen
The responses you've gotten so far are from those who don't necessarily agree with your premises or your conclusions.

I didn't say I disagree. I said the argument is flawed.

I agree with some of the points, but many are not demonstrated and may not be provable.

frob.


I stand corrected.

I took the view that the author wasn't really trying to prove anything (even though he did ask for us to point out logic errors). If it were just a list of possibilities and opinions, I think the article would be better received, which is why I advise changing the "facts" into possibilities or opinions instead.

An argument attempting to prove the author's view would require a great deal more formalism - as frob has pointed out. On the other hand, such an article wouldn't likely be as easy or fun to read. [smile]
The Chinese Room has flaws, and there have been serious objections raised against this rather useless thought experiment.

Note that the first of the cited movies asked its own questions: should created things have rights, and how many? I haven't seen the TV series of Ghost in the Shell, but it looked like it asked whether that entity should be allowed to roam free.

Real AI makes this question more difficult to answer. A working strong AI would be quite psychedelic (and psychotic as well). Nice when you add it to an autonomous tank gun; not as nice when you try to use it as a bank director with a lot of implied knowledge... (The second meaning of that sentence was obviously... . You know that kind of error.)
Frob, when did you read your last book? And about what it was?
Quote:Original post by Raghar
Frob, when did you read your last book? And about what it was?


Over the past two weeks:

* re-read Harry Potter 2
* FIT for Software Development
* Garfield Weighs In
* Garfield pulls his weight
* Teachings of Wilford Woodruff
* referenced chapters from Neural and Adaptive Systems
* referenced Working Effectively with Legacy Code
* Doc2Net manuals (to write a screen scraper for it)
* read several MSDN articles (although it isn't a book...)
* Cars! Cars! Cars!
* Arthur's Loose Tooth
* The Noisy Farm : Lots of animals to enjoy
* Preschool to the rescue!
* Under the Christmas Tree
* The day I had to play with my sister

And a few others that I already took back to the library and don't remember the titles of.

As far as programming goes, I wrote a screen scraper, integrated Google Maps with the company's contact management, rebuilt portions of a billing app, integrated the Hekkus Sound System with one app, improved the framerate by spreading the rendering work out among other computation, developed about 20 pages of design specifications, fixed about 25 bugs found by the test team (all caused by people on the team I lead), wrote some SForce integration commands, and did a few things that I would consider incidental, or otherwise don't recall very well.

I also spent a lot of time with my wife and kids.

How about you?

frob.
Ouch! That was harsh of you, frob.
NQ, I like your style of writing, but your idea that we have a "ghost" and AIs don't sounds more like you believe we'll always be in control of things and AIs won't stand a chance. As a belief, that's fine, but surely we should sit up and take note of the mass of sci-fi that points towards humans making AIs too smart.
Some examples (including your own): Ghost in the Shell, Bubblegum Crisis, Virus, AI, The Matrix, Blade Runner, I, Robot, etc.

Science fiction has a way of predicting things that only appear many years later. Look at Star Trek, for example (the original series, I might add). Their preferred means of communication was a snazzy plastic-covered notebook which let them communicate with a ship orbiting whatever planet they happened to be around. Now look in your local phone catalogue...

Anyway, back on subject: I am very surprised that you didn't pick up on a particular similarity between wetware and software/hardware, namely that our brains use the same form of energy (i.e. electricity) to function as computers do. The difference is that each and every neurone in our heads is the equivalent of one CPU (although not necessarily as powerful - I'm almost certain the CPU has more functions), in that it performs its own operations based on its inputs and sends out an output. It's just that we have so many neurones in our heads that a computer's CPU would probably melt just trying to assign a pointer to each one, never mind acting as each and every one in parallel...

It may be that we are just massively parallel FSMs that, like CPUs, can only do so much, but can do it so very quickly that computers can't keep up.
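The neurone-as-a-tiny-processor picture above can be sketched in a few lines. This is a deliberately crude illustration (a McCulloch-Pitts-style threshold unit; the weights and wiring are invented for the example and are not a claim about real biology): each unit just weighs its inputs, compares the sum against a threshold, and fires or stays silent.

```python
# Minimal sketch of the "neurone as a tiny CPU" idea: each unit computes
# a weighted sum of its inputs and fires (1) if it crosses a threshold.
# A brain, on this picture, is a massive network of such units in parallel.

def fire(inputs, weights, threshold):
    """One 'neurone': weighted sum of inputs compared to a threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A two-input unit wired so that it behaves like a logical AND gate:
# it only fires when both inputs are active.
def and_gate(a, b):
    return fire([a, b], [1.0, 1.0], 2.0)

print(and_gate(1, 1))  # fires: 1
print(and_gate(1, 0))  # silent: 0
```

Real neurones are enormously more complicated than this, of course - the point of the sketch is only that a single unit's job is simple, and the interesting behaviour comes from having billions of them running at once.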

While it's true that most game AIs are made just to make a game challenging, they should be made to seem fairly lifelike. Take Deep Blue, the computer that beat Garry Kasparov: it made intelligent moves that seemed so real that Kasparov apparently (or so I've heard...) accused it of cheating.

There's more that I'd like to say, but I only meant this to be a quick reply - I just got carried away...

I'd love to see your finished article

