When does an algorithm turn alive?

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
88 replies to this topic

#21 Raghar   Members   


Posted 27 December 2005 - 02:09 PM

Flaming? I hope not.

I think the article should try to force the reader to think; there's no need to make it too "clear".

#22 Anonymous Poster_Anonymous Poster_*   Guests   


Posted 27 December 2005 - 03:31 PM

Well, I didn't really read the entire text or even all the replies, but I did read frob's first reply and pieces of the other replies, and I just wanted to share some hints I learned from a great writing teacher I had in college. (Sorry for the bad English; I'm not actually from a country that has English as an official language anyway. :P)

The subject is very controversial and interesting. I think one good way to make the article clearer is to avoid these "true lies" by dropping expressions like "Everybody believes", "Everybody knows", and things like that. Another is to not make it a science article while still keeping it clear. Avoid using fiction as the basis of "solid" arguments; present such things as probabilities instead. And of course, consider the magazine's target audience.

That's it, just some hints I think could help.

PS: I guess references to some studies on psychology, human behavior, and spirit-related topics would be very welcome in that case, and of course, on AI itself.

#23 Anonymous Poster_Anonymous Poster_*   Guests   


Posted 27 December 2005 - 03:47 PM

David Gerrold did an excellent speculative treatment of this in "When H.A.R.L.I.E. Was One" ( http://www.amazon.com/gp/product/0345243900/qid=1135741673/sr=1-1/ref=sr_1_1/002-6417254-3419247?s=books&v=glance&n=283155 ).

I highly recommend it - very thought provoking...

And just to tie into Star Trek, David Gerrold also happened to write "The Trouble With Tribbles" when he was about 17.


#24 T1Oracle   Members   


Posted 27 December 2005 - 04:08 PM

I'd love to see someone use the excuse "I'm a machine, I do what I'm programmed to do" in a serious situation.
Programming since 1995.

#25 webwraith   Members   


Posted 28 December 2005 - 07:02 AM

True, I might as well be reading FORTRAN as Swedish, but your article is, on the whole, quite fun to read.

Also, why not keep flaming frob? It's fun!

#26 blew   Members   


Posted 28 December 2005 - 12:56 PM


I really enjoyed reading your beta article, NQ. I think you're on to something. Surely, it's not a technical proof or anything, but it makes you wonder - and I believe that's what you're aiming for.

I must say I'm a bit divided, though. I don't know if I believe in the ghost thing. (By the way, I think some people here misunderstood the meaning of the "ghost". It's nothing concrete. It's just what makes us human beings different from everything else [animals, machines, whatever]. Call it your aura or any other name.)

I don't know if I believe that we humans have something mysterious that makes us... human. Maybe we just got lucky during the evolution of species and got the nicest brain. Well, I don't know. Like I said, I'm divided.

So I kind of agree with some of the stuff frob said. But he expressed himself badly, and so he lost his argument.

Nevertheless, like others also said, you should be careful when implying that something is universally true. As you saw, there are those who don't agree. :)

So, half of me thinks that there will never be an AI as good as human intelligence. A simple example: if I ask you for a random number, you'll give me one without blinking. But ask a machine for a (truly) random number and, Houston, we've got a problem. Or just think about any NP-complete problem.
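A tiny Python sketch of the point about machines and "truly" random numbers (an illustration only; the seed value is arbitrary): a computer's standard random numbers are deterministic, so seeding the generator the same way replays the exact same "random" sequence.

```python
import random

# Pseudo-random generators are deterministic: seed them the same
# way and the same "random" sequence comes out again.
random.seed(42)
first_run = [random.randint(1, 10) for _ in range(5)]

random.seed(42)
second_run = [random.randint(1, 10) for _ in range(5)]

print(first_run == second_run)  # prints True: both runs are identical
```

Without outside input (hardware noise, timing jitter, and so on), the machine can only ever replay a recipe.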

But the other half of me thinks the opposite. I'm an AI enthusiast myself, and I know the amazing things you can do with it today. And I believe this is just the beginning. We'll probably be able to make an AI at least as good as a human's. Let me rephrase that: I think we'll be able to make an AI that in turn will generate a much more powerful AI. How?

Well, machines can already see, smell, hear, taste and feel with sensors (to some extent). And we have machine learning, neural networks and genetic algorithms. Why not, in the near future, let several AIs interact with each other for a million years and see what they come up with? Probably something amazing... Man has been able to "artificialize" almost everything he wants. Why not a mind?

Oh well, we could talk about this for ages. It's a great subject, indeed.

Ah, I almost forgot: I like the references you made to Blade Runner and Ghost in the Shell. I don't think you have to use technical examples. But, of course, that depends on your target audience.

And here are some more references that I thought of:

- Human mind vs logic mind in Star Trek (Dr. McCoy vs Mr. Spock)
- HAL fighting for survival in 2001: A Space Odyssey
- The machine that wants to be human in Steven Spielberg's AI
- The evolution of the machines in the first 2 episodes of The Animatrix
- An AI that doesn't know it's artificial in Blade Runner (Rachael)

Good luck with your article.
And people, don't flame me just because I have an opinion. :)

#27 Sneftel   Senior Moderators   


Posted 28 December 2005 - 01:00 PM

Original post by Blew
If i ask you for a random number you'll give me one without blinking.

Think so? I'm afraid I can't quote a source, but IIRC people asked to pick a "random" number between 1 and 10 pick 7 about 30% of the time.
Or just think about any NP-complete problem.

Ever tried it? The computer will get the correct answer waaaaay before you do.

#28 blew   Members   


Posted 28 December 2005 - 01:07 PM

Original post by Sneftel
Think so? I'm afraid I can't quote a source, but IIRC people asked to pick a "random" number between 1 and 10 pick 7 about 30% of the time.

Well, I just thought about a number and it was 3. [grin]
And I'll never pick 7 again, just to prove you're wrong. Haha. :)

Original post by Sneftel
Ever tried it? The computer will get the correct answer waaaaay before you do.

Well, I could be lucky. But then again, so could the computer. Well, I think you get the idea. :P

#29 JD   Members   


Posted 28 December 2005 - 07:43 PM

I don't think we have a soul. Perhaps it's because we have complex brains, and thus lots of things to ponder, that it only seems like we have a soul. Anyway, we tend to work better in groups, so our individual brains don't need to be as smart. That's one consideration to take into account when building an AI/robot.

#30 NQ   Members   


Posted 29 December 2005 - 04:02 AM

Okay, here's the corrected version. I added a section, and removed the parts that made it sound like a scientific thesis. All in all it should sound less like a proof and more like a thought experiment. I also fixed several embarrassing spelling and grammatical errors... :S It should be friendlier to the eyes of all native English speakers now...

This should be rather close to the actual Swedish article. It all depends on what kind of feedback I get, I guess. So please give me honest feedback, because that's what I'll get from the magazine readers.

(Btw, a summary of the 'Chinese Room' will be displayed alongside the article. Too bad one cannot print Wikipedia hyperlinks on paper...)

When does an algorithm turn alive?


As a thought experiment, let's say I'm working on creating a self-learning AI. For this I build a robot that learns about its environment and how to deal with certain problems. Then, a generation after me, a younger and smarter developer creates a better one. Then somebody creates an even better one. This goes on until we end up with an autonomous AI robot which is very hard to tell apart from a human.

In real life, such a robot would have little value, but let's say for the sake of this experiment that we did create one.

It looks like a human. It walks around like a human. When you talk to it, it responds like a human. In the end you can't tell it isn't human, because it's built to produce exactly the same behaviour as a real human. It learns from its mistakes and displays intelligent behaviour.

But it's still just a machine. An algorithm, built to produce certain output when given certain input. A very advanced one, but an algorithm nonetheless.

This scenario has been proposed many times before, such as in the movie 'Blade Runner' from the '80s. In the movie, the main character must pick out AI machines hidden within the actual human population. He does this with psychology tests, since the AI is incapable of having feelings.

Based on the well-known thought experiment 'The Chinese Room' (Wikipedia), we assume that this should be true in real life as well.
The AI could still look happy, smile and wave its hand, but it's not really happy. A smile, a frown, a shiver... to an algorithm they're just movements of face muscles, a means to produce a certain effect. The effect could be to appear human, or to communicate, but the desired effect doesn't matter. In the end, the smile is just a movement of face muscles with nothing underneath - an algorithm's output to a given input.
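The "output to a given input" idea can be made concrete with a deliberately simplistic sketch (an illustration of the Chinese Room intuition, not a claim about how such an AI would actually be built; the stimuli and reactions here are invented):

```python
# A Chinese-Room-style responder: it maps stimuli to reactions by
# pure lookup, with no understanding "underneath" any of them.
REACTIONS = {
    "greeting": "smile and wave",
    "joke": "laugh",
    "insult": "frown",
}

def react(stimulus: str) -> str:
    """Produce a human-looking reaction without feeling anything."""
    return REACTIONS.get(stimulus, "blank stare")

print(react("joke"))     # laugh - but nothing inside was amused
print(react("eulogy"))   # blank stare - input outside the table
```

Scale the table up far enough and the reactions become indistinguishable from a human's; the question of the article is whether anything more than lookup is happening inside.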

Some people say that it certainly isn't going to have feelings, because it's based on a logical system and not on a real biological brain. However, other people argue that the above section applies just as well to real brains.
There's a flaw in 'The Chinese Room': its conclusion assumes - without any proof - that our brains work differently.

This can also be said as: 'Do we have a soul, or are we based on physics?'

For this experiment, we shall not choose one interpretation, but both. Since I can't prove which one of these is true, we will investigate both of them and then see where that leads. First, let's assume we do not function like the AI would.


This interpretation means that there's 'something' which distinguishes us from AI. This 'something' is really hard to describe, perhaps even impossible. Call it life, consciousness, spirit, creativity, feelings, sapience or whatever. That is the realm of seriously deep philosophy, and I will avoid the problem altogether by just summing it up as 'what AI lacks compared to real humans/biological brains'.

For the sake of simplicity in the following text, I will refer to this lacking thing as a person's 'ghost'.
This is a term I borrow from the Japanese manga 'Ghost in the Shell', which suits the topic quite well.

An AI is a created machine. Whatever it does, it does because somebody designed it that way. Somebody programmed and built it just like that.

However, both 'Blade Runner' and 'Ghost in the Shell' suggest the idea that... once a system with powerful enough capacities is created, a ghost spontaneously spawns within the system, whether you want it to or not.

Perhaps not immediately, but after a while, like a child trying to learn to read. Suddenly the 'blockade' is no longer there, and she can no longer choose to NOT read something she's looking at. Suddenly, there is a ghost in the shell.
Another thought comes to mind: what's the first memory you have? The very first one. You're probably at least a year old in it. Could that perhaps be the moment when you first acquired a ghost? The moment your mind was first developed enough to carry a ghost? Perhaps up until that point you were more of an AI?

'Blade Runner' suggests this awakening could happen with autonomous robots. 'Ghost in the Shell' takes it one step further and suggests it could happen with... the Internet.

For this, let me describe the Internet as a massive array of nodes, each with a vast amount of information, constantly expanding both its storage capacity and its contents, with each node communicating with all nearby nodes at blazing speed.
That sounds like I'm describing a brain, doesn't it?

If you're imagining the Internet 'entity' as computer code being run, then I have to correct you. That would require somebody to write the code, wouldn't it?
Rather, the processing would be the actual traffic of the net: somebody browsing a web page. Starting a new server. Switching IP addresses. Hooking up a local area network. Downloading music.

This system takes on the appearance of a massively parallel processor. It's processing completely random things, producing completely random outputs, in a massive environment.

In your head, sum up the entire Internet into one entity. Does it not resemble the behaviour of a living being?
If an ISP (Internet Service Provider) is knocked out, the little white blood cells (the network admins) work around the clock to get it back up and running.
Without competition, it's spreading across the world. First it settles on the lushest lands: the wealthy, highly developed countries. When that is done, it gradually spreads into Africa too, though that land is not quite as suitable a habitat, so the spread there is slower.
If there were a competitor to the Internet, like Macintosh and the PC, they would try to defeat each other. Claim each other's territory. Either one would conquer all territory, or they would settle into a co-habiting ecosystem.

Sounds like the behaviour of a living being, doesn't it?
But that wouldn't necessarily mean the Internet has a ghost, would it? Nope. Not at all. Just maybe.

One can argue that other systems, such as the country of France, if summed up into one entity, behave like a living being.
If attacked, it will retaliate or request help from friends. The resources it depends upon are either self-produced or imported. If it suffers from a lack of one resource, the desire for it grows greater and greater, possibly starting a frenzy or an attack on another country that has the resource.

This sounds really silly, right?
Systems cannot have ghosts, can they?
After all, why would they?

But how do you explain that we have ghosts, then? Evolution is a heavily backed-up fact. Given that we originate from a bunch of goo in the ocean, at which point did we acquire these 'ghost' thingies?

Of course, there are lots of ideas about what a ghost is and where it's from. I can't tell anybody which way it is. Nobody can, at least presently.
So we have the fact that evolution happened. Now let's assume we have ghosts.

Evolution happened. We have ghosts.
This gives us three possibilities:
1. Our ghosts were given to us.
2. We acquired them ourselves.
3. We had them all along.

Who am I to say that any of these is true? I can't. It would be based on my personal belief, and would thus be for me and me alone.

Though I can sum up the meaning of them for you:
1. This would mean there is an origin such as a higher being, God, or aliens if you believe in that.
2. This implies that ghosts appear spontaneously when you reach a certain point in evolution.
3. This implies that we had ghosts even when we were goo, or even atoms.

Now, if we have ghosts, that leads to one of the following interesting conclusions:

1. A higher being exists. An idea that in all its versions has countless and severe logical flaws. If it were in fact true, then lots of scientists would probably shriek in horror, since logic won't be enough anymore. Which is just crazy.
2. This would mean that ghosts are something that appears when a system is configured or built the right way. This means we can create AI that is just as alive as ourselves. It would also mean that it's possible for other systems, like the Internet, the stock market, families, and the country of France, to have ghosts. Which is straight-out silly.
3. This would mean things such as the pavement you walk on have a ghost (but perhaps no intelligence). As would your pencil, your shoe and your toupee. It would also mean we can create AI which is just as alive as ourselves. Important here would be things such as quantum physics and what we go back to when we die (Gaia?). Which is just weird.

Given that we do have ghosts, no matter which option we choose, we can conclude that the universe is a far weirder place than it first seems.

No ghosts

So what if we don't have ghosts then? No soul, no silly magic or divine interventions. If we don't have a ghost, what would that mean?

That would mean that 'life' is just what we call the thing that occurs when a bunch of neural cells are linked together. By this definition there's no other difference between insects and humans aside from the number of interlinked brain cells. What we call 'life' becomes a relative term, based on how many brain cells one has. A carbon atom is not alive. Several carbon atoms stuck together are still not alive. A string of DNA is almost alive (it reproduces). A virus is somewhat alive. A bacterium is more alive than a virus. A multi-cellular creature is more alive than a bacterium. One with actual brain cells is even more alive. We're more alive than dogs, and dolphins are more alive than us.

Not having ghosts means that 'life' is just something that occurs when physics happens to a bunch of neural cells. What we call ourselves - what gets out of bed in the morning and drinks coffee with the co-workers - is nothing more than the end effect of a bazillion cells working together to survive. Our mind does not really exist; it's just an experienced effect of billions of neural cells working together.

It doesn't exist, just like the centrifugal force. There is no such force; it just feels like there is when you're sitting on the carousel.

What would this mean to our AI robot?
Well, all we have to do is simulate a neuron in a computer. We wouldn't have a physical neuron, but we would have exactly the same (simulated) effect. The simulated result would be a simulated mind. This mind has exactly the same properties in the simulated world as we have in the real world. It is as aware of its own mind as we are of ours. Both are just end effects of many small pieces working together. Thus, by all definitions, it is experiencing exactly the same type of life as we are experiencing. In my opinion, even if the tiny reactions don't happen in the physical world but at a logical level, this means the robot is as alive as we are. Heck, if we add more pieces, then it's suddenly even more alive than we are!
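As a toy sketch of what 'simulating a neuron' could mean (a deliberately minimal McCulloch-Pitts-style threshold unit, not a faithful biological model; the weights and threshold are arbitrary example values):

```python
def neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: output 1 (fire) if the weighted
    sum of inputs reaches the threshold, otherwise 0 (stay silent)."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two toy stimuli into one simulated cell (all values arbitrary):
print(neuron([1, 0], [0.6, 0.4], 0.5))  # 1: 0.6 reaches the threshold
print(neuron([0, 1], [0.6, 0.4], 0.5))  # 0: 0.4 falls short
```

Wire billions of such units together and, on this view, the resulting activity is a mind in exactly the sense ours is.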

What would this mean to the Internet and the stock market?
If ghosts don't exist, then minds are only lots of tiny pieces working together. Do the pieces have to function exactly as physical neural cells do? In my opinion this is not likely. IMO the nature of the tiny connections doesn't matter; only the end result does. Each neural cell does not have to have nano-sized connectors conducting electricity to nearby cells. Consider a brain built out of electromagnets, where the interference between the magnets acts as the connectors. This device would probably cover an acre of land, but if the cells exchange logical information just like neural cells do, then it doesn't matter that it isn't based on the same physics. Such a system is still alive like the rest. Even if the system exchanged information by having each cell shoot white plastic balls at the others, it would still experience the sensation of having a mind. It would thus be alive. (However, you could question its thinking speed...)

The Internet has so many 'cells', most of which are completely different from the rest. They all exchange information in different ways. When you carry your USB memory stick from home to your office, you're being one of those signals. Some signals would be 'different': most would carry a type of message that differs in nature from all other messages. I can't say if that would matter. Maybe it doesn't matter exactly what the nature of the signals is. Then the Internet consists of countless billions of connectors and cells, working perfectly, forming an entity that just might carry more brain cells than a human. A super-entity.
Maybe the type of message carried does matter. Then the Internet could consist of a bazillion malfunctioning neurons, and perhaps three or four good-enough simulated neurons, bringing it to the consciousness level of some bacteria.

Finishing up

I have not specified 'ghosts' in any way. The real definition of them could be: 'the illusory effect of billions of neural cells working together'. But that's just a variant on nr. 2, and does not need to be considered separately.

In the end - if it is true that we don't have ghosts - that has the same effect as if ghosts are something we acquired along the way through evolution. They basically mean the same thing. Perhaps the two sides differ on some level I can't comprehend, but that doesn't matter. I just had to take this side trip to show that, even so, they would produce the same effects.

So if we don't have ghosts, then point number 2 in the numbered list above is in play. Additionally, if we DO have ghosts, then any one of the options is in play. All of which are quite hard to accept, if not for me, then for a few million other people.

The way I see it, all scenarios except nr. 1 allow us to develop AI which is just as alive as ourselves. However, even if there is a higher being, he/she/they could be messing around with us, granting our robot a ghost just to confuse us.

My final conclusion from all this is that, no matter which way it really is, something really weird is definitely going on here...
Who said 'The Age of Adventures' refers to something in the past?



To the reader there probably appears to be a hole in the logic at list item nr. 2: it assumes other complex systems have ghosts just because we acquired them.
Until you read it closely. It says they MIGHT have ghosts if nr. 2 is in play. However, in the last part here ("What would this mean to the Internet and the stock market") I tried to explain that if nr. 2 is true, then such systems LIKELY have ghosts.

Should I mention this in the text, or would that make it over-obvious and boring to read?

Any comments?

#31 webwraith   Members   


Posted 30 December 2005 - 11:55 AM

I'm not sure I'm following you on the whole 'dolphins are more alive than us' part. I'm not entirely certain of all that the field of Artificial Life has to offer, but I know that one of the prime conditions for scientists to describe an object as 'alive' is the ability to reproduce. This is evident in the works of (Prof./Dr.?) J. von Neumann, Dr. Thomas Ray, and others, including von Neumann's cellular automata (in his case a theoretical robot that existed in an infinitely deep canal filled with all the parts required to make copies of itself) and Ray's 'Tierra', a world inside a computer with 'lifeforms' which each had their own DNA and responded to their surroundings in the same way as bacteria and amoebae do.

Hehe... sorry, got a little carried away there.

Anyway, it definitely reads better than your last draft, and I think you've got most of the points down pat now.

On a personal note, I reckon that the ghost (or soul, or whatever) is an illusion that we created to explain why we are more 'intelligent' than any other known animal at this time (said in case we make first contact with aliens tomorrow ;)), so it can be thought of as a reflection of our belief that we are, in some way, different from every other living creature on this planet.

Also, if a computer/robot can think like a human, act like a human, and just be plain indistinguishable from a human, then I think I would willingly greet them as equals.

or even superiors, if the T-800 watching over my shoulder has anything to say about it ...

#32 webwraith   Members   


Posted 31 December 2005 - 03:09 AM

Geez, so that's what happens when I write a reply at 0100 hrs...

I like the part about billions of neurons working together; that's cool, in a freaky kind of way...

I take it that means I should stop referring to myself as 'I' and start using 'We' instead?

#33 n0b   Members   


Posted 31 December 2005 - 11:39 AM

Original post by NQ
That would mean that 'life' is just what we call this thing that occurs when a bunch of neural cells are linked together. By this definition there's no other difference between insects and humans, aside from the number of interlinked brain cells.

As far as I know, the difference does not come from the number of interlinked brain cells - that would be something like "the more brain the better", which does not always hold.
It actually depends on HOW INTENSIVE the information exchange between those interlinked brain cells is.
I believe there was a link about this topic on the wikipedia.org human-brain page. Just take a look at it.

The idea of billions of little cells temporarily melting together, creating one huge, gigantic mind with many times the capability of a human's, was also discussed by Frank Schätzing in 2004 in his sci-fi novel "Der Schwarm". Although he used a quite dramatic scenario, it includes some interesting thoughts. One of his theories includes the possibility of those "goo"s being able to create a living, learning and thinking individual all by themselves, based on information exchange going on during DNA modification. He was the first person I know of who stated that thinking might not necessarily be based on neurons/brains.
However, I don't know if it was released in English.

But I like your article. I've watched the entire Ghost In The Shell stuff myself, therefore I'm quite familiar with some of those theories.
I guess it's a nice review of them, and you also inserted some additions - other thoughts. Nice one.

#34 Sneftel   Senior Moderators   


Posted 03 January 2006 - 01:35 PM

Original post by aphydx
the central creative core of your mind cannot comprehend something as complex as itself.
Prove it.

#35 M2tM   Members   


Posted 03 January 2006 - 01:56 PM

"When does an algorithm turn alive"

Let me just say that the title itself is poorly worded and the arguments within are not entirely well stated. Others have pointed out things with regard to that statement so I won't repeat them.


"When does an algorithm live?"


"When is an algorithm alive?"

or even:

"When does an algorithm become alive?"

Because "turn alive" just sounds silly. My apologies if you feel offended, I'm really just trying to make a suggestion that might help set a more serious tone.

#36 darookie   Members   


Posted 03 January 2006 - 02:31 PM

Original post by Sneftel
Original post by aphydx
the central creative core of your mind cannot comprehend something as complex as itself.
Prove it.

Good one [smile]

#37 NickGeorgia   Members   


Posted 03 January 2006 - 02:31 PM

My algorithm just turned alive... help... me.... (nonsensical scribbles)

#38 M2tM   Members   


Posted 03 January 2006 - 02:35 PM

Or a voice recording:

*frantic, breathy whispers*
"Oh my god... it's... not human... in the ventilation shafts... I don't think it sees me yet... .... ..... "
The transmission is ended by a blood-curdling scream.

#39 Jmuulian   Members   


Posted 03 January 2006 - 02:37 PM

I only skimmed over your actual article, as I'm reading this from work, so ignore me if I draw the same conclusions as you have. You would benefit from including an abstract introduction/conclusion clearly stating your answer to the question raised.

When does an algorithm turn alive

I'm going to rephrase your question:

"When does the complexity and/or sophistication of an algorithm become sufficent so as to be considered alive?"

I suggest that this question is misleading. Life, or consciousness (which is what we're really discussing here), should, IMHO, not be considered in terms of a binary distinction; things are not merely conscious or unconscious. Rather, consciousness should be defined as a spectrum: ants at one end, humans at the other. The more complex the algorithm, and the better it performs, the higher on the scale it goes.

As an exercise, rank the following objects in terms of consciousness:

Human, Flower, Ant, Elevator, Screwdriver, Eliza, HAL 9000.

Note first that a Flower is certainly alive, but has little consciousness. The Elevator has a basic Input/Process/Output loop - taking button presses, engaging machinery, etc.

Here's my list, in ascending order of consciousness: Screwdriver, Flower, Elevator, Ant, Eliza, HAL 9000, Human.

In my experience, a lot of people want to do something like this - [] denotes equal precedence: [Elevator, Screwdriver, Eliza, HAL 9000, Flower], [Ant, Human]

The Philosophy of Mind is a debate that has been going on forever, but in my opinion, if you hold that there is no "ghost in the shell", then it makes no sense to start talking about consciousness as a binary distinction.

#40 Sneftel   Senior Moderators   


Posted 03 January 2006 - 02:44 PM

It's an important thing to keep in mind (no pun intended). That the mind can't comprehend itself is something that seems trivially true at first, but can lead one to seriously mistake the nature of comprehension. It's analogous to saying that no country can contain a map of itself, or a description of its political process.
