Philosophy : Artificial Life

Started by
128 comments, last by StrategicAlliance 23 years, 3 months ago
Hey,

Apart from actually creating artificial intelligence there's a whole other part of the topic, specifically philosophy and ethics. Basically, they come down to 'what if' questions, and they're highly theoretical at the moment, but fun and worthwhile to think about.

One of the things I've always wondered is what would happen on the day we actually discover real artificial LIFE. Suppose some brilliant programmer succeeded in building an application that is aware of the fact that it can think and evolve based on a set of parameters. How should we react to this, as humanity? We would no longer be the only conscious species; more than that, we would have created a companion on earth that is capable of awareness and thinking on its own.

Do we have the right to experiment with that form of life? Can we turn off that computer whenever we want to? Can we enhance it the way we want to? Should it see us as God, as an equal, or as inferior?

I would like to hear your comments on this.

******************************
Stefan Baert

On the day we create intelligence and consciousness, mankind becomes God.
On the day we create intelligence and consciousness, mankind becomes obsolete...
******************************
There is an excellent anime called "Ghost in the Shell" that covers this topic.

It can be argued that DNA is nothing more than a program designed to preserve itself. Does that remind you of a kind of computer program? Computer viruses.
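The "program designed to preserve itself" analogy has a classic illustration in programming: a quine, a program whose only output is its own source code. This is just a toy sketch of the idea, not a claim about DNA or real viruses:

```python
# A quine: a program that reproduces its own source code when run,
# loosely analogous to DNA copying itself. The variable name 's' and
# the formatting trick are just one common way to write a quine.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

Running this prints exactly the two lines of code above: the string `s` contains a template of the program, and `{!r}` substitutes `s`'s own `repr()` back into itself.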



Edited by - EvilTech on June 25, 2000 4:19:02 PM
Hi,
I've been thinking quite a lot lately on this subject.
Firstly, I think the only true way to create artificial life is to build it from the smallest units possible - probably neurons - rather than from a larger collection of predefined units; otherwise we're giving our "lifeform" information which it should really have developed itself. You can argue that we're born with instinct; but how did that instinct come to be? It developed on its own; it wasn't predefined. So, to stay as close to nature as possible, we should start with the things we know almost 100% about - neural nets.
Secondly, using these rules (which I believe are the only way to go about creating absolutely real life), how long would actually developing life take? Well, a long time.
And lastly, once we've created life, how do we know we've created it? You can't build an image from neural nets, so you couldn't monitor "outputs" or in fact determine whether or not it's behaving logically; it could be a form of logic we don't understand. In fact, it would be quite hard just to "wire in" inputs.
It could just be by chance that our form of life reacts using the logic that it does. Imagine what would happen if our synthetic environment weren't modelled on a similar environment to our own: if there's no temperature in it, where would the energy come from? Is everything based on our common table of elements? And so on. (Hehe, imagine the programs - "SimLife", "SimElement", ...)
So what I'm basically trying to say is: if we could ever create life, how would we control it, or even know if it really existed?
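For readers unfamiliar with the "smallest units" idea in this post, here is a minimal sketch of the kind of building block meant by "neurons". The weights and thresholds are purely illustrative choices of mine, not from any real model; they happen to wire three neurons into an XOR, a function a single neuron cannot compute:

```python
# One artificial neuron: fire (1) if the weighted sum of inputs
# reaches a threshold, otherwise stay silent (0).
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A tiny "net" is just neurons feeding neurons. These hand-picked
# weights are illustrative: h1 acts AND-like, h2 acts OR-like, and
# the output neuron combines them into XOR ("either, but not both").
def tiny_net(x):
    h1 = neuron(x, [0.6, 0.6], 1.0)          # fires only if both inputs fire
    h2 = neuron(x, [1.2, 1.2], 1.0)          # fires if either input fires
    return neuron([h1, h2], [-2.0, 1.1], 1.0)  # h2 but not h1
```

The point of the post stands out in even this toy: the "knowledge" (XOR) lives in the weights, not in any explicit rule, so inspecting the units tells you little about what the net as a whole is doing.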

p.s. Please post disagreements; it's more interesting than "I agree with you there".


---------------
kieren_j
Designed for Win32
The first thing to remember is that there's a difference between life and consciousness. Viruses are alive. Bacteria are alive. But they're not conscious; they have no intelligence, no nervous system.

It's extremely difficult, and maybe even impossible, to determine when something is conscious. Clams have simple nervous systems. Do they have a sort of feeble consciousness? What about fish? Dogs certainly are conscious; gorillas definitely are. Where does consciousness begin?

Even the definition of "life" is questionable, but it's not as difficult a question. I think if something grows and reproduces, it's alive. But whatever you call life, it's pretty clear no computer can really be alive.

The question is whether a machine can be conscious. If you build an artificial brain, copying the structure of a living human's brain, and somehow copy all of that human's electrical impulses into the artificial one so that it functions the same way, is it conscious? Is it, in effect, a second instance of the human?

If you did a similar procedure copying the brain development of a fetus or baby, have you created an artificial human? And if you tinker with the development process, making it different from a human's, at what point does it stop being conscious?

It's a very complex topic. In my opinion, a mere computer algorithm cannot be alive or conscious, because computers are linear and brains are not. Damage nearly any part of the brain, and as long as only a small part is damaged, it will continue to function. The neurons of the brain interact with each other but don't depend on each other.

~CGameProgrammer( );

Viruses cannot reproduce on their own, whether sexually or asexually; therefore most scientists (most smart ones) would not consider them alive, but rather on the borderline of life. Bacteria, which can reproduce, are alive.

-----------------------------

A wise man once said "A person with half a clue is more dangerous than a person with or without one."
I heard an interesting idea for the concept of consciousness.

Consciousness comes about when a life-form's thought processes contain a world model so complex that it includes the life-form itself.

i.e. the creature's internal view of its environment includes a model of the creature itself.

This is when self-awareness kicks in, which is a major part of anybody's view of consciousness.

Human babies recognise themselves in mirrors; birds in a cage do not. One is self-aware and so probably conscious; the other is seemingly unaware of its own existence and so is probably not conscious.

As to the argument that humans are nothing more than complex machines, I totally agree.

Unfortunately this raises a problem with discovering artificial life.

It can be argued that life-forms are just survival machines built by their genes to enable those genes to propagate themselves (read "The Selfish Gene" by Richard Dawkins). In this view, all we are is a self-persisting chemical reaction (of a highly complex nature). Hence we are just the result of physical changes in the world state.
However, it is patently obvious that this is all computers are.

So if you believe that the definition of life lies beyond the corporeal - that there is an ethereal component - then computers can never fulfil your definition. If you adhere to the view that we are survival machines, then we already have simulations of life that are more complex than the original replicators (the earliest persistent chemical reactions).

So either A-Life already exists for you, or it never will.

Mike
Here's something to think about...

There are times when I think consciousness is being able to choose between right and wrong. What should you do? What did you do? You see, there are many animals and organisms that just go about life performing their function. Has a flower ever decided not to grow, out of its own free will? Not that we would know about it if it did. The flower is still alive; it just has no choice. Consciousness is pretty much being able to make decisions.

We as humans have no predefined function. Some people even feel that they have no reason to live and kill themselves, due to our functionlessness (well, in theory). So to create true artificial intelligence, we would have to create a program with no function... hmmmm. The program would just have to think, and be able to influence outside objects with its choices.

Most programs are created with a function, and go through a series of tests to decide their next course of action. However, a program is told what to test; this amounts to giving it a function, I guess. Humans, and things with consciousness, pretty much test everything. We observe, and learn from observations, experiences and mistakes. So true artificial intelligence, in my opinion, has got to be impossible. Ah well, enjoy.


Raz
"Imagination is the key to Creation"
I have just thought up a neat philosophy on computer AI. It goes like this: "The world will be safe from computers until some idiot gives them ambition". It is a kinda bare philosophy, but do you all agree?
Mess With the Best, go down like the rest, but please visit: http://members.tripod.com/nu_bgameprogramming/
Hmmmm... (rubs his chin in thought)... Ever read Macbeth?


Raz
"Imagination is the key to Creation"
For people who, like me, enjoy the subject of artificial life, there is a great book that covers a large amount of information on it: "Artificial Life" by Steven Levy. I think if you read it, it will clear up a good number of misconceptions on the subject. At least that's my opinion.


Vorador




"This just goes to show that there are only two kinds of people in this world, stupid people... and me."

This topic is closed to new replies.
