Archived

This topic is now archived and is closed to further replies.

Philosophy : Artificial Life


Recommended Posts

Hey,

Apart from actually creating artificial intelligence there's a whole other side to the topic: philosophy and ethics. These mostly come down to 'what if' questions, and they're highly theoretical at the moment, but fun and worthwhile to think about.

One of the things I've always wondered is what would happen on the day we actually create real artificial LIFE. Suppose some brilliant programmer succeeded in building an application that is aware of the fact that it can think, and that evolves based on a set of parameters. How should we react to this, as humanity? We would no longer be the only conscious species; more than that, we would have created a companion on earth that is capable of awareness and independent thought.

Do we have the right to experiment with that form of life? Can we turn off that computer whenever we want to? Can we enhance it any way we want to? Should it see us as God, as an equal, or as inferior?

I would like to hear your comments on this.

******************************
Stefan Baert

On the day we create intelligence and consciousness, mankind becomes God.
On the day we create intelligence and consciousness, mankind becomes obsolete...
******************************

There is an excellent anime called "Ghost in the Shell" that covers this topic.

It can be argued that DNA is nothing more than a program designed to preserve itself. Does that remind you of a kind of computer program? Computer viruses.



Edited by - EvilTech on June 25, 2000 4:19:02 PM

Hi,
I've been thinking quite a lot lately on this subject.
Firstly, I think the only true way to create artificial life is to build it from the smallest units possible (probably neurons) rather than from a larger collection of predefined units; otherwise we're giving our "lifeform" information which it should really have developed itself. You can argue that we're born with instinct, but how did that instinct come to be? It developed on its own; it wasn't predefined. So, to be as faithful as possible to nature, we should start with the things we know almost 100% about: neural nets.
Secondly, using these rules (which I believe are the only way to go about creating absolutely real life), how long would developing life actually take? A long time.
And lastly, once we've created life, how would we know we've created it? You can't build an image from neural nets, so you couldn't monitor "outputs" or in fact determine whether it's behaving logically; it could be a form of logic we don't understand. In fact, it would be quite hard just to wire in inputs.
It could just be chance that our form of life reacts using the logic that it does. Imagine what would happen if our synthetic environment weren't modelled on an environment similar to ours: if there's no temperature in our synthetic environment, where would the energy come from? Is everything based on our periodic table of elements? Etc. (Hehe, imagine the programs: "SimLife", "SimElement", ...)
So what I'm basically trying to say is: if we could ever create life, how would we control it, or even know if it really existed?
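The "build up from neurons" idea can be sketched in a few lines of Python. This is just an illustration, not anything from the thread, and the weights and threshold are arbitrary hand-picked values; a real bottom-up lifeform would need vast numbers of these units with weights that develop on their own:

```python
# A minimal artificial neuron: a weighted sum of inputs passed through
# a step activation. "Bottom-up" artificial life would use huge networks
# of units like this, with weights learned rather than hand-wired.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Hand-wired example: with these (illustrative) weights the neuron
# fires only when both inputs are on, i.e. it computes a logical AND.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # fires: 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # does not fire: 0
```

The point of starting this small is exactly the one made above: nothing about the unit itself encodes "lifeform" knowledge; any behaviour has to emerge from how many units are wired together.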

p.s. Please post disagreements, it's more interesting than "I agree with you there"


---------------
kieren_j
Designed for Win32

The first thing to remember is that there's a difference between life and consciousness. Viruses are alive. Bacteria are alive. But they're not conscious; they have no intelligence, no nervous system.

It's extremely difficult, and maybe even impossible, to determine when something is conscious. Clams have simple nervous systems. Do they have a sort of feeble consciousness? What about fish? Dogs certainly are conscious; gorillas definitely are. Where does consciousness begin?

Even the definition of "life" is questionable, but it's not as difficult a question. I think if something grows and reproduces, it's alive. But whatever you call life, it's pretty clear no computer can really be alive.

The question is whether a machine can be conscious. If you build an artificial brain, copying the structure of a living human's brain, and somehow copy all of that human's electrical impulses into the artificial one, such that it functions the same way, is it conscious? Is it, in effect, a second instance of the human?

If you did a similar procedure copying the brain development of a fetus or baby, have you created an artificial human? And if you tinker with the development process, making it different from a human's, when does it stop being conscious?

It's a very complex topic. In my opinion, a mere computer algorithm cannot be alive or conscious, because computers are linear and brains are not. Damage nearly any part of the brain and, as long as only a small part is damaged, it will continue to function. The neurons of the brain interact with each other but don't rely on each other.
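That graceful-degradation property can be illustrated with a toy Python sketch. This is my own illustration, not from the post: a value is stored redundantly across many noisy units and read out by averaging, so destroying a fraction of the units barely changes the answer, whereas deleting one instruction from a linear program typically breaks it outright:

```python
import random

# Toy model of redundant, brain-like storage: 1000 units each hold a
# noisy copy of the value 10.0, and the "computation" is their average.
random.seed(0)
units = [10.0 + random.gauss(0, 0.5) for _ in range(1000)]

def output(units):
    return sum(units) / len(units)

intact = output(units)
damaged = output(units[100:])       # destroy 10% of the units
print(abs(intact - damaged) < 0.1)  # the computation survives: True
```

No single unit matters here, which is the contrast the post draws with conventional, linear programs.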

~CGameProgrammer( );

Viruses cannot reproduce on their own, whether sexually or asexually; therefore most scientists (the smart ones, at least) would not consider them alive, but rather on the borderline of life. Bacteria, which can reproduce, are alive.

-----------------------------

A wise man once said "A person with half a clue is more dangerous than a person with or without one."

I heard an interesting idea for the concept of consciousness.

Consciousness comes about when a life-form's thought processes contain a world model so complex that it includes the life-form itself.

i.e. the creature's internal view of its environment includes a model of the creature itself.

This is when self-awareness kicks in, which is a major part of anybody's view of consciousness.

Human babies recognise themselves in mirrors; birds in a cage do not. One is self-aware and so probably conscious; the other is seemingly unaware of its own existence and so is probably not conscious.

As to the argument that humans are nothing more than complex machines, I totally agree.

Unfortunately this raises a problem with discovering artificial life.

It can be argued that life-forms are just survival machines built by their genes to enable them to propagate their form (read "The Selfish Gene" by Richard Dawkins). In this view it is suggested that all we are is a self-persisting chemical reaction (of a highly complex nature). Hence we are just a result of physical changes in the world state.
However, it is patently obvious that this is all computers are.

So if you believe that the definition of life lies beyond the corporeal, that there is an ethereal component, then computers can never fulfil your definition. If you adhere to the view that we are survival machines, then we already have simulations of life that are more complex than the original replicators (the earliest persistent chemical reactions).

So either A-Life already exists for you, or it never will.

Mike

Here's something to think about...

There are times when I think consciousness is being able to choose between right and wrong. What should you do? What did you do? You see, there are many animals and organisms that just go about life performing their function. Has a flower ever decided not to grow, out of its own free will? Not that we would know about it if it did happen, anyhow. The flower is still alive; it just has no choice.

Consciousness is pretty much being able to make decisions. We as humans have no predefined function. Some people even feel that they have no reason to live, and kill themselves, due to our functionlessness (well, in theory). So, to create true artificial intelligence, we would have to create a program with no function... hmmm. The program would just have to think, and be able to influence outside objects with its choices.

Most programs are created with a function, and go through a flow of tests to decide their following procedure of events. However, a program is told what to test; that is giving it a function, I guess. Humans, and things with consciousness, pretty much test everything. We observe, and learn from observations, experiences and mistakes. So true artificial intelligence, in my opinion, has got to be impossible. Ah well, enjoy.


Raz

I have just thought up a neat philosophy on computer AI. It goes like this: "The world will be safe from computers until some idiot gives them ambition." It's a kind of bare philosophy, but do you all agree?

Hmmmm...(Rubs his chin in thought)...Ever read Macbeth?


Raz

For people who, like me, enjoy the subject of artificial life, there is a great book that covers a large amount of information on it: "Artificial Life" by Steven Levy. I think if you read it, it will clear up a good number of misconceptions on the subject. At least that's my opinion.


Vorador




"This just goes to show that there are only two kinds of people in this world, stupid people... and me."

Wow... I didn't know they wrote books on this stuff. Unless of course it's another programming book.





Raz

Consciousness is being able to choose between right and wrong?

There is very rarely such a thing as right and wrong in any situation. I'm not just talking morally, I'm talking about standard everyday decisions. Being a tiger, which antelope should I chase? There's no 100% best answer; there's a weighing of pros and cons to make a decision.
Should I kill an intruder to protect myself? Again, pros and cons with every decision, moral or not.
Every creature makes decisions every day based on input from its environment, from the smallest bacteria, to plants, to insects, to mammals and humankind. We are not special in this respect.
I suppose you believe in Good and Evil too?

As to humans having no predefined function: we do. It's called survive long enough to procreate, procreate, protect your young, and die. Same as every other living organism on this planet (although some don't protect their young (trees, for instance), and I'm sure others don't have a built-in self-destruct at the end of their useful lives).
And you can also argue determinism vs. free will until the cows come home. I stand by determinism; it's the scientist in me, and it makes perfect logical sense. I believe we are just ridiculously complex deterministic automatons. Think of Occam's razor: why add a soul or a separate conscious entity into the equation if there is no need for one, because you already have the answer (the question being: how do we function)?

I agree about the ambition thing though :-)

Mike

When was the last time you went to the movies? Watched television? Played a video game, perhaps?

Would you have ceased to survive if you had not done that? You don't see bacteria playing in the dirt... (Not disagreeing with you, just interested in anyone's responses, and trying to strike up some good arguments.)

I think CGameProgrammer had it down, although I'd go a little farther: life and consciousness have nothing to do with one another. "Consciousness" is the state of being aware of one's own mental processes and physical (or virtual) existence, whereas "life" as such is the simple capability to implement a directed pattern of seeking out and utilizing the resources necessary to propagate one's own patterned design.

If the form of virtual consciousness described were provably to evolve, then I don't think there's any great quandary: if you support the termination of sentient beings, then you can, without hypocrisy, say that terminating the consciousness's function is good and proper. Otherwise, the decision to kill it is hypocritical, the decision to harness its power is slavery, and the decision to experiment on it constitutes a form of torture. If it's provably sentient, then it's able to react to the stimuli we give it, and hence we can use the interface which we have presumably designed to teach it the things we need it to know in order to communicate with us.

For me, yes, it really is that simple.

As to determinism: with the knowledge we currently have of the brain's function, there's absolutely no way to say one way or another about the automaton thing. It's unlikely to be a perfect metaphor, if only because of the cumulative effect of quantum action on an immensely complex system. And even if it were, there'd be no practical difference between us being machines and us being, um, non-machines, simply because any sufficiently complex mathematical system (and "sufficiently" isn't very extensive) diverges exponentially from any predictable pattern, due to the properties of nonlinear dynamics.

Just try, for example, to write a solid physics engine without using any of the techniques of nonlinear dynamics to make your life easy.



mikey

Okay. I don't think we're any more or less deterministic than any other chemical reaction; i.e., if a chemical reaction is governed by QM then so are we, and it doesn't make us more alive.
More complex maybe, but not more alive.

Also, littlemikey, everything you said about sentient machines also applies to, for instance, the monkeys, rabbits, cats and dogs sitting in animal experimentation labs right now.
The point being, we are less responsive to the cries of any being the more different it is from us. And computers are well down the line beyond household pets. Hell, if we can happily torture chimps, who are our closest possible evolutionary cousins, then we should have no problem with computers.

Mike

MikeD,

Maybe you'll think I see this in a way that's too clean and "Star Trek"-philosophical, but chimps shouldn't be tortured either.
Creating sentient artificial life might actually be just what humanity needs to understand more about who we are and what our place in this world is.
Should we ever consider an artificial form of intelligence sentient, then I think we should treat it with as much respect as any other lifeform. Sadly, that would indeed mean that the people who use lab rats might also see no problem at all in "manipulating" a sentient computer in any form they see fit.


******************************
Stefan Baert

On the day we create intelligence and consciousness, mankind becomes God.
On the day we create intelligence and consciousness, mankind becomes obsolete...
******************************

Now that swings around to morality.

There will always be those willing to slaughter another tribe to have extra land to feed their cattle or kill a member of another gang for racketeering on their turf.
And there will always be those with respect for life and for all other beings on this planet.

Unfortunately that's got little to do with whether a creature is sentient or not... but I take your point.

Mike

Of course it's about morality; a sociopath doesn't even care whether you're human.

Also, didn't I say that the decision was hypocritical? More importantly, didn't I emphasize the importance of proving sentience? The only creature I am aware of that has made a good case for sentience is Koko, a great ape who learned sign language and even cried when she was told that her pet cat had died; she had no direct contact with the animal in the incident. I don't think you can pose a good argument against the possibility of sentience in "lower" animals. But sentience is a very long path, and it's not one walked easily. All the arguments I mentioned only apply to the animals in labs if one believes in the value of all life over that of directed life.

It's a simple concept: we're a species with poor physical characteristics, low reproductive and maturation rates, and little or nothing in the way of a natural defence. Sentience and its sister, intelligence, are the keys mankind uses to go forward. As soon as we have virtual labs capable of simulating life to even a reasonable approximation of the real thing, sure, there's a valid argument there. But our talents lie in our ability to move forward on the strength of reasoning and realization, creativity and analogy. If by experimenting on animals we take away new knowledge which benefits the whole race, or even the whole planet, then we should at least consider allowing those with the stomach for it to proceed.

I have to say at this point that, after further consideration, I find I disagree with your view of the mechanistic nature of man. You seem to think that the human mind can be contained in a deterministic system of some kind, which it cannot. QM does in fact govern the operation of chemical systems, as it governs the universe at some level of coarseness. But the human brain is both emergent and quantum mechanically active. You can't capture its functions, because they are not completely determined at the atomic level! Of course, one might say, the same applies to any sufficiently large system. The point is that in nearly any macroscopic system the effect should average itself out nearly 100% of the time. In the brain, by contrast, it is the action of QM which is of prime importance. The cell structures in the brain tend to take on behaviours explicable only in the context of QM, and these structures are the encoding of the mind itself!

With that in mind, sentience changes the picture: we're not simply drawing on the experience of a highly complex system of reactions. We're drawing on the experience of a subject which knows that it's being drawn upon. That subject does have the ability to make an informed decision if given the chance to do so. The subjects in the lab, on the other hand, do not to any appreciable degree; their reactions are severely limited. At the outer limits of morality are the higher simians and the cetaceans on which experiments are performed. But it is very difficult to make the distinction. And some people will always try to hold the line against the forces of moderation; so they should, or else we may find ourselves too limited by our own good will. Somebody else will hold the other side of the line. It's their prerogative, and prerogative is what makes intelligent life different.

If we find an AI with prerogative, then to my mind we've got to let it follow it. Of course, we limit the permissions of children, and such might be necessary for a young AI. The issue of what to do with it is separate from what is and is not right to do to it.

Wow, that was cathartic. I need some cheese doodles.

mikey

_______________

A quote by littlemikey...

QM does in fact govern the operation of chemical systems, as it governs the universe at some level of coarseness. But the human brain is both emergent and quantum mechanically active. You can't capture its functions, because they are not completely determined at the atomic level!
_______________

Erm... please prove this fact, or at least give me a decent URL so I can check it out for myself.

Personally I have trouble with the whole QM concept. Admittedly I've only read a couple of books by John Gribbin ("In Search of Schrödinger's Cat" and "Schrödinger's Kittens"), and nothing in them has suggested to me that the experiments that prove quantum mechanics are anything but experimental error. Others have tried to explain why uncertainty theory works, but not enough to convince me. Okay, I'm up against some of the greatest minds in physics with that, but what the hell. I'll believe it when I believe it.

Beyond that, the idea that QM plays such a major part in the brain's function but not in the function of, say, pea soup seems like another made-to-blow-over-in-six-months concept some grad student wrote for their PhD thesis. Like I said, URL please.

As to the fact that cats and dogs aren't sentient: well, fair enough, they don't seem to be self-aware, unlike chimps and dolphins (and probably a fair few others). But I still don't see that as a point as to when and under what circumstances we should suddenly be 'nice' to AI. To paraphrase your good self:
"If by experimenting on AI we find ourselves taking new knowledge out which benefits the whole race or even the whole planet, then we should at least consider allowing those with the stomach to do it to proceed."

Hell, I don't even need that justification. People, sentient human beings that is, have been f**king each other over for millennia over a piece of land or even a piece of bread. There is no built-in reason that we have to justify ourselves for anything. In fact, I'd say the only justification that really matters to anyone is to themselves. And if they have to find that justification by getting someone else's permission to act the way they want, then that's their problem.

Mike

STAGE1: Life->The ability of a form to propagate (make more of itself).

STAGE2: Intelligence->The ability of a form to process external information to increase the success of propagation (LIFE).

STAGE3: Consciousness->The ability of a form to realize the above two principles hold true for itself.

STAGE4: CollectiveConsciousness->The ability of 2+ forms to share their consciousness and intelligence.

Mankind is currently evolving from stage 3 to stage 4. We can share through written text, verbal speech, art (VIDEO GAMES!), etc. When the evolution is complete we will share a species-wide consciousness.

I don't understand why everyone wants to create AI. We need to start at the beginning and create AL (Artificial Life): machines that can make more of themselves. My hat goes off to all the hackers who are boldly taking this step by creating the first of these. Yes, I believe the virus will lead to the first real AI: simple magnetic signatures that can reproduce. The only problem is that their environment is limited to hard drives and memory.

We need to create a machine that exists in the same physical surroundings that we occupy, a machine that can go forth and multiply, a machine that can evolve with its surroundings. This will eventually lead to intelligence, and then consciousness. By that time our species will be gone, or living in space where we can watch our creation fight among itself for territory, electricity, gas, etc. Then we can send down some "prophets" and teach the creatures a better way.
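STAGE1 (propagation with imperfect copying) can be sketched as a toy Python simulation. This is my own illustration, not from the post: each "organism" is just a number, copying occasionally mutates it, and the population doubles each generation; there is no intelligence or consciousness here, only propagation:

```python
import random

# Toy STAGE1 "life": forms that make imperfect copies of themselves.
random.seed(1)

def reproduce(organism, mutation_rate=0.1):
    if random.random() < mutation_rate:
        return organism + random.choice([-1, 1])  # mutated copy
    return organism                               # faithful copy

# Each generation, every organism produces one offspring.
population = [0]
for generation in range(5):
    population = [child for parent in population
                  for child in (parent, reproduce(parent))]

print(len(population))  # doubles each generation: 32
```

Evolution toward the later stages would additionally need selection pressure (some variants propagating better than others), which this sketch deliberately omits.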

MikeD,

I realize that I'm getting VERY "carried away" here, but your final paragraph scares me a little. (This is not a flame in your direction; consider it a general thought induced by what you wrote there.)

Of course mankind has been murdering and scheming all through history, and that is not something we should be proud of. In fact, it is something mankind should STOP doing if we ever get serious about creating real artificial life. Because at that moment we CREATE life based on how we perceive reality, this form of life will have many distinct character traits of humanity (but also superior math power and such, so if hostile it could become dangerous; I'm not trying to sketch a vicious army of robots here, just the fact that artificial life would be similar to humans in its perception of reality, but combined with raw processing power). If mankind has no problem "manipulating" other forms of life for its own gain, how would we be able to teach our creation not to do the same? This has always been one of the stranger points in Asimov's Laws for robots.
They state that a robot should obey a human, and not harm one in any way, either actively or passively. If we only had artificial intelligence, this could be implemented; but who are we to give orders to a sentient artificial intelligence?
The rules no longer apply. The only knowledge the sentient artificial lifeform has to build its own behavioural pattern on is that of its creator, and if humanity manipulates "lower" species for its own gain and the artificial lifeform considers itself superior to humanity (consider my signature), the conclusion is obvious...

Personally I think that the creation of artificial life would be a serious step in the evolution of mankind, not just scientifically but for the entire view we have of ourselves and our environment. At this point, I fear we are just not ready to make that discovery.

Note: the thoughts above are far from complete. They are a collection of opinions that are somehow linked to each other, but I might actually need a few hours to explain why and how exactly I feel these things are connected. Feel free to differ...

Note 2: I appreciate the many replies to this topic; it's always refreshing to see so many interesting remarks on this subject!


******************************
Stefan Baert

On the day we create intelligence and consciousness, mankind becomes God.
On the day we create intelligence and consciousness, mankind becomes obsolete...
******************************

I was kind of heading in the direction of Devil's Advocate here. Personally I would like mankind to ascend into a utopian world without the emphasis being on personal gain. To get rid of war you only have to get rid of selfishness; however, I'm quite a realist, and I believe that basic human nature has to be fairly strongly manipulated throughout childhood for a person to be put in a non-selfish mind-frame.
Maybe I should build that arcology I've been meaning to and start from there :-)

Think about human selfishness, and remember that one of the teams undertaking the genome project intends to patent human genes.

We are a long way off....

Mike

I know that this is more of a philosophical debate and such, but I'd like to throw a few ideas of mine out there =)

First of all, I think that it is impossible to make a computer become alive (I'm using generic terms; you know what I mean). How can self-consciousness or intelligence arise from a predictable pattern? Let me try to explain it better. The intelligent computer would follow a certain path of if statements, and would take the same route under the same circumstances... *always*. You could put some random numbers into the math, but the problem is that there is no such thing as a truly random number on a computer. Albeit very complex, you can predict what a random number generated by the computer will be. Therefore your computer life-form follows a set path, and is predictable. (I'm assuming that most people feel that humans have free will, or whatever, and that we aren't just an elaborate program.) It's kind of like absolute zero: we can get so damn close to it that we can effectively reach it, but it's impossible to get that extra 0.0001 kelvin or whatever to make it 0.
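The point about computer "randomness" being predictable is easy to demonstrate. In this Python sketch (mine, not from the post), a pseudo-random generator is re-seeded with the same value and produces exactly the same "random" sequence, because the sequence is a deterministic function of the seed:

```python
import random

# A pseudo-random generator is deterministic given its seed: re-seed
# it identically and the "random" sequence repeats exactly.
random.seed(42)
first_run = [random.randint(0, 99) for _ in range(5)]

random.seed(42)
second_run = [random.randint(0, 99) for _ in range(5)]

print(first_run == second_run)  # identical sequences: True
```

So sprinkling a program with calls to a generator like this adds no true unpredictability; anyone who knows the seed and the algorithm can reproduce every "choice" the program will ever make.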

Second of all, I think that if it were somehow possible to create true computer intelligence, we would need to use chaos theory (ever see the movie Pi? ;D). I think that the more complex the "thinking" program is, the worse it would be. By complex, I mean the more feelings, or emotions, or sensors the program has. With chaos theory there is an almost infinite amount of information that can be derived from a simple function. I'm still relatively new to chaos, but I'm not one of those people that just go fractals=chaos; I actually know the math behind it...
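The "almost infinite information from a simple function" idea can be illustrated with the classic logistic map. This Python sketch is my own illustration, not from the post: two trajectories of x -> 4x(1-x) start one part in a billion apart, stay microscopically close at first, then diverge until they are effectively unrelated:

```python
# The logistic map at r=4 is a one-line chaotic function: nearby
# starting points diverge exponentially (sensitive dependence on
# initial conditions).
def trajectory(x, r=4.0, steps=60):
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.300000000)
b = trajectory(0.300000001)

gap_early = abs(a[4] - b[4])  # after 5 steps: still microscopic
gap_late = max(abs(p - q) for p, q in zip(a[30:], b[30:]))
print(gap_early < 1e-6, gap_late > 0.1)
```

This is also why even a fully deterministic program can be unpredictable in practice: predicting it would require knowing its state to impossible precision.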

Wow, that was quite a rant; I'll get off my soapbox now =). The above ideas aren't quite as refined as I would like, but I just want to know if anyone out there is thinking the same things, and maybe get a little feedback on it. Thanks guys.

Guest Anonymous Poster
I think there's a lot of work to do yet, but if I ever see that day, I hope someone has the key to shut down the machines at will and reprogram them.

Quote from Krinkle:
First of all, I think that it is impossible to make a computer become alive.


Blimey, that's a statement and a half. How on earth can you prove that (or indeed disprove it)? I have no proof that anyone is alive except for me; in fact, no one has any proof that anyone else is alive.

How, then, do I discern between what is alive and what isn't? By observation: I think other people are alive because they talk, they walk, they breathe, etc. I know that some people are dead because they don't do those things. Equally, I believe a stone is not alive because it doesn't do those things. But how can you prove that? How can you prove that a stone doesn't have a mind or a consciousness?

My point is this: imagine a computer-generated human. If it behaves like a human, and there is no perceivable difference between it and a real human, then surely for all intents and purposes it IS a human, and therefore alive! (OK, we're talking about the living variety of humans here, not the deceased.)

I don't want to say any more than that, because this message is already too long.



=============================
Eric Reynolds
msn messenger: ehremo@hotmail.com
=============================

