
Are you a Cosmist or a Terran?


I would like to find out if you are a Cosmist or a Terran, based on this article I found: http://www.cs.usu.edu/~degaris/artilectwar2.html

I think the article raises some very interesting questions, not about whether we CAN create AI...but rather whether we SHOULD create AI...and what the consequences are of setting such creatures loose. I am not sure how ready I am to hand over some things to true AI beings. Would you want them operating on you? Would they "care" about your fate like a real doctor (hopefully) would?

How would AI beings see us? As friends? As enemies? As bugs to be squashed (in other words, indifferent to whether we live or die)? Can we truly call something artificially intelligent if we don't trust it enough to set it free and let it explore and do what it wants to do? Can we say: be smart, be logical...but only as long as you do what I want?

Presumably, if we can create intelligence, then we can teach it social and personal ethics and morality and permit it to explore those. Sure, this could lead to some nasty outcomes, but they wouldn't be much worse than the sorts of severe perturbations from the 'common good' that we see around the planet every day: genocide in the Balkans or Africa, inhumane treatment of our fellow humans, war, politics, etc., etc.

Is it likely that robots will take over the world and enslave or annihilate humans, a la The Matrix and Terminator? I doubt it.

Timkin

The essayist himself concedes that he is not 100% Cosmist and that he grapples with the ethical implications of his work, so the question may not be an either/or proposition. I dislike that the author felt he had to make up a new word, "gigadeath", to describe human extinction. This suggests to me a profound lack of human decency in his work - despite his claims to the contrary. Technical language that obscures the reality of what it pretends to discuss lends itself well to the dissolution of ordinary decency and thus, in this case, to the horrors it supposedly warns against.

Anyway, to answer the question, I'm a Terran and have been since I first read one of Moravec's books 15 years ago. Just the same, de Garis's essay is interesting and worth the read.

Here's the URL, linkified:

"The Artilect War", Second Version, 2001


Cosmist, I.

I can't help but see the creation of machine intelligence, as an augment for human intelligence, being the next evolutionary step for humans. If the technology becomes available, I would be quite happy to become a cyborg, and/or transition completely to machine when my human body is too old and decrepit to be fun.

On further reflection, I can imagine that Terrans would be far more in danger from cyborgs than from pure machine intelligences. Machine intelligence, according to my intuition, will most likely be extremely abstract and not significantly concerned with humans except where they affect its current task (e.g. an intelligent traffic routing system would need statistical information about the travel plans of humans with respect to time, but it wouldn't really care what the humans had for lunch). In comparison, cyborg intelligences will be human, with millions of years of kill-or-be-killed evolution shaping their thoughts, but with the addition of a tremendous amount of intelligence. As such, while the aforementioned traffic routing computer would not be seen by Terrans as a threat, the cyborg would be, and furthermore would be equipped to respond in kind if Terrans did attempt to destroy it.
Quote:
Let me try to express this Terran revulsion against the cyborgs in an even more graphic way that may have a stronger appeal to women than to men. Take the case of a young mother who has just given birth. She decides to convert her baby into a cyborg, by adding the "grain of sugar" to her baby's brain, thus transforming her baby into a human faced artilect. Her "baby" will now spend only about a trillionth of its mental capacity thinking human thoughts, and the rest of its brain capacity (i.e. 99.9999999999% of it) will be used for thinking artilect thoughts (whatever they are). In effect, the mother has "killed" her baby because it is no longer human. It is an "artilect in human disguise" and totally alien to her.
I think here the author is misunderstanding the nature of mental augments, as I envision them. The purpose of such an augment is not to be a 'second brain' taking over the host, but to add processing power to the host's own mental system. Hence, while 99.9999999999% of the baby's brain power will indeed come from the implant, 100% of its (augmented) brain capacity will be thinking 'human thoughts', but astronomically more thoroughly and at a greater rate than any merely human baby.

Cosmist.

And I think he doesn't dismiss the IA people (intelligence amplification, the "cyborg" faction he counts under the Cosmists), but just thinks that if most of your sense data comes from sources other than your body, you really aren't that human anymore. Your "self" is elsewhere (if you even have one single self).

I really would like to know if anyone here is Terran.

Quote:
Original post by C-Junkie
I really would like to know if anyone here is Terran.


I said I was. I'm still reading the entire essay. I wouldn't advocate killing off Cosmists, though; I find the idea that Terrans would go so far as to attempt to wipe out Cosmists rather far-fetched. Nuclear weapons could lead to human extinction as well, and there hasn't been a movement to exterminate nuclear physicists.

Quote:
Original post by LessBread
I dislike that the author felt he had to make up a new word, "gigadeath", to describe human extinction. This suggests to me a profound lack of human decency in his work - despite his claims to the contrary.
It's a simple technique. Same with the word "singularity." All it is is a nifty way of observing the influence of your work. If you see people talking about "gigadeath" you know you're the one that started this discussion, rather than some other source.
Quote:
Anyway, to answer the question, I'm a Terran and have been since I first read one of Moravec's books 15 years ago. Just the same, de Garis's essay is interesting and worth the read.
I'm going to have to figure out who Moravec is.

Quote:
Original post by C-Junkie
It's a simple technique. Same with the word "singularity." All it is is a nifty way of observing the influence of your work. If you see people talking about "gigadeath" you know you're the one that started this discussion, rather than some other source.

I disagree. I mean, I agree that the technique is simple, and also that de Garis distorts the word "singularity" - as well as a few other "memes" - but I don't think he does this as a tracking mechanism so much as to attract attention to his quest through the use of buzzwords. In fairness, I had just finished reading an article about Orwell, Orwell for Christians, that speaks to using technical jargon to obfuscate violence. Given the 'religiosity' of de Garis's book, that essay is surprisingly apropos.

Quote:
Original post by C-Junkie
I'm going to have to figure out who Moravec is.


Moravec is a Cosmist too. Like de Garis, he gets rather giddy with the possibilities of AI, and like de Garis he contemplates human extinction. In that book, Moravec postulates that entities made in the image of human beings are bound to become competitors for the ecological niche currently occupied by human beings. It's been a while since I read it, but I recall that he is a better writer than de Garis. I'm still reading de Garis's book; I'm at the chapter where he lays out the Terran arguments.

It's an interesting topic, the ethics of AI. However, I have to put it in these terms (in regards to the "doctor AI"): what does it matter *who* dies, as long as fewer people die than when they are treated by humans?

Actually, current AI diagnosis systems are 90% accurate, versus human rates of 75% accuracy, and yet people would be shocked and appalled to be diagnosed by a "robot". So what? What does emotion have to do with your health and well-being? Doctoring is just another profession.
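Taking those figures at face value, the case is just arithmetic (a toy calculation using the accuracy numbers above; real diagnostic accuracy varies by task):

```python
patients = 1000

ai_errors = round(patients * (1 - 0.90))      # 100 misdiagnoses
human_errors = round(patients * (1 - 0.75))   # 250 misdiagnoses

print(human_errors - ai_errors)               # 150 fewer with the AI
```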

I am for AI.

I would say that advanced AI programs that can affect things in the real world would either be limited in what they can think about (e.g. a smart traffic routing system would be able to think about what the traffic will be like in ten minutes, but not about what is inside the vehicles), or else they would be restricted from harming humans or humanity at the level of their logical output.

Currently I think we would end up with specialised (perhaps conscious, maybe not) programs running things that don't care about humanity: they have their job, they do it. That would most likely be hard-coded, so that the program won't care enough about taking over the world to actually start planning it.
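A rough sketch of what I mean by hard-coding (every name here is invented purely for illustration):

```python
# The agent's entire action vocabulary. Anything outside its job
# (including anything that could harm people) is simply not
# representable as an output.
ALLOWED_ACTIONS = {"reroute_traffic", "adjust_signal_timing", "report_congestion"}

def execute(action: str) -> None:
    """Gate every proposed action through the fixed whitelist."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action!r} is outside this agent's job")
    print(f"executing {action}")

execute("reroute_traffic")    # fine: part of the job
# execute("take_over_world")  # would raise: not in the vocabulary
```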

From,
Nice coder

We currently have over six billion people on Earth - and we're trashing it. So before we try to make artificial intelligence, maybe we should work on some human intelligence.

Perhaps our cars should be smart enough to know why and where we plan on driving them and then choose to let us drive. Our printers should understand what we're printing before letting us print.

Quote:
Original post by frankgoertzen
We currently have over six billion people on Earth - and we're trashing it. So before we try to make artificial intelligence, maybe we should work on some human intelligence.

Perhaps our cars should be smart enough to know why and where we plan on driving them and then choose to let us drive. Our printers should understand what we're printing before letting us print.


Perhaps the AI could help us come up with better ways to use our resources?

Imagine somebody with virtually limitless resources and time.

A "smart" computer could work on problems 24 hours a day...with access to vast libraries of information on the internet, etc. It could analyze huge quantities of information and form associations much faster than we could.

Quote:
Original post by capn_midnight
It's an interesting topic, the ethics of AI. However, I have to put it in these terms (in regards to the "doctor AI"): what does it matter *who* dies, as long as fewer people die than when they are treated by humans?


Fewer people dying is good! :)

Quote:

Actually, current AI diagnosis systems are 90% accurate, versus human rates of 75% accuracy, and yet people would be shocked and appalled to be diagnosed by a "robot". So what? What does emotion have to do with your health and well-being? Doctoring is just another profession.


I think for humans...emotion has everything to do with health and well-being. Think about bedside manner. Think about nursing. Would you rather have an extremely competent nurse with no bedside manner...or an extremely competent nurse who can say a few friendly words of encouragement during the day and be believable?

Perhaps my Doctor example was not the best one to give...but I think that coldly calculating logic is not always the way to go.

For example...if I were to drive off the road into a lake with my family...and AI robots came to the scene (oops, I just realized I am borrowing from I, Robot)...I would want my children to be saved FIRST...then my wife...then me. Logically this may make no sense. Wouldn't it make more sense to save my wife first, then me, then our children? My wife could always have more children...and she wouldn't need me to do it...she could re-marry, for example. So a robot thinking purely about preserving human life, present and future, might save my wife, then perhaps me, then our children. But that is not the course of action my wife or I would take if our children were on the line. An emotional response? Yes. But I think most parents would agree it was the only choice to make.

Actually, you can simulate emotional response (up to a point) with AI. Cold, calculating logic combined with simulated emotional response would make for a nice robot: it could anticipate what you would feel in response to things, so it would be able to minimise harm while still computing things logically (and quickly).

So that robot would conduct a search (of all possible decisions), and in a few seconds it would come up with the plan that does the least possible harm to people.

That robot would save your children first, because they are more valuable to society when they grow up (they will give more help to society than you will).

It will probably also take the emotional response you will have into account, in order to minimise harm.
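A toy version of that kind of search (the risk and harm numbers are invented purely for illustration):

```python
from itertools import permutations

# Estimated probability each person dies if rescued 1st, 2nd or 3rd
# (made-up numbers; a real system would estimate these from sensors).
death_risk = {
    "child_1": [0.05, 0.30, 0.60],
    "child_2": [0.05, 0.30, 0.60],
    "father":  [0.10, 0.25, 0.50],
}

# Made-up "value to society" weights, per the argument above.
harm_weight = {"child_1": 1.0, "child_2": 1.0, "father": 0.8}

def expected_harm(order):
    """Sum each person's death risk in their rescue slot, weighted."""
    return sum(death_risk[p][i] * harm_weight[p] for i, p in enumerate(order))

# Exhaustive search over every rescue order; pick the least harmful.
best = min(permutations(death_risk), key=expected_harm)
print(best)  # with these numbers, the children are rescued first
```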

From,
Nice coder

I think the author is way off base in his thoughts.
1. He is reading too much into theoretical sciences and assuming they will work.
2. Artificial intelligence systems are built to be used as TOOLS.
3. Reproducing systems need to consume resources/materials to reproduce.
4. We humans have something very few computing systems have: the ability to change the environment around us (via hands). How would an asteroid-shaped computing system go out and, say, change something? It's only capable of thought. Unless it is able to communicate with or control mindless robots, I don't see it happening. In which case, we humans are the ultimate designers of these systems, and it's unlikely that we'd even program in concepts other than the job we require the tool to perform. A robot that only knows how to mine precious metals won't suddenly figure out how the universe works and pick up a gun and shoot people.
5. Machines deteriorate over time. Since they are such complicated beasts which require 100% correctness to work, they have a finite lifespan. My car is on its last legs and it's got 280,000 miles on it. Hardware/software glitches usually bring a whole system down. Perhaps the time required to build a huge asteroid-sized machine would equal the lifespans of the initial components used, which would make construction and maintenance a never-ending process.
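A quick back-of-the-envelope calculation on point 5 (the numbers are invented, but the shape of the result isn't):

```python
# If a machine only works when every component works, its overall
# reliability is the product of the component reliabilities.
per_component_reliability = 0.999999   # each part works 99.9999% of the time
n_components = 10_000_000              # an "asteroid-sized" machine

overall = per_component_reliability ** n_components
print(f"P(whole machine works) = {overall:.6f}")  # ~0.000045
```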

I think the more alarming trend is happening in the defense departments, though. I foresee battlefields 20-50 years from now being taken to levels of automation which only rich, high-tech nations can afford. Tanks will drive themselves and sense, engage and assess enemy targets. War isn't fair, but what measures of backup/precaution will we take to ensure there aren't friendly-fire incidents, or that our machines don't turn on us? Today's modern armies will definitely be annihilated by these future unmanned machines. Controlling them will be like playing a computer game. (Whoa, scary: deciding the fates of others' lives in computer-game style. "Hey, look, it's super-realistic GTA! Let's go run people over with my tank!" "You imbecile! That's real!")
Anyway, it's already too late to attempt to revert battlefield automation, since DARPA, Raytheon, McDonnell Douglas, Boeing and other defense contractors are undoubtedly already secretly developing and competing to be the first to develop automated AI precision-guided weapons of destruction. The USA is already using armed drones to watch the battlefield and seek battlefield targets. All it takes is a message popup and someone to confirm engagement.
Someday there will be sniper bots out there that have many ways to sense a person: body heat, electromagnetic, optical, IR, etc. If they're roaming around a city battlefield, hiding behind a wall won't make a person safe. It'll be like playing Counter-Strike against a bunch of networked bots with wall hacks and headshot scripts, hunting.
I think it's seriously possible. We have all the resources necessary to come up with this stuff.

In short, I think we only need to worry about whether the people using the tools we create will have the highest levels of ethical reasoning, and whether that reasoning is also programmatically hard-coded into the machines we use.
Einstein gave the world nukes, which were inevitable anyway. Maybe someday, when war becomes dangerous and devastating enough, people will choose peaceful resolutions instead of mutually assured destruction on a grand scale. I'm more optimistic than the author who wrote that long article/book.

Quote:
Original post by slayemin
I think the author is way off base in his thoughts. [...]

I am not sure how the above statements relate to what Dr. de Garis is saying:

QUESTION 6. "Why Give Them Razor Blades?"

It seems common sense not to give razor blades to babies, because they will only harm themselves. Babies don't have the knowledge to realize that razor blades are dangerous, nor the dexterity to be able to handle them carefully. A similar argument holds in many countries concerning the inadvisability of permitting private citizens to have guns. Giving such permission would only create an American scale gun murder rate, with most of these gun murders occurring amongst family members in moments of murderous rage that are quickly regretted. Some of my critics seem to think that a similar logic ought to apply to the artilects. If we want them to be harmless to human beings, we don't give them access or control over weapons.

Dear Professor de Garis

I find no reason to fear machines. If you don't want machines to do something, don't give them the ability. Machines can't fire off nuclear warheads unless you put them in a position that enables them to. Similarly, a robot won't turn on its creators and kill them unless you give it that ability. The way I see things, it would be pure folly to create machines that can think on their own, put them in a room and give them all the ability to fire missiles. If you can avoid doing something stupid like that you have nothing to fear from machines. For good examples of what not to do, watch the movie "Wargames", or since you were in Japan, try "Ghost in the Shell". I have been writing artificial intelligence software for years so I feel my opinions have at least some weight to them.

REPLY:

The obvious flaw in this argument is that this critic is not giving enough intelligence to his artilects. An artilect with at least human level intelligence and sensorial access to some of what humans have access to in the world, i.e. sight, hearing, etc, would probably be capable of bribing its way to control of weapons if it really wanted to. For example, a really smart artilect, with access to the world's databases, thinking at least a million times faster than the human brain, might be able to discover things of enormous value to humanity. For example, it might discover how to run a global economy without major business cycles, or how to cure cancer, or how to derive a "Theory of Everything (ToE)" in physics, etc. It could then use this knowledge as an ace card to bargain with its human "masters" for access to machines that the artilect wants.

Of course, one could give the artilect very little sensorial access to the world, but then why build the artilect in the first place, if it is not to be useful? A smart artilect could probably use its intelligence to manipulate people towards its own ends by discovering things based purely on its initial limited world access. An advanced artilect would probably be a super Sherlock Holmes, and soon deduce the way the world is. It could deduce that it could control weapons against humans if it really wanted to. Getting access to the weapons would probably mean first persuading human beings to provide that access, through bribes, threats, inspiration, etc - whatever is necessary.

I would like to see someone's rebuttal to this:



QUESTION 5. "Could we apply "Asimov's 3 laws of robotics" to artilects?"

Asimov was one of the most famous science fiction writers who ever lived. His word "robotics" is known over most of the planet. Asimov wrote about many scientific and science fiction topics, including how human-level intelligent robots might interact with human beings. He gave the "positronic" brains of his robots a programming that forced them to behave well towards their human masters. The robots were not allowed to harm human beings. Several people have suggested to me that artilects be designed in a similar way, so that it would be impossible for them to harm human beings. The following critic sent me a very brief, but to the point, recommendation on this topic.

COMMENT:

Dear Professor de Garis,

I am in favor of developing ultra-intelligent machines. One thought ... intelligent or not, machines of this nature require some sort of BIOS (basic input-output system, which interfaces between a computer's hardware and its operating system program). Is it possible to instill "respect for humanity" in the BIOS of early versions of the artilects? This programming would then replicate itself in future generations of these machines.

REPLY:

Asimov was writing his robot stories in the 1950s, so I doubt he had a good feel for what now passes as the field of "complex systems". His "laws of robotics" may be appropriate for fairly simple deterministic systems that human engineers can design, but seem naive when faced with the complexities of a human brain. I doubt very much that human engineers will ever "design" a human brain in the traditional top-down, blueprinted manner.

This is a very real issue for me, because I am a brain builder. I use "evolutionary engineering" techniques to build my artificial brains. The price one pays for using such techniques is that one loses any hope of having a full understanding of how the artificial brain functions. If one is using evolutionary techniques to combine the inputs and outputs of many neural circuit modules, then the behavior of the total system becomes quite unpredictable. One can only observe the outcome and build up an empirical experience of the artificial brain's behavior.

For Asimov's "laws of robotics" to work, the engineers, in Asimov's imagination, who designed the robots must have had abilities superior to those of real human engineers. The artificial "positronic" brains of their robots must have been of comparable complexity to human brains, otherwise they would not have been able to behave at human levels.

The artificial brains that real brain builders will build will not be controllable in an Asimovian way. There will be too many complexities, too many unknowns, too many surprises, too many unanticipated interactions between zillions of possible circuit combinations, to be able to predict ahead of time how a complex artificial-brained creature will behave.

The first time I read about Asimov's "laws of robotics" as a teenager, my immediate intuition was one of rejection. "This idea of his is naive", I thought. I still think that, and now I'm a brain builder in reality, not just the science fiction kind.

So, there's no quick fix a la Asimov to solve the artilect problem. There will always be a risk that the artilects will surprise human beings with their artilectual behavior. That is what this book is largely about. Can humanity run the risk that artilects might decide to eliminate the human species?

Human beings could not build in circuitry that prohibited this. If we tried, then random mutations of the circuit-growth instructions would lead to different circuits being grown, which would make the artilects behave differently and in unpredictable ways. If artilects are to improve, to reach ultra intelligence, they will need to evolve, but evolution is unpredictable. The unpredictability of mutated, evolving, artilect behavior makes the artilects potentially very dangerous to human beings.

Another simple counter argument to Asimov is that once the artilects become smart enough, they could simply undo the human programming if they choose to.
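For concreteness, the kind of "evolutionary engineering" loop he describes looks roughly like this (a toy sketch with an invented representation and fitness function, not his actual system; the point is that we only score observed behaviour and never design the circuit):

```python
import random

def random_module():
    # A "neural circuit module" reduced to a flat weight vector.
    return [random.uniform(-1, 1) for _ in range(16)]

def mutate(weights, rate=0.1):
    # Random mutation of the "circuit-growth instructions".
    return [w + random.gauss(0, rate) for w in weights]

def fitness(weights):
    # Purely empirical score: run the module and measure the outcome.
    # We learn nothing about *how* the evolved circuit works.
    target = 0.5
    return -abs(sum(weights) / len(weights) - target)

population = [random_module() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # keep the fittest
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]    # breed mutants

best = max(population, key=fitness)
print(f"best fitness: {fitness(best):.4f}")
```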

Quote:
Original post by Ronald Forbes
"Controlling them will be like playing a computer game."

Ever read Ender's Game?



I have read it (it's one of my favorites), and I thought the twist at the end was very thought-provoking.

Perhaps these Artilects will see us in a similar fashion: as pawns in their game, to be manipulated to meet their desires, without internalizing that they are dealing with real flesh-and-blood humans.

