Are you a Cosmist or a Terran?

21 comments, last by flashinpan 19 years, 3 months ago
"Controlling them will be like playing a computer game."

Ever read Ender's Game?
C:\DOS C:\DOS\RUN RUN\DOS\RUN
I would like to see someone's rebuttal to this:



QUESTION 5. "Could we apply 'Asimov's 3 laws of robotics' to artilects?"

Asimov was one of the most famous science fiction writers who ever lived. His word "robotics" is known across most of the planet. Asimov wrote about many scientific and science fiction topics, including how human-level intelligent robots might interact with human beings. He gave the "positronic" brains of his robots programming that forced them to behave well towards their human masters. The robots were not allowed to harm human beings. Several people have suggested to me that artilects be designed in a similar way, so that it would be impossible for them to harm human beings. The following critic sent me a very brief but to-the-point recommendation on this topic.

COMMENT:

Dear Professor de Garis,

I am in favor of developing ultra-intelligent machines. One thought ... intelligent or not, machines of this nature require some sort of BIOS (basic input-output system, which interfaces between a computer's hardware and its operating system). Is it possible to instill "respect for humanity" in the BIOS of early versions of the artilects? This programming would then replicate itself in future generations of these machines.

REPLY:

Asimov was writing his robot stories in the 1950s, so I doubt he had a good feel for what now passes as the field of "complex systems". His "laws of robotics" may be appropriate for fairly simple deterministic systems that human engineers can design, but they seem naive when faced with the complexities of a human brain. I doubt very much that human engineers will ever "design" a human brain in the traditional top-down, blueprinted manner.

This is a very real issue for me, because I am a brain builder. I use "evolutionary engineering" techniques to build my artificial brains. The price one pays for using such techniques is that one loses any hope of having a full understanding of how the artificial brain functions. If one is using evolutionary techniques to combine the inputs and outputs of many neural circuit modules, then the behavior of the total system becomes quite unpredictable. One can only observe the outcome and build up an empirical experience of the artificial brain's behavior.
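As a rough illustration of what evolutionary engineering means in practice (this is not de Garis's actual system; the module size, fitness test, and mutation rate below are invented for the example), evolving a single neural module might look like the following sketch. The engineer only scores observed behavior and never designs or inspects the wiring by hand.

# A minimal, hypothetical sketch of "evolutionary engineering": a population of
# small neural modules is evolved against a fitness score computed purely from
# observed behavior. All sizes and parameters are illustrative assumptions.
import random
import math

MODULE_INPUTS = 3      # assumed size of each neural circuit module
MODULE_OUTPUTS = 2
POPULATION = 30
GENERATIONS = 50
MUTATION_RATE = 0.1

def random_module():
    """A module is just a weight matrix; its 'design' is never hand-written."""
    return [[random.uniform(-1, 1) for _ in range(MODULE_INPUTS)]
            for _ in range(MODULE_OUTPUTS)]

def run_module(module, inputs):
    """Observe the module's behavior: a weighted-sum-and-squash response."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in module]

def fitness(module):
    """Score only the behavior (closeness of outputs to a target pattern for a
    few test inputs); the internal wiring is never examined."""
    tests = [([1, 0, 0], [1, -1]), ([0, 1, 0], [-1, 1]), ([0, 0, 1], [1, 1])]
    error = 0.0
    for inputs, target in tests:
        outputs = run_module(module, inputs)
        error += sum((o - t) ** 2 for o, t in zip(outputs, target))
    return -error  # higher is better

def mutate(module):
    """Randomly perturb weights; the resulting change in behavior can only be
    discovered by running the module again."""
    return [[w + random.gauss(0, 0.3) if random.random() < MUTATION_RATE else w
             for w in row] for row in module]

population = [random_module() for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION - len(survivors))]

best = max(population, key=fitness)
print("best observed fitness:", fitness(best))

Even in this toy case, the only way to know what the evolved module does is to run it and watch; scale the idea up to many interconnected modules and the empirical, observe-only character of the approach becomes unavoidable.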

For Asimov's "laws of robotics" to work, the engineers who, in Asimov's imagination, designed the robots must have had abilities superior to those of real human engineers. The artificial "positronic" brains of their robots must have been of comparable complexity to human brains; otherwise they would not have been able to behave at human levels.

The artificial brains that real brain builders will build will not be controllable in an Asimovian way. There will be too many complexities, too many unknowns, too many surprises, too many unanticipated interactions between zillions of possible circuit combinations, to be able to predict ahead of time how a complex artificial-brained creature will behave.

The first time I read about Asimov's "laws of robotics" as a teenager, my immediate intuition was one of rejection. "This idea of his is naive", I thought. I still think that, and now I'm a brain builder in reality, not just the science fiction kind.

So, there's no quick fix a la Asimov to solve the artilect problem. There will always be a risk that the artilects will surprise human beings with their artilectual behavior. That is what this book is largely about. Can humanity run the risk that artilects might decide to eliminate the human species?

Human beings could not build in circuitry that prohibited this. If we tried, then random mutations of the circuit-growth instructions would lead to different circuits being grown, which would make the artilects behave differently and in unpredictable ways. If artilects are to improve, to reach ultra intelligence, they will need to evolve, but evolution is unpredictable. The unpredictability of mutated, evolving, artilect behavior makes the artilects potentially very dangerous to human beings.
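To illustrate why mutated growth instructions are so hard to constrain, here is a toy construction of my own (not any real brain-building encoding): a "genome" of bits is expanded step by step into a wiring diagram, so flipping a single early instruction cascades through everything grown afterwards.

# A minimal sketch of why one mutation in growth *instructions* can yield a very
# different grown circuit. The encoding below is invented for illustration only.
import random

NEURONS = 16

def grow_circuit(genome):
    """Deterministically 'grow' connections from a genome of bits. The running
    state depends on every bit seen so far, so early changes compound."""
    connections = set()
    state = 1
    for bit in genome:
        state = (state * 31 + bit * 17 + 7) % (NEURONS * NEURONS)
        src, dst = divmod(state, NEURONS)
        if bit:
            connections.add((src, dst))
        else:
            connections.discard((src, dst))
    return connections

random.seed(0)
genome = [random.randint(0, 1) for _ in range(64)]

mutant = list(genome)
mutant[5] ^= 1  # flip a single early "growth instruction"

original_circuit = grow_circuit(genome)
mutant_circuit = grow_circuit(mutant)

print("connections in original:", len(original_circuit))
print("connections in mutant:  ", len(mutant_circuit))
print("connections shared:     ", len(original_circuit & mutant_circuit))

One flipped bit leaves the two grown circuits sharing only a fraction of their connections, which is the kind of cascading unpredictability the passage above is describing.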

Another simple counterargument to Asimov is that once the artilects become smart enough, they could simply undo the human programming if they chose to.
Quote:Original post by Ronald Forbes
"Controlling them will be like playing a computer game."

Ever read Ender's Game?



I have read it (it's one of my favorites), and I thought the twist at the end was very thought-provoking.

Perhaps these artilects will see us in a similar fashion. They will see us as pawns in their game, to be manipulated to meet their desires...not internalizing that they are dealing with real flesh-and-blood humans.

