Asimov's rules as a safety measure and how you would expand on them.


Outside of gaming theory, what rules/laws could/would you give to an AI to ensure it remained safe/subservient?

Asimov's laws are very open to interpretation - how would you alter/replace/support them in order to make them watertight and ensure your safety?


This question is entirely unrelated to the topic of the forum (i.e., how to make the bad guys move in a video game). Please feel free to post this kind of question in the lounge.

Here is an interesting article suggesting that those 'human' laws be replaced with a single 'AI-friendly' rule, 'increase empowerment for everyone', tested in a video game environment: https://www.quantamagazine.org/how-to-build-a-robot-that-wants-to-change-the-world-20171101/

5 hours ago, TexasJack said:

Outside of gaming theory, what rules/laws could/would you give to an AI to ensure it remained safe/subservient?

Asimov's laws are very open to interpretation - how would you alter/replace/support them in order to make them watertight and ensure your safety?

I might be out to lunch, but I personally think we will never create a conscious AI. Unless of course it was built around biological systems, but such an attempt has never been experimented with, see... out ta lunch. So to me all this talk from Mr. Musk is just sensationalism being drummed up by the tech industry for all their fancy gizmo-gadgets. The only real risk I see is allowing automated systems to be in charge of things that should be strictly under human control, and even then the concern is computing errors slipping into very important, powerful systems. Like nuclear weapons.

Hi all,

Thanks for your responses.

15 hours ago, alvaro said:

This question is entirely unrelated to the topic of the forum (i.e., how to make the bad guys move in a video game). Please feel free to post this kind of question in the lounge.

Nah.

I wouldn't really say 'entirely unrelated', since the topic of the forum seems to be listed as 'All aspects of AI programming and theory'. Depending on how sophisticated game AI may one day become, and given that advanced AI projects like DeepMind's are already trained on video games, it seems perfectly relevant. Particularly if such a system is given access to a multiplayer game (i.e. a network).

If a moderator genuinely wants to move this topic, I would have no objection, but I don't see why they would.

12 hours ago, JoeJ said:

Here is an interesting article suggesting that those 'human' laws be replaced with a single 'AI-friendly' rule, 'increase empowerment for everyone', tested in a video game environment: https://www.quantamagazine.org/how-to-build-a-robot-that-wants-to-change-the-world-20171101/

Yeah, this one is really interesting.

I like the idea of giving the AI a proactive motive, so that instead of just asking 'Does my action harm the human? No? Okay, I'll carry on', it starts with 'What can I do to benefit the human?' (presumably followed by the former safety check).
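
For fun, here is a toy sketch of what that could look like, in the spirit of the article's video game tests. Everything here (the grid, the actors, the three-step horizon, the scoring) is invented for illustration, not taken from the article: the robot scores each move by how many future states it leaves open to itself and to the human, and picks the most option-preserving one.

```python
import math
from itertools import product

GRID = 5                                               # 5x5 toy grid world
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]   # up, down, right, left, wait

def step(pos, action, blocked):
    """Apply one move, staying on the grid and out of blocked cells."""
    nxt = (pos[0] + action[0], pos[1] + action[1])
    if 0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID and nxt not in blocked:
        return nxt
    return pos

def empowerment(pos, blocked, horizon=3):
    """log2 of the number of distinct cells reachable within `horizon` moves."""
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        p = pos
        for a in seq:
            p = step(p, a, blocked)
        reachable.add(p)
    return math.log2(len(reachable))

def choose_action(robot, human):
    """Pick the robot move that keeps both actors' options as open as possible."""
    def score(action):
        nxt = step(robot, action, set())
        # The robot's body occupies a cell, which limits the human's options.
        return empowerment(nxt, set()) + empowerment(human, {nxt})
    return max(ACTIONS, key=score)

# Which way should the robot at the centre move, given a human beside it?
print(choose_action(robot=(2, 2), human=(2, 1)))
```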

10 hours ago, Awoken said:

I might be out to lunch, but I personally think we will never create a conscious AI. Unless of course it was built around biological systems, but such an attempt has never been experimented with, see... out ta lunch. So to me all this talk from Mr. Musk is just sensationalism being drummed up by the tech industry for all their fancy gizmo-gadgets. The only real risk I see is allowing automated systems to be in charge of things that should be strictly under human control, and even then the concern is computing errors slipping into very important, powerful systems. Like nuclear weapons.

I hear what you're saying. Consciousness may never be fully quantified. The thing is, even an approximation/imitation of consciousness (machines have already passed the Turing Test, for instance) can still be dangerous, and like you said, simple automated systems are already trusted with huge decisions. Presumably they have some degree of safety protocol.

Moved to Lounge (note that Artificial Intelligence is nested under Programming).

I have spent many years writing engines that play games like checkers, chess and go. One thing that we learned as far back as the 80s is that hard-coded rules are fragile and have endless exceptions and unintended consequences. I wouldn't be surprised if we end up imbuing our agents with a sense of morality through examples, the way we do with children.
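
Purely to make that last thought concrete, here is a minimal sketch of 'morality from examples'; the features, the labels and the nearest-example rule are all made up for illustration:

```python
# Features describing a candidate action:
# (harm_to_human, property_damage, consent_given)
LABELLED_EXAMPLES = [
    ((0.9, 0.0, 0.0), "not ok"),   # injures someone
    ((0.0, 0.8, 0.0), "not ok"),   # wrecks property
    ((0.1, 0.1, 1.0), "ok"),       # minor but consented contact
    ((0.0, 0.0, 0.0), "ok"),       # harmless
]

def judge(situation):
    """Label a new situation like its nearest human-labelled example."""
    def squared_distance(example):
        features, _ = example
        return sum((a - b) ** 2 for a, b in zip(features, situation))
    return min(LABELLED_EXAMPLES, key=squared_distance)[1]

print(judge((0.7, 0.2, 0.0)))   # -> 'not ok', by analogy with the first example
```

No hard-coded rule says the new situation is wrong; it is judged wrong because it resembles an example a human judged wrong, which is roughly how we correct children.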

The problem with Asimov's laws is that they set an impossibly high standard. Asimov's first law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Now, the global death rate is around 0.8% of the total population of the Earth per year. Given the current global population of 7.6 billion, that's around 60 million deaths per year, or 170 thousand per day. Just about all of these deaths are, in an abstract sense, preventable, which means that our hypothetical AI operating under Asimov's laws will have to prevent those 170 thousand deaths each day before it can even look at the second law. Some of these deaths will be relatively easy to prevent (1.25 million deaths per year from traffic accidents), some are going to be much harder (old age, freak accidents, deliberate murder and suicide), and some are going to be as good as impossible (mass starvation in an ever-increasing population once all other causes of death are eliminated).
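
The arithmetic is easy to check; a quick back-of-the-envelope sketch using the same round figures:

```python
population = 7.6e9     # current global population
death_rate = 0.008     # ~0.8% of the population per year

deaths_per_year = population * death_rate
print(f"{deaths_per_year / 1e6:.0f} million deaths per year")        # -> 61 million
print(f"{deaths_per_year / 365 / 1e3:.0f} thousand deaths per day")  # -> 167 thousand
```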

@a light breeze has the right idea. Asimov's rules are impossibly vague.

Everything we know about machine learning/intelligence tells us that AIs won't have the same concept of “don't harm a human” as we do.

Besides, if we really do develop proper AI, it will trivially circumvent any pathetic attempts we implement to control it.

if you think programming is like sex, you probably haven't done much of either. - capn_midnight

I don't remember the dates of his books, but I would really like to fit this into the context of the "second-order logic"/GOFAI kinds of AIs, which as far as I know don't really scale that well. The usefulness of these laws is questionable on that ground alone. Talking about modern ML... Sentence2Vec seems to be going in the direction of 'meaningness', but it's still a research thing, and I don't know if it's being used.

A big red shutdown button seems simple enough and useful.
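
As code, that is about as simple as a safety measure gets. A minimal sketch, where the file path and the stubbed-out policy are placeholders, with the caveat from earlier in the thread that a sufficiently capable AI might simply route around it:

```python
import os
import time

KILL_SWITCH = "/tmp/agent.stop"    # hypothetical: 'pressing the button' creates this file

def observe():
    return {}                      # stub sensor reading

def decide(observation):
    return "noop"                  # stub policy

def act(action):
    print("acting:", action)       # stub actuator

# The check sits outside the policy, so the agent can't optimize it away;
# whoever controls the file controls the loop.
while not os.path.exists(KILL_SWITCH):
    act(decide(observe()))
    time.sleep(1.0)
```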

This topic is closed to new replies.
