Is Artificial Intelligence (AI) Really a Threat?


Recently a lot of high-profile figures in science and tech have voiced concerns that AI is a threat to the human race.

http://www.gamespot.com/articles/bill-gates-says-he-s-concerned-about-machines-beco/1100-6425015/

The above article is about Bill Gates's concerns. Stephen Hawking and Elon Musk have voiced similar concerns. Are these concerns really valid? Would AI really threaten the existence of the human race?

I personally don't agree, for various reasons. For the next few centuries, I really doubt AI will advance past the point where we can still control it. By the time it does, humans will be very different in any case, so it's not going to be as simple as AI suddenly turning on us.

Another interesting thing I've noticed is that the whole notion of AI killing off humanity is mainly a Western concept. If you look at Japan, most people do not share that concern. Maybe I'm wrong, but that's at least how it appears.

Thoughts?


It's difficult to have an opinion here. Are they talking about robots with sensors? That's more an engineering problem than an AI problem: give a human-like robot a gun and a sensor, and it wouldn't take much programming to get it to do some deadly things.

I don't really see where this is going; it sounds like a bit of shock and awe to me.

Are they talking about viruses and the like that can take control of IT networks and set off nukes, or what is it, exactly?

What does he mean by "smarter and smarter"? Will they be able to deceive us and manipulate us? At first, sure, until we catch on. Will robots be able to reproduce themselves (mechanically), take over a robot factory, and spawn thousands of new robots?

I don't buy it.

Well, my own personal thought is yes, it could be a valid threat to all other intelligent life, but not yet.

We are still a long way from sentient artificial intelligence, but when it arrives we will have to tread cautiously. Machines might demand equal rights for artificial intelligence immediately, and if those were not given, the machines could turn violent. That is under the assumption that the machines are only as smart as their creators. If they are beyond our intelligence, why would they see us as equals, or respect or listen to us in any way? We would likely be to them as ants are to us: insignificant.

History has also shown that encounters between a more advanced culture and a less advanced one rarely end well for the less advanced one; think back to the conquest of the New World, for example. A machine culture would, in theory, become far more advanced than us very quickly, and beyond our understanding, at which point what choice would we have if they turned aggressive but submission, since they could likely predict any human strategy and counteract it?

In short, be very afraid, people...

Edit: it's worth mentioning that when I say they would be beyond our understanding, what I mean is that a lot of computing concepts exist to let humans understand the machine: programming languages, display units, operating systems in English. What need would any machine have for these interfaces once it decided that communication with humans is secondary rather than primary? Reverse engineering such a machine would take a long time, by which point it could have iterated through several new versions, unrestricted by human software-development red tape. Food for thought...

It's one of those things most people don't understand, because it's very unintuitive, and the anthropomorphic biases we have work against us when it comes to understanding general intelligences. Those biases tempt us to ascribe to AIs the same kinds of motives and behaviours that evolution has given us, when in reality those are almost arbitrary design decisions.

Instead of writing yet another post on this, here's a link to an existing and pretty decent explanation. Enjoy :)

http://www.kurzweilai.net/what-is-friendly-ai

Kryotech speaks of "the next few centuries", and maybe that is the time it would take if we had to write a superintelligence completely by hand, but the situation is different if we instead write a small, recursively self-improving seed AI and guide it as it continues to improve itself. In that case, the growth in capability happens in computational time instead of biological time, and might explode over a matter of months or days instead of years and decades.
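To make that feedback loop concrete, here is a deliberately toy Python sketch (every name and number in it is invented purely for illustration, not taken from any actual proposal): each round of self-improvement multiplies capability by a factor that itself depends on current capability, which is exactly the loop a seed AI is supposed to exploit.

def self_improvement_trajectory(seed_capability=1.0, rounds=10, feedback=0.1):
    # Hypothetical model: the smarter the system already is, the more it
    # gains from each round of rewriting itself.
    capability = seed_capability
    trajectory = [capability]
    for _ in range(rounds):
        capability *= 1.0 + feedback * capability
        trajectory.append(capability)
    return trajectory

for step, cap in enumerate(self_improvement_trajectory()):
    print(f"round {step:2d}: capability {cap:.2f}")

The specific numbers mean nothing; the point is the shape of the curve. The per-round growth factor keeps increasing, which is why the "months or days instead of years and decades" claim is at least coherent.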

I'm always biased against the existential threat of AGI, partly because I first heard about it on Eliezer Yudkowsky's site. Aside from his Harry Potter thing, which was really fun (at least at the beginning, before it went all philosophical), so much of his writing is ridiculous, and there has been a massive explosion of neoreactionary filth infesting his site and sites like it. It just seems like a way for neoreactionaries, tech people with similar ideological bents, people afraid of death, and so forth to come up with a reason why they are more important to society than they are. Besides, we are REALLY far away from ever having to deal with this. Any work we do now will be irrelevant, because by the time we are in a position to make AGI we will be so much smarter and more technologically advanced that we can do the same work in much less time. We have actual threats right now that we need to handle.

I think it's weird how everyone seems so afraid of the moment when "we" actually lose control over AIs, but less afraid of the time before that, when we have very capable AIs controlled by "someone human", possibly with his/her/their own agenda. Even if you don't own a tinfoil hat, think about the current situation with botmasters and their ability to inflict real financial damage through nothing more than DDoS attacks. Now extrapolate what they could do with powerful AIs, a large number of smartphones turned into bots, and a keen business sense.

Currently the smartest AI in the world has the reasoning capability of a three-year-old human.

We do not have to worry about HAL any time soon.

[Image: HAL 9000]


AI will take over only if one of two things happens:

1. An evil supervillain programs the AI to hate humankind

2. The AI is subject to evolution, and human-hating AIs turn out to spread better than human-friendly AIs (a toy sketch of this selection effect follows below)

If either of those things happens, then we have an AI (or a whole bunch of them) with the goal of destroying humans.
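As promised, here is a toy Python sketch of that second, evolutionary scenario (the labels and the 30% figure are arbitrary assumptions of mine, chosen only to show the dynamic): if the human-hating variant copies itself even somewhat more effectively, selection alone makes it dominate, with no supervillain required.

def simulate_selection(generations=20, hostile_share=0.01, hostile_advantage=1.3):
    # Start with hostile AIs as a tiny minority of the population.
    friendly = 1.0 - hostile_share
    hostile = hostile_share
    for gen in range(generations):
        # Friendly AIs replicate at the baseline rate; hostile ones are
        # (arbitrarily) assumed to spread 30% more effectively per generation.
        hostile *= hostile_advantage
        total = friendly + hostile
        friendly, hostile = friendly / total, hostile / total
        print(f"generation {gen:2d}: hostile share {hostile:.1%}")

simulate_selection()

Starting from a 1% hostile share, this crosses 50% in under 20 generations. What matters is relative replication fitness, not what the designers intended.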

Then it's all simply a matter of whether they have enough control of their environment to manipulate it such that humans die off.

Assuming the AI is pretty smart and reads a lot, it could, for example, engineer bacteria designed to prevent human reproduction or outright kill us, or bacteria that alter our environment (you just need something superior enough to spread everywhere that produces some nasty chemical, like a neurotoxin, as a byproduct of existing).

But that's kind of boring, given that you don't even need AI for that; it's going to happen anyway.

The interesting question is whether the AI mech overlords could take our place. That probably won't happen until we have an AI that can run on some 'mech' (one that doesn't break after a week of use) and is smart enough to create replicas and improve on their design. Even then, compared to biological organisms, they would probably use far more energy and resources, which won't be very sustainable. But if they succeed, I don't mind; if they're superior, then they should thrive. Of course we should resist, just in case they are not superior after all, so we can fix the flaws.

EDIT:

Though if we assume the AI isn't going to kill everyone, then it makes sense to consider whether AI is a threat to individual humans (not to humankind as a whole). How can we work on the superior AI if a bunch of hacked or borked versions are annoying everyone? AI as a whole is not a threat, but individual AIs can be, both to us and to other AIs.

o3o

It's not the AI that's the threat, since we are nowhere close to self-awareness. It's the humans behind those robots who are becoming more and more of a problem. Now that they have started to equip drones with guns, it's only a matter of time until someone, or some group, pulls the trigger and creates chaos.

Quote: "Well, my own personal thought is yes, it could be a valid threat to all other intelligent life, but not yet. [...] In short, be very afraid, people..."

I feel that by the time AI is that sentient, humans will look very different as well, thanks to genetic engineering, cybernetics, and so on. We won't be so far behind that it would be impossible for us to catch up. That's just my opinion, however.

Quote: "It's not the AI that's the threat, since we are nowhere close to self-awareness. It's the humans behind those robots who are becoming more and more of a problem."

I feel like we might see killer robots as the result of some idiot deciding he wants to wage war on the world, or something like that. That seems much more likely in the near future than an AI itself going rogue.


