Is Artificial Intelligence (AI) Really a Threat?

Started by
51 comments, last by Sik_the_hedgehog 9 years, 3 months ago
I don't think a super human intelligence is even needed for trouble to arise.

Have none of you seen the irrational tantrums of a toddler who can't have what he or she wants? Even a human-like AI with the reasoning capacity of a toddler could be a problem, especially if it was also unable to convey its thoughts fluently.

Imagine a two-year-old in a strop with direct control of military drones...

Daniel Wilson's Robopocalypse implied that the AI that was trying to destroy humanity was a child. I was just reminded of that novel by this.

Yes it is... claiming otherwise is simply closing your mind to the possibilities that might arise out of AI. Some of them can be very positive. A LOT are EXTREMELY negative for the WHOLE HUMAN RACE. Yes, caps are needed, because a lot of people do not seem to get it.

You build a machine with its own motives and an advanced enough intelligence, then you arm and armour it so it can be used in aggressive military acts... but also so it can prevent its own destruction (because, you know, this thing will cost billions initially).

Yes, there will be many intelligent people involved in its creation who might devise failsafes to make sure they cannot lose control... on the other hand, lots of people are involved whose job it is to prevent others from gaining control over it.

So many systems built in that could be extremely dangerous in the wrong hands... and then you turn over control to an alien intelligence that is most probably still only rudimentarily understood by its own creators. You see where I am going?

I say AI is much too dangerous to be overly optimistic about. You always have to apply Murphy's law. And when you apply that, you have to say: best NOT to arm your autonomous drones, no matter how convenient it might be from a military point of view; best NOT to build in failsafes to prevent the machine being controlled by a third party, but instead make the machine self-destruct in such an incident; and many other things that might lessen the value of the military drone, but on the other hand will make it much harder for an AI to do much harm if it goes out of control.

And the day the first AI goes out of control will come for sure...

This all comes from a person who is of the viewpoint that new technology usually brings at least as much good as it does harm. Yes, I am not that fond of atomic reactors and all, but I think they are a big improvement over coal energy... usually much less pollution, with a very minor risk of a BIG pollution incident, which can be minimized nicely if they are run by competent people.

I do see that AI IS the future of mankind. It is, to some extent, inevitable; it's the next evolution of computers and also kind of the next step in the evolution of mankind. Besides getting rid of sickness and age, heightening human intelligence at an increased rate is the next step, and as long as there is not some big breakthrough in neuroscience in the next few decades, AI with a well-designed human-machine interface is the only way to really achieve that.

BUT:

Just like atomic reactors, atomic bombs, military equipment, and other such things, AI is nothing that should be left to amateurs or, even worse, stock-market-driven private companies. If there is one group that tends to fuck up even more than the military, it's private companies. AI is nothing where you should cut corners, or try to circumvent laws and, even worse, common sense. Private companies do that all the time.

There needs to be a vocal group of pessimists now who make the general masses aware of the dangers of this new wonder weapon the private companies want to build and release upon the world. There need to be laws, failsafes and regulations in place BEFORE the first sentient machine walks the earth. A screw-up like what happened with the internet is NOT a good idea when it comes to AI. It worked out fine for the internet, and it wouldn't have flourished as much within tighter rules. But just think how much more harm a script kiddie could do if they hacked a whole army of military drones and lost control of it.

So yes, I totally support Bill Gates and Elon Musk in their stance. Even though I think they should go into more detail about how to deal with AI, their position is a very valid one from my point of view.

If I were to deviate even farther into the future and into speculation land, I'd say that even if AI does not go totally out of control and humanity survives the first wave of sentient machines, humanity could face ethical problems that are even harder to solve.

If machines are sentient and at least as intelligent as humans, even if you can control them or at least make sure they are friendly... can you still treat them as we do today? Will there be laws defending a machine's right to live? To learn? To be paid a wage, to open its own shop, or to *gasp* reproduce?

How will Earth react if, besides the 10+ billion humans living on it, there are at least as many sentient machines that are now also consuming resources (let's hope they all just run on solar energy)... and that need their own space to live?

How will humanity react if they are not only made redundant in their jobs by machines that are better optimized for them... but cannot even protest against it without being "robophobic" and breaking laws (think of the way black people and women fought to get the same rights)?

We might have at least a juridical and political fight between classes (machines and humans in this case) on our hands in the not-so-distant future, one that will dwarf the gender and ethnic conflicts of the past... we can only hope it does not lead to uprisings like those in tsarist Russia during the First World War, or in monarchic Europe from the 15th to the 19th century.

In the end it might not be the AI going out of control, or the AI being too alien to humans, that leads to a conflict with humanity. It might be that the conflict arises because machines become too similar to humans on a psychological and emotional level, and because humans are traditionally EXTREMELY SLOW to react to social changes and to the need to break the common state of things to keep peace between different groups.

The problem is that we assume that any intelligence that arises will be like humans. I think this is somewhat unlikely. If we assume that a future intelligence is built on the usual silicon transistors, this system will think in base 2, not in base 10. That in itself would be a huge difference between humans and the AI. Regardless of the numerical base it thinks in, an AI of this kind will be very different from humans. A biological AI is a different case, and it might be similar, but we simply don't know, as we haven't experimented much with this technology. In my opinion, any future intelligence that arises will be practically alien. That could cause problems. It also might not. Such an AI might have zero interest in humans and simply choose to leave Earth altogether, or it could be the exact opposite. As others have mentioned, I think that much of this AI's motivation would depend on humanity's response as well as its own inherent nature.

No one expects the Spanish Inquisition!


"Thinking in base 10" is not something seen universally in our own history either. There are plenty of examples of other historical number systems, 12 and 60 being particularly common. Those people certainly didn't "think weird".

I'm not entirely convinced a truly "intelligent" AI is even possible, given the rather pathetic strides we've made towards it. If we accept for the sake of argument that it is, there's no reason to think it won't be like humans either. For much the same reason that we haven't come even remotely close to inventing such an AI, we won't be able to predict what that AI would be like.

Intelligence is a rather poorly-understood concept, but we do have exactly one data point. The only species that we pretty much universally agree is "intelligent" is humans. Certainly that's kind of circular logic, but c'est la vie.
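
A small aside on the number-base point above: a base is just a notation for the same underlying quantity, so "thinking in base 2" would not by itself make a mind alien. Here is a minimal sketch in Python (the helper `to_base` is invented purely for illustration, not something from this thread):

```python
def to_base(n, base):
    """Render a non-negative integer as its digit sequence in the given base,
    most significant digit first. Purely a change of notation."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits[::-1]

value = 1971  # one fixed quantity
for base in (2, 10, 12, 60):
    # The digit sequences differ wildly, yet they all denote the same number.
    print(f"base {base:2}: {to_base(value, base)}")
```

The representation changes, the quantity does not, which is the same reason the base-12 and base-60 cultures didn't "think weird".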

I am not implying that a future AI has to be human-like to lead to the problems I described.

In the end, human-like is a very wide definition, and it has to be. What makes us human? Is somebody like Stephen Hawking less or more human than Average Joe? He is incapable of normal physical movement and speech (and it could be argued he is closer to a cyborg than most humans today), while he seems to possess kind of "superhuman" mental abilities...

We have humans who seem to be unable to read, understand or exhibit what we call "human emotions"... are they not human?

So in the end, humanity defines itself on broad concepts. If somebody has a human genome, he is human. And actually the mental and physical state he is in, temporarily or permanently, doesn't matter much (somebody who is brain dead is actually still treated as a human being until he is completely dead).

But the reason why things were quite easy for us until now is this simple fact: we are the only self-aware beings in the known galaxy. At least the only ones for which it can be proven without doubt.

Now, this is already changing. The mental differences between animals and humans are blurring the better we understand the mental capabilities of animals... it seems some animals are almost as intelligent, they just lack a language we understand today (though some of them seem to be capable of learning a language both species understand, with some help).

Will a monkey and a human ever think alike? No, not really... still, the mere fact that we can now communicate with them, and that they can convey emotions to us that we can understand, makes it much harder to just treat them as things instead of living individuals. Of course, being very close relatives genome-wise and also sharing basic needs and motives does help here too.

The chances that even self-aware machines with feelings will be close enough to humans in look and feel to provoke the same reaction a chimpanzee or a gorilla does are very slim, granted. Still, if the machine is at least as intelligent as us, socially motivated to some degree, and just as curious as us humans, the machine will not only try to understand us... it will try to be able to communicate with us, and part of that means translating concepts alien to humans into a common language. A machine might translate its immediate need for electricity because the battery is low as hunger... its negative feedback because it couldn't fulfill its goals as frustration... its (self-)programmed drive for self-preservation as fear of death.

It might be that the being we see is eerily similar to a human, because the interface the machine develops to be able to communicate with humans emulates a human being almost perfectly... all the while the processes behind it are quite alien to what goes on inside us humans. But the machine is communicating with us in a way that we understand... and is not really lying or somehow tricking us... it is just trying its best to explain to us what we cannot understand.

Chances are very good that, given time, machines will become more intelligent than humans, at least when it comes to mental flexibility. Our brain, as clever as it is, still lags millennia and aeons behind... it's still the caveman brain. We cannot completely rewire our brains as a machine might be able to, even if our brain actually does adapt quite substantially to our surroundings. But while the neuronal connections might adapt, you cannot plonk a completely new sense into our brain just like that, or increase the speed at which it works, and so on. Point is: if machines get that clever, chances are they will be the ones reaching out a hand to humanity to try and communicate on even ground.

TL;DR:

I don't think machines being alien is something that will prevent the inevitable conflict between humans and machines.

Actually, we have to HOPE the conflict stays on a merely social level: caveman humanity gets another of its "religious beliefs" destroyed (that we are the only self-aware beings in the known galaxy and the pinnacle of evolution), the fight for resources and influence intensifies as the population of "human beings" or "beings with human-like rights" doubles (or triples), and lots of people will not be able to adapt to yet another "subhuman species" getting the same rights as them and now being protected by law and ethics from being treated like they were before.

This of course is going by the assumption that by the time machines get so powerful and independent that they could be a threat to humanity, they are already intelligent enough to be at least as curious as us humans. If they are, chances are good that humans are so interesting as social partners and objects of scientific research (hopefully only in a non-intrusive way) that machines will not want to harm us initially... they might want to be seen as equal beings (and if we are unlucky, they are already so snobbish that they see us as lesser beings... would serve us right), they might not want to be treated as things anymore (yes, that means your household robot might expect a wage to be paid), but they will not want to replace us or harm us without reason.

If we are less lucky, we might survive as a museum piece, as machines might just be as interested in history as we humans are... maybe they even leave us Earth as a kind of "gigantic museum" for machines, living on planets and moons not habitable by humans, to visit and gaze at how their own "species" came to be...

Of course, we need to prepare for all the darker visions of the future...

As has been mentioned before, the most tangible threat right now comes from the middle point, where the AI is smart enough to decide what it has to do, but not smart enough to decide what it wants to do. This is pretty much where automated drones are right now (if I recall correctly, the biggest thing thwarting them at the moment is not the AI, but the cameras' resolution not being good enough, so their targets only end up appearing as a handful of pixels). At least a full-blown AI would stand a chance of not being preprogrammed to kill people.

Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.

This topic is closed to new replies.
