Is Artificial Intelligence (AI) Really a Threat?

Started by
51 comments, last by Sik_the_hedgehog 9 years, 2 months ago

Also, another point to consider is that perhaps the AIs aren't running off of silicon transistor computers but rather off of bio computers. Such computers might not necessarily need a power source. Just an idea.

All you need are 3 logic gates and you can build a binary computer. It can be made out of anything... (see the sketch after the list of gates below)

The gates are:

And

Or

Nor
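
To make that concrete, here is a minimal Python sketch (the function names are just for this post) showing how NOT, XOR and a one-bit adder fall out of nothing but AND, OR and NOR; in fact NOR alone is already functionally complete, so the substrate really doesn't matter:

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOR(a, b): return 1 - (a | b)

def NOT(a):    return NOR(a, a)                      # NOT built from NOR
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))  # XOR built from the three gates

def full_adder(a, b, carry_in):
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

print(full_adder(1, 1, 0))   # 1 + 1 -> sum 0, carry 1

Chain enough of those adders together and you have arithmetic, whatever the gates happen to be made of.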

My question is: would a pure 'organic computer' be compact, or would it be spread out? I'm trying to imagine how it would operate without electricity...

I cannot remember the books I've read any more than the meals I have eaten; even so, they have made me.

~ Ralph Waldo Emerson


Currently the smartest AI in the world has the reasoning capability of a human 3-year-old.

We do not have to worry about HAL any time soon.

[Image: HAL 9000]

What AI would that be? I really, really doubt anything comes close to the reasoning of a 3-year-old (and spouting random sentences that seem to match what you're asked, as a 3-year-old would, isn't "reasoning as" a 3-year-old). If anything like that existed it would actually be pretty damn close to reasoning as an adult.

[LINK] [LINK]

What surprises me is the simplicity of the idea: it's basically using associative memory to reason like a 4-year-old.
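
If I understand the articles right, the idea really is that simple. Here's a toy sketch of what "answering by association" looks like (the association table and the question below are completely made up, just to illustrate the mechanism):

# Toy associative memory: answer a verbal question by picking whichever
# choice has the strongest stored association with the cue word.
associations = {
    ("bird", "fly"): 0.9, ("bird", "nest"): 0.7,
    ("fish", "swim"): 0.9, ("fish", "fly"): 0.1,
    ("dog", "bark"): 0.8,
}

def best_association(cue, choices):
    return max(choices, key=lambda c: associations.get((cue, c), 0.0))

print(best_association("fish", ["fly", "bark", "swim"]))   # -> swim

No reasoning steps at all, just lookup -- which is arguably why it tops out around the verbal skills of a small child.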

You may be right with today's technology.

However, forty or so years makes a massive difference, and not too long ago the things a smartphone can do required enough computers to fill a server room, too. :)

I don't see AIs launching an attack without some means of powering themselves. That being said, I still think that by the time an AI does have the ability to attack humans, humans will probably be part machine as well, or at least genetically modified. It wouldn't be a one-sided battle, as humans would probably not be inferior to AIs.

Also, another point to consider is that perhaps the AIs aren't running off of silicon transistor computers but rather off of bio computers. Such computers might not necessarily need a power source. Just an idea.

The most important factor needed to produce human-like intelligence is a faulty pattern matcher. A computer that didn't make mistakes would be very inhuman because it would have trouble dealing with certain systematic faults in human reasoning that produce our unique thought patterns. A computer would never develop religion unless we purposefully gave it a bad pattern matcher. It probably wouldn't understand poetry and other literary devices either. Since humans suck at Bayesian reasoning, a computer that applied it flawlessly would not think like us. The same goes for any other logical theory. Stuff like sunk cost fallacies and so forth. Similarly, you'd have to force it to feel social forces. Politics would make no sense to a properly reasoning computer. Giving the wrong answer in a test because everyone else gave the wrong answer? Wouldn't happen.
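
To make the Bayesian point concrete, here's the classic textbook example (the numbers are the standard illustration, not anything from this thread) where most humans badly misjudge the answer while a computer applying Bayes' rule gets it trivially right:

# A disease affects 1 in 1000 people; the test has a 1% false-positive rate
# and a 99% detection rate. Most people guess a positive result means ~99%
# chance of disease. Bayes' rule says otherwise.
prior       = 0.001   # P(disease)
sensitivity = 0.99    # P(positive | disease)
false_pos   = 0.01    # P(positive | healthy)

p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior  = sensitivity * prior / p_positive
print(round(posterior, 2))   # ~0.09 -- only about a 9% chance of disease

A machine that always reasons like that would feel very alien to people who reliably get this wrong.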

The most important factor needed to produce human-like intelligence is a faulty pattern matcher. A computer that didn't make mistakes would be very inhuman because it would have trouble dealing with certain systematic faults in human reasoning that produce our unique thought patterns
This is contradicted by well-documented facts: computers do make mistakes, regularly and with serious consequences. They make the most catastrophic mistakes because they blindly follow an algorithm and, despite appearances, totally lack any form of intelligence.

AI is not dangerous because it is superior (or because it might be in the future). It is dangerous because it is inferior yet it is in control of human life.

Real computers do not have any intelligence or even common sense like the computers in the movies. A real computer, such as the one involved in the Air Asia incident a month ago, will reason approximately like this:

OK, the sensors tell me that I need to climb real fast, so I'll do 6,000 ft/min. That's way over the airplane's specs, but hey I think we really need to be a lot higher, so I'll just do that anyway. Yeah, I gotta push harder, although that fucker of a pilot is pulling like crazy, what does he know. Why is that guy shouting anyway?

Huh, stall, who saw that coming? Right, I'll still ignore that pilot because I still want to climb while we're dropping like a rock. Woah, ground is coming closer real fast, this isn't working... I should pull up yet a bit more...

Air Asia isn't alone; Air France had the exact same thing happen, and similar non-lethal incidents have occurred half a dozen times during the last year (such as dropping like a rock for 3,000 ft -- which admittedly is no problem if you are at 3,001 ft, but woe if you aren't).


Computers make mistakes because they don't have intelligence. If you give the computer bad data it will fuck up. I'm talking about making an intelligent computer behave in a more human way. Humans make bad decisions even when they know what they should be doing. Computers don't. Humans make weird associations, because their pattern matcher is not designed but evolved. So when I say human-like intelligence, that's what I mean. Not being as smart as humans, but having intelligence in a human way.

The problem with AI is not what it can or can't do at the moment, or even whether it decides to become Skynet.

The problem with AI is that once it reaches a certain threshold where it can create a better version of itself (via genetic algorithms, etc), it will iterate very, very quickly and we really have no way of knowing what comes out the other end. This is known as the AI singularity, and there's a reasonable chance it will happen in our lifetime.
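
For anyone who hasn't seen one, the loop behind a genetic algorithm is almost embarrassingly simple. Here's a toy Python sketch (the bit-string target and all parameters are made up purely for illustration, nothing to do with a real self-improving AI):

import random

TARGET = [1] * 20                     # arbitrary goal for the toy example

def fitness(candidate):
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

def mutate(candidate, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]       # keep the fittest third
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

print(generation, fitness(population[0]))

The worrying part of the singularity argument is what happens when "fitness" is the machine's own ability to design its successor: each generation gets better at getting better.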



If you have any doubt about the consequences of using the great new thing to the extreme, ask the Irish about potatoes.

Please read a history book before spouting such nonsense again. There were multiple causes of the famine in Ireland, and almost all of them stemmed from oppressive British rule.

if you think programming is like sex, you probably haven't done much of either. -- capn_midnight


My question is: would a pure 'organic computer' be compact, or would it be spread out? I'm trying to imagine how it would operate without electricity...

Well, think about the size of the human brain: it's pretty compact considering its raw processing power. Nor does it use much electricity. It's entirely conceivable that an organic computer could reside in a small area and take very little electricity.

No one expects the Spanish Inquisition!

Well, technically the most dangerous AI is one that doesn't care about us at all and has some goal orthogonal to ours. It would kill us not from hate but simply because its goal doesn't require us to exist, while requiring resources we have or use.


Exactly. Emotions and feelings are a human construct and most likely wouldn't be part of the programming of an AI.

For example, if we struggle to define love, as philosophers have for millennia, how could we program it into a computer?

If we're going to achieve actual artificial intelligence, it probably won't be through traditionally programming all of its behaviors like we do for game AIs...

Instead, we would program an environment in which intelligence can emerge -- e.g. a big structure of neurons.

Most of the advanced AI stuff that exists today is going down this path already. Google's AI learns to understand the rules of arcade games, and how to categorize things, without its programmers even necessarily understanding the rules that it has invented for itself to do this. Sometimes it comes up with better methods for categorization than what its programmers could've come up with themselves.
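
A tiny illustration of that "learn the rule instead of being given it" idea -- obviously nothing like the scale of Google's systems, and the network below is the smallest textbook example I could write, but the principle is the same: nowhere in the code is XOR defined, yet the network ends up implementing it:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR, given only as examples

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                # hidden layer activations
    out = sigmoid(h @ W2 + b2)              # network's current guesses
    d_out = (out - y) * out * (1 - out)     # push the error backwards...
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # heads towards [0, 1, 1, 0], depending on the random start

The programmer only supplies the structure and the examples; whatever internal "rules" the weights end up encoding are the network's own invention.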

It's possible for a future artificial construct to have all sorts of useless/illogical internal constructs, such as emotions. It may even have emotions which are beyond human feelings, which it will be unable to explain to us! It might have to invent its own styles of poetry to try and express its thoughts to us. It could have such a huge and imaginative consciousness that it produces art with deep meaning beyond any human construction, relegating all our romantic beauty to a footnote in its own history.

Well, think about the size of the human brain: it's pretty compact considering its raw processing power. Nor does it use much electricity. It's entirely conceivable that an organic computer could reside in a small area and take very little electricity.

Yep. Existing biology puts all of our supposedly advanced technology absolutely to shame... makes us look like primitive apes looking up to space-faring aliens.
Compare a biological brain to a silicon-based neural network and the energy usage is off the charts for our version: the brain manages trillions of 'FLOPS' using only a handful of watts -- thousands of times more efficient than our computers.
Even in manufacturing -- abalone create shells that are far superior to Kevlar(tm) in every way: basically nano-grids of ceramic-like hexagons joined with gummy mortar, layered continually with a 50% offset, which protect against slicing, stabbing, cracking and snapping while being both hard and elastic as required... and they make them out of damn sunlight and seawater without any pollution. For our version, we draw dangerous carbon nanotubes through highly pressurized 400ºF acid, at great financial/environmental cost and terrible energy efficiency. Again, it's as if we're bashing sticks with rocks while aliens are using anti-gravity beams...
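
Rough numbers behind that efficiency claim (these are commonly quoted ballpark figures, easily off by an order of magnitude either way, so treat the sketch as a sanity check rather than data):

brain_ops_per_sec = 1e16    # rough estimate of the brain's equivalent 'operations' per second
brain_watts       = 20      # typical power draw of a human brain

gpu_flops         = 5e12    # ballpark for a high-end GPU of this era
gpu_watts         = 250

brain_per_joule = brain_ops_per_sec / brain_watts
gpu_per_joule   = gpu_flops / gpu_watts
print(brain_per_joule / gpu_per_joule)   # on the order of tens of thousands

So "thousands of times more efficient" is, if anything, on the conservative side under those assumptions.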

Even if we did make a silicon-based AI, it would probably pretty soon start heavy research into bio-mimicry to improve its own efficiency.

AI is not dangerous because it is superior (or because it might be in the future). It is dangerous because it is inferior yet it is in control of human life.

Real computers do not have any intelligence or even common sense like the computers in the movies. A real computer, such as the one involved in the Air Asia incident a month ago, will reason approximately like this...

That's not "AI", that's just a regular computer program. In games we've gotten used to using the term "AI" to refer to simple algorithms or "Computational intelligence", but AI in this context is something with comparable intelligence to a human... which is only hypothetical at the moment.

Personally, I don't see it as all that scary. I don't fear AI overtaking humanity, in the same sense that I wouldn't fear the success of my child. Perhaps I just have an unusually broad concept of progeny.

Realistically though, I'm a whole lot more afraid of human intelligence than artificial intelligence, especially in the immediate sense. There have been quite a few instances in the last 50 years where a simple misunderstanding was on the verge of ending all life on this little blue marble with a rain of nuclear hellfire. Maybe I'll be worried about AI later, but for now, HI seems more deserving of worry.

Yes it is... claiming otherwise is simply closing your mind to the possibilities that might arise out of AI... some of them can be very positive. A LOT are EXTREMELY negative for the WHOLE HUMAN RACE. Yes, caps are needed, because a lot of people do not seem to get it.

You build a machine with its own motives and an advanced enough intelligence; now you arm and armour it so it can be used in aggressive military acts... but also to prevent its own destruction (because, you know, this thing will cost billions initially).

Yes, there will be many intelligent people involved in its creation who might devise failsafes to make sure they cannot lose control... on the other hand, lots of people are involved whose job it is to prevent others from gaining control over it.

So many systems built in that could be extremely dangerous in the wrong hands... and then you turn over control to an alien intelligence that is most probably still only rudimentarily understood by its creators. You see where I am going?

I say AI is much too dangerous to be overly optimistic about. You always have to apply Murphy's law. And when you apply that, you have to say: best NOT to arm your autonomous drones no matter how convenient it might be from a military point of view; best NOT to build in failsafes to prevent the machine being controlled by a third party, but instead make the machine self-destruct in such an incident; and many other things that might lessen the value of the military drone, but on the other hand will make it much harder for an AI to do much harm if it goes out of control.

And the day the first AI goes out of control will come for sure...

This all comes from a person who is of the viewpoint that new technology usually brings at least as much good as it does harm. Yes, I am not that fond of atomic reactors and all, but I think it's a big improvement over coal energy... much less pollution usually, with a very minor risk of a BIG pollution incident, which can be minimized nicely if run by competent people.

I do see that AI IS the future of mankind. It is, to some extent, inevitable; it's the next evolution of computers and also kind of the next step in the evolution of mankind. Besides getting rid of sickness and age, increasing human intelligence is the next step, and as long as there is not some big breakthrough in neuroscience in the next few decades, AI with a well-designed human-machine interface is the only way to really achieve that.

BUT:

Just like atomic reactors, atomic bombs, military equipment, and other things, AI is nothing that should be left to amateurs or, even worse, stock-market-driven private companies. If there is one group that tends to fuck up even more than the military, it's private companies. AI is nothing where you should cut corners, or try to circumvent laws and, even worse, common sense. Private companies do that all the time.

There needs to be a vocal group of pessimists now who make the general masses aware of the dangers of this new wonder weapon the private companies want to build and release upon the world. There need to be laws, failsafes and regulations in place BEFORE the first sentient machine walks the earth. A screw-up like what happened with the internet is NOT a good idea when it comes to AI. It worked out fine with the internet, which wouldn't have flourished as much under tighter rules. But just think how much more harm a script kiddie will do when they hack a whole army of military drones and lose control of it.

So yes, I totally support Bill Gates and Elon Musk in their stance. Even though I think they should go into more detail about how to deal with AI, their stance is a very valid one from my point of view.

If I were to venture even farther into the future and into speculation land, I'd say humanity could face even harder-to-solve ethical problems, even if AI does not go totally out of control and humanity survives the first wave of sentient machines.

If machines are sentient and at least as intelligent as humans, even if you can control them or at least make sure they are friendly... can you still treat them as we do today? Will there be laws defending a machine's right to live? To learn? To be paid a wage, or to open its own shop, or to *gasp* reproduce?

How will Earth react if, besides the 10+ billion humans living on it, there are at least as many sentient machines that are also consuming resources (let's hope they all just run on solar energy)... that need their own space to live?

How will humanity react if people are not only made redundant in their jobs by machines that are better optimized for them... but cannot even protest against it without being "robophobic" and breaking laws (think of the way black people and women fought for equal rights)?

We might have at least a legal and political fight between classes (machines and humans in this case) on our hands in the not-so-distant future, one that will dwarf the gender and ethnic conflicts of the past... we can only hope it does not lead to uprisings like in tsarist Russia during the First World War, or in monarchic Europe from the 15th to the 19th century.

In the end it might not be the AI going out of control or the AI being too alien to humans that leads to a conflict with humanity. It might be that the conflict arises because machines become too similar to humans on a psychological and emotional level, and because humans are traditionally EXTREMELY SLOW to react to social changes and to the need to break the status quo to keep peace between different groups.

On the one hand, the common theme of AI-turning-evil in sci-fi suggests that plenty of people do think about the possibility. But I think most people don't worry about this because super-human-intelligent AI still seems either a remote possibility, or something that won't happen in our lifetimes.

Another point is that people don't consider super-human intelligence to be something that will necessarily massively change things (at least, on the level of radically changing human nature or existence) - consider how ideas such as the technological singularity are often written off as being fringe ideas (even though the huge consequences of a self-improving AI algorithm seem reasonable, as ChaosEngine points out).

rAm_y_:

But how can a machine do anything other than parse through a list of options? It's never going to have any human-like reasoning, a conscience, a soul; it will only ever process through a linear set of solutions from a given set of problems and try to match a best fit. It would be like a dating agency of sorts.
So what is so special about the matter that makes up a human brain, that means this can never be replicated in a machine? Are you appealing to human brains having a supernatural component to them (a "soul" or whatever)?

Note, it is an open question as to whether a computer - i.e., a Turing machine - can fully replicate a human, including aspects such as consciousness and qualia - but even if things like consciousness can't be replicated by software, I don't see why we can't replicate this in an artificial machine. Though even if a computer AI is non-sentient (a philosophical zombie), I don't see why it can't be intelligent in a human-like manner.

samoth:

prototypes of fully autonomous cars are on the road right now (that's not just a crackpot Google idea, but something Daimler-Benz is actually considering moving into mainstream production at this time).

Millions of people travel in airplanes every year which will crash any time the AI feels like it, such as when a sensor freezes -- without the pilot being able to do anything about it (talking of Airbus specifically, but other manufacturers probably are not far behind).
Is there evidence that computer-controlled cars or planes are less safe than ones controlled by humans? I'd much rather be driven/flown by something that can be tested and is far more predictable than a human driver.

The Google car has crashed once, and that was while it was under human control.

http://erebusrpg.sourceforge.net/ - Erebus, Open Source RPG for Windows/Linux/Android
http://conquests.sourceforge.net/ - Conquests, Open Source Civ-like Game for Windows/Linux

This topic is closed to new replies.
