Why A.I is impossible

Started by
116 comments, last by Alexandra Grayson 6 years, 1 month ago

@SillyCow the thing is that those goals must be quantifiable. Not sure how you would measure “maximum interaction”. 

Besides, I was responding to @Awoken who was talking about a machine with a “next level of consciousness”. My point was that if such a machine existed, it would be difficult for us to understand that experience as we’d have no frame of reference. Imagine trying to ask god what it’s like to be god. 

if you think programming is like sex, you probably haven't done much of either. -- capn_midnight
On 09/01/2018 at 11:10 AM, Eric LeClair said:

I like how everybody and their mom is trying to solve the A.I dilemma. You got scientists, mathematicians and all the smarty pants of the world trying to create A.I.

Here is the common sense reason why A.I can't be created.

1. The only difference between a human being and a machine is 'consciousness'. Some people call it a soul or spirit or whatever. Basically, it's energy that's beyond the 5 'human' senses.

2. We are using our 5 senses to create something that is literally 'out of this world'. 

Good luck!

 

Sign me up, where do I send my money?

Lemme see if I’ve got the hang of your logic though:

  • Step 1 - insert some intractable or difficult problem [ here ]
  • Step 3 - therefore this proves ... [ insert some unproven assertions here ]

Ok, let me have a turn.

  • Step 1 - forget “a.i” how about just simple multiplication and division?  

I’ll bet you can’t devise algorithms for something as simple as finding the factors of a large number. Yep that’s all.

You know the factors of 77 are 7 and 11, the factors of 100 are 2, 5, 10, 20 ... but now do that with huge numbers.

All the computers in the world running for thousands of years can’t do it.

  • Step 3 - therefore this clearly proves .... fairies. 

....

Hey I’ve got the hang of it. My cheque is in the mail ...

....

There are some things computer algorithms just cannot do - never mind “a.i” - even some allegedly very simple questions cannot be decided! 

This was all demonstrated only some 80 years ago (Alan Turing / Alonzo Church ...). And no - Moore’s law won’t help nearly enough.

So just because we may or may not be able to solve intractable problems doesn’t mean we get a free ticket to “prove” - or - disprove anything else we like.

Your explanation for why we can’t create a.i. is as good as mine. The answer is fairies.  

You might need to elaborate on step 2 a bit more. So yeah good luck.

 

 

On 2/6/2018 at 5:47 AM, AlexKay said:

I’ll bet you can’t devise algorithms for something as simple as finding the factors of a large number. Yep that’s all.

You know the factors of 77 are 7 and 11, the factors of 100 are 2, 5, 10, 20 ... but now do that with huge numbers.

All the computers in the world running for thousands of years can’t do it.

(...)

There are some things computer algorithms just cannot do - never mind “a.i” - even some allegedly very simple questions cannot be decided! 

This was all demonstrated only some 80 years ago (Alan Turing / Alonzo Church ...). And no - Moore’s law won’t help nearly enough.

So just because we may or may not be able to solve intractable problems doesn’t mean we get a free ticket to “prove” - or - disprove anything else we like.

I feel like this argument is correct in broad strokes, but a bit imprecise.

To clarify, decidability, computational complexity, and "one-way functions" are all distinct things.

Factoring is treated like a one-way function, because checking whether a group of numbers are factors of a particular number is trivial (and consequently has low complexity), but there is no known way of factoring an arbitrary number with a comparable time complexity. Interestingly, it has not been proven that one-way functions even exist at all, to say nothing of whether factoring in particular is a one-way function.

Factorization isn't undecidable, though. It's totally possible to program a universal Turing machine to factor a number in a way that it's always correct, even if it works slowly. For an undecidable problem, such as the halting problem, it is actually impossible to program a machine to solve it in a way that is always correct.
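The distinction above can be made concrete with a minimal sketch (mine, not from the thread): trial division always terminates with the correct factorization, so the problem is decidable; it is merely slow, since the loop runs on the order of sqrt(n) steps, which is exponential in the number of digits of n. Checking a proposed factorization, by contrast, is just a multiplication - that asymmetry is why factoring is treated like a one-way function.

```python
def factorize(n: int) -> list[int]:
    """Prime factorization of n (n >= 2) by trial division.

    Always terminates and is always correct (so factoring is decidable),
    but takes on the order of sqrt(n) steps - hopeless for huge numbers.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors


def is_factorization(n: int, factors: list[int]) -> bool:
    """Verifying a proposed factorization is trivial: one multiplication."""
    product = 1
    for f in factors:
        product *= f
    return product == n


print(factorize(77))                    # [7, 11]
print(factorize(100))                   # [2, 2, 5, 5]
print(is_factorization(100, [2, 2, 5, 5]))  # True
```

Both directions are computable; the (conjectured, unproven) gap is only in how long the forward direction takes.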

-~-The Cow of Darkness-~-

There is a physicist by the name of Roger Penrose who essentially "proved" (some dispute his proof) that the human mind cannot be simulated by a Turing machine, hence the impossibility of creating a truly "conscious" AI. His theory is that the mind is a quantum process that goes on in the microtubules of the brain's neurons. He goes on to show in his book Shadows of the Mind (which I highly recommend reading for those who have a background in CS and quantum mechanics) that Plato might have been right when it comes to describing human thoughts as "metaphysical".

"Penrose and Hameroff

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as Orchestrated Objective Reduction (Orch-OR). Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. The theory was reviewed and updated by the authors in late 2013.[12][13]

Penrose's argument stemmed from Gödel's incompleteness theorems. In Penrose's first book on consciousness, The Emperor's New Mind (1989), he argued that while a formal system cannot prove its own consistency, Gödel’s unprovable results are provable by human mathematicians.[14] He took this disparity to mean that human mathematicians are not formal proof systems and are not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning is based on fallacious equivocation on the meaning of computation.[15]

Penrose determined wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, Penrose proposed a new form of wave function collapse that occurred in isolation and called it objective reduction. He suggested each quantum superposition has its own piece of spacetime curvature and that when these become separated by more than one Planck length they become unstable and collapse.[16] Penrose suggested that objective reduction represented neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derived.[16]

Hameroff provided a hypothesis that microtubules would be suitable hosts for quantum behavior.[17] Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and that may contain delocalized pi electrons. Tubulins have other smaller non-polar regions that contain pi electron-rich indole rings separated by only about 2 nm. Hameroff proposed that these electrons are close enough to become entangled.[18] Hameroff originally suggested the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited.[19] He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules. However, this too was experimentally discredited.[20]

Furthermore, he proposed that condensates in one neuron could extend to many others via gap junctions between neurons, forming a macroscopic quantum feature across an extended area of the brain. When the wave function of this extended condensate collapsed, it was suggested to non-computationally access mathematical understanding and ultimately conscious experience that were hypothetically embedded in the geometry of spacetime.[citation needed]

However, Orch-OR made numerous false biological predictions, and is not an accepted model of brain physiology.[21] In other words, there is a missing link between physics and neuroscience,[22] for instance, the proposed predominance of 'A' lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al.,[23][24] who showed all in vivo microtubules have a 'B' lattice and a seam. The proposed existence of gap junctions between neurons and glial cells was also falsified.[25] Orch-OR predicted that microtubule coherence reaches the synapses via dendritic lamellar bodies (DLBs), however De Zeeuw et al. proved this impossible,[26] by showing that DLBs are located micrometers away from gap junctions.[27]

In January 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013[28] corroborates the Orch-OR theory.[13][29]"

https://en.wikipedia.org/wiki/Quantum_mind

http://www.bbc.com/earth/story/20170215-the-strange-link-between-the-human-mind-and-quantum-physics

And even if we create a Quantum AI which runs on quantum computers, we might not be able to replicate what goes on in a human brain, but merely create a super powered AI that calculates in parallel...

 

@Hermetix, thanks for that.  I've heard of their theory before. When I first heard about the microtubules idea I was sceptical.  The language surrounding the idea is difficult to decipher, but I'm making my way through it.  

13 hours ago, Hermetix said:

There is a physicist by the name of Roger Penrose who essentially "proved" (some dispute his proof) that the human mind cannot be simulated by a Turing machine, hence the impossibility of creating a truly "conscious" AI.

Except not really, at all. It's tough to argue that his claims even meet the standard for being a theory (falsifiability), much less that they're something with any hope of being proven deductively. The conjecture that a human mind can be simulated by a Turing machine is just the Church-Turing thesis, and so far there haven't been any serious challenges to it.

This isn't to say that Penrose's ideas about consciousness aren't interesting, or even that they're not (potentially) true, it's just that there are a lot of assumptions encoded in them, and many of those assumptions are actually pretty weird and not widely accepted. They're certainly not rigorous, either.

In my opinion, the weirdest (but certainly not only) such assumption is actually in the text you reprinted about the incompleteness theorem. It's a personal pet peeve of mine to see people abuse this theorem (which is completely rigorous) to make non-rigorous claims about "consciousness" (or anything, really). In fact, I'd say that asserting that human minds have access to some kind of magic logic that cannot, by assumption be described formally, and then arguing from that to claim that Turing machines (which are, of course, describable by formal logic, since that's the whole point of them) must be "missing" something is roughly the mother of all circular arguments. How do you even argue that such a magic logic exists? You can't do it formally, by assumption. So what is even the point?

-~-The Cow of Darkness-~-
6 hours ago, cowsarenotevil said:

Except not really, at all. It's tough to argue that his claims even meet the standard for being a theory (falsifiability), much less that they're something with any hope of being proven deductively. The conjecture that a human mind can be simulated by a Turing machine is just the Church-Turing thesis, and so far there haven't been any serious challenges to it.

This isn't to say that Penrose's ideas about consciousness aren't interesting, or even that they're not (potentially) true, it's just that there are a lot of assumptions encoded in them, and many of those assumptions are actually pretty weird and not widely accepted. They're certainly not rigorous, either.

In my opinion, the weirdest (but certainly not only) such assumption is actually in the text you reprinted about the incompleteness theorem. It's a personal pet peeve of mine to see people abuse this theorem (which is completely rigorous) to make non-rigorous claims about "consciousness" (or anything, really). In fact, I'd say that asserting that human minds have access to some kind of magic logic that cannot, by assumption be described formally, and then arguing from that to claim that Turing machines (which are, of course, describable by formal logic, since that's the whole point of them) must be "missing" something is roughly the mother of all circular arguments. How do you even argue that such a magic logic exists? You can't do it formally, by assumption. So what is even the point?

 

I agree that some assumptions must be made for his theory using Gödel's incompleteness theorem to work. But this is true for a lot of things in science in general. You assume every day that when you step out of bed each morning, you will not fall inside a black hole...

But I tend to focus more on his Orch-OR theory, which so far has not been refuted and explains how consciousness could emerge in the brain from quantum vibrations in neuron microtubules. The main argument against it was that quantum processes cannot occur inside "warm and wet" environments such as the brain. But since then quantum coherence has been shown to occur in plant photosynthesis and in bird navigation.

I also never really bought the idea that somehow our brains are some kind of "meat computers" and that the mind is just purely electrical. It just does not explain qualia, for one thing. It also does not explain dreaming (lucid or not), OOBEs, or any other kind of "mystical" experiences that are reported by people around the world.  I think so far, the quantum explanation is the best attempt to solve the mind-body problem. If strong AI could be done one day, we will have to solve this problem first. But I'm pretty sure that a real AI would need some kind of vessel that is similar to how biological systems are organized. 

 

49 minutes ago, Hermetix said:

I agree that some assumptions must be made for his theory using Gödel's incompleteness theorem to work. But this is true for a lot of things in science in general. You assume every day that when you step out of bed each morning, you will not fall inside a black hole...

Sure. All I'm saying is that there's a tremendous difference between assuming you won't fall into a black hole tomorrow and proving that it's impossible to fall into a black hole. Like I said before, I'm not even comfortable describing this kind of conjecture about consciousness as a theory, simply because it currently doesn't make any predictions that can be falsified. This is in contrast to other theories, like theories of gravity, which at least make predictions that can actually be measured. And even so, it's still impossible to actually prove that gravity works in a particular way. In fact, it's not even possible to say that gravity is likely to behave in some particular way, due to the problem of induction.

 

49 minutes ago, Hermetix said:

But I tend to focus more on his Orch-OR theory, which so far has not been refuted and explains how consciousness could emerge in the brain from quantum vibrations in neuron microtubules. The main argument against it was that quantum processes cannot occur inside "warm and wet" environments such as the brain. But since then quantum coherence has been shown to occur in plant photosynthesis and in bird navigation.

I'm not aware of any claims that quantum processes aren't possible inside the mind. Isn't the argument more that there's no reason to believe that non-deterministic processes actually have a macroscopic effect on human behavior?

 

49 minutes ago, Hermetix said:

I also never really bought the idea that somehow our brains are some kind of "meat computers" and that the mind is just purely electrical. It just does not explain qualia, for one thing. It also does not explain dreaming (lucid or not), OOBEs, or any other kind of "mystical" experiences that are reported by people around the world.  I think so far, the quantum explanation is the best attempt to solve the mind-body problem. If strong AI could be done one day, we will have to solve this problem first. But I'm pretty sure that a real AI would need some kind of vessel that is similar to how biological systems are organized.

To me, the problem with qualia as a concept is that it appears to be self-evident, rather than being something that can be derived logically or measured empirically. For this reason, it's unfortunately really hard to argue that qualia, which is essentially defined as any subjective experience that is distinct from the actual mechanical behavior of the brain, actually exists at all. For instance, I could just claim that all laptop computers experience "zualia," which, like qualia, cannot be explained in terms of a desktop computer, but that doesn't necessarily imply that there is, or needs to be, some other way of explaining "zualia." It also certainly doesn't imply that it's impossible to build a desktop computer that truly simulates a laptop computer. It would be different if we could actually identify and measure a specific set of behaviors that is unique to laptops, but so far, this hasn't been done. Likewise, if there is something that conscious humans can do that Turing machines can't, no one has found it yet.

I also think that the notion that qualia is somehow based on quantum effects, or indeed anything that can't be described in terms of a Turing machine, doesn't really help to explain qualia, either. The idea that some behavior that can't be described computationally is able to affect our mind in such a way that we're actually able to refer to it is pretty weird. It would be one thing for these processes to affect our behavior in some subtle, difficult-to-describe ways, but having these processes actually affect our behavior in such a precise manner that the physical portion of our brain can actually reference those processes themselves and reason about them symbolically would seem to require some very complex machinery indeed.

-~-The Cow of Darkness-~-
On 11/02/2018 at 7:31 PM, cowsarenotevil said:

I feel like this argument is correct in broad strokes, but a bit imprecise.

To clarify, decidability, computational complexity, and "one-way functions" are all distinct things.

Factoring is treated like a one-way function, because checking whether a group of numbers are factors of a particular number is trivial (and consequently has low complexity), but there is no known way of factoring an arbitrary number with a comparable time complexity. Interestingly, it has not been proven that one-way functions even exist at all, to say nothing of whether factoring in particular is a one-way function.

Factorization isn't undecidable, though. It's totally possible to program a universal Turing machine to factor a number in a way that it's always correct, even if it works slowly. For an undecidable problem, such as the halting problem, it is actually impossible to program a machine to solve it in a way that is always correct.

Ha ha, yes indeed, you are correct.  I wasn't intending to use words like intractable and undecidable in a formal computational complexity sense, I just meant "hard stuff for computers to work out" - but I couldn't think of alternate words.

I am glad you pointed out the more precise meanings of those terms, so thanks for that.

I probably should've just said "when there's hard stuff for computers to work out - don't just take the opportunity to launch into 'a proof' of pretty well anything you like".

Civilisations have been reasoning along these lines for millennia: "why am I sad or sick?", "well of course the devil did it", or in more recent times - "let's do a lobotomy, that'll fix your mood swings", or "your child has ADHD so pop these pills ... trust me, I'm your doctor ..."

Years or centuries later those plausible sounding arguments don't sound so plausible. I forgot his name but didn't Mr Lobotomy win a Nobel Prize for that piece of brilliance? 

 

 

3 hours ago, cowsarenotevil said:

For this reason, it's unfortunately really hard to argue that qualia, which is essentially defined as any subjective experience that is distinct from the actual mechanical behavior of the brain, actually exists at all.

Qualia, and why it's a bad addition to this discussion.  If one uses the word qualia with the idea that qualia is actually some type of mystical substance unique in and of itself, then I'd agree with your statement.  However, if qualia is just a word, a place-holder for subjective experience, then the logic is bananas.  This is why I hate the word: people enjoy trashing the idea of qualia, and I think it's just a philosophical addition to the discussion that confuses the real subject, that being subjective experience.  Would you make the same assertion? 

'For this reason, it's unfortunately really hard to argue that subjective experience,... actually exists at all.'

Stupid qualia, and I don't like that it's so often used interchangeably with subjective experience.  I think a distinction needs to be made.  Qualia is a poor attempt at quantifying subjective experience.  Subjective experiences are emergent phenomena of the brain that we as yet have little insight into.

