Why A.I. is impossible

116 comments, last by Alexandra Grayson 6 years, 1 month ago
On 2/12/2018 at 4:41 AM, Awoken said:

Qualia, and why it's a bad addition to this discussion: if one uses the word qualia with the idea that qualia is actually some type of mystical substance, unique in and of itself, then I'd agree with your statement. However, if qualia is just a placeholder word for subjective experience, then the logic is bananas. This is why I hate the word: people enjoy trashing the idea of qualia, and I think it's just a philosophical addition to the discussion that confuses the ideal, that being subjective experience. Would you make the same assertion?

'For this reason, it's unfortunately really hard to argue that subjective experience,... actually exists at all.'

Stupid qualia, and I don't like that it's often used in combination with subjective experience. I think a distinction needs to be made. Qualia is a poor attempt at quantifying subjective experiences. Subjective experiences are emergent phenomena of the brain into which we still have little insight.

I have mixed feelings. On one hand, I agree that using the word "qualia" brings an implication of something more than simply "subjective experience," and I think that this adds yet another not-necessarily-useful level of specificity to a concept that's already difficult to pin down.

On the other hand, I still think it's pretty tough to talk about "subjective experience" on its own as well -- that is, in a way that actually distinguishes "subjective experience" as a concept from the physical and functional processes of the brain.

There's a popular notion of a "philosophical zombie" that's defined as something that either recreates the physical processes of the brain, or emulates the functional processes of the brain, but without subjective experience. This seems like a good starting point for actually describing subjective experience, but I also think it's a potential trap: even though it's (at least sort of) possible to imagine such a zombie, it's not necessarily the case that such an entity could actually exist. For instance, it's totally possible to "imagine" some statement in formal logic, only to realize that the statement contains a contradiction.
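That last point can be made concrete with a toy example (mine, not from the original post): Russell's barber, who "shaves exactly those who do not shave themselves," is easy to picture, yet a mechanical check shows no consistent version of him can exist:

```python
# Toy illustration of "imaginable but contradictory":
# Russell's barber "shaves exactly those who do not shave themselves."
# Applied to the barber himself, the rule demands
#     shaves(barber, barber) <=> not shaves(barber, barber)
# Enumerating both possible truth values shows no consistent world exists,
# even though the sentence is easy to picture.
satisfiable = any(b == (not b) for b in (False, True))
print(satisfiable)  # False: the "imaginable" barber cannot exist
```

The worry about philosophical zombies is analogous: being able to describe one doesn't establish that the description is consistent.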

Basically, I think it's difficult to distinguish "qualia" from "subjective experience," but it's also surprisingly hard to distinguish "subjective experience" from "no experience at all," so I don't think dropping the notion of "qualia" really gets us much further.

-~-The Cow of Darkness-~-
On 16/02/2018 at 11:41 PM, cowsarenotevil said:

On the other hand, I still think it's pretty tough to talk about "subjective experience" on its own as well -- that is, in a way that actually distinguishes "subjective experience" as a concept from the physical and functional processes of the brain.

There's a popular notion of a "philosophical zombie" that's defined as something that either recreates the physical processes of the brain, or emulates the functional processes of the brain, but without subjective experience. This seems like a good starting point for actually describing subjective experience, but I also think it's a potential trap: even though it's (at least sort of) possible to imagine such a zombie, it's not necessarily the case that such an entity could actually exist. For instance, it's totally possible to "imagine" some statement in formal logic, only to realize that the statement contains a contradiction.

Basically, I think it's difficult to distinguish "qualia" from "subjective experience," but it's also surprisingly hard to distinguish "subjective experience" from "no experience at all," so I don't think dropping the notion of "qualia" really gets us much further.

Nicely said.  To be blunt... for me, my subjective experiences are all I've got; if it weren't for them I'd have no lens through which to 'view' this world.  Naturally I cling to them, or perhaps the 'idea' of them, for comfort.

"This seems like a good starting point for actually describing subjective experience, but I also think it's a potential trap: even though it's (at least sort of) possible to imagine such a zombie, it's not necessarily the case that such an entity could actually exist. For instance, it's totally possible to "imagine" some statement in formal logic, only to realize that the statement contains a contradiction."

God comes to mind... haha.

" On the other hand, I still think that's pretty tough to talk about "subjective experience" on its own as well -- that is, in a way that actually distinguishes "subjective experience" as a concept from the physical and functional processes of the brain. "

I like it, and I'd like to explore this point further; I'll need clarification on your end, and you'll have to pardon my attempts at articulation.  I could view subjective experiences as something 'otherworldly', something separate from the physical and functional, but I've been taught better and I know that there is nothing separate from reality.  However, all of my teachings about the physical and functional encourage me to abstract the world around me into quantifiable bits that can be measured and examined for scientific research.  Suddenly I'm presented with a dilemma: how would I go about such research into my own subjective reality?  How do I go about quantifying it?  Quantifying it in such a manner as to remove all possible doubt of a philosophical zombie?

I try to keep up to date on brain research and learn as many new insights as I can about the latest discoveries in neuroscience.  They have told us so many fascinating things.  I assume you've heard of Synesthesia?  
I get that somewhere down the line of physical processes a "sensation" arises, but where? And, the more important question, can we ever know?

1 hour ago, Awoken said:

Suddenly I'm presented with a dilemma: how would I go about such research into my own subjective reality?  How do I go about quantifying it?  Quantifying it in such a manner as to remove all possible doubt of a philosophical zombie?

I try to keep up to date on brain research and learn as many new insights as I can about the latest discoveries in neuroscience.  They have told us so many fascinating things.  I assume you've heard of Synesthesia?  
I get that somewhere down the line of physical processes a "sensation" arises, but where? And, the more important question... can we ever know?

I'm actually fairly optimistic that a "definitive" answer not only exists but might even be known in the next hundred years, if advances in AI and technology in general continue at a good pace, precisely because I think we can provide meaningful evidence for or against the possibility of a "philosophical zombie."

If we can make a complete computer model of the human mind, then the only "difficult" problem is finding where, in that model, this notion of "subjective experience" actually comes from. A daunting task, for sure, but not necessarily an impossible one.

It's already practical to use formal verification systems to prove things about models of certain biological systems, so I can imagine asking some modeled human mind, whom I'll call Mr. Robot, to attempt to describe "subjective experience," and then working backwards to study the processes and structures that lead to him discussing subjective experience. It's tough to say how satisfying such an answer will be, or exactly what other interesting conclusions we can derive from it, but this knowledge would at least be sufficient to rule out the idea that subjective experience is something external to the mind (or at the very least to rule out that the subjective experience we can talk about, or that otherwise affects our behavior, is external to the mind).

This would not, strictly speaking, rule out the possibility that Mr. Robot is a philosophical zombie, but the alternative leaves us stuck in a really weird place: we'd have studied the brain and found out precisely why we can talk about philosophical zombies in the first place, only for this to be completely unrelated to real philosophical zombies. At that point, we'd absolutely never be able to study or analyze real philosophical zombies, because by assumption we literally cannot talk about them at all.

On the other hand, if subjective experience actually turns out to really be something external to the mind (e.g. some kind of weird quantum behavior that's not computable and can't be modeled at all), I expect that that would become fairly clear as well -- as our ability to model physical systems and AI improves, I can only imagine it'll become increasingly clear where this "breakdown" occurs. That is, I imagine that the more "mind-like" things we can model computationally, the better hope we'll have of discovering exactly what portion of a real mind we can't model.

Obviously this second case would be "nice" in that it suggests that there might be life after death, that humans really are "special," and that we don't have to feel guilty about experimenting on poor Mr. Robot because he doesn't really feel pain. I of course have no reason to think these things aren't true, they're just not scientifically useful until they can be used to make falsifiable predictions that differentiate them from the "brains are just computers" version -- which is something we get for free if we do discover a specific reason why we can't model a human mind computationally.

8 hours ago, cowsarenotevil said:

If we can make a complete computer model of the human mind, then the only "difficult" problem is finding where, in that model, this notion of "subjective experience" actually comes from. A daunting task, for sure, but not necessarily an impossible one.

Could you imagine a time when we're able to say "Hey look, the sensation of yellow for Mr. Robot came about between lines 335 and 442 in the processWhatEver() function... Cool!" :D  

Somebody put a worm's mind inside a Lego robot. I'm not kidding.

http://openworm.org/

It means that AI is probable, we just need to improve our technology. Later we might be able to emulate bees, then mice, then cats, and humans. I can't wait to see what the future holds!

Codeloader - Free games, stories, and articles!
If you stare at a computer for 5 minutes you might be a nerdneck!
https://www.codeloader.dev

On 11/02/2018 at 1:57 PM, Hermetix said:

There is a physicist by the name of Roger Penrose who essentially "proved" (some dispute his proof) that the human mind cannot be simulated by a Turing machine, hence the impossibility of creating a truly "conscious" AI. His theory is that the mind is a quantum process that goes on in the microtubules of the brain's neurons. He goes on to show in his book Shadows of the Mind (which I highly recommend reading for those who have a background in CS and quantum mechanics) that Plato might have been right when it comes to describing human thoughts as "metaphysical".

Okay, so just add some RNG to your Turing machine. Problem solved!

"I would try to find halo source code by bungie best fps engine ever created, u see why call of duty loses speed due to its detail." -- GettingNifty

On a more serious note, here are my 2 cents on the whole "subjective experience" discussion: I think it's easy to look at a system that behaves slightly differently than ourselves and go "pff, this thing is clearly not conscious". We've seen this said about animals, and now we're seeing it said about computers, and I don't think it's as clear-cut as you might initially think.

Self-consciousness is the ability to observe and react to one's own actions. Who is to say that a PID control loop is not "self conscious"? You might laugh at this idea, but if you think about it, what arguments can you really make against it? Are living systems not just a highly complex composition of millions of self-regulatory systems, where the thing that identifies as "me" is at the very top of the hundreds of layers? Who is to say that each of those systems is not self-conscious in its own way and has its own subjective experience? When a thought enters your mind, you proceed to think about that thought, and think about thinking about that thought, and so forth. This process of self-thinking is some kind of feedback loop, which relies on the results of many "lower level" feedback loops, right down to the molecular level (or perhaps even further, who knows). This is also the reason why you see fractals when you do psychedelics, because systems with feedback loops are recursive, but that's beside the point.
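For readers who haven't met one: a PID controller really is a loop that continuously observes the effect of its own past actions and reacts to the difference from its goal. A rough Python sketch (the plant model, gains, and setpoint here are all invented for illustration):

```python
class PID:
    """Minimal PID controller: it observes the error between the effect of
    its own past actions (the measurement) and its goal (the setpoint),
    and reacts to it -- a tiny self-regulating feedback loop."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement          # observe own effect
        self.integral += error * dt                  # remember the past
        derivative = (error - self.prev_error) / dt  # anticipate the future
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Invented toy "plant": a value nudged by whatever the controller outputs.
value = 0.0
pid = PID(kp=0.8, ki=0.2, kd=0.1, setpoint=10.0)
for _ in range(300):
    value += pid.update(value, dt=0.1) * 0.1
print(round(value, 2))  # settles near the setpoint of 10.0
```

Whether "observing and correcting your own output" counts as a grain of self-consciousness is exactly the question the paragraph above is asking.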

And for that matter, who is to say we are at the "top" of this chain? Humans form structures such as societies or companies, which also have an ability to self observe and react accordingly. Who is to say companies aren't conscious? Or the environment isn't conscious? Or the billions of devices connected to the internet haven't formed a self-conscious "brain" of some kind? Or the galaxy isn't one gigantic conscious super-organism? It might be very different from our own consciousness, but again, that doesn't necessarily make it unconscious.

Randomness is another point of discussion. Must a self-conscious system necessarily have an element of randomness? There are numerous psychological experiments that predict how a human will respond to specific situations with an astoundingly high degree of accuracy (see the monkey ladder experiment, the Stanford prison experiment, and brain scans that predict your behaviour better than you can). It almost seems like we are under an illusion of being in control, and perhaps the actions we take are for the most part predetermined. Whether this is true or not is of course unknown, but the real question is: does it matter? If so, why?

Just because it appears that human consciousness is not computable doesn't mean it's random. It is very obviously highly organized, otherwise you'd be unable to respond to this thread, or even have an experience for that matter.

So: If I were to add an RNG to my Turing machine to make it less predictable and thus "more conscious", isn't that taking a step back from the actual problem?


Current artificial intelligence systems become intelligent by learning how to do things.

We already know that nature creates a vastly diverse spectrum of materials, structures, and life forms by repeating processes over time. This has a tendency to use changes in context to create simple, progressive augmentation of the 'stuff' that manages to survive. I call that 'repetition with a twist'. It's completely analogous to the way that humans "create".

I've learned that electronic silicon intelligence engines are already able to repeat and twist at speeds that are orders of magnitude faster than our extremely slow bio-chemical thinking processes. And, unlike humans, these electronic intelligences will have rapid, if not instant, access to ALL of the accumulated information known to humankind, now and ever more, plus they will be multitasking. It's already a reality that's growing and progressing as we speak. Robots with artificial intelligence will ultimately be able to think for themselves, create, and reproduce their own kind (in their own way, of course). Maybe not tomorrow, but soon after.
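"Repetition with a twist" can be sketched as a toy copy-mutate-select loop (a generic illustration of the principle, not anyone's actual system; the target string and alphabet are invented): copy the survivor, randomly change one character, and keep the variant that does at least as well.

```python
import random

TARGET = "repetition with a twist"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s: str) -> int:
    # How many characters already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(1)
candidate = "".join(random.choice(ALPHABET) for _ in TARGET)

while fitness(candidate) < len(TARGET):
    # Repetition: copy the survivor. Twist: mutate one random position.
    i = random.randrange(len(TARGET))
    twisted = candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]
    # Selection: keep whichever variant survives the comparison.
    if fitness(twisted) >= fitness(candidate):
        candidate = twisted

print(candidate)  # "repetition with a twist"
```

Blind repetition plus small random twists plus "keep what survives" is enough to accumulate structure, which is the whole point being made above.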

cheers

This topic is closed to new replies.
