
Why A.I is impossible


Among a lot of BS, I think one of Eric's core points is most likely correct. But I guess he is approaching it from the wrong angle and probably using the wrong words and logic. For one, in my opinion the statement that "we are all one single consciousness" is complete BS, but that "the seed of consciousness is separate from the brain and that's what makes us real humans", I think that's CORRECT.

I use "seed" for a particular reason. Without the brain there is no awareness. But it's a deep and long topic and I haven't got the time to write about it now. I'm seriously behind schedule on what I'm developing at the moment. I think this thread will be long dead by the time I complete my coding and have the time.

Also, mikeman's link seems very interesting; I haven't read the whole thing though.

3 hours ago, deltaKshatriya said:

primarily because our first true AGIs are almost certainly going to think in a manner completely alien to us,

Why? If humans coded the logic by which the "AGI" thinks, operates, and builds upon itself... the AGI would develop (because of infinite and fast self-programming resources) to become more advanced than us, but why would it be alien to us?

Edited by grumpyOldDude

14 minutes ago, grumpyOldDude said:

that "the seed of consciousness is separate from the brain and that's what makes us real human", I think that's CORRECT. 

But why?


To apply some of the common 'logic' to other things:

There is more to a car's complex engine than just the engine! Just look at it. There are tons of parts, and if you remove a few parts, like a spark plug or two, then it will still mostly run, but not great. Things can be added, and some bits can even be moved around, and there are lots and lots and lots of different engines out there, but let's be honest and admit that there is no way humanity could ever understand how an engine TRULY works, so it must rely on some outside factor to operate that we haven't yet discovered...

The brain, likewise, is a bio-chemical-electrical machine, and it really doesn't make much logical sense to assume there is anything magical or otherworldly needed for it to run, or for it to be simulated in another system once we can precisely define the functionality of the original.

45 minutes ago, grumpyOldDude said:

Why? If humans coded the logic by which the "AGI" thinks, operates, and builds upon itself... the AGI would develop (because of infinite and fast self-programming resources) to become more advanced than us, but why would it be alien to us?

Well, the question is: what do you define as thinking? If thinking is simply defined as small pieces acting in accordance to produce some sort of output to an input, then wouldn't a search engine qualify as thinking? It's not human thinking, in the sense that it thinks using some sort of search algorithm. Something similar could be said for a natural language engine. A navigation algorithm does also think in that sense. Moreover, AGI would emerge from these sorts of things interacting with one another in ways we didn't foresee. Kind of like the general goal of machine learning. So if we cannot foresee how these things would interact with one another, how it would utilize algorithms, what it would emphasize, etc., its form of thinking would seem alien. We wouldn't see it as 'thinking' necessarily.

Then there's just hardware. Machines are built on transistors, and inherently use base two. We are built on neurons and count on 10 fingers. We perceive through eyes, ears, skin, nose, etc. Machines can perceive differently. Machines use different means to perceive similar things. Moreover, machines can perceive things we simply cannot. These are the reasons I think any machine intelligence would be alien.

4 hours ago, mikeman said:

Well, as long as we're talking about it...

https://en.wikipedia.org/wiki/Chinese_room

Maybe I am wrong, but it looks to me like the Chinese room thought experiment has been beaten by deep learning.

Here is why:

Let's assume that language is, in this case, a detection/description tool for the machine. So let's throw into deep learning all the symbol combinations, and combinations of combinations (higher levels in the neural network), etc. Now we should have a tool to describe a state.

Now we need to get knowledge about the states, so we apply deep learning to human society and to the topics the translations are applied to. Bam, now the computer knows that granny + grandad equates to 150% more Christmas gifts on average than grandad alone. Now let's talk about Easter, when grandad is at the spa. I think that's doable to some degree of certainty.
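To make the idea concrete, here is a minimal, hypothetical sketch (my own illustration, not Osidlus's actual proposal): instead of following a Chinese-room rulebook, a program can build its own associations between symbols purely by counting which symbols occur together in data. The tiny corpus and the related() helper below are invented for the example.

from collections import Counter
from itertools import combinations

# toy corpus standing in for "human society" data
corpus = [
    "granny and grandad visit at christmas with gifts",
    "grandad visits alone at easter",
    "granny bakes at christmas",
    "gifts arrive at christmas",
]

cooc = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        cooc[(a, b)] += 1          # count symbol pairs seen in the same sentence

def related(word, top=3):
    """Return the symbols most strongly associated with `word` so far."""
    scores = Counter()
    for (a, b), n in cooc.items():
        if word == a:
            scores[b] += n
        elif word == b:
            scores[a] += n
    return scores.most_common(top)

print(related("christmas"))        # e.g. [('at', 3), ('granny', 2), ('gifts', 2)]

Nothing here "understands" Christmas; the associations simply fall out of the statistics, which is the sense in which a learned system differs from the man in the room following fixed rules.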

 

 

Edited by Osidlus


Two foundational elements of a "thinking machine" already exist. DeepMind's systems can learn and adapt through trial and error, and there are many different means of creating a "self programming" computer. These are two key elements of a "thinking machine". I can create a "self programming simulation" and I am not even a programmer. If you think in terms of centuries instead of years, and we can already do these two things, it seems nearly a certainty to me that we will have "thinking machines" within a few centuries, and maybe even a lot sooner than that.
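As a concrete illustration of the trial-and-error building block, here is a hedged sketch: textbook tabular Q-learning on a made-up six-state corridor, not how DeepMind's systems actually work.

import random

N_STATES, GOAL = 6, 5              # a corridor of states 0..5; reward at state 5
ACTIONS = [-1, +1]                 # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != GOAL:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)     # sometimes explore at random
        else:                              # otherwise exploit, breaking ties randomly
            a = max(ACTIONS, key=lambda act: (q[(s, act)], random.random()))
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # update the value estimate from the outcome of this single trial
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# states before the goal learn that stepping right (+1) is the better action
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])

No behaviour was programmed in beyond the update rule; the preference for moving right is learned purely from repeated trial and error.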

The issue then becomes how you define "intelligence". Even if we reach the point where machines can actually "think", is thinking alone intelligence? Is something like a "soul", if such a thing even exists, necessary for it to truly be considered a "thinking machine"? Does it need to be sentient to have truly achieved the goal?

I think a "thinking machine" is a near certainty, considering that we already have some of the basic building blocks for doing that and 300 years (for example) is a very long time to work the rest of it out.  So if you are just talking about a "thinking machine" I think that is an eventually certainty.  Commander Data, on the other hand, is a lot more than just a "thinking machine".  So the definition of "intelligence" is the key to this discussion, otherwise everyone is likely to be talking about different things.


 

21 hours ago, deltaKshatriya said:

If thinking is simply defined as small pieces acting in accordance to produce some sort of output to an input, then wouldn't a search engine qualify as thinking? It's not human thinking, in the sense that it thinks using some sort of search algorithm. Something similar could be said for a natural language engine. A navigation algorithm does also think in that sense.

             ...

So if we cannot foresee how these things would interact with one another, how it would utilize algorithms, what it would emphasize, etc., its form of thinking would seem alien. We wouldn't see it as 'thinking' necessarily.

A search engine or a navigation algorithm doesn't come near qualifying as a thinking machine. Someone presses a button and out comes the output. They don't think independently, don't make decisions independently, and are not creative. They don't make choices. A chess program doesn't have a mind, and as such its decisions are not really independent; they are programmed decisions. They only obey your commands.

21 hours ago, Kavik Kang said:

I think a "thinking machine" is a near certainty, considering that we already have some of the basic building blocks for doing that and 300 years (for example) is a very long time to work the rest of it out.  So if you are just talking about a "thinking machine" I think that is an eventually certainty.  Commander Data, on the other hand, is a lot more than just a "thinking machine".  So the definition of "intelligence" is the key to this discussion, otherwise everyone is likely to be talking about different things.

In the future, at best I can see machines having only a pseudo-human mind.

A machine can simulate a human mind or intelligence, but it would be missing self-awareness + independent creativity (for instance, independently designing and constructing another machine based on its own intuition) + social endeavours.

You might say that termites do not have self-awareness, i.e. they cannot recognise themselves in a mirror, but they meet the other two requirements.

 

Edited by grumpyOldDude

4 hours ago, grumpyOldDude said:

Why? If humans coded the logic by which the "AGI" thinks, operates, and builds upon itself... the AGI would develop (because of infinite and fast self-programming resources) to become more advanced than us, but why would it be alien to us?

Because we don't code the logic. 

No-one is going to write an AGI the way we write "normal" computer programs. You can't write 

if (isHappy()) smile();
else if (isAngry()) frown();

AGIs are simply way too complex for this. We don't even know how our existing machine learning algorithms work, and in many ways, we can't know.... the datasets are simply too complex for us. This might also be the reason that we don't understand consciousness. It could be that "consciousness" is simply an emergent property of extremely complex data processing (possibly an abstraction?).
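To illustrate the contrast, here is a hedged sketch of my own (the feature data is invented, and this is not anyone's actual AGI design): instead of hand-writing the behavioural logic, we hand-write only a learning rule and let the behaviour fall out of the data.

import random

# toy training data: (features, should_smile); the two feature values are made up
data = [((1.0, 0.0), 1), ((0.9, 0.1), 1), ((0.1, 0.9), 0), ((0.0, 1.0), 0)]
w, b = [0.0, 0.0], 0.0

for _ in range(200):                          # simple perceptron training loop
    x, target = random.choice(data)
    out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
    err = target - out                        # learn from the mistake, if any
    w = [w[0] + 0.1 * err * x[0], w[1] + 0.1 * err * x[1]]
    b += 0.1 * err

# nobody wrote "if happy then smile"; the decision boundary was learned
print([(1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])  # usually [1, 1, 0, 0]

Scale that idea up by many orders of magnitude and the "logic" is buried in millions of learned weights, which is why we can't simply read off how the system reasons.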

I would recommend this as a simple primer to machine learning

 

Edited by ChaosEngine


I think the biggest limitation on our development of a sentient artificial intelligence right now is our model for the human mind and how intelligence works. The model is extremely limited and poorly understood, even by neuroscientists and brain surgeons. However, given enough research and time, that particular scientific model will progress towards higher levels of correctness (which is interesting in a different way, because it would be the first model which is self-aware).

As far as intelligence goes, I think it's more of an emergent property of our neural topology. There's nothing magical or fancy about it, and to some people who wish to see magic where it doesn't exist, this may be disturbing on an existential-crisis type of level. A narcissistic part of our identity wants to believe we're unique and special, but the reality is that we're really not, and that may be hard to deal with.

To say that creating sentient artificial intelligence is "impossible" is a completely foolish and absurd claim which hints at a level of unawareness/ignorance on your part. Just because you don't know how to do it, doesn't mean it isn't possible. Although our current scientific models for general intelligence have big gaps, there is no guarantee that those gaps will continue to exist far into the future. You just can't say with reasonable certainty what type of technological achievements will never be possible, because you'd just be applying modern ignorances towards the future.


The dude in the first post said AI is impossible because the brain doesn't control us; our 'soul' or 'spirit' or whatever does, and therefore intelligence would be impossible to replicate.

If you ask anyone who is into religion and the Bible, they will tell you that animals have no soul.

https://heritagebbc.com/bible-question-and-answer-archive-1/iii-1-do-animals-have-a-spirit/

So where does their intelligence and capacity for learning come from then?

 

11 hours ago, slayemin said:

To say that creating sentient artificial intelligence is "impossible" is a completely foolish and absurd claim which hints at a level of unawareness/ignorance on your part.

Huh, that's interesting because I've always stated the exact opposite

 

11 hours ago, slayemin said:

I think the biggest limitation on our development of a sentient artificial intelligence right now is our model for the human mind and how intelligence works. The model is extremely limited and poorly understood, even by neuroscientists and brain surgeons. However, given enough research and time, that particular scientific model will progress towards higher levels of correctness (which is interesting in a different way, because it would be the first model which is self-aware).

For me, this is a huuuge stretch. Though I won't say it's impossible, I think that the way people currently frame this discussion, when they talk about creating sentient or self-aware machines, is way, way off. Especially if someone thinks it's gonna come out of a computer science laboratory.

All this being said, though, I think this brings up something important about the nature of our subjective experience, or the nature of our self-awareness. The two philosophical camps are these: that our everyday conscious experience is an illusion, a by-product of neural activity, and we are just passive observers hopelessly clinging to the idea that free choice is a thing and that we exert some influence on our lives. The alternative is that we do have free choice and that it's because of our conscious decision making that we conduct ourselves the way we do. The former position is held by most of the skeptical community, Daniel Dennett, Sam Harris and others. The latter is held by the majority of people, including myself. However, if free choice is actually an influencing agent in the universe, that must mean it abides by some sort of rules, measurable rules that perhaps science could peer into.

Now, that being said, slayemin, scientific inquiry is something which is abstracted outside of our conscious experience. It is a tool we use for understanding those things which are reducible and which can be repeatedly studied and analysed by others. Trying to use science to understand how self-aware systems work is next to impossible because, as someone else pointed out earlier in this thread, how do they really know anyone else really has a subjective conscious experience? For this reason I don't think our current model for analysing the world is going to provide significant insights. And I'm 99.999999% confident no computer science lab is gonna create a self-aware machine.

Edited by Awoken

8 hours ago, Awoken said:

The two philosophical camps are these; that our everyday conscious experience is an illusion, a by-product of neural activity and we are just passive observers hopelessly clinging to the idea that free choice is a thing and that we exert some influence on our lives.  The alternative is that we do have free choice and that it's because of our conscious decision making that we conduct ourselves the way we do. 

 

Consciousness and free will are not the same things. I don't believe in free will, but I definitely have a subjective conscious experience of the world. Even if I don't understand or know the mechanisms around why I make a choice, it ultimately subjectively feels like I do.

 

8 hours ago, Hodgman said:

There's also the camp who believes that the actual physical mechanisms behind thought are rooted in quantum behavior, which is probabilistic, which makes the whole thing "just physics" without having to say that it's deterministic (keeping the "free" part of "free will" free, and leaving the door open for a God who rolls dice).

I tend to believe this, I just consider it random rather than "free will". All of my interactions and experiences determine the probabilities of me making a specific choice, but in the moment, the "decision" is made at a physical level which my conscious self interprets as a choice. 

That said, I don't think that you need either consciousness or free will to create an AGI. 

2 hours ago, ChaosEngine said:

Consciousness and free will are not the same things.

This is why going for concision on a topic like this is problematic.

2 hours ago, ChaosEngine said:

I tend to believe this, I just consider it random rather than "free will". All of my interactions and experiences determine the probabilities of me making a specific choice, but in the moment, the "decision" is made at a physical level which my conscious self interprets as a choice. 

That said, I don't think that you need either consciousness or free will to create an AGI. 

If in fact what you say is true, then I'd agree with you that it's unnecessary to create a self-aware or sentient AGI, which is also why it's an important philosophical question to answer. However, if we do have some level of free will, which on some quantum level enacts enough of a butterfly-wing effect on the whole system to propagate what we'd recognise as a choice of action, then creating self-aware machines would be necessary. The latter I happen to believe to be true, which is why no computer science lab is going to achieve it without some other discipline mixed in.

Edited by Awoken

Quote

Trying to use science to understand how self-aware systems work is next to impossible because, as someone else pointed out earlier in this thread, how do they really know anyone else really has a subjective conscious experience?

That's not true; you can now even determine by experiments with animals whether they have a self-image or not. There are animals (some species of birds, monkeys, dolphins) that recognize themselves in a mirror, and this clearly indicates that they have a sense of self-awareness. It is therefore possible to clearly identify by observation whether an animal recognizes itself in the mirror or not. Self-awareness, in my opinion, just marks a more advanced intelligence. But once again, an AI does not need self-awareness or a "soul".

Edited by zer0force

1 hour ago, zer0force said:

That's not true; you can now even determine by experiments with animals whether they have a self-image or not. There are animals (some species of birds, monkeys, dolphins) that recognize themselves in a mirror, and this clearly indicates that they have a sense of self-awareness. It is therefore possible to clearly identify by observation whether an animal recognizes itself in the mirror or not. Self-awareness, in my opinion, just marks a more advanced intelligence. But once again, an AI does not need self-awareness or a "soul".

Yes, I agree with you; again, this is a big problem when tackling this subject material: getting on the same page and using the same language to describe the same things. To clarify further what was meant by the comment, the original poster had communicated something along the lines of "How do I really know that anybody else is conscious at all?". What I believe he was getting at was that there is no way for him to verify that anybody other than him is actually having subjective experiences; for all he knows, we're just a bunch of clever robots without a subjective experience to call our own. And I was using his example to make the case that, for the same reason, we could not verify that an AGI has subjective experiences.


The biggest problem here comes down to definitions.

What is an AI?  What does it mean to have an AI that is learning, or that is self aware? What is a "real" AI?

Turing's original question was "Can machines think?", and his interpretation of it was whether a machine could successfully imitate a human. Even though his question asked one thing, the test he proposed was whether the machine could convince an interrogator that it was human. In later papers he asked similar questions about whether a machine could demonstrate intelligent behavior.

The original Turing Test has been passed and exceeded many times over. People create newer and bigger and better tests. Each new test tends to amount to: the things we have today are not AI, but the promise of the future is AI.

As for consciousness and awareness, that's a very tricky one.  I had one presenter on the topic present it this way:

Let's not look at today. Let's look at a machine built in the future.

Imagine a machine of the future that is built to look exactly like a human, designed to move exactly like a particular human. The machine has input that mimics human senses in all ways, not just smell and touch and vision, but input that responds as hunger and thirst and pain and space and motion and everything else. Imagine the AI for the machine has been trained specifically based on that human's entire life experiences. The AI can recall all the things the human can recall, it has imprints but cannot recall the things the human has forgotten. In all measurable ways the AI and the human are identical.  Given identical stimuli the AI would behave identically to the human across the human's lifetime up until that time. When the experiment begins the AI can continue to learn from new experiences and modify its behavior in a fashion similar to actual humans. The two are placed side by side and the experiment begins. Up until that instant they are effectively the same person. As far as we can externally observe, the machine believes itself to be that human standing next to it.

When that experiment begins, anyone questioning or observing the two would say they are identical. For a time they behave nearly identically, with the first divergence being the moment they stand next to each other. The two will drift apart as their inputs change and time passes.

Since the machine behaves in all ways the same as the human would, is the AI alive, or is it a machine, is it both, is it something else?  Is that machine actually thinking, or is it simulating the process of thinking, and is there a difference between the two? Since it seems to believe it is conscious and believes it is self aware, is it actually self aware and conscious? On the topic of souls, does the machine have a soul, why or why not?

 

Those definitions about what it means to be human (or not) become very difficult to answer.

 

Going further, Turing's original topics were to see if an AI could perform as a human. But later people asked if an AI should be exactly as good as humans to be considered intelligent. Does that include human failings?  Personally I want a self driving car's AI to be far better than humans. I want a medical AI to far outstrip any physician today, being an expert on all fields.  I expect that incorrect judgments will still be made because we cannot predict all things perfectly, but I want most AI systems to be far better than humans would ever be.

In games, I want my AI to provide an entertaining experience, but I want it to perform worse than humans. I absolutely do not want an AI that always makes perfect headshots, always moves to precisely the correct locations, and always responds in the mathematically ideal way. As the game begins I expect the AI to be very forgiving so the game is approachable. As the game progresses it increases in difficulty. After time I expect the AI to provide a difficult challenge and (depending on the game) defeat me occasionally, but I also expect to be able to win the game and have a satisfying experience in the process.

 

So much depends on the definitions and the context. What makes a good AI in one situation is a terrible AI in another situation.  Calling something a machine or an algorithm versus calling it an intelligence is a very difficult line to draw, and in some viewpoints that line does not exist at all.

 

There's a lot of good stuff in frob's post above to respond to.

I think the first step towards finding valuable knowledge and wisdom comes from asking the correct questions. I think you're on the right track. The first question:
"What is an AI?"
Well, AI stands for "Artificial Intelligence", which hints at the correct answer and correct question to be asking. The correct question to be asking is, "What are the defining characteristics of intelligence?" Until you can come up with a rigorous definition for what constitutes intelligence, you won't be able to create an artificial simulation of that intelligence. Like I said in my previous post, our models for intelligence are still pretty rudimentary and have a lot of gaps and unanswered questions. I think the best approach for defining intelligence itself is to look through the animal kingdom and try to observe behaviors which suggest intelligence. The first question to ask and answer: "What is the lowest threshold for intelligence?". I think, to answer that question best, the observer should have a very generous level of tolerance for intelligent behavior. I personally would start at the microscopic level and observe single celled organisms to determine if there is any reactionary intelligence, or if its all instinctive behavior. IF true intelligent behavior can be observed at the microscopic level, then looking towards multi-celled organisms and complex organisms would be unnecessary. Let's assume that we can find true intelligence at the single cellular organism level. The subsequent task of the researcher is to figure out how the intelligence works. Since the organism is single cellular, we can say that the mechanisms for intelligence are going to be extremely simple/rudimentary, so dissecting it would be relatively straight forward. I think this would probably yield some pretty fruitful discoveries, definitions, and generalizations. The next task would be to slowly move up the life form complexity scale, trying to determine if they too have intelligence and if the definitions and generalizations still hold. Eventually, you can start looking at insects such as ants and flies, and spend a lot of time determining how their cellular composition creates the behaviors they have. I think at this level, you'd start to see some emergent patterns which are universally consistent across all life forms. They may have neural type cells which drive behavior, and their brains are small enough that you could probably examine each and every neuron and its connection with other neurons under a microscope, and then create an artificial replica of that system in a computer program. Now, let's say that we perfectly captured the neural topology of an ant brain and the neural behaviors, given all stimuli and inputs. We've effectively recreated the brain of an ant. The interesting realization I think we could make here is that whether the brain is composed of organic cells or digital representations, as long as the underlying mechanisms for behavior are exactly the same, the resulting intelligences are indistinguishable and indiscernible. If you consider two indiscernible objects to be the same thing, then the organic and digital brains are exactly the same. In light of this, the next interesting question to ask:  What makes the intelligence "artificial"? What's artificial about it if its indiscernible from the organic version? To put it another way, if we somehow could read the type and position of every single atom in an ant brain and make an exact copy of it at the atomic level, such that the copy is an organic equivalent to the original ant brain, would that organic equivalent be "artificial"? I think the underlying question is about what makes something "artificial"? 
If we want to define the atomic copy of the brain as a natural intelligence, then you can't ignore the fact that it was manually created, atom by atom, cell by cell, until it had the exact same behavior as the original. But if we can make a perfect simulation of the underlying behaviors of each atom and each cell within a digital model of the brain, then is that digital model of the brain any less natural than the organic version? Our digital model would be so precisely defined that, if you wanted to, you could create an organic equivalent, atom by atom, cell by cell, such that it would be atomically indistinguishable from the original. If you agree, then we would have to say that the digital version of the brain isn't any more "artificial" than the original, and if the digital brain exhibits all of the hallmarks of intelligence as previously defined, then we have a digital intelligence rather than an artificial intelligence, and the term "artificial intelligence" is a poor description for what we have.
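For what it's worth, a fully mapped nervous system of the kind described above reduces to something a program can represent directly: a weighted, directed graph plus an update rule. The three-neuron network below is a hedged, made-up toy, not real connectome data, but it shows the shape of such a "digital replica".

# invented wiring: presynaptic neuron -> [(postsynaptic neuron, weight), ...]
connections = {
    "sensor": [("inter", 1.1)],
    "inter":  [("motor", 1.2)],
    "motor":  [],
}
THRESHOLD = 1.0
activation = {name: 0.0 for name in connections}

def step(external_input):
    """Advance the network one tick; neurons at or above threshold fire."""
    global activation
    fired = {name for name, level in activation.items() if level >= THRESHOLD}
    nxt = {name: 0.0 for name in connections}
    for name, amount in external_input.items():
        nxt[name] += amount                 # stimulus from outside the network
    for pre in fired:
        for post, weight in connections[pre]:
            nxt[post] += weight             # deliver weighted input to targets
    activation = nxt
    return fired

for t in range(4):
    print(t, step({"sensor": 1.0} if t == 0 else {}))
# the single stimulus cascades: sensor fires, then inter, then motor

Whether stepping such a graph at full biological fidelity would reproduce intelligence is exactly the point under debate; the sketch only shows that the topology itself is straightforward to represent.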

I think, if we're armed with this fundamental idea for intelligence, then the concepts of higher levels of intelligence and thinking are within our grasp. I also think that we would have to start defining intelligence along a gradient level, because the intellect of a single celled organism will be vastly inferior to the intellect of a crow, and the crow would be vastly inferior to the intellect of a human being, and the intellect of a human being could be vastly inferior to the intellect of some other creature or intelligence. The question for this intelligence gradient comes down to a problem of demarcation of observable hallmarks of intelligence. Great care would have to be taken to define the various hallmarks at each gradient level, such that an intelligence could be precisely and accurately assessed.

When it comes to "souls", I personally don't believe souls exist. It's a religious invention and a relic of the past, which was an attempt to explain consciousness and the human experience. It's a pretty weak concept, because religious people necessarily have to insist that animals don't have "souls", even though animals are 100% conscious beings with distinct personalities, and experience the world much like how we humans experience the world (their sensory inputs and physical capabilities might differ, but at the core, we humans and animals are all intelligent agents). Religious people are dogmatically bound to deny the existence of "souls" in animals because to embrace the idea of a soul within a non-human being would both suggest that those souls too must be "saved" (which opens a huge can of worms) and that humans don't actually have a unique position of dominion over other animals in the animal kingdom. 

When it comes to game AI, we *really* don't need to create the level of true intelligence I described above. In film and cinema, there's this concept of "suspension of disbelief", where the audience momentarily lets go of their belief that the film world is fake. I think games can have that same suspension of disbelief. We only have to create illusions of reality, and as long as we're consistent and our illusions are believable, we maintain a good suspension of disbelief with the audience. So, if you're designing an AI for a game, we only have to create a level of complexity which creates an illusion of actual intelligence. The illusion of intelligence can be created through sufficiently complex conversation/response trees, intelligible behaviors in response to game events, and faking as much as we can get away with. Computationally, this is much cheaper than running an accurate simulation of intelligence at the cellular or atomic levels. We can create some pretty convincing intelligence with a sufficiently complex state machine :)
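For example, a guard NPC driven by a handful of states already reads as "intelligent" to a player. This is a minimal sketch with invented states and thresholds, not code from any particular game:

class GuardAI:
    """Tiny finite state machine for an NPC guard."""

    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, health):
        # transitions depend only on the current state and two observations
        if self.state in ("chase", "search") and health < 0.25:
            self.state = "flee"            # badly hurt: break off and run
        elif self.state == "patrol" and sees_player:
            self.state = "chase"
        elif self.state == "chase" and not sees_player:
            self.state = "search"          # lost sight of the player
        elif self.state == "search" and sees_player:
            self.state = "chase"
        return self.state

guard = GuardAI()
for sees_player, health in [(False, 1.0), (True, 1.0), (False, 0.8), (True, 0.2)]:
    print(guard.update(sees_player, health))   # patrol, chase, search, flee

The guard "notices" the player, "searches" when it loses them, and "panics" when hurt, yet there is nothing inside but a lookup over a few states: an illusion of intelligence, which is usually all a game needs.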

Edited by slayemin

2 hours ago, slayemin said:

They may have neural type cells which drive behavior, and their brains are small enough that you could probably examine each and every neuron and its connection with other neurons under a microscope, and then create an artificial replica of that system in a computer program. Now, let's say that we perfectly captured the neural topology of an ant brain and the neural behaviors, given all stimuli and inputs. We've effectively recreated the brain of an ant. The interesting realization I think we could make here is that whether the brain is composed of organic cells or digital representations, as long as the underlying mechanisms for behavior are exactly the same, the resulting intelligences are indistinguishable and indiscernible. If you consider two indiscernible objects to be the same thing, then the organic and digital brains are exactly the same.

Your view is one that I think is held by the majority when looking at how to mimic the brain in a simulation. It's at this point, though, that I'd contend that even if the simulation does mimic the brain of the animal or insect being simulated, it would not be capable of a similar level of intelligence or of having subjective experiences. I'm glad you brought it up, because it's at this very point that my view diverges from yours. The assumption being made on your behalf is that the electrical activity of the brain is what's responsible for intelligence and/or subjective experience arising. The research points in a different direction. They believe the subjective experience, and I'd assume by extension coherent intelligence, is more the result of the chemical exchanges between the neurotransmitters than of the electrical signals. If my interpretation is correct, the electrical signals act to synchronise the chemical activity of the brain. If anybody's ever done drugs, the vast majority of what you're experiencing is the result of the chemical make-up of what you're taking and the chemicals' effects on the neurotransmitters.

Edited by Awoken

2 hours ago, slayemin said:

When it comes to "souls", I personally don't believe souls exist. It's a religious invention and a relic of the past, which was an attempt to explain consciousness and the human experience. It's a pretty weak concept, because religious people necessarily have to insist that animals don't have "souls", even though animals are 100% conscious beings with distinct personalities, and experience the world much like how we humans experience the world (their sensory inputs and physical capabilities might differ, but at the core, we humans and animals are all intelligent agents). Religious people are dogmatically bound to deny the existence of "souls" in animals because to embrace the idea of a soul within a non-human being would both suggest that those souls too must be "saved" (which opens a huge can of worms) and that humans don't actually have a unique position of dominion over other animals in the animal kingdom.

FWIW, the one group of religious dogmas that you're referring to does not equal all religious people. Spiritualism is pretty diverse. Plenty of individuals and belief systems assign souls to animals. Not every religion is about having to "save souls" either. Many also feature a single soul, a kind of god itself, running through everything, which renders the question of whether any specific thing has a soul or not nonsensical. That's actually the famous Mu koan in Asia. Even Catholicism tries to incorporate this with the Holy Ghost, but we all know how full of contradictions it can be ;)

38 minutes ago, Awoken said:

They believe the subjective experience, and I'd assume by extension coherent intelligence, is more the result of the chemical exchanges between the neurotransmitters than of the electrical signals.

Chemical signalling and electrical signalling are completely linked. You can't have one without the other. Any sufficiently advanced simulation would have to incorporate models of both in order to function. That's also not an impossible task. We do complex chemical, atomic and even quantum simulations all the time. It's just a matter of scale and cost... 

52 minutes ago, Awoken said:

Your view is one that I think is held by the majority when looking at how to mimic the brain in a simulation. It's at this point, though, that I'd contend that even if the simulation does mimic the brain of the animal or insect being simulated, it would not be capable of a similar level of intelligence or of having subjective experiences. I'm glad you brought it up, because it's at this very point that my view diverges from yours. The assumption being made on your behalf is that the electrical activity of the brain is what's responsible for intelligence and/or subjective experience arising. The research points in a different direction. They believe the subjective experience, and I'd assume by extension coherent intelligence, is more the result of the chemical exchanges between the neurotransmitters than of the electrical signals. If my interpretation is correct, the electrical signals act to synchronise the chemical activity of the brain. If anybody's ever done drugs, the vast majority of what you're experiencing is the result of the chemical make-up of what you're taking and the chemicals' effects on the neurotransmitters.

I actually believe that intelligence is an emergent property of our neural topology and the underlying mechanics for neural behavior. The electrical activity in the brain is not where intelligence comes from.

I think we may need to split some hairs on the role of electrical signals and chemical signals within a neuron. A neuron has synaptic receptors which receive chemical signals (neurotransmitters) released by other neurons. The neuron also has sodium-potassium pumps in its cell membrane which cause an imbalance of charged ions to build up over time. When the charge exceeds a particular threshold, the neuron fires and an electrical signal is sent down the axon, which triggers the release of neurotransmitters, which in turn slightly increase the charge of downstream neurons. The firing neuron then goes into a short refractory period where it rebalances its ions, and then it returns to a "ready to fire" state. The activation of particular neurons and the subsequent downstream activation of other neurons (happening tens of thousands of times simultaneously) is what creates a "thought". The human brain is composed of tens of billions of neurons and trillions of connections, and the wiring varies between people, so... we don't really have a good understanding of the human brain or the capability to process that many neurons simultaneously. When we learn something new, a few neurons reconfigure their connections to other neurons. The more often a connection is used, the stronger it gets and the more intricately and efficiently connected it becomes. If a connection is unused, it eventually decays and we experience "forgetting" something. Brain science and AI are barely scratching the surface, and if you ask an AI researcher or neuroscientist how a particular part of the brain works, 80% of the time the answer will be "Nobody knows..."
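The firing behaviour described above is close to the standard "leaky integrate-and-fire" abstraction. Here is a hedged one-neuron sketch of it (the numbers are arbitrary, and real neurons are far messier):

THRESHOLD = 1.0          # membrane potential at which the neuron fires
LEAK = 0.95              # per-tick decay of the accumulated charge
REFRACTORY_TICKS = 3     # ticks spent recovering after a spike

potential, refractory = 0.0, 0
for t in range(30):
    incoming = 0.12                              # steady drive from upstream neurons
    if refractory > 0:
        refractory -= 1                          # recovering: can't fire yet
        potential = 0.0
    else:
        potential = potential * LEAK + incoming  # charge builds up over time
        if potential >= THRESHOLD:
            print(f"t={t}: fire!")               # spike travels down the axon
            potential = 0.0
            refractory = REFRACTORY_TICKS

Real neurons add neurotransmitter chemistry, varying synapse strengths, and much more, which is part of why the full picture remains an open research question.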

You bring up something that reminded me of a really fascinating thought experiment I have been toying around with. Imagine that you go to sleep tonight, and while you're asleep, the robots rise up and take over the world, and in your sleep, the robots have decided to replace you with a robot. To do this replacement in place, they pretty much take the brain out of your skull and place it into a robotic host body which looks and feels exactly like your organic body did, except now you're a robot with an organic brain. You wake up the next morning, feeling no different than you did every other morning of your life... except, you can't help but feel that something is very off but you can't quite place what it is. So, the question for the thought experiment: What is noticeably off? What gives away the fact that you're a robot instead of a human?

The underlying question: How much does our physical body and its state, drive our mind and its state? Is there more to the human experience than being a brain in a vat with a bunch of electrodes hooked up to it to create an illusion of reality?

And even deeper questions: What role do our hormones play in our thoughts, behaviors and actions? How does adrenaline influence the mind? Is it possible to experience love without a body, purely on an intellectual level? Or is a physical body a prerequisite for love? What things would we lose if our mind was transferred into a robotic host?

And the killer question: If we make an exact copy of an organic mind and then run it digitally, and a key part of the mind's experience comes from stimuli from a host body, is it ever possible to run a correct simulation of an organic intelligence in a disembodied digital setting? In other words, is an AI forever doomed to be incapable of love because it doesn't have a host body, which is a necessary component for it? (That could make an interesting fiction story: a sentient AI questing for love.)

Anyways, sometimes I find myself thinking, "The robot revolution hasn't happened because I did something stupid, and a robot wouldn't screw that up like I just did. Yep, I'm still human... for now."

