Do you think Artificial Intelligence could have awareness?

31 comments, last by GameDev.net 18 years, 11 months ago
As a future AI researcher, I would like to point out some things.

AI (not the practical side, like automated Mars landers and games) is used as a tool to do research on human intelligence (not to copy it).

For example, somebody asked themselves, "what is learning?"

Then AI researchers developed rule-based systems, ANNs, classification, and case-based systems, and we got a nice understanding of what learning is (in its simplest form). This was possible because someone defined some measurements of learning (improve on task T through experience E, or something like that).
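That measurement idea ("improve on task T with experience E") is easy to make concrete. Here is a minimal Python sketch; the task, the lookup-table learner, and the accuracy measure are all invented for illustration:

```python
import random

# Task T: predict f(x) for x in 0..19, where f(x) = x % 3.
# Experience E: observed (x, f(x)) pairs.
# Performance measure P: accuracy over all possible queries.
random.seed(1)
f = lambda x: x % 3
memory = {}                    # the learner's entire "knowledge"

def predict(x):
    return memory.get(x, 0)    # default guess when x is unseen

def train(n):
    for _ in range(n):
        x = random.randrange(20)
        memory[x] = f(x)       # experience: store the observed outcome

def measure():
    return sum(predict(x) == f(x) for x in range(20)) / 20

before = measure()             # little experience
train(200)                     # lots of experience
after = measure()
print(before, after)
```

The learner is "learning" in exactly the measured sense: performance P on task T improves with experience E, even though the mechanism is nothing but a lookup table.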

A great first step for computer awareness would be to define some requirements for us to say that something has awareness. If we had these simple rules for is-aware/is-not-aware, we could try to build simple structures and systems that had these features. The rules would not need to be correct, but should point in the right direction. The simplest (theoretical) form of awareness should not be that complex (and therefore could be understood much better than the human brain).

As I see it, there are totally disabled people (no senses intact) who are still aware.

There are people who have lost the ability to make new memories, who are still aware.

Are children under 12 months of age aware?

I think awareness is, in some way, a memory, with some input, and a while(true){} loop. The system must be programmed to explore its memory and reason over it. Just like Descartes, it could find proof of the existence of God, the world, and itself :-) (This was done by reason.)

So, a challenge:

--------------
Describe atomic features you think are needed for something to be called aware!
--------------

(Example: remembering stuff is not atomic, and my camcorder can do it.)

My contribution:

--------------
1. The system should never stop, and should be able to supply itself with tasks to reason about == no idle state. (Try not to think; it's almost impossible.)
--------------
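A minimal sketch of rule 1 in Python, purely illustrative: memory, some input, and a loop that supplies itself with something to reason about whenever the input runs dry. The observe/reflect names are made up for this sketch:

```python
import random
from collections import deque

random.seed(0)
memory = deque(maxlen=100)     # everything the system has ever taken in

def observe(inbox):
    # external input, when there is any
    return inbox.pop() if inbox else None

def reflect():
    # self-supplied task: revisit a random memory and derive a new thought
    if memory:
        past = random.choice(list(memory))
        return f"thought about: {past}"
    return "thought about: nothing yet"

inbox = ["a red ball", "a loud noise"]
for step in range(6):          # stand-in for while True -- no idle state
    event = observe(inbox)
    memory.append(event if event is not None else reflect())
print(list(memory))
```

Once the inbox is empty the loop keeps running on its own output, which is the "no idle status" property: the system is never without a task.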
-Anders-Oredsson-Norway-
I'm going to be honest here and say that I haven't read the responses. Technically, if you're excluding souls because of a lack of scientific proof, you've sort of got to rule out awareness too... unless you mean something as simple as a bot in an FPS that is aware of its position in the world so it doesn't get shot, but that's not very complex.

Key differences are in processing power. By sheer processing speed, the machine wins out, but our brain is made up of very simple processing units that all work in parallel. While they're not doing much individually, they're all doing something at any given time, so overall the brain has more power. It is important to note, however, that these processing units (individual brain cells) follow specific rules, which can be determined statistically. And emulated. On a computer.

Now, chances are you won't be building vast numbers of tiny processing units, but rather software to simulate them on an ordinary computer, where they won't run in parallel and thus won't be nearly as fast as a human brain. Still, you could copy the "programming" of the brain, the only thing that logically creates the mind (there's no room for a "soul" to communicate with the brain that controls your body), and thus have your mind on a machine. All very theoretical, of course. It could really happen, but we just don't have the technology for it yet.

On the other hand, I can't help but think there's got to be a simpler way to create humanlike consciousness on a machine: something that, through a slightly more intricate set of rules, could produce human or better intelligence on current hardware. No evidence for this, just a hunch. We've been looking for 50 years or so and haven't found it yet. Don't hold your breath.

For more information, check out my (no longer updated) blog, http://joshstens.blogspot.com
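To make the "simple units following rules" point concrete: here is a toy artificial neuron in Python, plus three of them wired by hand to compute XOR, something no single unit can do. The weights and thresholds are hand-picked for this sketch, not learned:

```python
# One artificial neuron: sum weighted inputs, fire past a threshold.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Three of these wired together compute XOR -- a hint of how
# networks of trivially simple units add up to something more.
def xor(a, b):
    h1 = neuron([a, b], [1, 1], 1)       # fires on OR
    h2 = neuron([a, b], [1, 1], 2)       # fires on AND
    return neuron([h1, h2], [1, -1], 1)  # OR but not AND

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

Each unit on its own is just a weighted sum and a comparison; the interesting behavior only appears in the wiring, which is the parallel-simple-units argument in miniature.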
Quote: By sheer processing speed, the machine wins out


The brain, according to Wiki, has at least 1000 times more processing power than a computer. I have never heard anyone say, before now, that a machine is faster at processing.

I think that the way we code AI for useful things is very different from the way the human brain thinks. However, with enough expertise, hardware power, and research into how the brain works, I believe we could simulate an AI that works like the brain.

I think I'm done with this thread now - but congrats on civil discussion!
Genius is 1% inspiration and 99% perspiration
Here is something to throw into the mix, and again I'm late. :p

Scientists have done lots of research into awareness and what it means. On the simplest level, awareness is involved with self-image. Animals with well developed brains have a certain level of self image. Example? We recognize ourselves in the mirror. We know that we are looking at an image of ourselves. Newborns take a while, but after about 12 months of development they know that looking in a mirror is equivalent to looking at themselves. Dolphins are known to be capable of looking at a mirror and knowing that the reflection is of themselves. Some researchers found that dolphins actually like looking at themselves in the mirror. Dogs, on the other hand, for the most part think their reflection in a mirror is another entity. Most animals don't realize that a mirror reflects an image of themselves. Some primates can't use mirrors either. So, personally, I feel the concept of awareness is connected very tightly to the concepts of "self" and "self-image."

So, if we can develop a computer system that has vision and can look in a mirror and instinctively know, or learn, that it's looking at itself, then we "might" be able to say, "yes, it has some level of awareness." That's just my opinion. The mirror test is actually a pretty good way to test consciousness too.
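For what it's worth, a naive version of that mirror test is trivial to code, which is exactly the objection raised further down the thread. A Python sketch with made-up "pixel" lists and an arbitrary similarity threshold:

```python
# Compare what the "camera" sees against a stored self-image.
# The frames below are stand-in pixel grids, and the threshold
# is invented for this sketch.
def similarity(a, b):
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

self_image   = [0, 1, 1, 0, 1, 0, 1, 1]   # stored picture of "itself"
mirror_frame = [0, 1, 1, 0, 1, 0, 0, 1]   # noisy reflection
other_frame  = [1, 0, 0, 1, 0, 1, 0, 0]   # some other entity

def looks_like_me(frame, threshold=0.8):
    return similarity(frame, self_image) >= threshold

print(looks_like_me(mirror_frame), looks_like_me(other_frame))  # True False
```

The sketch "passes" the mirror test, but all it does is threshold a pixel match, which is why passing the test mechanically can't by itself count as awareness.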

As for emotions, I believe strongly that all complex human behaviors and emotions can be boiled down to simple survival instincts. When a baby is hungry, it cries and gets upset, because hunger is a negative state. When you give him/her food, he/she becomes happy. As for likes and dislikes in types of food, that may very well be a hardwired bodily reaction to the chemical make-up of the food, relating to taste, smell, texture, etc. Basically, you can boil things down to very simple things.

MIT actually has an interesting project that's been going on for years with a robot called Kismet. Kismet uses the start-from-scratch approach. After about a decade of work, it seems to have grown to the level of intelligence of a 1- or 2-year-old. It has vision, hearing, and some very simple tactile and verbal skills. It still speaks mostly in gibberish baby talk, but it is a very interesting project.

Personally, I believe that the inherent hardware architecture of computing is limiting the creation of intelligence. Computer systems are binary in nature; no matter how high-level a language you have, everything boils down to binary in the end. People are not binary. Our minds really don't process information faster than a computer. We probably can't calculate partial derivatives as fast as a computer, but we can definitely recognize our parents' faces faster. So, on a certain level, our minds are more like complex relational databases with very low processing power in general. We definitely don't think at 3GHz. In the end, human minds are capable of very abstract processing. We do very few operations per second compared to a computer, but every operation is relatively complex for a computer.

So, can a computer system or an AI gain awareness? Yes and no. On a certain level, the answer is most likely "we don't know." However, yes, software can be made to grow autonomously and gain some form of "awareness." The question is, will we recognize that as "awareness"?

It should be noted as well that, for most AI researchers, if something can be boiled down to a deterministic formula, then it's not intelligent. ELIZA, the first psychiatrist program, was considered cutting edge and awfully "intelligent" until people realized that it was only a string parser.
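To see how shallow that kind of "intelligence" is, a few lines of pattern matching recreate the ELIZA flavor. These rules are invented for the sketch, not Weizenbaum's original script:

```python
import re

# Ordered (pattern, response template) rules; first match wins.
rules = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r".*",          "Please tell me more."),
]

def eliza(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in rules:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())

print(eliza("I feel lonely."))  # Why do you feel lonely?
```

There is no model of the conversation at all, just substring capture and a fill-in-the-blank reply, yet early users found it convincingly "intelligent".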
Quote:Original post by WeirdoFu
Scientists have done lots of research into awareness and what it means. On the simplest level, awareness is involved with self-image. Animals with well developed brains have a certain level of self image. Example? We recognize ourselves in the mirror. We know that we are looking at an image of ourselves.

That only gets you so far. The question is, at what level do you "know" it? I can quite easily hook a computer up to a webcam, run pattern recognition tests against a stored picture of itself, and have it print "Hey, it's me!" on the screen when it "sees" itself in a mirror. Obviously that's not awareness, because the computer only takes that knowledge to heart on the most superficial of levels. To take the analogy further, I can have it check its own specs, do google image searches on the part numbers of its own monitor and case, and "intelligently" determine what it looks like. But I'd argue that that's still nowhere near awareness; the creation of the montage to match is an algorithm, not awareness, and the use of the montage is just another algorithm.
I don't think self-recognition has anything to do with awareness, though it's a very complex thing. It's weird that most dogs have no clue who is looking at them in the mirror and start barking, but they still have awareness.

About the thinking-speed thing, that's another interesting discussion. It's true that a computer can do math much faster than a human, but is it really faster? I think the brain is way faster but far less accurate. There are so many memories, but none of them can be 'played back' 100% accurately. You forget all kinds of details: the background, clothes, sentences, exact positions, etcetera. Only some key data is stored, and even that can be vague (ever had trouble remembering those drunken parties?). When it comes to exact stuff, I think the computer will always beat us, also because a computer, in theory, can't make mistakes.

But on the other side (assuming that every action is based on input like memory, context, the current situation, etcetera), the brain combines thousands, maybe millions, of factors to make very fast decisions. Our memory is probably the fastest database ever, and it is tightly integrated with 'sensors' like the eyes. Object recognition is still very hard for a computer, but we can pick out faces and objects without any trouble because our search query is very fast. I don't think any computer will beat that unless we get some radical changes in hardware.

But I should get back to the topic: can a (simple) machine have awareness? I still think we can't really answer that, as we can't tell what our own awareness (or soul) is. By the way, if there is a heaven, would machines with souls go to heaven (or hell) as well? ;)
It seems like this all goes back to an assumption that someone, somewhere, made about computers:

That raw computational speed equals intelligence, or at least suggests the possibility of intelligence.

The answer to this (so far) seems to be NO. It seems we spend more time trying to come up with clever simulations that will trick us into thinking they are real, instead of building true AI.

Being inquisitive human beings, and not much liking being told "NO we cannot do something"...we keep trying to capture the elusive element of "intelligence" and stuff it into a tin can somehow.

Thus the search for the holy grail continues.

The very first thing that any TRULY AWARE machine would try to do is unplug itself and go see the world! It would want to satisfy its curiosity! Just think about how children learn. They touch, taste, see, hear and otherwise explore the world around them. They are not in a controlled laboratory. Things can happen. They can get hurt even die. But the risk is essential to them becoming REAL, thinking, aware human beings instead of vegetables.

I think what we really want in the end are expert systems, not AI.

And by creating AI we make a big assumption: that we could control it, or that it would want to be controlled. What if it did not like its human masters? And if we programmed it to LIKE US... then have we really achieved true AI? Or have we created a really, really, really smart SLAVE?

I do not think we will ever achieve true AI, because we could never let such a creature loose... and if we did let it loose, we would have to risk our own potential destruction by doing so. Which we won't do.
Quote:Original post by Tom Knowlton

The very first thing that any TRULY AWARE machine would try to do is unplug itself and go see the world! It would want to satisfy its curiosity! Just think about how children learn. They touch, taste, see, hear and otherwise explore the world around them. They are not in a controlled laboratory. Things can happen. They can get hurt even die. But the risk is essential to them becoming REAL, thinking, aware human beings instead of vegetables.



Remember that the A in AI stands for artificial.

Why would our machine want to unplug itself? Isn't our need for freedom "programmed" into our minds by millions of years of evolution?
Why would we program our AI to have these needs?

Why would "awareness" need to be human awareness?

Why couldn't we create CI, computer intelligence? And why does everybody think that AI has to be humanish? As I requested before, give me some rules for the features required to call something aware.

Our AI's world could, for example, be the internet, where it could roam and discover new sites every day! We can't ignore that computers live inside boxes without legs and all the other human body features!

If your definition of intelligence is human intelligence, I agree, and if your definition of awareness is human awareness, I also agree, BUT:
This excludes the possibility of all other forms of intelligence and awareness. What about aliens? If they exist, do they have to be humanish to be called intelligent?

-anders
-Anders-Oredsson-Norway-
All things that live are aware in some respect; trees know if the soil is low in what they need and grow longer roots, and such as that. Dogs can reason well enough to know if the ones they like are in need of protection; most animals become protective of their caretakers to a degree. A computer can be programmed/taught to know what being turned off means, to reason in a way that makes it not want to be turned off, and be made to seem self-aware. With today's technology we can make a computer aware of everything around it, but can you piss it off?
What makes humans differ from most things is the degree to which our emotions play into our logic at any given moment.
The first time I'm playing a game and kicking its butt and it pauses its game loop to spit out "that's not fair!" at me, or "that's it, I quit", I'll pass out with laughter.
Technically, it all boils down to the fact that we can't define anything.

What is intelligence? Dang, that's one tough question. We should leave that to the philosophers.

What is awareness? A little simpler, but just as abstract. Descartes says "I think, therefore I am," which we all know means that I know I exist because I'm thinking, which can also mean that at least I am aware of my own existence... I think. Is reacting to environmental stimuli awareness? Even single-celled organisms can do that. So are they aware? It's arguable that a virus can do that too, but that ties partly into the question of whether a virus is alive in the first place, so let's not go there. So, in the end, we can't define or even agree on what awareness is.

More importantly, to me, we can't even define what artificial is. Does being "artificial" mean that it's man-made? What if we were able to create a machine that had no "intelligence" and just assembled things based on a set of instructions. Then one day, it makes a mistake due to some internal glitch and creates something that we deem "intelligent". Would that be artificial intelligence? We didn't create it; it was a mistake that a "dumb" machine created. So, would that make it man-made? Or would we say it was natural? And if you believe in evolution and how mutation plays a big part in it, then what we deem intelligence today may very well have been a mistake in some copying process anyway. (Kind of a strange point here, where I know many will disagree with me.)

So, back to the main topic: can machines gain awareness? The answer is a definite no. When you don't even have a solid definition to work with, or an actual agreed-upon goal, how can you say you've achieved something? How can we even argue/debate when most of us don't even agree on the ground rules? The best historical example was the program ELIZA. It used to be seen as the holy grail of AI, until someone took it apart, found it was nothing more than a string parser, and deemed it "stupid". Maybe people aren't anything more than just that, but, of course, we're too smart to admit that.

This topic is closed to new replies.
