Good AI equals self-awareness

Started by
32 comments, last by bibiteinfo 15 years ago
Quote:Original post by polyfrag
Quote:Take pain, for instance. Pain is something that we experience frequently when we are younger. Why? Because our brains have not yet learned to avoid things that are harmful, such as hot objects. Experiencing pain is, as we all know, very unpleasant; it provides feedback to the brain saying "that was bad, don't do it again".


But free will also comes into it. Lying in bed I may feel an itch on my cheek, or may feel slightly uncomfortable in the position I'm lying in. How does the mind create the qualia of "not liking" this? I have a choice of whether to scratch my cheek, and of whether to try to find a more comfortable position. But what inclines me towards doing so? These seem like the "natural" things to do, and choosing to resist scratching the itch, or to stay in the uncomfortable position, is action "against the current". I think that as children these qualia give us an ambiguous, nebulous sense that something isn't right, a sense that won't leave our attention and won't let us focus on other things until we attend to its cause. Through action, we eventually learn how to "relieve" these unpleasant states.


"Relieving" an "uncomfortable state" is not free will; it is the decision to do the opposite that is free will, because it is an active decision to go against a reflex. The whole concept of "not liking" something is really a conscious categorization of unconscious or subconscious behaviours and reflexes. Quite a few behaviours of the brain and the physical body are hardwired by evolution. Take your example of an itch. At some point in evolution it probably became a survival necessity to be able to feel light contact on the skin, which induces what we call an itch, and the natural response is to get rid of it. Humans aren't the only animals that feel the need to scratch an itch; nearly all mammals show the same behaviour. The point of the skin and its sensitivity to pressure is to warn us of environmental hazards.

All reflex responses are fundamentally built on the need for survival. On the lowest level there is pain, hunger, and so on, which directly affect survivability. Once survival has been taken care of and guaranteed, a "quality of survival" or "quality of life" layer is built on top, and it is on this layer that most of our reflexive emotional responses sit. The purpose of behaviours at this level is to further maximize survivability by decreasing the chance of having to trigger the lower-level reflexes. As the layers grow, their dependencies on previous layers become more complex, which is why it becomes less apparent how certain behaviours can be traced back to survival instincts.
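The layering described above can be sketched as a simple priority scheme: low-level survival reflexes pre-empt higher "quality of life" behaviours, which only get attention once survival is taken care of. All the need names and thresholds below are illustrative, not from any real system.

```python
# Minimal sketch of layered needs: survival reflexes first, comfort second.
# Need names and thresholds are made up for illustration.

def choose_behavior(needs):
    """Pick an action given a dict mapping need name -> urgency in [0, 1]."""
    # Layer 0: direct survival reflexes always win if urgent enough.
    survival = ["pain", "hunger"]
    for need in survival:
        if needs.get(need, 0.0) > 0.5:
            return "relieve_" + need

    # Layer 1: "quality of life" needs, attended to only once survival
    # is taken care of, mirroring the layering described in the text.
    comfort = ["itch", "discomfort"]
    level, need = max((needs.get(n, 0.0), n) for n in comfort)
    if level > 0.2:
        return "relieve_" + need
    return "idle"

print(choose_behavior({"hunger": 0.9, "itch": 1.0}))  # relieve_hunger
print(choose_behavior({"itch": 0.6}))                 # relieve_itch
print(choose_behavior({}))                            # idle
```

Note that the itch loses to hunger even when it is the more intense signal, which is the pre-emption the layering argument implies.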

So, ironically, true "free will" is the conscious choice to go against our instincts. There are times, however, when a seemingly conscious decision to override an instinct or reflex happens because some other higher-level behaviour offers slightly greater benefit: instead of scratching the itch, you get up and take a shower, because you realize that may solve the root problem. This raises the question of which choices are made of "free will" and which are based on behaviour patterns established through experience.
But scratching an itch isn't a reflex; it's a learned behaviour.

[edit] Maybe there is some algorithm to the thinking process. E.g., maybe there is an underlying pattern in how thought goes from A to B and in how things come into our attention. If there is a pattern, then our thoughts are deterministic, at an atomic level... just thinking how a self-aware AI could be programmed. Fascinating stuff. It would be interesting to take courses on neuro-psychology.
Human intelligence appears to have evolved through a combination of adaptive learning and memory. This means that we remember especially 'good' or 'bad' events, and then, if a similar situation recurs, we use the features common to the two situations to determine what caused the event and how to prevent it or make it happen again.
We also, however, develop a system of predicting interactions between things from our memories, so that we can simulate those interactions mentally to try to create or avoid these 'good' and 'bad' events; and, since we understand that we are much alike other people, we also gain experience second-hand, through stories or by observing events.
This is how we have come to understand intelligence: an entity that evaluates a situation to decide which actions will be most beneficial and which most detrimental, then avoids the bad and exploits the good. By learning in such a way, using past experience and object stereotyping, the presently perceived 'ultimate' intelligence is achievable.
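The learn-from-good-and-bad-events loop described above can be sketched as a tiny reward-memory agent: remember the outcome of past (situation, action) pairs and prefer actions that turned out well before. Everything here (class name, situations, reward values) is illustrative, not a real API.

```python
# Toy sketch of learning from remembered 'good' and 'bad' events.
import random

class ExperienceLearner:
    def __init__(self, actions):
        self.actions = actions
        self.memory = {}  # (situation, action) -> running average outcome

    def choose(self, situation):
        # Prefer the action with the best remembered outcome;
        # explore randomly when the situation is entirely new.
        known = [(self.memory[(situation, a)], a)
                 for a in self.actions if (situation, a) in self.memory]
        if known:
            return max(known)[1]
        return random.choice(self.actions)

    def learn(self, situation, action, reward):
        key = (situation, action)
        old = self.memory.get(key, 0.0)
        self.memory[key] = 0.5 * old + 0.5 * reward  # blend old and new

learner = ExperienceLearner(["touch", "avoid"])
# A 'bad' event: touching something hot hurts.
learner.learn("hot_object", "touch", -1.0)
learner.learn("hot_object", "avoid", +0.5)
print(learner.choose("hot_object"))  # avoid
```

This matches the "that was bad, don't do it again" feedback loop from the pain example earlier in the thread.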
For a game, or any other software-based intelligence, then, access to memories is necessary in order to decide the causes of both long-term and short-term consequences.

A virtual fox, for example, may chase a rabbit, catch it, and eat it, and may be considered intelligent, as it has satisfied its hunger. However, the same fox will then try to catch more rabbits to keep its hunger at bay. It will find that it gets a stomach ache from chasing all those rabbits on a full stomach, and may intelligently correlate this with its running, but it will not realise that the problem is running only on a full stomach unless it can access its memories.
The aforementioned virtual fox is intelligent, then, but not as much so as its real-world equivalent, since its comprehension of the basic Action (Cause) -> Change (Effect) nature of existence is limited by its limited access to information about its situation.
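The fox example can be made concrete with a sketch of episodic memory: given access to past episodes, an agent can notice that "running" alone does not predict the stomach ache, but "running while full" does. The episode format and feature names below are made up for illustration.

```python
# Sketch: find the conjunction of features that predicts a bad outcome.
# Each episode is (set of features present, outcome).
episodes = [
    ({"running", "hungry"}, "ok"),
    ({"running", "full"},   "stomach_ache"),
    ({"resting", "full"},   "ok"),
    ({"running", "full"},   "stomach_ache"),
]

def common_cause(episodes, bad_outcome):
    """Features shared by every bad episode, never all present in a good one."""
    bad = [feats for feats, out in episodes if out == bad_outcome]
    good = [feats for feats, out in episodes if out != bad_outcome]
    cause = set.intersection(*bad)
    if any(cause <= feats for feats in good):
        return None  # the same combination also occurred in a good episode
    return cause

print(sorted(common_cause(episodes, "stomach_ache")))  # ['full', 'running']
```

Without the memory of the "running while hungry" episode, the fox could only blame running itself; the stored episodes are what let it isolate the conjunction.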

In a game application, as discussed before, this is horribly inefficient: the memories become an excessive inconvenience and a performance destroyer, and the evaluative, or thinking, time that most humans have to consider the common causes and effects of different things will be unavailable in most environments.

However, this said, each character prototype could be trained in a sandbox environment, with its controls set to make it act randomly and with the ability to learn and store its experiences. The outcome could then be streamlined, losing all unnecessary memories once everything possible has been learned from them, and the resultant rules of understanding given to each entity of that type in the game.
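The sandbox-then-streamline idea above can be sketched in two stages: let a prototype act randomly while recording raw experiences, then distil those memories into a compact rule table that every in-game entity of that type can share, discarding the memories themselves. All names and the toy sandbox dynamics are illustrative.

```python
# Sketch: random sandbox training, then distillation into shared rules.
import random

def sandbox_train(step, trials=1000, seed=0):
    """Act randomly; `step(situation, action)` returns a reward. Keeps raw memories."""
    rng = random.Random(seed)
    memories = []
    for _ in range(trials):
        situation = rng.choice(["near_food", "near_danger"])
        action = rng.choice(["approach", "flee"])
        memories.append((situation, action, step(situation, action)))
    return memories

def distil(memories):
    """Collapse raw memories into one best action per situation."""
    totals = {}
    for situation, action, reward in memories:
        n, s = totals.get((situation, action), (0, 0.0))
        totals[(situation, action)] = (n + 1, s + reward)
    rules = {}
    for (situation, action), (n, s) in totals.items():
        best = rules.get(situation)
        if best is None or s / n > best[0]:
            rules[situation] = (s / n, action)
    return {sit: act for sit, (avg, act) in rules.items()}

# Hypothetical sandbox dynamics for the sketch.
def step(situation, action):
    if situation == "near_food":
        return 1.0 if action == "approach" else 0.0
    return 1.0 if action == "flee" else -1.0

rules = distil(sandbox_train(step))
print(sorted(rules.items()))  # [('near_danger', 'flee'), ('near_food', 'approach')]
```

The rule table is all that ships with the game; the per-episode memories, the expensive part, stay in the sandbox.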

Any attempts at this training method would be of great interest, and any results or observations would be greatly appreciated.






Quote:To program consciousness/self-awareness into an AI one would need to understand how the mind works. I myself have wondered how thought goes from A to B. If you understand that, you can program self-awareness/consciousness.

As a brief aside, I would like to note that, despite much scientific and programmatic understanding of the brain's base functions, the resultant branch of programming, neural programming, does not provide all the answers for artificial intelligence: true emulation of a human or animal brain would require an excessive number of layers, a huge amount of computation, and vast interdependency between multiple neural networks.
However, as a training method producing a 'taught response' sort of intelligence, artificial neural networks have the potential to become highly accurate vehicles for artificial intelligence. Unfortunately, the present state of technology largely prohibits this, due to the extensive use of multiplication at often thousands of different nodes in every cycle.

For those interested in artificial neural programming:
http://www.cs.ualberta.ca/~sutton/book/ebook/the-book.html
(quite in-depth and mathematical on theory)
http://www.ai-junkie.com/ann/evolved/nnt1.html
(brief, non-mathematical, and from a programming perspective)
Wow, mrmr1993, this is the best answer I've read on this forum, ever!

From a small perspective, I think The Sims is the game that best emulates human intelligence... perhaps they do some stupid things, but they interact with their environment pretty well and learn. Still, they do not have what we would call real human intelligence.

This topic is closed to new replies.
