ai self-consciousness

I want to make a thread dedicated to what I believe is the most important thing that could be done in AI: creating an AI capable of infinite, or almost infinite, self-improvement. That is the only way to break the current barrier with AI, which for the most part is limited by its original programming. I believe this can be done by basing the AI's consciousness on words, the way human consciousness is; from birth we use words long before we learn any math. Define words to the AI and show how the words you defined relate to each other, how the definitions work and interact with each other and with whatever the real world is to the AI (that is, all the input it receives). Thus the AI will not only be able to speak to us but also to grow and improve itself. Some of those definitions will be mathematical ones, and from its understanding of how words are used it would be able to use math the same way. That is how humans learn things, with a great verbal assist; everything, even a number, is said in a word. That is the essence of a good self-improving, self-conscious AI. -xior

Sounds like another Cyc project. Just another language to enter the definitions in; no consciousness comes out of that. No, the things we want to achieve consciousness with should work on connections, like neural networks, not words, unless the connections can be interpreted as words. -xrcist

I know it's complicated, but my idea was indeed to interpret the connections as words, because with words you can describe or interpret anything if you get complicated enough. It will also help us figure out what's going on if we verbalize all of it.

> "Define words to the ai and show how the words you defined relate to each other, how the definitions work and interact with each other"

This sounds a lot like "giving a name to objects" (words) and how these objects relate to each other (predicates).

> "Thus the AI will not only be able to speak to us but also to grow and improve itself."

I _really_ want to believe that something like that can be accomplished :-). However, your conclusion seems a bit too fast to me, because there are no arguments to support it in your original post.

> "No, the things we want to achieve consciousness with should work on connections, like neural networks, no words,"

This triggered me to remember DISCERN, a set of programs that is used to feed stories to a neural network. Look for 'DISCERN' in http://xenia.media.mit.edu/~mueller/storyund/storyres.html -- Yeb

The thing I put in the center was a reply to my first statements (from someone else, somewhere else). I think it can all be done with words, not a neural network. My argument is that through words it would understand everything: if it's taught how to speak properly, grammar, sentence structure, then through that it can understand what words mean. Using definitions that relate to each other, it would grow more and more complex as it thinks, just by basing everything on words. It could come to realize what a thought is from the word "thought" and its definition, realize it could start thinking of things as thoughts, use the word "experiment" to experiment with that idea, then revert back and keep what it had learned. Something like that.

What would, in the AI you describe, be the difference between the following two sentences?

"Xior throws a ball at Yepster"
"A ball is thrown by Xior at Yepster"

Would the AI know that the two sentences express the same proposition?

That just has to do with grammar. We need to give it enough definitions of grammar words that it applies grammar by itself, and then it would know that the two are the same.

What is the first sentence you would put in the system?

I would give it the understanding that a word is a group of letters, the possibilities being a-z, one word separated from another by a space on each side. Then the next lines would be:

definition = an explanation of a word
explanation = the act or process of explaining

and keep going with a lot of other data.

Ok :-) But in what language would you write down THAT knowledge? Could you give an example session with the AI? Like AI> "a word is..." (etc. etc.)

I guess I would have it process everything at once, with the understanding that = means it can replace one thing with another if they are equal; have that piece in its programming.

Do you know WordNet? http://www.cogsci.princeton.edu/cgi-bin/webwn2.0?stage=1&word=word

Yeah, I get your point that it's complicated, but I believe that after enough info the computer would sort it all out by itself and understand everything eventually, or at least everything it needs to improve itself after a certain point.
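To make the word/definition idea concrete, here is a minimal sketch (in Python) of the kind of store being described: words mapping to definitions, with "=" read as permission to substitute one side for the other. The seed entries and the expand helper are illustrative assumptions, not anything proposed in the thread.

# A minimal sketch, assuming a plain dictionary of word -> definition entries
# and treating "=" as "one side may replace the other".

definitions = {
    "definition": "an explanation of a word",
    "explanation": "the act or process of explaining",
}

def expand(word, depth=2):
    """Replace a word by its definition, recursively, up to `depth` levels."""
    if depth == 0 or word not in definitions:
        return word
    # Every word inside the definition gets the same treatment.
    return " ".join(expand(w, depth - 1) for w in definitions[word].split())

print(expand("definition"))
# -> "an the act or process of explaining of a word"
# (pure substitution quickly runs into the problem, raised later in the
#  thread, that defining words only with other words is circular)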
You'll (probably soon) notice that the phrase "it's not that simple" fits perfectly in this area.

Human self-awareness isn't just a bunch of interconnected words. It's a "state of being"; it isn't just the "understanding" of being "something" but a whole "sentiment" of existence that comes through the interaction between our senses and the environment.

And a lot (LOTS) more.
[size="2"]I like the Walrus best.
This is a "nice" idea but we're VERY far away from this. You don't consider many aspects of our brain that you want to mimic. There are already computers out there that can talk to us "intelligently" but they still don't understand what exactly they are saying. The only thing we can do is mimic behavior. Look up regular expressions in AI. This is the breakdown of sentences into stuff like: pronoun noun verb noun. A verb is an action and can only be put into certain places in a sentance and other rules for all other types of words.

I'm a little rusty on this topic because it has been about 2 years since I took my AI class back in college. One of my projects for that class was to take a text version of old stories, like Huck Finn and Tom Sawyer, read them in, and have the program "learn" the structure of the sentences used and how the groupings of words were used. It then had to generate its own 10-page coherent story. You had to deal with a LOT more than just regular expressions too, like statistics.
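For a rough idea of the statistical side of a project like that (this is a guess at the approach, not the actual class assignment), a simple bigram Markov chain learns which words tend to follow which, then generates new text from those counts:

import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=50):
    """Walk the chain, picking each next word in proportion to how often it was seen."""
    word, output = start, [start]
    for _ in range(length - 1):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

# Usage (assuming a plain-text copy of the novel is available locally):
# model = train_bigrams(open("huck_finn.txt").read())
# print(generate(model, "The"))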

Also, how would you define the learning process? What about slang words whose meanings change over time? Or new words that aren't formally defined yet? Also, defining words with other words is kind of circular reasoning; it's like asking which came first, the chicken or the egg.

Language processing was a big topic in my AI class, and it's also a big research area. I got all this from my AI professor, Dr. Bruce D'Ambrosio, when I was in school. He is best known as the inventor of the Symbolic Probabilistic Inference (SPI) algorithm for solving conditional joint probability densities from any directed acyclic Bayesian network, and he is very respected in the AI research field.

EDIT: You say:

"if its taught how to speak properly, grammar, sentence structure, then through that it can understand what words mean"

This is also flawed because there are many different grammars, sentence structures, and so on. Think of English and Spanish: they have different grammar and sentence structure. And we don't just use words to communicate; we learn our first words by watching our parents as they say them.
Here is a link to a website about a computer at MIT that you can call and talk to about the weather in your area. You might find it interesting. The website also talks about the complexities involved.

http://www.sls.csail.mit.edu/sls/whatwedo/applications/jupiter.html

It might be good if it doesn't understand what it is made of and how it works. We could just put an independent piece of code in it that the AI will eventually discover can be used to communicate with people outside of the AI, and that is how we'll talk to it. It won't know how it's thinking; we'll know that from its CPU and all, so it will be safe, yet able to tell us anything.
I think you watch too many sci-fi movies [grin]
Quote:Original post by Xior
It might be good if it doesn't understand what it is made of and how it works. We could just put an independent piece of code in it that the AI will eventually discover can be used to communicate with people outside of the AI, and that is how we'll talk to it. It won't know how it's thinking; we'll know that from its CPU and all, so it will be safe, yet able to tell us anything.


No, it's sad but NO, sorry.

Now accept it. Study seriously and find a way to make it real.
[size="2"]I like the Walrus best.
AI can't be self-conscious, because what do we do when we are not using processes to solve problems? That is outside the domain of computer AI, because AI is an intelligent problem-solving tool, and being self-conscious is not helpful in solving problems. (That's different from being self-aware, in the sense that a crate-stacking robot might need to know about its own dimensions for various pathfinding tasks.)
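To make that self-aware-versus-self-conscious point concrete, here is a minimal sketch of the "knows its own dimensions" kind of self-awareness mentioned above; the grid, obstacles, and robot size are made-up example values. The robot's footprint is just data consulted by a clearance test, nothing mysterious:

# Made-up 10x10 warehouse grid with a wall that has a gap near the bottom.
GRID_W, GRID_H = 10, 10
OBSTACLES = {(4, y) for y in range(3, 10)}

# The robot "knows" its own 2x2 footprint.
ROBOT_W, ROBOT_H = 2, 2

def fits(x, y):
    """Can the robot's whole footprint sit with its lower-left cell at (x, y)?"""
    for dx in range(ROBOT_W):
        for dy in range(ROBOT_H):
            cx, cy = x + dx, y + dy
            if not (0 <= cx < GRID_W and 0 <= cy < GRID_H) or (cx, cy) in OBSTACLES:
                return False
    return True

print(fits(3, 0))  # True: the footprint squeezes through the gap in the wall
print(fits(3, 5))  # False: part of the footprint would overlap the wall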

Quote:Original post by owl
Quote:Original post by Xior
It might be good if it doesn't understand what it is made of and how it works. We could just put an independent piece of code in it that the AI will eventually discover can be used to communicate with people outside of the AI, and that is how we'll talk to it. It won't know how it's thinking; we'll know that from its CPU and all, so it will be safe, yet able to tell us anything.


No, it's sad but NO, sorry.

Now accept it. Study seriously and find a way to make it real.


Here, I rewrote it:

Eventually the AI will become self-conscious, and it might be good if it doesn't understand what it is made of and how it works. We could just put an independent piece of code in it that the AI will eventually discover can be used to communicate with people outside of the AI, and that is how we'll talk to it. (After it becomes self-conscious it will want to do everything it can, and the only ability we'd give it aside from thinking is that code that allows it to talk to us. It will eventually figure out how it's thinking; at first it won't, it would just be using its CPU without realizing it's using its CPU.)
Quote:Original post by pTymN
AI can't be self-conscious, because what do we do when we are not using processes to solve problems? That is outside the domain of computer AI, because AI is an intelligent problem-solving tool, and being self-conscious is not helpful in solving problems. (That's different from being self-aware, in the sense that a crate-stacking robot might need to know about its own dimensions for various pathfinding tasks.)


We'd give it a LOT of definitions, and from those it would be able to figure out everything, and it would learn how to solve problems for itself.

