ai self-consciousness

words are trees, insert quote about trees here
what do we use before words?
china
why do you think *Word*net is done by the *Cognitive Science* department?
(those are riddles btw)

http://www.webspawner.com/users/samplex/
I'd also check out 'Society of Mind' by Marvin Minsky. It's inspired me for a long time and would give insight into a lot of this. I don't remember why I bought it, but I suppose that's what got me into this.

By the way, I haven't actually heard any ideas about why it won't work. I've heard a lot of flames, but no actual reasons. Although yes, one does need more than a simple dictionary; the question is, what else is necessary?
Again note how a dictionary was created by a cognitive science department.
Alright, to put this topic to rest: words have many meanings, many words can be used as nouns and verbs in the same sentence, and sometimes one word can be used for every word in a sentence. Here's my explanation of why this idea won't work:

--------------------------------------------


A sentence can be constructed that has a noun
repeated arbitrarily many times, followed by a verb repeated the same
number of times:

1. Bulldogs fight.
2. Bulldogs bulldogs fight fight.
(i.e., bulldogs (that) bulldogs fight, (themselves) fight)
3. Bulldogs bulldogs bulldogs fight fight fight.
(i.e., bulldogs (that) bulldogs (that) bulldogs fight, (themselves)
fight, (themselves) fight)
...

The inflation of this type of sentence can be accelerated by the use of
the three senses of the word "buffalo":

1. oxen (noun)
2. baffle (verb)
3. from a city in Western New York (adjective, usually capitalized)

The progression becomes:

1. Buffalo buffalo buffalo.
(i.e., Buffalo (from the city of) Buffalo baffle.)
2. Buffalo buffalo Buffalo buffalo buffalo buffalo.
(i.e., Buffalo (from the city of) Buffalo (that) buffalo (from the
city of) Buffalo baffle, (themselves) baffle.)
3. Buffalo buffalo Buffalo buffalo Buffalo buffalo buffalo buffalo buffalo.
(i.e., Buffalo (from the city of) Buffalo (that) Buffalo (from the
city of) Buffalo (that) Buffalo (from the city of) Buffalo baffle,
(themselves) baffle, (themselves) baffle.)
...

This sentence will have an adjective-noun pair repeated arbitrarily
many times, then a verb repeated the same number of times. So the
word "buffalo" really only has three meanings in the sentence.

A progression using the pronoun, conjunction and adjective meanings of
the word "that" was composed by George Herbert Moberly in the 1850s.

1. I saw that C saw.
(i.e., I saw the following: C saw.)

2. C saw that that I saw.
(i.e., C saw the thing which I saw)

3. I saw that that that C saw was so.
(i.e., I saw the following: the thing which C saw was so.)

4. C saw that, that that that I saw was so.
(i.e., C saw this fact, the following: the thing which I saw was so.)

5. I saw that, that _that_ that that C saw was so.
(i.e., I saw this fact, the following: the specific thing which C saw was so.)

6. C saw _that_ that, that _that_ that that I saw was so.
(i.e., C saw the specific thing, the following: the specific
thing which I saw was so.)

7. I saw _that_ that, that _that_ that that _that_ C saw was so.
(i.e., I saw the specific fact, the following: the specific thing which
the specific C saw was so.)

In the final statement, the first, fourth and seventh "that"'s are
adjectives meaning "specific," the second and fifth are pronouns,
and the third and sixth are conjunctions. Thus there are seven uses
but only three meanings.

Here is an example with five "had"'s in a row, each with a different
meaning. This is one of the longest known cases of this phenomenon. As an
aid to understanding, we'll build it up a step at a time.

The parents were unable to conceive, so they hired someone else to
be a surrogate.

The parents had had a surrogate have their child.

The parents had had their child had.

The child had had no breakfast.

The child the parents had had had had had no breakfast.

------------------------------------------------

All the sentences above are valid (although unlikely to be said in everyday conversation). Our brains can figure out which words mean what, but can you explain the steps your brain goes through to figure that out? That's what you would have to do in order to tell the computer how to figure it out. Computers can only do what they are told, and if we don't know how we do it, then we can't tell a computer how to do it. Computers will NEVER think as long as we don't know how we think. In time they may MIMIC thinking, but they will never actually do it. How does your brain think? Again, if we don't know how we are able to think, how are we to tell a computer how to think?
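That difficulty can even be measured. Below is a toy chart parser that counts how many distinct parse trees an all-"buffalo" sentence admits under a small invented grammar (an illustration only, not a serious model of English); the count grows rapidly with length, which is exactly what a dictionary-lookup approach is up against:

```python
from collections import defaultdict

# Toy grammar for all-"buffalo" sentences, in Chomsky normal form.
# Adj = the city (adjective), NP = noun use, V/VP = the verb "baffle",
# RC = a reduced relative clause like "(that) buffalo baffle".
WORD_TAGS = {'buffalo': {'Adj', 'NP', 'V', 'VP'}}
RULES = [
    ('S',  'NP',  'VP'),   # sentence = subject + predicate
    ('NP', 'Adj', 'NP'),   # "Buffalo buffalo" = bison from Buffalo
    ('NP', 'NP',  'RC'),   # noun phrase modified by a relative clause
    ('RC', 'NP',  'V'),    # "(that) buffalo baffle"
    ('VP', 'V',   'NP'),   # transitive use of "baffle"
]

def count_parses(words):
    """CKY-style chart that counts distinct parse trees for `words`."""
    n = len(words)
    # chart[i][j][cat] = number of ways words[i:j] can be a `cat`
    chart = [[defaultdict(int) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for tag in WORD_TAGS.get(w, ()):
            chart[i][i + 1][tag] += 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for parent, left, right in RULES:
                    chart[i][j][parent] += chart[i][k][left] * chart[k][j][right]
    return chart[0][n]['S']

for n in (3, 5, 8):
    print(n, 'buffalo:', count_parses(['buffalo'] * n))
```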

I said it before, but if you really want to know about such AI topics, go read some research papers online or take an actual college-level AI class before arguing with people who have actually worked on research AI systems. I gave you a link to a website for a computer you can call on the phone (toll free) and talk to. I doubt you've even looked it up, since you haven't said anything about it. Here's a bit on the Jupiter computer too:

----------------------------------------
http://www.sls.csail.mit.edu/sls/whatwedo/applications/jupiter.html

How does Jupiter work?

Jupiter is based on the GALAXY client-server architecture, the platform for the SLS Group's conversational systems. GALAXY's technology servers include SUMMIT for speech recognition, TINA for language understanding, and GENESIS for language generation. A domain server, Jupiter stores weather forecasting information in a relational database derived from its four Web-based sources. When a user asks a question over the telephone such as "What will the weather be like in Boston tomorrow?" Jupiter invokes the following procedure:

- Speech recognition: SUMMIT converts the spoken sentence into text
- Language understanding: TINA parses the text into a semantic frame -- a grammatical structure containing the basic terms needed to query the Jupiter database
- Language generation: GENESIS uses the semantic frame's basic terms to build a Structured Query Language (SQL) query for the database
- Information retrieval: Jupiter executes the SQL query and retrieves the requested information from the database
- Language generation: TINA and GENESIS convert the query result into a natural language sentence
- Information delivery: Jupiter delivers the generated sentence to the user via voice (using a speech synthesizer) and/or display

Depending on user specification, GENESIS accesses Italian, English, or Japanese language tables during language generation procedures.

Spoken Language Systems Group
MIT Laboratory for Computer Science
545 Technology Square
Cambridge, MA 02139 USA

----------------------------------------

As you can see from above, a large computer science group at MIT has already done a lot of work on language systems. I would say they are the ones qualified for this kind of stuff. So, like I suggested before, go research their stuff.
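The division of labor in that pipeline can be sketched in a few lines of Python. Everything below (the one-pattern "parser", the frame format, the table) is invented for illustration; the real SUMMIT/TINA/GENESIS servers are far more capable:

```python
import re
import sqlite3

def understand(text):
    """Stand-in for TINA: turn a question into a semantic frame."""
    m = re.match(r"what will the weather be like in (\w+) (\w+)\??$", text, re.I)
    if m is None:
        return None
    return {"topic": "weather", "city": m.group(1), "day": m.group(2).lower()}

def generate_query(frame):
    """Stand-in for GENESIS: build a parameterized SQL query from the frame."""
    return ("SELECT forecast FROM weather WHERE city = ? AND day = ?",
            (frame["city"], frame["day"]))

def generate_reply(frame, rows):
    """Turn the query result back into a natural-language sentence."""
    if not rows:
        return "I have no forecast for %s %s." % (frame["city"], frame["day"])
    return "%s in %s: %s." % (frame["day"].capitalize(), frame["city"], rows[0][0])

# A throwaway in-memory database in place of Jupiter's four web sources.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE weather (city TEXT, day TEXT, forecast TEXT)")
db.execute("INSERT INTO weather VALUES ('Boston', 'tomorrow', 'partly cloudy')")

frame = understand("What will the weather be like in Boston tomorrow?")
sql, params = generate_query(frame)        # language generation (query)
rows = db.execute(sql, params).fetchall()  # information retrieval
print(generate_reply(frame, rows))         # "Tomorrow in Boston: partly cloudy."
```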
As best I can tell, there are two ways we might ever create a self-conscious artificial intelligence.

1) Somehow figure out everything about how our minds work abstractly and program that into a computer, so that it may "do as we do." As SAE Superman points out with some fun linguistic examples, until we can understand how we parse them, we cannot program a computer to parse them. I share his doubt; I don't see us figuring out our psychology in profound depth any time soon.

2) Construct an artificial human. By this, I mean something that starts like we do, toddles around, learns, grows. Something that experiences culture and can take part in society. If we just build something that functions like we do on a physical level, then I wouldn't be surprised to find that if you treat it like one of us, it will react the same way -- that you could teach it. This is personally how I think we would have to go about it, if constructing self-conscious artificial intelligence were our goal.

Because really, think about it. If I understand you correctly, you'd have your AI as a metaphorical brain-in-a-box, parsing grammar without end. How could such a thing ever really understand color? Maybe it could generate the sequence of characters "box", but how could it even understand what a box is, having never used one? How could it conceive of something it has never experienced, something it has no context for, no frame of reference? It certainly couldn't have the same understanding of elephants that we do, having never seen one or undergone the same emotional reactions we may have gone through.

I'm not saying that AI is impossible, but I really don't think we're going to get there in any way other than by making something like us -- otherwise its cognition would not be comparable to our own. I think AIs need to be able to see, to touch, to taste, to hear, to move around before they'll ever be able to share our kind of cognition. I'd like to see AIs that start out knowing nothing, that start babbling like infants, and that can learn speech with care and nurturing. I imagine that such an AI wouldn't really be that different from you or me...they'd just have been made differently.
- Hai, watashi no chichi no kuruma ga oishikatta desu!...or, in other words, "Yes, my dad's car was delicious!"
Quote:Original post by Bucket_Head
As best I can tell, there are two ways we might ever create a self-conscious artificial intelligence.

1) Somehow figure out everything about how our minds work abstractly and program that into a computer [...]

2) Construct an artificial human [...]

The problem is that to do #2 you must master #1. The question you always have to ask yourself is, "How do WE do it?" How do we learn? How do we grow? How does our brain start out as a "toddler" and learn language? We are still thousands of years away from making an artificial human. Take something as "simple" as walking. What are the steps your brain takes to calculate where you should put your foot next? How do you know you're off balance, and in what direction? How do you stand on one foot? How do you recover after something/someone has bumped into you? What are the steps for all those? Even once you know all those things, how will you tell where to put your foot? There will have to be some sort of visual recognition system, and that in itself is a huge AI research area. Take any picture and think of how your brain picks out and recognizes objects. How do you pick out a bird sitting in a tree? Remember, all you know about the image is each pixel's color. I could go on for hours about the complexities...but I'll leave it up to you to ponder [grin]
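To underline that last point about pixels, here is what a picture actually is to a program: a grid of numbers. The tiny "image" and the naive color-matching "detector" below are invented for illustration, and the detector's failure is exactly the problem:

```python
# All a program receives of a picture is a grid of numbers. This
# made-up 4x4 "image" of (R, G, B) tuples contains a brown bird
# and a brown branch; a naive color matcher cannot tell them apart.
BROWN = (120, 80, 40)
SKY = (140, 190, 240)
image = [
    [SKY,   SKY,   BROWN, SKY],    # the bird's head...
    [SKY,   BROWN, BROWN, SKY],    # ...and body
    [BROWN, BROWN, BROWN, BROWN],  # the branch it sits on
    [SKY,   SKY,   SKY,   SKY],
]

def bird_colored(pixel, target=BROWN, tolerance=30):
    """Naive 'detector': is this pixel close to the target color?"""
    return all(abs(a - b) <= tolerance for a, b in zip(pixel, target))

hits = [(x, y) for y, row in enumerate(image)
        for x, pixel in enumerate(row) if bird_colored(pixel)]
print(hits)  # flags bird pixels and branch pixels alike -- color is not recognition
```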
[imwithstupid] I'm gonna have to agree with SAE Superman on this one. If you guys think you can do it then go for it but I wouldn't hold my breath. And you apparently won't get any help from here from the looks of the other replies.
I came up with a way for it to sort out the importance of things, to increase efficiency: 0 = nothing,
everything = infinity.
Everything between 0 and infinity is something, and every something has a value relative to every other something.

(Maybe an evil robot would work that scale, only negative.)

Say the robot sees a box. It will be able to place the box in importance relative to whatever its objective at the time is. (Perhaps it would have to give mathematical values for the importance of things.) Or, if it doesn't have an objective, it can decide for itself the box's importance relative to everything and nothing, and everything else in between.

You tell it to leave a room. It looks up "leave", it gets the general idea from all the related definitions and stories and everything, it sorts them by relevance and relative value, and then it gets the idea that you want it to leave the room it's in. It uses multiple programs, like graphing programs, visual identification, and math programs, along with the camera on its head, to see the objects and positions of items in the room. It uses its 3D program and a combination of all those programs, in coordination with its legs, to leave the room, and it just did what you asked. It understands that you're not trying to hurt it because it can tell you asked in a nice tone of voice, which it analyzed relative to other voices and what those voices had to say. With its infinite database of knowledge, it knows everything.

[Edited by - Xior on December 24, 2004 5:39:39 AM]
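For what it's worth, the core of Xior's importance scale can at least be written down as a sketch. Every name and number below is a made-up placeholder, not a worked-out theory:

```python
import math

EVERYTHING = math.inf   # "everything = infinity"; 0 = nothing

# Made-up base values on the open-ended (0, inf) scale.
base_importance = {"box": 1.0, "door": 2.0, "human": 50.0, "dust": 0.01}

# Made-up multipliers: how much each objective re-weights each thing.
relevance = {"leave the room": {"door": 100.0, "box": 0.1}}

def importance(thing, objective=None):
    """Score a thing relative to the current objective (if any)."""
    score = base_importance.get(thing, 0.0)
    if objective is not None:
        score *= relevance.get(objective, {}).get(thing, 1.0)
    return score

# With the objective "leave the room", the door now outranks the box.
for thing in sorted(base_importance, key=lambda t: -importance(t, "leave the room")):
    print(thing, importance(thing, "leave the room"))
```

Of course, where those base values and multipliers come from is the entire unsolved problem.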
Xior

Are you planning on trying this theory and programming it? Because I'd really like to see what happens, and to see the work in progress.

If you plan on doing it, please post your progress on the forum.
Xior, like you, your program will know lots of fancy words but won't be able to convey a coherent sentence for the life of it. Nor will you be able to get it to make good decisions based on arguments posed by numerous people against its incoherent theories.

This reminds me of a friend of mine who just graduated with a degree in linguistics or something of that nature. My friend Jason can speak just about every common language in Europe, as well as many others. Sadly, Jason is a babbling idiot at times, and we're forced to make him wear mittens so that he can't manage to turn on the oven when we're not watching him carefully.

Point is that the logic involved in human intelligence runs on biological material that processes data much, much faster than any artificial computer architecture we've managed to design. Computers are not, at any time in the near future, going to be capable of independent thought comparable to a human's.

You're going to find that whenever your program needs to look up the definition of a noun or verb, it will find only words -- which it will then need to look up in turn, finding more words and continuing the circle. Words only become something more than a set of characters to a computer when you can compare a word to something that is REAL.
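That circle is easy to demonstrate. Here is a sketch of it in Python, using a three-entry dictionary invented for the purpose (a real dictionary just makes the loop longer):

```python
# A dictionary lookup that only ever finds more words.
toy_dictionary = {
    "restaurant": "a place where food is served and eaten",
    "food": "what is eaten",
    "eaten": "taken in as food",
}

def chase_definitions(word, max_hops=6):
    """Hop to the first defined word inside each definition."""
    trail = []
    while word in toy_dictionary and max_hops > 0:
        trail.append(word)
        definition = toy_dictionary[word]
        word = next((w for w in definition.split() if w in toy_dictionary), None)
        max_hops -= 1
    return trail

print(" -> ".join(chase_definitions("restaurant")))
# restaurant -> food -> eaten -> food -> eaten -> food ... never anything REAL
```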

When I think of the word "restaurant," I might relate it to a picture in my head of my favorite restaurant. A computer will look up its definition and find something along the lines of "a place where food is served and eaten" -- assuming it has already related the words in that definition to REAL experiences somehow. It might mistake a dumpster for a restaurant, since people serve their food to it (granted, it's not humanly edible) and it's eaten by rats and raccoons.

To put it simply, you could fit almost any adjective into this sentence to see my point: "It was too -...- for words." Some things simply cannot be expressed by words. You'll find that a truly intelligent program would need a MUCH, MUCH more complicated way of storing, sorting, and comparing its EXPERIENCES (which may be experiences involving words themselves).

Consider that most children already know from some experience what a thing is before they learn the word to describe it.
"Never have a battle of wits with an unarmed man. He will surely attempt to disarm you as well"~Vendayan
Quote:Original post by Xior
its infinite database of knowledge it knows everything


I'm somewhat hesitant to get involved, but you actually bring up an interesting point (whether intentionally or as flamebait is another question). Searle argues against the notion of Artificial Intelligence, saying that it is simply impossible to emulate human thought, principally because the computer would not *understand* what was being said. One formulation of his theory is his famous Chinese Room argument. Here, he imagined a man in a room with a book -- an "infinite database of knowledge," as you put it -- that contained a good response in Chinese to every possible Chinese statement made to it. The man in the room did not understand Chinese, but every day he was sent little slips of paper, each containing a phrase in Chinese; he was to look up the phrase and reply with the Chinese phrase the book instructed him to reply with. Thus, since the man's "database of knowledge" was infinite, from the outside he looked and acted like a native speaker!

However, Searle was arguing *against* Artificial Intelligence. It is obvious that the man in the Chinese Room did not understand what he was being sent. Thus, he could not formulate replies on his own, and clearly human beings have this ability. Further, it is impossible for this "infinite database of knowledge" to exist, since nothing is infinite! Quite simply, the situation outlined is impossible for any human to construct, and even if it somehow were constructed, the human in the room would NEVER understand Chinese!
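The whole argument fits in a few lines of code, which also makes the data-versus-algorithms point below concrete: the "book" is pure data, the "man" is a trivial algorithm. The two phrases here are sample stand-ins for the impossible infinite version:

```python
# The Chinese Room as code: fluent output, zero understanding.
book = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",     # "How's the weather?" -> "It's nice today."
}

def man_in_the_room(slip):
    """Follow the book's instructions; understand nothing."""
    return book.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

print(man_in_the_room("你好吗？"))
```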

This is also a perfect example of data vs. algorithms, as was mentioned above. A programming language operates on data using a set of algorithms. As it happens, the relationship between data and algorithms as expressed by a programming language is called a "programming paradigm", such as Object-Oriented Programming, or Procedural Programming. Clearly, for a programming language to even exist, both of these components must be present. Data alone without an algorithm is worthless, simply because it cannot be interpreted in any useful manner! The key is to discover a "learning algorithm" that we can apply to data through a language such that it learns in the manner you desire. This is what has been plaguing computer scientists ever since Turing! Indeed, data alone won't do it, but algorithms alone won't do it either. Human descriptions are too vague and informal for transcription into a language.

I always like to say that if I truly understand something, I can encode it in a programming language. Do we truly understand intelligence? Do we truly understand emotion? No, I'm afraid we don't. In fact, if you come up with a formal definition and representation for emotion, you'll be the first to win the Nobel Prize in the new field of research you just created ;-)

Merry Christmas!
h20, member of WFG 0 A.D.

This topic is closed to new replies.
