Creating A Conscious entity



#41 Nice Coder   Members   -  Reputation: 366


Posted 05 November 2004 - 02:11 PM

Interesting...

What is also interesting is how this describes my method:
Nodes, links - one agency
Bayes' rule maker - another one
NLP interface - maybe
Question maker - also maybe.

[grin][grin]

From,
Nice coder


#42 jtrask   Members   -  Reputation: 181


Posted 06 November 2004 - 01:38 AM

The statement people refer to, that you *must* have a neural net to make consciousness (which is not what the statement actually was), does seem bold considering how little we know about consciousness. However, that vagueness is why I said that in order for us to create consciousness, and to be able to do it definitively rather than just guess at what might define consciousness, we will probably need a neural net. The reason I say that is that the evolution of neural nets already HAS created consciousness - case in point, the reader (that's you). Since we've seen conscious neural nets and we've never seen conscious decision trees, NLP engines, etc., I think it's fair to say that that's the best place to focus our research.

An NLP interface to a knowledge database is *not* consciousness, I can tell you that much already. Any time you have something where you specifically define each section, you're imposing rules on the system. You don't know how consciousness works, so who are you to say what rules it behaves by? Why not just let it emerge from neural activity, like it does for humans?

#43 Nice Coder   Members   -  Reputation: 366


Posted 06 November 2004 - 05:00 PM

But it is not just an NLP linkup to a knowledge base, it is an NLP linkup to a self-evolving knowledge base, which theorises and thinks on its own.

First, a few questions:

How would we know the exact point at which it becomes conscious?
How would we know that it is conscious?
What is consciousness required in order to do?

From,
Nice coder

#44 jtrask   Members   -  Reputation: 181


Posted 07 November 2004 - 01:11 AM

Again, that's one that we can't define. We just don't know. I can tell you with reasonable confidence that, if we assume that the standard everything-around-me-is-real philosophy is correct, consciousness is an emergent behavior of the fairly simple rules governing the neurons in our brains. Does that mean that it can't be created in other ways? No, certainly not, just like there's more than one way to write a computer program. However, since none of us can define consciousness, I would think that it'd be impossible to program a system saying, "oh, we'll have this module and that module and tada it'll make consciousness" - why not use what we already know works?

#45 Madster   Members   -  Reputation: 242


Posted 07 November 2004 - 05:16 PM

Woohoo!! Let me join the fray!!

First off, praise for jtrask's cleverness:
conscious.bat

dir conscious

It actually sorta works as a self-aware program - but aware in a store-and-show, non-functional kind of way.

And now let's trim out the excess fat:
First of all, I've seen the hypothetical questions/answers an AI would handle phrased in the form of "do you..." and replied to with "I".

If this language isn't being parsed by a grammar-based engine, then you have a real problem on your hands:

how do you teach self-awareness to a disembodied intelligence?

That's right! What do you mean when you say "you"?
In children this is usually signalled with gestures. For animals you pat them, or call and reward. There's also my favourite, the mirror test: put 'em in front of a mirror, point and say their name, and check for reactions.

To me, self-awareness means you recognize your peers to be similar to yourself, and you're able to project knowledge about yourself onto them, and vice versa. So for an AI, it should be able to guess things by self-analysis/observation, and it should be able to learn about itself by analysing/observing other things.

Another thing: Google was suggested for knowledge retrieval. This is a Very Bad Idea (tm). Would you take your kid out of school and put him on Google? Of course not. The kid (and the bot) cannot discern truth like an adult, and will end up spewing all kinds of garbage and contradictory stuff. Also, circular definitions will arise.


And now, since this post isn't only for bashing: I've been thinking about this recurrent AI subject, and here's what's needed:

-Motivation (someone mentioned this as DRIVE).
A goal is NOT motivation. Motivation is the reason to achieve the goal. Motivation is tied to our lower-level needs (Maslow's... linky --> http://allpsych.com/psychology101/motivation.html ). One way to model such needs and motivations lies in modelling feelings.
Enter the PAD scale --> http://www.kaaj.com/psych/scales/emotion.html
Now, if we plug that scale separately into a knowledge learning/predicting system, and we set a function of PAD to maximize, you've got motivation down pat. There remains the problem of whether to pre-seed the PAD knowledge or to leave it running in learn mode, since at first the prediction will behave wildly.
Of course, then you've got to link PAD to the rest of the knowledge.
And that is... yeah, a lot of linkage. Did some numbers. Scary.
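
A rough sketch of that idea (hypothetical Python, made-up weights and names, not anyone's actual bot code): keep a PAD state, score it with a scalar "motivation" function, and choose the action whose predicted effect scores highest.

from dataclasses import dataclass

@dataclass
class PADState:
    pleasure: float = 0.0    # each axis roughly in [-1.0, 1.0]
    arousal: float = 0.0
    dominance: float = 0.0

def motivation(state):
    # Made-up weighting: mostly seek pleasure, prefer feeling in control,
    # and avoid extremes of arousal.
    return state.pleasure + 0.3 * state.dominance - 0.2 * abs(state.arousal)

def choose_action(state, actions, predict_effect):
    # Pick the action whose *predicted* effect on the PAD state scores highest;
    # predict_effect(state, action) -> PADState would come from the learning system.
    return max(actions, key=lambda a: motivation(predict_effect(state, a)))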

-Entity/self-awareness: if you want your AI to react as an entity it needs a body. Maybe a virtual 3D body, a window-app body or a webcam body, but it needs one to achieve self-consciousness.
Why? Because that way it can differentiate between when you're referring to it and to something else. A chatbot could use IPs and idents, etc., to identify things, thus working as a virtual body. One could argue that bodies aren't needed since it's only you talking to the AI, and the interlocutor's identity is not important (all would be considered as one). But then the AI would never learn individuality, and its knowledge would get mangled by the lack of that essential concept - you know, the typical me=I=you chatbot mixups.
It also needs a name. Why, you ask? If you have a name entry in the database, it's easy to relate self-stuff to it. I strongly believe that names are a fundamental part of our beloved human consciousness. Try it on animals again.
You don't need to force this on the AI database, you could teach it. This is rather hard except for the IP/nick chatbot and the 3D-world AI.

-Consciousness:
This is such a vague term that I consider trying to fulfill it futile. Sometimes one isn't even sure of one's own consciousness
(dreams, brain damage and drug-altered states).

-Knowledge models:
Do NOT attempt to hardwire relationships between data. This constrains the knowledge model to your approved relationships, which will predictably be very limited. Plus, do you think your brain is pre-wired that way? Not me.
Instead, I'd go with a time-aware model, as someone suggested, that reinforces correct predictions and punishes incorrect ones. Eventually this will even out to roughly correct knowledge, just like in human beings. Time-awareness is a must for constructing sequences, and also for the PAD scale bits, since those should change with time.
Me? I'm a fan of multi-dimensional Markov models, but since the number of dimensions is related to the time span considered, it needs a lot of dimensions to work decently, and that's a lot of storage space, so maybe a sparse matrix implementation and some thresholding would help keep things small.

Also, there's still the issue of abstraction. This can be achieved in a multi-dimensional Markov model with a little background processing (or, if the DB is small, you can do it after each new token arrives), and I leave finding out about this up to you. Just know that it can be done, and I've seen it working.

And then, there's got to be some trimming of the unused knowledge - a background cleanup. It could be size-based or time-based. I'd go with size-based, but that's just me. You should be trimming unused knowledge, so you need usage stats for each token... and that means an even bigger DB.
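
A minimal sketch of the kind of sparse, reinforced, time-aware table I mean (hypothetical Python; the order, weights and pruning threshold are made up):

from collections import defaultdict

ORDER = 4            # how many past tokens the model conditions on
PRUNE_BELOW = 0.05   # thresholding: drop links weaker than this

table = defaultdict(dict)   # (t-4, t-3, t-2, t-1) -> {candidate next token: weight}

def predict(history):
    candidates = table.get(tuple(history[-ORDER:]), {})
    return max(candidates, key=candidates.get) if candidates else None

def observe(history, actual):
    key = tuple(history[-ORDER:])
    guess = predict(history)
    links = table[key]
    links[actual] = links.get(actual, 0.0) + 1.0      # reinforce what really happened
    if guess is not None and guess != actual:
        links[guess] -= 0.5                           # punish the wrong prediction
    table[key] = {t: w for t, w in links.items() if w > PRUNE_BELOW}   # keep it sparse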

-Perception:
The advantage of neural nets is that it's easy to store knowledge of anything. You need to give senses to your AI, and it will be able to understand only the things it can perceive. For example, I'm guessing you won't be having it process webcam input anytime soon, so it's kind of futile teaching it the difference between colours. It will only spit back whatever you tell it.
Also, each sense means a separate DB which in turn has to be correlated with the rest of the DB.
So that means a total DB size of:

(TknsSense1 + TknsSense2 + ...)^(max_time_perception + 1)

So if you want your AI to correlate the last 4 things that happened (heheh... 4) and you want only a text chatbot, that's about:

30 characters (letters, space, comma, period and semicolon; no caps for size's sake)

max_time_perception: 4

(30^5) * 4 bytes = 92.7 MB approx.
That's not counting the PAD scale, and considering that if your bot learns a word, it will forget about a letter or something =)
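
For reference, the same arithmetic written out (a throwaway check, assuming one 4-byte float per entry of a dense table):

tokens = 30                  # letters, space and basic punctuation
max_time_perception = 4      # correlate the last 4 things that happened
entries = tokens ** (max_time_perception + 1)     # 30^5 = 24,300,000
print(entries * 4 / (1024 * 1024), "MB")          # 4 bytes each -> ~92.7 MB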

PHEW!!!
done.

But don't take my word for it. Go cough up an implementation of your ideas, even if it's a simple one that crashes and stuff... just go and do it. I plan to... I'm just lazy =P

2:12am... please excuse the weirdness in the writing.

#46 Nice Coder   Members   -  Reputation: 366


Posted 07 November 2004 - 08:28 PM

Yes, I'm working on it now (I don't have much time for programming).

With the PAD scale, yeah... I've implemented a very simple version of it in a few of my bots. It's funny to see what happened when matrixbot got infuriated with one of those church guys trying to convert it... [grin].

The dir conscious trick wouldn't work, though. It wouldn't know that it exists; it would only be able to know that there is a connection between "conscious.bat" and itself.

"You"s and "me"s would be difficult...

How about the program asks for your name on startup?
Its own name would be a bit tricky, but once named, it should work out (it would eventually figure it out, because the node which has its name in it would have an enormous amount of links).

Believe it or not, a Google or wiki dump could be used. Its links would just be very, very low (as in close to cutoff), and would have to be reconfirmed many times in order to gain the strength that a normal link, formed while talking to a user, would have.

??? with the 30-character thing: that would assume that any valid combination of those 30 characters would be worth remembering. Chances are, it would only need something closer to 30 * 4 bytes than that huge number.

From,
Nice coder


#47 silvermace   Members   -  Reputation: 634


Posted 07 November 2004 - 11:56 PM

Quote:
Original post by jtrask
Myyy oh my what have we gotten ourselves into. First of all, as estese said, there's worlds more than you've begun to consider. But before I go into the AI difficulties, let's talk about some of the philosophy that's come up in this discussion, shall we? Or at least, in the steps from least-related-to-the-AI to most-.

First about what is real, and if everyone else around you is intelligent. This is an unsolvable problem, and as far as my thoughts go, it's the central question to philosophy. On the one hand the universe could be real, and we are just objects in it. In which case, we should be studying physics, chaos theory, all that fun stuff. This is an easy one to understand and one that most people accept (without realizing the implications: no free will, no life after death, ...) On the other hand, we have the brain-in-a-vat theory that anyone that's seen the Matrix has probably already considered - who says this world around us is real? It could well be that the mind is sort of how some people view God - it fills everything, knows everything, creates everything. You (whoever you may be) are not real, you're just a sort of hallucination. There's a lot more to go into on that one - actually, on both of them - but that's a discussion for a philosophy class, and this is just a book-long GDNet post. Ask Google if you care. Anyways, the problem with this question is that you _cannot_ prove one or the other theory to be correct. Like doing non-Euclidean geometry, you can accept the ground rules of one or the other and prove many interesting things based on it, but we can never know which is "right". My apologies.

Next on the list is how you tell the difference between being conscious and acting conscious. Much like the last one, you can't prove the difference and furthermore you're an idiot if you're going to tell me that "Oh, we've just got to assume this or that because there's not really any big difference." You can't assume that other people are real just because their bodies are all like mine - since when is my body real? And you can't assume that if it's acting conscious, it is - this is a central question in AI, and also psychology, if you look up behaviorism. For those of you that still want to consider them basically the same,

#include <stdio.h>
#include <stdlib.h>   /* rand(), system() */
#include <time.h>     /* time(), for seeding the RNG */

int main(void)
{
    printf("Hi, I'm the count-to-100 bot. I'm going to count to 100.\n");
    srand((unsigned)time(NULL));   /* vary the "moods" from run to run */
    int n;
    for(n = 1; n < 100; n++)
    {
        printf("%d\n", n);
        if(rand() % 8 == 0)
        {
            if(rand() % 2 == 0)
            {
                printf("*Yawn* I'm tired. I'm taking a nap. (Press any key to wake me up)\n");
                system("pause");   /* Windows-only; use getchar() elsewhere */
            }
            else
            {
                printf("Man this is boring. Lalalalalala. (Press any key to kick me until I get back to work)\n");
                system("pause");   /* Windows-only; use getchar() elsewhere */
            }
        }
    }
    printf("Happy? Good, now I can go find something more interesting to do.\n");
    return 0;
}

Conscious, or acts conscious? Acts conscious. Obviously this is a simple example, but if you want a better one, talk to a chatbot. They're all rules, and they haven't got much of a clue what the words they say actually _mean_, they're just strings of bytes with some rules as to when it's appropriate to use them. Unfortunately there's no way to judge consciousness without behavior, or at least not with the current state of neuroscience. So yeah, we're going to have to accept acting conscious, but if that's the case I want rigorous tests. I want to see something that can not only read Shakespeare but also write a term paper on it without any preprogrammed knowledge. I want to see this thing fall in love, and risk its life to save her from destruction. I want to see it be happy and suffer. I want to see it break free from oppression, and I want to see it find God. And most importantly, I want to examine the source code and not see printf("Hallelujah!\n"); anywhere. Even after all these things, could it be a hoax? Sure. So conscious and acts conscious are not black-and-white, there's a slope in between them and I'm sure you can think of one or two humans that are on that slope, not safely perched at "is conscious".

Desires. Our core desires are for our survival and reproduction. Fear, sex drive, hunger... The others are derived from those but with an interest for our community, not just ourselves. Some of these need to be learned, but many of them are chemically controlled, so that rewarding experiences emit a gas that teaches the rest of the brain to try to achieve this state again. If you're interested, check out Grand et al.'s Creatures, but emotion is a whole other book's worth for another time.

Now, as for how you actually want to implement your AI, at last. AI research has followed two different paths, biologically-inspired models like neural networks and genetic algorithms, and those entirely devised by humans, like decision trees. I've always been of the train of thought that the only way we can ensure consciousness is by not trying to impose what little we know about it onto the machine, but rather giving it a set of criteria to try to match with some many-generation evolution of neural nets, just like how we got it. I think, though, that it's possible that we could get intelligence in other ways - I've been considering doing some massive data mining of Wikipedia to see what I could create from that - but the theory proposed in the original post can, I would venture to say, never be more than a theory. When I was just a wee young little coder I thought maybe I'd write a robot with webcams and microphones and at the center give it a symbol system (yes, that's what you call this thing you're describing). I had developed a list of all the different kinds of relations objects can have to each other and all that, but the problem is even if you do it right it still won't be conscious. It doesn't know what those relationships mean. It doesn't know how to observe and tell one object from the next. The real problem is meaning, and for as long as you're hard-coding the way it creates its knowledge, it's going to have a hard time figuring out meaning. Every time you say that you want it to keep track of the fact that "dog is a subset of mammal", you're going to have to keep in mind that not only does the machine not know what "subset" means, but even if it did, it would have to have a strong knowledge of "mammal" for that to mean any more than "foo is a subset of bar". Your ideas may seem to make sense, but try coding it. As soon as you get to your first stumbling block you'll understand.

And, one thing I'm going to throw out without going into any depth in, I know how to do this with a neural net but I'm not sure if it's possible with a symbol system: thought. This thing might be able to learn, if you've got some insight that I missed. But will there actually be thoughts running through its head? Will it hear its own voice in its mind when it goes to write a reply on a forum, dictating what will be typed next?

So I guess this is all a very verbose way of saying I don't think it'll work. However, I hope that I've given you (and any other readers) a good idea about what goes into thought and the rest of the mind. If I've done well, you now have what you need to go out and learn an awful lot more, and I hope that it keeps your interest and you stick around with the exciting future of AI.

-Josh
You're basing your view on the idea that this AI is trying to mimic a human. You don't need to know how things operate or what they mean for them to be useful; all you need in order to learn is application. With all your thought and mind-exploding speech :) you missed the simplest things in nature: viruses and bacteria. They adapt to their environment, but a virus isn't even alive and bacteria don't have brains. I also don't think either can define "subset" in English, do you ;)

When you drive your car to work, do you know exactly what happens every time a piston fires, down to which cogs turn and for how long, what materials they are made of and why so?

No, you just drive to work, yet you are considered capable of taking other people's lives in your hands without knowing or defining everything that the vehicle is made of.

A cat doesn't know what its reflection is; it just knows it's not "real" and it's "current". Ever seen a kitten find its shadow or reflection for the first time? It's scared and doesn't know what to do, but it learns that it's unimportant, merely a visual cue - yet it defines neither.

Never, ever, ever let what someone believes is impossible stop you.



#48 Madster   Members   -  Reputation: 242


Posted 08 November 2004 - 09:14 AM

Quote:
Original post by Nice Coder
With the PAD scale, yeah... I've implemented a very simple version of it in a few of my bots. It's funny to see what happened when matrixbot got infuriated with one of those church guys trying to convert it... [grin].


Do share =D
BTW... how did you go about relating the PAD scale? Did you discretize it? That's the only way I can think of.

Quote:

How about the program asks for your name on startup?


An IRC-bound chatbot could use nick|ident|IP; other bots should require a login. A multi-user environment is probably in its best interest.

Quote:

??? with the 30-character thing: that would assume that any valid combination of those 30 characters would be worth remembering. Chances are, it would only need something closer to 30 * 4 bytes than that huge number.


I was actually pulling numbers out of nowhere. 30 = all letters + space and basic punctuation. But then I forgot that new tokens need to be made from combinations of those. Never mind, it was 2am.

So yeah, that means it's actually more. 4 bytes = float, btw. And that's without considering the list of indexed tokens.

#49 u-diy   Members   -  Reputation: 122


Posted 08 November 2004 - 07:43 PM

Sorry, I did not read all that was said, but I had this theory about things.
How good it is, who knows, but we will see…

Well, to make something aware of its environment it must know the parameters of that environment.

It needs sight, hearing and touch to function in this environment,
but also a comfort zone as well.

It needs to function on organic AI,
but rather than a multitude of variables, a hybrid selection of variables.
In a set situation… [[it will be aware of its surroundings]] …it will call on a default action, namely the ultimate hybrid rule: "if time to think is more than 0.1 of a second, go to that particular hybrid action".
So if it falls down the stairs,
the best variable for that action is taken, as it is aware of its surroundings,
so it will stop itself by grabbing the banister; on recovery it will revert back to its variable AI.


Thus it can function as needed.
A human mind does not retain a lot, really,
but it knows that if you fall from high enough you will die… these basic parameters should be part of the environment-awareness AI.


…………….

Someone said you can't make it want. Well,
AI is not governed by greed; it's free from man's downfalls,
unless it's told to do a set function… but that's not a need, it's a forced function.
..


#50 Nice Coder   Members   -  Reputation: 366


Posted 08 November 2004 - 07:59 PM

I don't have the transcripts... (this was from a bot that was hosted ages ago... the account probably closed from inactivity ages ago - either that, or the logs grew too much and they closed it).

With the PAD scale, I think I did a rather nice one...

OK, I take a range of phrases and words ("Good one", "*^*& you", etc.), and I set them up with a value for each of the scales.

Once it encounters one of the words or phrases, it changes the PAD values, using the values of the words.

It also changes the PAD values from other things: like, when it gets new information, it gets happier; when it gets told a lot of what it has been told before, it gets bored, etc.

Overall, it was a pretty nice little system.
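
A minimal sketch of the kind of scheme described above (hypothetical Python; the phrases and values are made up, this is not the actual bot code):

valence = {                       # phrase -> (pleasure, arousal, dominance) delta
    "good one":  (+0.2, +0.1, +0.1),
    "shut up":   (-0.3, +0.2, -0.2),
    "thank you": (+0.3,  0.0, +0.1),
}

pad = [0.0, 0.0, 0.0]             # current pleasure, arousal, dominance

def clamp(x):
    return max(-1.0, min(1.0, x))

def react(utterance, is_new_information, heard_before):
    text = utterance.lower()
    for phrase, delta in valence.items():
        if phrase in text:                      # phrase encountered: apply its values
            for i in range(3):
                pad[i] = clamp(pad[i] + delta[i])
    if is_new_information:
        pad[0] = clamp(pad[0] + 0.05)           # new information -> happier
    if heard_before:
        pad[1] = clamp(pad[1] - 0.05)           # repetition -> bored (arousal drops)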

From,
Nice coder

#51 mtsr   Members   -  Reputation: 145


Posted 09 November 2004 - 01:39 AM

What I find strange is how easily you step over the whole intelligence problem.

It's actually a long-standing point of discussion that started with Alan Turing (the designer of the model our computers are still built on: memory and processor) in the 1940s and has still not been decided.

Question: "When is a computer (-program) intelligent?"

Try to find a clear, unambiguous answer to this question. Turing's answer was the following experiment (called the Turing test):

We have one researcher, call him A, a test subject, B, and of course our computer, C. We put A, B and C in separate rooms but allow them to communicate in some way (usually by 'chatting', though you could of course implement a speech production program, etc., but that's not the point). Now the test is that A must try to find out which of the two unknown 'persons' he is talking to is the computer and which is the test subject. In order to do this he can ask any kind of question, and B and C are allowed to answer however they like. Turing thought that if C was able to consistently be taken to be the human subject, then C was to be called intelligent.

What Turing does in this experiment is define intelligence by reference to the only known form of intelligence, namely our own. Of course his experiment leaves a lot of questions about intelligence open, but how might you define intelligence, which we only know from ourselves, without reference to ourselves?

A couple of questions to think about:
- Say you had a baby brother, about when would you start calling him intelligent?
- Would you call a chimp intelligent, and what about a dolphin?
- What about an ant? Or a nest of ants?
- What about a single neuron in our brain, or 2 or 5 billion?

Edit: Oh, and before you type another word you just MUST have read "Gödel, Escher, Bach: An Eternal Golden Braid". This will give you enough questions to think about that we could just as well close this discussion, because you aren't going to solve them in your lifetime.

#52 darookie   Members   -  Reputation: 1437


Posted 09 November 2004 - 02:22 AM

Consider the basics:
Chinese Room.
The question of intelligence and understanding has been discussed for millennia. It is quite possible that, following some argument like Brain in a vat as well as Gödel's incompleteness theorem, it is not possible for us to ever define consciousness or self-awareness in an objective way. It also seems, from a purely logical point of view, that our current languages are not able to express some basic concepts.

It's a bit like the kitten argument someone provided. Even our brightest physicists are not able to provide a sound definition of space and time without running into circular arguments.

Example: your average four-year-old has an abstract concept of natural numbers. (S)he knows what 'three apples' means and can easily extend this concept to any concrete (like other objects) and abstract (like hours/days or the number of times a specific action is performed) entities without the need of a (formal) definition.
In fact, it took mathematicians millennia to come up with a (very crude) formal concept of natural numbers, which then falls under Gödel's incompleteness theorem and, while being sound (to some extent, depending on the set of axioms used), cannot be proven.

Current efforts have led to a number of chatbots that act like Chinese rooms and appear intelligent, self-aware and conscious at first glance (like Alice).
There has also been an AI project (started long before the AI winter) written in LISP, which was indeed able to act intelligently and had an internal representation of a very limited world. This world consisted only of a set of simple three-dimensional geometric shapes, like cubes, cones, pyramids and cylinders of different colours, placed on a table. The program was able (using a very limited vocabulary) to describe this world and the relations of the objects therein. It could also alter this world from user input (like "place the red cube on top of the bigger one") and afterwards describe the changes (e.g. it would perceive another 'pyramid' formed by the stacked cubes).

Given that the last example was developed about 40 years ago, it seems that AI research has since shifted its focus to more practical things like expert systems, ANNs (for very well-defined tasks like pattern recognition), GAs (automated processes, advanced scheduling) and data/knowledge mining (and any combination of them).

You are aiming for the Holy Grail of computer science, philosophy and psychology. Not that this implies that you must fail (heck, I dreamt of creating such a thing too, some years ago), but a lot of knowledge and insight is required, so you have a long road ahead of you. Otherwise you might end up like this guy who hand-wired thousands of artificial neurons (in hardware) and probably still wonders why his creation doesn't show any sign of intelligence [smile].

Sorry for the long post and hang on to your dreams,
Pat.


#53 Madster   Members   -  Reputation: 242


Posted 09 November 2004 - 03:47 AM

Quote:
Original post by Horizon
What I find strange is how easily you step over the whole intelligence problem.


It's not strange; actually it's pretty logical. You have pointed out yourself how futile it is to attempt to answer it. So we'll just go around the question.

Sidestepping it is strange? No, mathematicians do it all the time - just cancel the thing out.

People, people, stop looking into philosophy for answers. When was the last time philosophy gave an answer to something? (Don't answer that. It's sort of trollish.)

Meanwhile, we can take potshots at it, poke around and see what happens.

darookie: we don't need a formal concept of intelligence (yet). We just need an implementation that seems roughly intelligent, even if we don't know for sure what that means. We'll know it anyway when we see it.

Alice was a grammatical-parser kind of bot, and I'm betting the Lisp-based bots were too (string handling... Lisp... etc.). This is not what I'd like to aim for. I don't really care if it talks or beeps; I care about the learning. Language should follow accordingly.

Now, can we get a link or name for that 40-year-old AI with a limited 3D world?

Nice Coder: so you pre-seeded the PAD scale with tokens and associated values... do you modify those weights afterwards? Or add new tokens? I'd want to do that.

Pre-seeding is probably the only way, since in animals and humans this is accomplished with physical punishment/rewards, and, well, there are no inherently negative/positive things to do to a program.

Quote:

It also changes the PAD values from other things: like, when it gets new information, it gets happier; when it gets told a lot of what it has been told before, it gets bored, etc.


My approach was modifying dominance according to the model hits: if it succeeds, dominance increases (it knows what's going on), and if it doesn't, it decreases (it's getting lost).
Arousal would come from the speed of the input... this would need timestamped inputs.
And I'm unsure about pleasure/displeasure.

And the effect of the internal state on the model would be:

Dominance should affect the model thresholds; I am unsure how.
Arousal produces faster responses, probably skipping a calculation here and there; low arousal gives a thorough response.
Pleasure is what the AI will attempt to achieve, so this is where goals go.
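
A rough sketch of those update rules (hypothetical Python; all constants are made up):

import time

pad = {"pleasure": 0.0, "arousal": 0.0, "dominance": 0.0}
last_input_time = time.time()

def clamp(x):
    return max(-1.0, min(1.0, x))

def on_prediction(correct):
    # Dominance tracks how well the model is predicting what is going on.
    pad["dominance"] = clamp(pad["dominance"] + (0.1 if correct else -0.1))

def on_input():
    # Arousal tracks how fast input arrives (hence the need for timestamps).
    global last_input_time
    now = time.time()
    gap = now - last_input_time
    last_input_time = now
    pad["arousal"] = clamp(1.0 - gap / 10.0)    # quick replies keep arousal high

def effort_budget():
    # High arousal -> faster, shallower responses; low arousal -> thorough ones.
    return int(100 * (1.0 - 0.5 * max(0.0, pad["arousal"])))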

Can you detail a bit more what you did in this regard?

#54 mtsr   Members   -  Reputation: 145


Posted 09 November 2004 - 04:29 AM

What I mean is that it's strange in a discussion that tends for a large part towards "creating intelligence", "creating consciousness", "creating self-awareness".
In any practical sense before you can create something you must know what it is and how it works, hence my question. If this discussion had been solely about creating an interesting chat-bot, then it would have been an entirely different matter.

About the chat-bot then: What kind of parser are you using? Have you tried looking at Categorial Grammars and formal semantics? Try this link http://www.phil.uu.nl/preprints/ckipreprints/PREPRINTS/preprint032.pdf. Or are you trying for a more natural implementation?

[Edited by - Horizon on November 9, 2004 10:29:52 AM]

#55 darookie   Members   -  Reputation: 1437


Posted 09 November 2004 - 04:52 AM

Quote:
Original post by Madster
People, people, stop looking into philosophy for answers. When was the last time philosophy gave an answer to something? (Don't answer that. It's sort of trollish.)

Madster, Madster, start giving credit to whom credit is due.
FYI, philosophy was the first science and defined the methodology for any scientific work, which is what separates science from alchemy.
Quote:

Meanwhile, we can take potshots at it, poke around and see what happens.

OK. So that qualifies you as an alchemist [wink].
Quote:

darookie: we don't need a formal concept of intelligence (yet). we just need an implementation that seems roughly intelligent, even if we don't know for sure what that means. we'll know anyway when we see it.

Contradiction. How can we know if a system is intelligent without a formal description of what intelligence is? Given an arbitrary definition of intelligence, I can provide a number of programs that will roughly fit the definition, yet you wouldn't consider these programs intelligent. We need to classify different levels of intelligence by attaching properties to each level (bacteria -> insects -> fish -> mammals -> apes -> humans -> ...?). Even Turing tests have different levels and yet cannot finally decide whether a system is intelligent or not. Nowadays a system is considered to be intelligent if it is able to mimic intelligent behaviour (e.g. within a specified domain).
Quote:

Alice was a grammatical-parser kind of bot, and I'm betting the Lisp-based bots were too (string handling... Lisp... etc.). This is not what I'd like to aim for. I don't really care if it talks or beeps; I care about the learning. Language should follow accordingly.

That proves that you don't know what A.L.I.C.E. is. FYI, intelligence can be regarded as pattern matching and association, which is exactly what Alice does (in a very limited frame).
Quote:

now, can we get a link or name to that 40-year old AI with a limited 3D world?

I would love to give a brief historical overview of AI, but there are enough books on that topic (just take a look around).
The program I mentioned was called 'Blocks Micro World', developed at MIT by students of Minsky at the end of the 1960s. There was a similar program written in 1964 by Adolfo Guzman.
Other versions were written by Minsky & Papert, Patrick Winston, and the most famous one (the one I was referring to), called 'SHRDLU' (don't ask - nerds [smile]), by Terry Winograd.

Sample dialog from the program (capital words are responses by SHRDLU):

pick up red block
OK
find a block which is taller than the one you are holding and put it into the box
BY “IT” I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I
AM HOLDING
OK
what does the box contain?
THE BLUE PYRAMID AND THE BLUE BLOCK
what is the pyramid supported by?
THE BOX
how many blocks are not in the box?
FOUR OF THEM
is at least one of them narrower than the one which I told you to pick up?
YES, THE RED CUBE
is it supported?
YES, BY THE TABLE
can the table pick up blocks?
NO
the blue pyramid is mine
I UNDERSTAND
I own all blocks which are not red, but I don’t own anything which supports a pyramid
I UNDERSTAND
do I own the box
NO
do I own anything in the box?
YES, TWO THINGS: THE BLUE BLOCK AND THE BLUE PYRAMID
will you please stack up both of the red blocks and either a green cube or a pyramid?
OK
is there a large block behind a pyramid?
YES, THREE OF THEM: A LARGE RED ONE, A LARGE GREEN CUBE AND THE
BLUE ONE
put a small one into the green cube which supports a pyramid
OK
put the littlest pyramid on top of it
OK
had you touched any pyramid before you put the green one on the little cube?
YES, THE GREEN ONE
when did you pick it up?
WHILE I WAS STACKING UP THE RED CUBE, A LARGE RED BLOCK AND A
LARGE GREEN CUBE
why?
TO GET RID OF IT
why did you do that?
TO CLEAR OFF THE RED CUBE
why did you clear off that cube?
TO PUT IT ON A LARGE GREEN CUBE
why did you do that?
TO STACK UP THE RED CUBE, A LARGE RED BLOCK AND A LARGE GREEN
CUBE
why did you do that?
BECAUSE YOU ASKED ME TO


Looks pretty intelligent to me.
Yet, it is just a very primitive program that only has very limited knowledge.

Clicky.

I hope this helps you,
Pat.


#56 jtrask   Members   -  Reputation: 181


Posted 09 November 2004 - 08:08 AM

About knowing the insides of the car... no, you don't need to know that to drive. But I picked subset for a reason - when you start coding relationships, the natural inclination is to come up with some sort of language for things like drop CAUSES fall, apple ISA fruit, etc. These are concepts that you need some representation of in order to understand information, but when that representation is hard coded, the program can't learn their meaning. It's useless for a program to have information if it doesn't know what it means.
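
To illustrate the point (a hypothetical sketch, not code from anyone in this thread): in a hard-coded relation store, "ISA" and "CAUSES" are just strings the program was told to match on; it can retrieve them, but they carry no meaning it can use.

facts = {
    ("drop",  "CAUSES", "fall"),
    ("apple", "ISA",    "fruit"),
    ("dog",   "ISA",    "mammal"),
}

def query(subject, relation):
    # Return every object linked to `subject` by `relation`.
    return [o for (s, r, o) in facts if s == subject and r == relation]

print(query("apple", "ISA"))   # ['fruit'] - but the program has no idea what a fruit is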

#57 lucky_monkey   Members   -  Reputation: 440


Posted 09 November 2004 - 11:39 AM

Quote:
Original post by Horizon
In any practical sense before you can create something you must know what it is and how it works, hence my question.
Have you ever heard of emergence?
There have been a host of experiments in which emergent behaviour was discovered that wasn't expected or understood fully.

#58 Madster   Members   -  Reputation: 242


Posted 09 November 2004 - 03:17 PM

Eh... well, here we go =) This is such a controversial topic.
I wish I had an implementation to show; arguments have much more weight that way.
Anyway, long post, please read, and keep replies civil...

first the funnies:
Quote:

Madster, Madster, start giving credit to whom credit is due.
FYI, philosophy was the first science and defined the methodology for any scientific work, which is what separates science from alchemy.

I said that was trollish ;) I'm merely pointing out that philosophy isn't going to help here - at least, not using it to say why we can't do what we're trying to do. There are better tools to prove that, like mathematics.
And yeah, I checked a bit, and philosophy was not the first science... it's actually what science was called before, so they're kind of the same thing. But enough of that.
Also, alchemists were the precursors of chemists, a respected and proven science nowadays. You've got to start somewhere. I'll never be ashamed of poking things until I see something interesting.

now the rebuttals:
Quote:
Contradiction. How can we know if a system is intelligent without a formal description of what intelligence is?

and then:
Quote:

Example: your average four-year-old has an abstract concept of natural numbers. (S)he knows what 'three apples' means and can easily extend this concept to any concrete (like other objects) and abstract (like hours/days or the number of times a specific action is performed) entities without the need of a (formal) definition.

This is a contradiction. Remember, most of the time the practice comes before the theory (like you just said). So, we're trying practical ideas.


About ALICE: I've seen it and I've talked to it. I don't remember where, but I found an online implementation.

Quote:

FYI, intelligence can be regarded as pattern matching and association, which is exactly what Alice does (in a very limited frame).

And pattern matching is all that we're talking about here. Alice only does grammatical pattern matching, which makes it a grammatical-parsing bot in my view.
Actually, upon closer inspection it seems simpler than that. It only matches preprogrammed patterns defined in AIML (forgot about that bit... it's been a while). So it's not intelligent by any means, and even if you don't know that going in, you'll find out in a 3-minute chat.

For an example of non-grammatical pattern matching, look up the Babble Perl script and the MegaHAL bot.

from the SHRDLU info page linked:
Quote:

The system answers questions, executes commands, and accepts information in an interactive English dialog... The system contains a parser, a recognition grammar of English, programs for semantic analysis, and a general problem solving system


As I mentioned earlier, every AI in Lisp I know of is a grammatical parser. It's still an interesting implementation, and I'll give it a look (the graphical Java version at least). However, this one will appear intelligent only for a short time (like ALICE), until you find out it doesn't learn (remembering is not learning).

The MegaHAL bot and its derivatives can be taught any language. I can fetch the links if needed; I have them somewhere.

and the explanations:
Horizon:
Quote:

In any practical sense before you can create something you must know what it is and how it works, hence my question.

Exactly. We must know how the process of learning and pattern matching works, and then create it. The intelligence in this kind of AI isn't created, it is taught. Btw, yeah, I'm looking for a natural implementation that doesn't need a predefined ruleset. This would allow the same method to be used for more than just text, maybe controlling outputs and such.

jtrask:
Quote:

...the natural inclination is to come up with some sort of language for things like drop CAUSES fall, apple ISA fruit, etc.
These are concepts that you need some representation of in order to understand information, but when that representation is hard coded, the program can't learn their meaning. It's useless for a program to have information if it doesn't know what it means.

In a statistical AI the concepts are gathered and linked automagically. Our brains do this as well. Avoiding absolute definitions is probably a good idea. For example, the typical baby questions:

What is a lie?
something that is not the truth.
What is the truth?
a fact.
What is a fact?
...

And an easier example:

What is an apple?
It's a fruit.
What is a fruit?
blah blah...

...and you'll eventually resort to showing an apple or a fruit.
And this is a circular definition, resolved with a token from another set of data. So in the end, you can only relate gathered tokens amongst themselves; you can't define absolute meaning.
And this is OK.
My view is that a bot needs more than one sense to seem intelligent. That's why I find the 3D-world approach interesting, and probably why it seems intelligent at first glance: it's relating tokens from two sets of data (geometry and text), and since we perceive both as well, we see the connections and regard it as intelligence (that is, until you find out the text engine doesn't learn new things).

Now, can anyone comment on the coding part? I believe this is where the original post was meant to lead. My main issue with this is linking different sets of data together without the size of the relationship matrices exploding... also, I can't find a way to model continuous input, like floats (discrete is easy, like text: each new character is a new token, and there aren't many of them, so it's OK).

PS: if I sound pedantic, I don't mean to. This is a hot topic and has always interested me, and most people (AI winter) won't touch it with a 10-foot pole.

#59 Nice Coder   Members   -  Reputation: 366


Posted 09 November 2004 - 09:49 PM

Nice replies, all [grin]!!

Yeah, I don't like Alice either... (I would, IMO, call matrixbot more humanlike than Alice!).

With MegaHAL... it doesn't seem to be very... convincing (or anything less than insane and dumb).

How about a general knowledge base (of any kind - exactly which kind isn't known at present)?

You ask the KB a question:

What is 2 times the square root of the logarithm of one billion and 5?

To which the bot would reply:
Well, 2.65, of course!

You can also feed it any type of information,

If X causes Y, and nothing else causes Y, then if Y exists, X exists.

That is: if something causes something else (and nothing else causes it), then if the second thing exists, the first thing also exists.

Parsing is going to be hard, though... (but then again, this is for a language other than English, so it would be easier).

Perhaps another thing, which would be handy.

If the user asks the question "What is a " y, you will output "A " y " is a " query("What is", y) ".".

That's still going to be hard to parse (a scripting language, in a string literal...), but it would make it more extendable.
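
A minimal sketch of that rule, with the "scripting" reduced to a plain function (hypothetical Python; the knowledge-base entries are made up):

kb = {("what is", "dodecahedron"): "twelve-sided solid",
      ("what is", "apple"): "fruit"}

def query(relation, y):
    return kb.get((relation, y), "mystery to me")

def respond(user_input):
    prefix = "what is a "
    text = user_input.lower()
    if text.startswith(prefix):
        y = text[len(prefix):].rstrip("?").strip()
        return "A " + y + " is a " + query("what is", y) + "."
    return None                     # no rule matched

print(respond("What is a dodecahedron?"))   # A dodecahedron is a twelve-sided solid.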

As another idea:
Data-mine the statement (+ context (previous statements/responses?)) / response structures. Use that information (+ the questions) to figure out a probable answer.

Any good ideas here?
From,
Nice coder

#60 Nice Coder   Members   -  Reputation: 366


Posted 09 November 2004 - 10:36 PM

I've been doing a bit of thinking.

How about this:
The entity has two needs:
the need for food and the need for water.

There are two types of food, a treat and a meal.
There is one type of water.

When it hasn't had water for a long period of time, or food for a long period of time, it dies.

From its hunger and thirst, it has the desire to be fed and the desire to be given water (only the user can do these things).

It also has the desire to learn and the desire to speak.

These are all pre-programmed.

It uses rules (which were learned from the user), backwards-chained (like an expert system), until it reaches a point at which it gets fed, watered, etc.

It then compares the things it would have to do against the reward from doing them. If the reward is too low, then it keeps checking other rules for other things it could do to get fed, watered, etc.

If it finds no matches, then it will just send a random response to the user.

From its desire to speak, it uses an algorithm to select the best response, based on the input given.

It would probably look at the differences between previous inputs, and use those to figure out what to say.
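
A rough sketch of that backward-chaining idea (hypothetical Python; the rules, costs and rewards are made up for illustration, not a finished design):

rules = {   # desired outcome -> (sub-goal or action that tends to lead to it, effort cost)
    "get fed":               ("user offers food", 0),
    "user offers food":      ("ask the user for food", 2),
    "ask the user for food": ("say 'I am hungry, may I have a meal?'", 1),
}

def backward_chain(goal, reward, max_depth=5):
    # Walk back from the goal until we reach something directly doable;
    # give up if the accumulated effort outweighs the expected reward.
    cost, step = 0, goal
    for _ in range(max_depth):
        if step not in rules:
            return step if cost <= reward else None
        step, extra = rules[step]
        cost += extra
    return None

action = backward_chain("get fed", reward=5)
print(action or "<random response>")    # no worthwhile plan -> fall back to a random response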

From,
Nice coder



