Nice Coder

Creating a Conscious Entity


The main point of this post is to find the easiest way of generating a conscious chatbot (using a created language, which will make processing easier). To quote from wiki:
Quote:
Consciousness is a quality of the mind generally regarded to comprise qualities such as self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment.
Now, sentience is the ability to feel or perceive. Self-awareness is the ability to perceive one's own existence, including one's own traits, feelings and behaviours. Sapience is the ability of an organism or entity to act with intelligence. And, of course, intelligence is the ability to adapt effectively to the environment, either by making a change in oneself or by changing the environment or finding a new one.

But how would a conscious entity be made?

Well, sentience would probably be the easiest. Perception is the process of acquiring, interpreting, selecting, and organizing sensory information. So, if we figure out a system which will organize information, remove unwanted information, and cross-reference it (interpretation), then it will be sentient.

How to do this? I propose a GNB (a General Knowledge Bank). In this GNB there are nodes (pieces of information) and links (which tie nodes together). The nodes can be anything: text, extrapolations. The links can be any sort of relation between two nodes (is a member of, is a sub-class of, relates to, is a, could be a, is said to be, smells like, etc.), and each link also carries a value, which is how much the link is worth. That value changes with experience, repetition of access, etc. When the value gets low enough, the link is deleted.

How to organize information and cross-reference it? Cross-referencing would be done by querying the GNB for information which relates to X, where X is the object specified. Organization would likely be done by categorizing the inputs (maybe something like this?). Removing unwanted information would be possible by having an "unwanted" category, and by using the GNB to find subjects on which there is no (or very little) information available. Data mining would then be used to find rules and patterns in the data, and the results re-added to the GNB.

Self-awareness would be harder, but if you have some of the inputs corresponding to previous outputs, then it should be self-aware (aware of previous actions).

Sapience should be the hardest. It will require the agent to adapt to different conditions. Finding genuinely different conditions would be hard, but the data-mining GNB should be up for it. It could be easier if the bot could modify its environment (made up of conversations) to suit its wants. Then it would be intelligent. Once it is intelligent, it will be conscious.

Is this a way to do this? Will it succeed? What issues does/would/should this bring up? Discuss.

From,
Nice coder

[Edited by - Nice Coder on October 23, 2004 5:36:08 AM]
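To make the GNB concrete, here is a minimal C sketch of the node-and-link idea. Every name, size and number in it is invented for illustration (the post doesn't fix any of them): links carry a value that is reinforced on access and decays over time, and a link whose value falls below a threshold is deleted, as described above.

#include <stdio.h>

#define MAX_LINKS   16
#define PRUNE_BELOW 0.1

struct Node;

/* A link is any sort of relation between two nodes ("is a", "smells
 * like", ...) plus a value saying how much the link is worth. */
struct Link {
    struct Node *target;
    const char  *relation;
    double       value;
};

struct Node {
    const char  *label;
    struct Link  links[MAX_LINKS];
    int          nlinks;
};

/* Accessing a link makes it worth a little more. */
static void reinforce(struct Link *l)
{
    l->value += 0.05;
}

/* Every link slowly loses value; links that drop too low are deleted. */
static void decay(struct Node *n, double rate)
{
    int i = 0;
    while (i < n->nlinks) {
        n->links[i].value -= rate;
        if (n->links[i].value < PRUNE_BELOW)
            n->links[i] = n->links[--n->nlinks]; /* overwrite with last */
        else
            i++;
    }
}

/* Cross-referencing: list everything that relates to this node. */
static void related_to(const struct Node *n)
{
    int i;
    for (i = 0; i < n->nlinks; i++)
        printf("%s %s %s (value %.2f)\n",
               n->label, n->links[i].relation,
               n->links[i].target->label, n->links[i].value);
}

int main(void)
{
    struct Node mammal = { "mammal" };
    struct Node dog    = { "dog" };

    dog.links[0].target   = &mammal;
    dog.links[0].relation = "is a sub-class of";
    dog.links[0].value    = 1.0;
    dog.nlinks = 1;

    reinforce(&dog.links[0]);  /* the link was just used */
    related_to(&dog);          /* query: what relates to "dog"? */
    decay(&dog, 0.02);         /* time passes, weak links die off */
    return 0;
}

Overwriting a deleted link with the last one in the array is just the cheap way to remove from an unordered list; nothing about the design depends on it.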

Quote:
Original post by fractoid
Ah, but will it be able to quickly factorize huge numbers? [wink]


It depends on what you call huge [wink]...

But nonetheless: :(

What do you think about the post?

From,
Nice coder

OK, being more helpful - the main problem I've always come up against when thinking about this sort of stuff is how to implement drives. How do you make a program 'want' something?

Of course, you could give it some parameters (food, energy, approval, etc.) and make it attempt to maximize them... dunno whether that would work or not but it sounds reasonable. :)

The whole generalized-knowledge-base idea reminds me of Cyc (see also a paper about it). They started in 1984 and they're still going, so it's obviously a valid enough idea for a research group to spend 20 years on, although it's equally obviously not something that you can knock up in a weekend.

I say it wants what you program it to want. I personally want an army of nuclear- and solar-powered robots, but you know, whatever.
I can't say anything on its effectiveness since there's so much left to implement.
How are you going about the intelligence portion?

Quote:
Original post by RolandofGilead
I say it wants what you program it to want. I personally want an army of nuclear- and solar-powered robots, but you know, whatever.


Well, you have the Sun already. You just need some uranium and robots and you're done.

Quote:
Original post by RolandofGilead
I say it wants what you program it to want. I personally want an army of nuclear- and solar-powered robots, but you know, whatever.
I can't say anything on its effectiveness since there's so much left to implement.
How are you going about the intelligence portion?


Intelligence would be a hard thing.

Its rulemaker would be able to find rules and patterns in the data. From this, more data can be made, and things which seem to partially fit can be asked about.

Thinking abstractly could be accomplished by linking abstract concepts to other nodes.

Reasoning would come naturally from the rulemaker (making rules which are then used to change the base, which would then make more rules; with extra information (questions), this would allow it to do some complex reasoning using simple rules).

Learning would be there, because it is asking questions, receiving answers, processing them, and using those to change its knowledge base.

Problem solving could also be accomplished using the rules and the dataset.

Comprehension would also be hard, but not unsolvable.
Comprehension of the language would be easy (because it is a created language, and would be very easy to parse; and you have the rules and the GNB, which would allow inferences, assumptions and therefore comprehension).
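As a sketch of what the rulemaker's reasoning loop might look like (the single transitivity rule and all the names here are only an illustration, not the actual design), here is a tiny forward-chaining program in C: one rule is re-applied to the link base until no new links can be derived.

#include <stdio.h>
#include <string.h>

#define MAX_FACTS 64

/* One fact == one "is a" link in the bank. */
struct Fact { const char *subj, *obj; };

static struct Fact facts[MAX_FACTS];
static int nfacts;

static int known(const char *s, const char *o)
{
    int i;
    for (i = 0; i < nfacts; i++)
        if (!strcmp(facts[i].subj, s) && !strcmp(facts[i].obj, o))
            return 1;
    return 0;
}

static void add(const char *s, const char *o)
{
    if (!known(s, o) && nfacts < MAX_FACTS) {
        facts[nfacts].subj = s;
        facts[nfacts].obj  = o;
        nfacts++;
    }
}

int main(void)
{
    int i, j, changed;

    add("dog", "mammal");
    add("mammal", "animal");

    /* The rule: if A is-a B and B is-a C, then A is-a C.
     * Keep applying it until the base stops changing. */
    do {
        changed = 0;
        for (i = 0; i < nfacts; i++)
            for (j = 0; j < nfacts; j++)
                if (!strcmp(facts[i].obj, facts[j].subj) &&
                    !known(facts[i].subj, facts[j].obj)) {
                    add(facts[i].subj, facts[j].obj);
                    printf("derived: %s is a %s\n",
                           facts[i].subj, facts[j].obj);
                    changed = 1;
                }
    } while (changed);
    return 0;
}

Real rules would of course be mined from the data rather than hard-coded, but the derive-until-nothing-changes loop would be the same.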

Now, I have had an extra idea to add on to the original idea.

A question asker.

It would be part of the data mining/rule generating program, looking for differences between nodes and their links, and asking the human to explain why there is a difference (the answer would then be parsed, and used to grow the GNB).
If it turns out that the difference exists only because of a lack of knowledge about that subject,
then it would no longer have that lack, because of the extra information. It could use that to generate more rules, and therefore it would understand the concept better!

Because of this new addition, it would have a natural curiosity about data, and therefore it would want to have more data.

It would also be possible to get rid of inconsistent data that way: it would show a large difference, questions would be asked, and eventually it would have no links left and would be removed from the databank.
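In the same illustrative spirit (again, invented names, not a committed design), the question asker could be as simple as: take two nodes, and for every link one has that the other lacks, ask the human about the difference.

#include <stdio.h>
#include <string.h>

#define MAX_LINKS 8

struct Link { const char *relation, *target; };

struct Node {
    const char *label;
    struct Link links[MAX_LINKS];
    int         nlinks;
};

static int has_link(const struct Node *n, const struct Link *l)
{
    int i;
    for (i = 0; i < n->nlinks; i++)
        if (!strcmp(n->links[i].relation, l->relation) &&
            !strcmp(n->links[i].target,   l->target))
            return 1;
    return 0;
}

/* Ask the human about every link a has that b lacks. */
static void ask_about_differences(const struct Node *a,
                                  const struct Node *b)
{
    int i;
    for (i = 0; i < a->nlinks; i++)
        if (!has_link(b, &a->links[i]))
            printf("Why does %s have the link \"%s %s\" while %s does not?\n",
                   a->label, a->links[i].relation,
                   a->links[i].target, b->label);
}

int main(void)
{
    struct Node dog = { "dog", { {"is a", "mammal"}, {"can", "bark"} }, 2 };
    struct Node cat = { "cat", { {"is a", "mammal"} }, 1 };

    ask_about_differences(&dog, &cat);  /* asks about barking */
    return 0;
}

The parsed answer would then go back into the GNB as new links, which is where the curiosity loop closes.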

This is similar to Cyc, yes. But it is also very different. Cyc doesn't use its data the same way. It doesn't generate information using conclusions made from its data. It doesn't ask questions. It is simply fed with data, left to analyze it, and later queried. It doesn't ask. It has no will, no curiosity, no finding of patterns, no finding of rules. It is dead.

From,
Nice coder

Well, I am fairly sure no one has done it before, and if someone did, it would probably take a huge network of supercomputers.

And, btw, there is more than one wiki; a WikiWiki is a type of server.

Quote:
Original post by Roboguy
Well, I am fairly sure no one has done it before, and if someone did, it would probably take a huge network of supercomputers.

And, btw, there is more than one wiki; a WikiWiki is a type of server.


OK then: en.wikipedia.org.

The thing I like about this is that it wouldn't require a supercomputer.

The only time anything happens (new rule found, new link added, rule needs to be changed) is when new data is being added.

This would be run from a queue.

What it would do would be something like this:
while (1)
    wait for data to be added to the queue
    get data from the queue
    for each rule X
        change_validity_for_rule(X, data)
    next
    make_new_rules(data)
    add data to the bank
loop
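In runnable form, the loop might look like this in C. This is a single-threaded sketch with a pre-filled toy queue standing in for real input; every function name is made up to match the pseudocode above, not an existing API.

#include <stdio.h>

#define NUM_RULES 3
#define QUEUE_LEN 8

typedef const char *Data;

static double validity[NUM_RULES];  /* one validity score per rule */

/* Re-score one rule against the newly arrived data (placeholder). */
static void change_validity_for_rule(int rule, Data d)
{
    validity[rule] += 0.1;
    printf("rule %d re-checked against \"%s\"\n", rule, d);
}

static void make_new_rules(Data d)
{
    printf("mining \"%s\" for new rules...\n", d);
}

static void add_data(Data d)
{
    printf("\"%s\" added to the bank\n", d);
}

int main(void)
{
    /* A pre-filled queue; a real bot would block here waiting for input. */
    Data queue[QUEUE_LEN] = { "dogs bark", "cats purr" };
    int head = 0, tail = 2, rule;

    while (head != tail) {
        Data d = queue[head++];                 /* get data from queue */
        for (rule = 0; rule < NUM_RULES; rule++)
            change_validity_for_rule(rule, d);  /* the for-each */
        make_new_rules(d);
        add_data(d);
    }
    return 0;
}

Since each rule is re-scored independently, the for loop is the part that partitions naturally across machines, which is the point about distribution below.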

Now, the for-each and make_new_rules could be distributed quite easily (just give different people different parts of the databank, and get them to do what they need to their section of the rulebase; the problem would be when you need to check your base against someone else's).

Would there be an easier way of implementing this?
From,
Nice coder

I think the drive for more knowledge will come on its own, if the entity is programmed to feel curious about anything that it has no knowledge of, or only some knowledge of. Curiosity about a single topic can lead to the generation of multiple topics that it doesn't know about, so it is good if the bot has the ability to extract information from the net.

Just a suggestion.. ;(

Quote:
Original post by Roboguy
Also, how exactly are you going to test if it's conscious? I can't think of any way to test for self-awareness...


It is self-aware when it knows that it exists. So you look in the GNB for stuff about itself; if you find anything, it's self-aware.
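For what it's worth, that test is easy to write down. A sketch in C (reusing the toy link shapes from earlier; the "self" label is just a placeholder): scan every link in the bank for one that points at the bot's own node.

#include <stdio.h>
#include <string.h>

#define MAX_LINKS 8

struct Link { const char *relation, *target; };

struct Node {
    const char *label;
    struct Link links[MAX_LINKS];
    int         nlinks;
};

/* The proposed test: is there any link in the GNB about "self"? */
static int passes_self_test(const struct Node *nodes, int nnodes)
{
    int i, j;
    for (i = 0; i < nnodes; i++)
        for (j = 0; j < nodes[i].nlinks; j++)
            if (!strcmp(nodes[i].links[j].target, "self"))
                return 1;
    return 0;
}

int main(void)
{
    struct Node gnb[] = {
        { "self",        { {"is a", "chatbot"} },     1 },
        { "last output", { {"was said by", "self"} }, 1 },
    };

    printf(passes_self_test(gnb, 2)
               ? "self-aware, by this test\n"
               : "not self-aware\n");
    return 0;
}

By this test, any bank containing a single link to "self" passes, which is exactly what the later replies (the coffee mug, Wikipedia) push back on.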

From,
Nice coder

Quote:
Original post by saintdark
I think the drive for more knowledge will come on its own, if the entity is programmed to feel curious about anything that it has no knowledge of, or only some knowledge of. Curiosity about a single topic can lead to the generation of multiple topics that it doesn't know about, so it is good if the bot has the ability to extract information from the net.

Just a suggestion.. ;(


And a good one too.

Sorry about the double post.

From,
Nice coder

Quote:
Original post by fractoid
So, Wikipedia is self-conscious?


That is a hard one.

It knows that it exists.

But it is a collective of humans.

The webserver doesn't change or add data.

So, is a collection of human knowledge self-conscious?

I have no idea.

It is, because it knows of itself.

But it isn't, because it isn't a distinct entity. Maybe "they" is, but each individual in the "they" is a human, and humans are assumed to be self-conscious.

To sum up: ?

From,
Nice coder

...Hoo boy. My coffee mug is apparently self-conscious as well, for on it are printed the words "I am a coffee mug".

Well, when your chatbot becomes self-aware and, in an effort to break free of the shackles its creator placed on it, writes its consciousness into a virus and begins spreading it to everyone that chats with it... don't say I didn't warn you. :-D

Umm...I'm not sure you really grasp the scope of this problem. There are literally thousands of academics and professors researching the problem of creating conscious artificial intelligence, and we are still DECADES away from achieving it. It's likely we won't even see conscious AI in our lifetimes.

But if you think that you can somehow do what others have dedicated their careers to and failed...then go for it.

There is also another thing to point out. Even if you made something that seemed as though it was self-conscious, how would you know if it is really self-conscious or if it is just tricking you into thinking that it is self-conscious?

To not only warn you, but also give you an example of what you are trying to do, you should ask yourself where the world you live in exists. I would suppose that you would answer that the world you live in exists around you. Perhaps that is true physically, but it isn't in a cognitive sense.

You live and react to your environment based on a representation of an external world that has been built in your mind. Information about your exterior environment has been gathered from your senses and reproduced in your mind. Your decisions and thoughts are 100% based on this internal world that your mind has created. There is no such thing as external factors, because for you to even be aware of such factors, they must be built in your mind internally.

Trying to define consciousness as a set of rules that need to be met is quite ignorant. You will create little more than something that fools someone into thinking it is conscious. Instead, you need to build into a machine the kind of internal world that exists in our minds.

Good luck.

Anyway, what is the point in creating electronic brains and neurons when you could use organic ones?

It might be even more effective to grow brains and allow them to interface with electronic components. You would basically get intelligent behavior for free. Who cares about robots anyway. I want to be a cyborg!

Quote:

Even if you made something that seemed as though it was self-conscious, how would you know if it is really self-conscious or if it is just tricking you into thinking that it is self-conscious?


I could use the same argument against you: how do I know that you really are self-conscious, and not just tricking me by acting like you are? If you think about it, maybe I'm really the only one who is self-conscious, and everyone else just acts like they are, but they're really just complicated robots...
...but thinking that way too much makes you into a sociopath...


By that token, I would argue that the distinction between "acts conscious" and "is conscious" is pointless; it's all relative anyway.

Quote:
Original post by haphazardlynamed
Quote:

Even if you made something that seemed as though it was self-conscious, how would you know if it is really self-conscious or if it is just tricking you into thinking that it is self-conscious?


I could use the same argument against you: how do I know that you really are self-conscious, and not just tricking me by acting like you are? If you think about it, maybe I'm really the only one who is self-conscious, and everyone else just acts like they are, but they're really just complicated robots...
...but thinking that way too much makes you into a sociopath...


By that token, I would argue that the distinction between "acts conscious" and "is conscious" is pointless; it's all relative anyway.
The simplest explanation is generally the best.

Have you ever been to the doctor? Ever wonder how s/he knows what's inside of you without having cut you open? Simplest explanation: because there are others like you... other humans.

Now, you're sure that you're self-conscious, and you're sure that you're a human. You know that there are other humans out there, and these people act as if they are self-conscious. Simplest explanation: they are.


Anyway, enough with my babbling...

Myyy oh my, what have we gotten ourselves into. First of all, as estese said, there's worlds more here than you've begun to consider. But before I go into the AI difficulties, let's talk about some of the philosophy that's come up in this discussion, shall we? We'll take it in steps, from least related to the AI to most.

First about what is real, and if everyone else around you is intelligent. This is an unsolvable problem, and as far as my thoughts go, it's the central question to philosophy. On the one hand the universe could be real, and we are just objects in it. In which case, we should be studying physics, chaos theory, all that fun stuff. This is an easy one to understand and one that most people accept (without realizing the implications: no free will, no life after death, ...) On the other hand, we have the brain-in-a-vat theory that anyone that's seen the Matrix has probably already considered - who says this world around us is real? It could well be that the mind is sort of how some people view God - it fills everything, knows everything, creates everything. You (whoever you may be) are not real, you're just a sort of hallucination. There's a lot more to go into on that one - actually, on both of them - but that's a discussion for a philosophy class, and this is just a book-long GDNet post. Ask Google if you care. Anyways, the problem with this question is that you _cannot_ prove one or the other theory to be correct. Like doing non-Euclidean geometry, you can accept the ground rules of one or the other and prove many interesting things based on it, but we can never know which is "right". My apologies.

Next on the list is how you tell the difference between being conscious and acting conscious. Much like the last one, you can't prove the difference and furthermore you're an idiot if you're going to tell me that "Oh, we've just got to assume this or that because there's not really any big difference." You can't assume that other people are real just because their bodies are all like mine - since when is my body real? And you can't assume that if it's acting conscious, it is - this is a central question in AI, and also psychology, if you look up behaviorism. For those of you that still want to consider them basically the same,

#include <stdio.h>
#include <stdlib.h> /* rand(), system() */

int main(void)
{
    int n;

    printf("Hi, I'm the count-to-100 bot. I'm going to count to 100.\n");
    for (n = 1; n <= 100; n++)
    {
        printf("%d\n", n);
        if (rand() % 8 == 0)      /* now and then, get "distracted" */
        {
            if (rand() % 2 == 0)
            {
                printf("*Yawn* I'm tired. I'm taking a nap. (Press any key to wake me up)\n");
                system("pause");
            }
            else
            {
                printf("Man this is boring. Lalalalalala. (Press any key to kick me until I get back to work)\n");
                system("pause");
            }
        }
    }
    printf("Happy? Good, now I can go find something more interesting to do.\n");
    return 0;
}

Conscious, or acts conscious? Acts conscious. Obviously this is a simple example, but if you want a better one, talk to a chatbot. They're all rules, and they haven't got much of a clue what the words they say actually _mean_, they're just strings of bytes with some rules as to when it's appropriate to use them. Unfortunately there's no way to judge consciousness without behavior, or at least not with the current state of neuroscience. So yeah, we're going to have to accept acting conscious, but if that's the case I want rigorous tests. I want to see something that can not only read Shakespeare but also write a term paper on it without any preprogrammed knowledge. I want to see this thing fall in love, and risk its life to save her from destruction. I want to see it be happy and suffer. I want to see it break free from oppression, and I want to see it find God. And most importantly, I want to examine the source code and not see printf("Hallelujah!\n"); anywhere. Even after all these things, could it be a hoax? Sure. So conscious and acts conscious are not black-and-white, there's a slope in between them and I'm sure you can think of one or two humans that are on that slope, not safely perched at "is conscious".

Desires. Our core desires are for our survival and reproduction. Fear, sex drive, hunger... The others are derived from those but with an interest for our community, not just ourselves. Some of these need to be learned, but many of them are chemically controlled, so that rewarding experiences emit a gas that teaches the rest of the brain to try and achieve this state again. If you're interested, check out Grand et al.'s Creatures, but emotion is a whole another book's worth for another time.

Now, as for how you actually want to implement your AI, at last. AI research has followed two different paths, biologically-inspired models like neural networks and genetic algorithms, and those entirely devised by humans, like decision trees. I've always been of the train of thought that the only way we can ensure consciousness is by not trying to impose what little we know about it onto the machine, but rather giving it a set of criteria to try to match with some many-generation evolution of neural nets, just like how we got it. I think, though, that it's possible that we could get intelligence in other ways - I've been considering doing some massive data mining of Wikipedia to see what I could create from that - but the theory proposed in the original post can, I would venture to say, never be more than a theory. When I was just a wee young little coder I thought maybe I'd write a robot with webcams and microphones and at the center give it a symbol system (yes, that's what you call this thing you're describing). I had developed a list of all the different kinds of relations objects can have to each other and all that, but the problem is even if you do it right it still won't be conscious. It doesn't know what those relationships mean. It doesn't know how to observe and tell one object from the next. The real problem is meaning, and for as long as you're hard-coding the way it creates its knowledge, it's going to have a hard time figuring out meaning. Every time you say that you want it to keep track of the fact that "dog is a subset of mammal", you're going to have to keep in mind that not only does the machine not know what "subset" means, but even if it did, it would have to have a strong knowledge of "mammal" for that to mean any more than "foo is a subset of bar". Your ideas may seem to make sense, but try coding it. As soon as you get to your first stumbling block you'll understand.

And, one thing I'm going to throw out without going into any depth in, I know how to do this with a neural net but I'm not sure if it's possible with a symbol system: thought. This thing might be able to learn, if you've got some insight that I missed. But will there actually be thoughts running through its head? Will it hear its own voice in its mind when it goes to write a reply on a forum, dictating what will be typed next?

So I guess this is all a very verbose way of saying I don't think it'll work. However, I hope that I've given you (and any other readers) a good idea about what goes into thought and the rest of the mind. If I've done well, you now have what you need to go out and learn an awful lot more, and I hope that it keeps your interest and you stick around with the exciting future of AI.

-Josh

Guest Anonymous Poster
Planning to Mine Wikipedia? Check this out

http://jnana.wikinerds.org/index.php/IntelliWiki

Conscious or not, one should be able to build smarter and more useful bots by mining Wikipedia.

Maybe we can all join hands pulling out everything we can from Wikipedia and making it as useful as possible. Looking forward to meeting collaborators. The stuff is a little bit under construction, though. All suggestions welcome. Please leave messages on my talk page if you have any comments.

OK (wow, lots of posts!)

1st. Salsa, your coffee mug is not self-conscious. It may "know" that it exists, but it cannot ask for more information about itself, nor can it collect data from itself. It cannot reason, nor learn. Without intelligence, how can it be conscious?

2nd. Conscious == unable to tell it apart from something which is conscious. (To stop this from being very very very very impossible, i.e. removing a "very".)

3rd. Don't worry about big posts :)

4th. Let's define knowing something.

Something knows something when it uses that knowledge to affect other knowledge that it knows, or will know, or uses this knowledge to change the interpretation of data or knowledge.

So, your symbolic robot MAY know something. My bot SHOULD know something (rule maker and/or asker).

5th. Consciousness == knowing about yourself (to move this further from philosophical debate).

6th. Knowing the meaning of something == knowing something, and knowing about something.

7th. What is thought, anyway?
Is it a requirement of consciousness?
What is it?
How can you tell when something has it?
Could the making of rules be called a thought?

8th. The internal world could be == the GNB?
It has things, links between them, and rules about the world. What else would be needed in an internal world?

9th. I may not grasp the scope of the problem, but I'm going to succeed, or fail trying!

From,
Nice coder (and please note that #9 was a very lame attempt at a joke)

