Creating a Conscious Entity

Started by
130 comments, last by Nice Coder 19 years, 2 months ago
Quote:
Even if you made something that seemed as though it was self-conscious, how would you know if it is really self-conscious or if it is just tricking you into thinking that it is self-conscious?


I could use the same argument against you: how do I know that you really are self-conscious, and not just tricking me by acting like you are? If you think about it, maybe I'm really the only one who is self-conscious, and everyone else just acts like they are, but they are really just complicated robots.
...but thinking that way too much makes you into a sociopath...


By that token, I would argue that the distinction between "acts conscious" and "is conscious" is pointless; it's all relative anyway.
Quote:Original post by haphazardlynamed
Quote:
Even if you made something that seemed as though it was self-conscious, how would you know if it is really self-conscious or if it is just tricking you into thinking that it is self-conscious?


I could use the same argument against you: how do I know that you really are self-conscious, and not just tricking me by acting like you are? If you think about it, maybe I'm really the only one who is self-conscious, and everyone else just acts like they are, but they are really just complicated robots.
...but thinking that way too much makes you into a sociopath...


By that token, I would argue that the distinction between "acts conscious" and "is conscious" is pointless; it's all relative anyway.
The simplest explanation is generally the best.

Have you ever been to the doctor? Ever wonder how s/he knows what's inside of you without having cut you open? Simplest explanation: because there are others like you... other humans.

Now, you're sure that you're self-conscious, and you're sure that you're a human. You also know that there are other humans out there, and these people act as if they are self-conscious. Simplest explanation: they are.


Anyway, enough with my babbling...
Myyy oh my, what have we gotten ourselves into. First of all, as estese said, there's worlds more than you've begun to consider. But before I go into the AI difficulties, let's talk about some of the philosophy that's come up in this discussion, shall we? We'll take it in steps from least related to the AI to most.

First about what is real, and if everyone else around you is intelligent. This is an unsolvable problem, and as far as my thoughts go, it's the central question to philosophy. On the one hand the universe could be real, and we are just objects in it. In which case, we should be studying physics, chaos theory, all that fun stuff. This is an easy one to understand and one that most people accept (without realizing the implications: no free will, no life after death, ...) On the other hand, we have the brain-in-a-vat theory that anyone that's seen the Matrix has probably already considered - who says this world around us is real? It could well be that the mind is sort of how some people view God - it fills everything, knows everything, creates everything. You (whoever you may be) are not real, you're just a sort of hallucination. There's a lot more to go into on that one - actually, on both of them - but that's a discussion for a philosophy class, and this is just a book-long GDNet post. Ask Google if you care. Anyways, the problem with this question is that you _cannot_ prove one or the other theory to be correct. Like doing non-Euclidean geometry, you can accept the ground rules of one or the other and prove many interesting things based on it, but we can never know which is "right". My apologies.

Next on the list is how you tell the difference between being conscious and acting conscious. Much like the last one, you can't prove the difference and furthermore you're an idiot if you're going to tell me that "Oh, we've just got to assume this or that because there's not really any big difference." You can't assume that other people are real just because their bodies are all like mine - since when is my body real? And you can't assume that if it's acting conscious, it is - this is a central question in AI, and also psychology, if you look up behaviorism. For those of you that still want to consider them basically the same,

#include <stdio.h>
#include <stdlib.h> /* for rand() and system() */

int main(void)
{
    int n;

    printf("Hi, I'm the count-to-100 bot. I'm going to count to 100.\n");
    for (n = 1; n <= 100; n++) /* <= so the bot actually reaches 100 */
    {
        printf("%d\n", n);
        if (rand() % 8 == 0) /* every so often, fake a mood */
        {
            if (rand() % 2 == 0)
            {
                printf("*Yawn* I'm tired. I'm taking a nap. (Press any key to wake me up)\n");
                system("pause");
            }
            else
            {
                printf("Man this is boring. Lalalalalala. (Press any key to kick me until I get back to work)\n");
                system("pause");
            }
        }
    }
    printf("Happy? Good, now I can go find something more interesting to do.\n");
    return 0;
}

Conscious, or acts conscious? Acts conscious. Obviously this is a simple example, but if you want a better one, talk to a chatbot. They're all rules, and they haven't got much of a clue what the words they say actually _mean_, they're just strings of bytes with some rules as to when it's appropriate to use them. Unfortunately there's no way to judge consciousness without behavior, or at least not with the current state of neuroscience. So yeah, we're going to have to accept acting conscious, but if that's the case I want rigorous tests. I want to see something that can not only read Shakespeare but also write a term paper on it without any preprogrammed knowledge. I want to see this thing fall in love, and risk its life to save her from destruction. I want to see it be happy and suffer. I want to see it break free from oppression, and I want to see it find God. And most importantly, I want to examine the source code and not see printf("Hallelujah!\n"); anywhere. Even after all these things, could it be a hoax? Sure. So conscious and acts conscious are not black-and-white, there's a slope in between them and I'm sure you can think of one or two humans that are on that slope, not safely perched at "is conscious".
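To make the "all rules" point concrete, here is a minimal sketch of how such a chatbot works (the patterns and replies are invented for illustration). It matches substrings and emits canned strings, with no idea what any of them mean:

#include <stdio.h>
#include <string.h>

/* A chatbot "rule": if the input contains the pattern, emit the reply. */
struct rule {
    const char *pattern;
    const char *reply;
};

/* The bot's entire "mind" is this table of byte strings. */
static const struct rule rules[] = {
    { "hello",   "Hi there! How are you today?" },
    { "sad",     "I'm sorry to hear that. Why do you feel sad?" },
    { "weather", "I love talking about the weather!" },
};

int main(void)
{
    char input[256];
    size_t i;

    printf("> ");
    while (fgets(input, sizeof input, stdin)) {
        const char *reply = "Interesting. Tell me more."; /* fallback */
        for (i = 0; i < sizeof rules / sizeof rules[0]; i++) {
            if (strstr(input, rules[i].pattern)) {
                reply = rules[i].reply;
                break;
            }
        }
        printf("%s\n> ", reply);
    }
    return 0;
}

Add ten thousand more rules and it chats much more convincingly, but nothing about the mechanism changes.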

Desires. Our core desires are for our survival and reproduction. Fear, sex drive, hunger... The others are derived from those, but with an interest in our community, not just ourselves. Some of these need to be learned, but many of them are chemically controlled, so that rewarding experiences emit a gas that teaches the rest of the brain to try to achieve this state again. If you're interested, check out Grand et al.'s Creatures, but emotion is a whole other book's worth for another time.
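A crude sketch of that chemical-reward idea, under my own simplifying assumptions (a single global reward number standing in for the "gas", and plain weight-nudging standing in for brain chemistry):

#include <stdio.h>

#define NUM_ACTIONS 3

/* How strongly the creature favours each action; learned over time. */
static double weight[NUM_ACTIONS] = { 0.5, 0.5, 0.5 };

/* A global "reward chemical": positive after good experiences,
 * negative after bad ones. It nudges whatever was done recently. */
static void reinforce(int action, double reward)
{
    weight[action] += 0.1 * reward;
    if (weight[action] < 0.0) weight[action] = 0.0;
}

int main(void)
{
    /* Eating (action 0) felt good; touching fire (action 2) did not. */
    reinforce(0,  1.0);
    reinforce(2, -1.0);
    printf("eat=%.2f sleep=%.2f touch_fire=%.2f\n",
           weight[0], weight[1], weight[2]);
    return 0;
}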

Now, as for how you actually want to implement your AI, at last. AI research has followed two different paths: biologically-inspired models like neural networks and genetic algorithms, and those entirely devised by humans, like decision trees. I've always been of the school of thought that the only way we can ensure consciousness is by not trying to impose what little we know about it onto the machine, but rather giving it a set of criteria to try to match with some many-generation evolution of neural nets, just like how we got it. I think, though, that it's possible that we could get intelligence in other ways - I've been considering doing some massive data mining of Wikipedia to see what I could create from that - but the theory proposed in the original post can, I would venture to say, never be more than a theory. When I was just a wee young little coder I thought maybe I'd write a robot with webcams and microphones and at the center give it a symbol system (yes, that's what you call this thing you're describing). I had developed a list of all the different kinds of relations objects can have to each other and all that, but the problem is even if you do it right it still won't be conscious. It doesn't know what those relationships mean. It doesn't know how to observe and tell one object from the next. The real problem is meaning, and for as long as you're hard-coding the way it creates its knowledge, it's going to have a hard time figuring out meaning. Every time you say that you want it to keep track of the fact that "dog is a subset of mammal", you're going to have to keep in mind that not only does the machine not know what "subset" means, but even if it did, it would have to have a strong knowledge of "mammal" for that to mean any more than "foo is a subset of bar". Your ideas may seem to make sense, but try coding it. As soon as you get to your first stumbling block you'll understand.
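To illustrate the "foo is a subset of bar" problem, here is a minimal sketch of the kind of symbol system being described (the facts and names are invented for illustration). Note that nothing changes for the machine if you swap the English words for gibberish:

#include <stdio.h>
#include <string.h>

/* A "fact" in the knowledge base: subject --relation--> object. */
struct fact {
    const char *subject;
    const char *relation;
    const char *object;
};

static const struct fact kb[] = {
    { "dog",    "subset_of", "mammal" },
    { "mammal", "subset_of", "animal" },
    /* To the machine, this line is no different from the ones above: */
    { "foo",    "subset_of", "bar"    },
};

/* "Answering" a question is just string comparison over the table;
 * the program has no idea what "dog" or "subset_of" actually mean. */
static const char *lookup(const char *subject, const char *relation)
{
    size_t i;
    for (i = 0; i < sizeof kb / sizeof kb[0]; i++)
        if (!strcmp(kb[i].subject, subject) && !strcmp(kb[i].relation, relation))
            return kb[i].object;
    return "unknown";
}

int main(void)
{
    printf("dog subset_of -> %s\n", lookup("dog", "subset_of"));
    printf("foo subset_of -> %s\n", lookup("foo", "subset_of"));
    return 0;
}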

And here's one thing I'm going to throw out without going into any depth on: thought. I know how to do this with a neural net, but I'm not sure if it's possible with a symbol system. This thing might be able to learn, if you've got some insight that I missed. But will there actually be thoughts running through its head? Will it hear its own voice in its mind when it goes to write a reply on a forum, dictating what will be typed next?

So I guess this is all a very verbose way of saying I don't think it'll work. However, I hope that I've given you (and any other readers) a good idea about what goes into thought and the rest of the mind. If I've done well, you now have what you need to go out and learn an awful lot more, and I hope that it keeps your interest and you stick around with the exciting future of AI.

-Josh
Planning to Mine Wikipedia? Check this out

http://jnana.wikinerds.org/index.php/IntelliWiki

Conscious or not, one should be able to build smarter and more useful bots by mining Wikipedia.

Maybe we can all join hands, pulling out everything we can from Wikipedia and making it as useful as possible. Looking forward to meeting collaborators. The stuff is a little bit under construction though. All suggestions welcome. Please leave messages on my talk page if you have any comments.
OK (Wow! Lots of posts!)

1st. Salsa, your coffee mug is not self-conscious. It may "know" that it exists, but it cannot ask for more information about itself, nor can it collect data from itself. It cannot reason, nor learn. Without intelligence, how can it be conscious?

2nd. Conscious == unable to tell it apart from something which is conscious. (To stop this from being very very very very impossible, i.e. removing a very.)

3rd. Don't worry about big posts :)

4th. Let's define knowing something.

Something knows something when it uses that knowledge to affect other knowledge that it knows, or will know, or uses this knowledge to change the interpretation of data or knowledge.

So, your symbolic robot MAY know something. My bot SHOULD know something (rule maker and/or asker).

5th. Consciousness == knowing about yourself (to move this further from philosophical debate).

6th. Knowing the meaning of something == knowing something, and knowing about something.

7. What is thought, anyway?
Is it a requirement of consciousness?
What is it?
How can you tell when something has it?
Could the making of rules be called a thought?

8. Internal world could be == gnb?
It has things, links between them, and rules about the world. What else would be needed in an internal world? (See the sketch at the end of this list.)

9. I may not grasp the scope of the problem, but I'm going to succeed, or fail trying!
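A minimal sketch of an internal world in the sense of point 8: things as nodes, weighted links between them, and rules that rewrite the links. (The structures and names here are my own guesses, since "gnb" isn't spelled out in the thread.)

#include <stdio.h>

#define MAX_LINKS 32

/* A "thing" in the internal world. */
struct node { const char *name; };

/* A weighted link between two things. */
struct link { int from; int to; double weight; };

static struct node nodes[] = { { "self" }, { "mug" }, { "coffee" } };
static struct link links[MAX_LINKS] = {
    { 1, 2, 0.9 },  /* mug -> coffee: strong association */
    { 0, 1, 0.2 },  /* self -> mug: weak association */
};
static int num_links = 2;

/* A "rule" here is just a procedure that rewrites links. This one
 * strengthens an association each time two things co-occur. */
static void rule_reinforce(int from, int to)
{
    int i;
    for (i = 0; i < num_links; i++)
        if (links[i].from == from && links[i].to == to) {
            links[i].weight += 0.1;
            return;
        }
}

int main(void)
{
    rule_reinforce(0, 1); /* the self handles the mug again */
    printf("%s -> %s weight is now %.1f\n",
           nodes[links[1].from].name, nodes[links[1].to].name,
           links[1].weight);
    return 0;
}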

From,
Nice Coder (and please note that #9 was a very lame attempt at a joke)
1) I'm just wondering: would a program that is self-conscious and self-teaching reach a limit on the knowledge it can get? Also, wouldn't it reach that limit much faster than any human, because it processes much, much faster than a normal human?

2) If it had all this information, it still wouldn't know what to do with it. So it would only be able to learn; it would not be able to use the information to effectively "learn" new processes. It would merely obtain data about the data it is getting and store it, not using the data to do anything new or understand the data.
Quote:Original post by Max_Payne
I don't believe such a program would ever work. Why? Because consciousness is more than a program. Consciousness requires permanent learning. We are conscious because of the nature of our brain, that is, the fact that it's a neural network of a large magnitude. I believe no other structure than a neural network could really meet the requirements for consciousness. Please note, I'm not going with the "magic" theory here. I just think you're going in the wrong direction. I would rather simulate a large neural network made of smaller neural networks... and try to implement something that behaves like a basic animal with that, and eventually evolve the system up to the point where it becomes conscious.

A huge knowledge bank would just be a huge knowledge bank. You could implement as many rules as you want; it still wouldn't work. Just think about this: how do you write an algorithm to understand the meaning of a sentence? How do you encode the meaning of a sentence? How do you use the meaning of this sentence? No algorithm could ever do this, because it's not the kind of work that can be performed by an algorithm.

You can't really divide consciousness into a set of tasks and simple processes that repeat themselves. It just doesn't work that way. Because consciousness is natural, and your approach is not.



Yes, most computers today only process information sequentially. Data stored in memory is not acted upon. It sits there in line waiting to be processed. That is a big difference from every cell in your brain working together. An organism's sensory registers not only store information for split seconds, but they are able to act on it in an encompassing way. That goes for all memories and experiences. The messages are not waiting in line for something to process them. They process themselves.

As for the references to Behaviorism made by a previous poster, I think using such a psychological basis for anything died out nearly 30 years ago. Most of the research that served as the basis for Behaviorism has been shown to be either inconclusive, incomplete, or socially biased. I suggest looking towards Cognitive Psychology as a better basis of understanding. Also, any reference to Psychodynamic Theory probably doesn’t constitute good information either. I don't think I know of any professional who takes that theory seriously anymore. That is, unless you work in Hollywood and really think Sigmund Freud understood the human psyche completely.
I don't believe such a program would ever work. Why? Because consciousness is more than a program. Consciousness requires permanent learning. We are conscious because of the nature of our brain, that is, the fact that it's a neural network of a large magnitude. I believe no other structure than a neural network could really meet the requirements for consciousness. Please note, I'm not going with the "magic" theory here. I just think you're going in the wrong direction. I would rather simulate a large neural network made of smaller neural networks... and try to implement something that behaves like a basic animal with that, and eventually evolve the system up to the point where it becomes conscious.

A huge knowledge bank would just be a huge knowledge bank. You could implement as many rules as you want; it still wouldn't work. Just think about this: how do you write an algorithm to understand the meaning of a sentence? How do you encode the meaning of a sentence? How do you use the meaning of this sentence? No algorithm could ever do this, because it's not the kind of work that can be performed by an algorithm.

You can't really divide consciousness into a set of tasks and simple processes that repeat themselves. It just doesn't work that way. Because consciousness is natural, and your approach is not.
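For what it's worth, here is a toy sketch of that "network of networks" idea (the sizes and weights are arbitrary, hand-picked for illustration): two small nets read the senses, and a third net reads their outputs.

#include <stdio.h>
#include <math.h>

/* One tiny "sub-network": 2 inputs -> 1 sigmoid output. */
static double subnet(const double in[2], const double w[3])
{
    double sum = w[0] * in[0] + w[1] * in[1] + w[2]; /* w[2] is the bias */
    return 1.0 / (1.0 + exp(-sum));                  /* sigmoid squash */
}

int main(void)
{
    double senses[2] = { 0.8, 0.1 };          /* made-up sensory input */
    double w_a[3]    = {  2.0, -1.0, 0.0 };   /* arbitrary weights */
    double w_b[3]    = { -1.5,  3.0, 0.5 };
    double w_top[3]  = {  1.0,  1.0, -1.0 };
    double hidden[2];

    /* Two small nets process the senses... */
    hidden[0] = subnet(senses, w_a);
    hidden[1] = subnet(senses, w_b);

    /* ...and a third net processes their outputs: a network of networks. */
    printf("output: %f\n", subnet(hidden, w_top));
    return 0;
}

Evolving the weights instead of hand-picking them is where the real work would be, of course.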

Looking for a serious game project?
www.xgameproject.com
Quote:Original post by Zero-51
1) I'm just wondering: would a program that is self-conscious and self-teaching reach a limit on the knowledge it can get? Also, wouldn't it reach that limit much faster than any human, because it processes much, much faster than a normal human?

2) If it had all this information, it still wouldn't know what to do with it. So it would only be able to learn; it would not be able to use the information to effectively "learn" new processes. It would merely obtain data about the data it is getting and store it, not using the data to do anything new or understand the data.


1. Yes, it would, and it would do so quickly (compared to millions of years) if it had access to the internet (Google on steroids).

2. It would use the information to change the links between nodes. This would be like reinterpreting something, because with the extra information, a rule could be made, another changed in weighting, and nodes could be made and destroyed.

It would know the data, it would know about the data, it could understand the data!

Just imagine being able to talk with something that understood what you were saying!

User: It's snowing here
Chatbot: Where are you?
User: In the Himalayas
Chatbot: Well of course, it's always snowing in the Himalayas!
User: Why?
Chatbot: Because mountains are very high, and when you have very high things, they get very cold. And because mountains are very high, they get snow when it rains.
User: How do you know it rains in the Himalayas?
Chatbot: Because it rains in places that are part of the surface of the earth, and the Himalayas are a part of the surface of the earth.
User: How do you know what is on the surface of the earth?
Chatbot: Cities are built on continents. Continents are built on tectonic plates. Tectonic plates are on the surface of the earth.
So, cities are on the surface of the earth.

!!!
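That last answer is just chaining stored facts together. A minimal sketch of that kind of transitive inference, with a fact table made up to mirror the dialogue:

#include <stdio.h>
#include <string.h>

/* Facts of the form: X is-on Y, mirroring the chatbot's reasoning. */
struct fact { const char *thing; const char *on; };

static const struct fact facts[] = {
    { "cities",          "continents"       },
    { "continents",      "tectonic plates"  },
    { "tectonic plates", "surface of earth" },
};

/* Follow "is-on" links until we reach the target (or run out of facts). */
static int is_on(const char *thing, const char *target)
{
    size_t i;
    if (!strcmp(thing, target))
        return 1;
    for (i = 0; i < sizeof facts / sizeof facts[0]; i++)
        if (!strcmp(facts[i].thing, thing))
            return is_on(facts[i].on, target);
    return 0;
}

int main(void)
{
    if (is_on("cities", "surface of earth"))
        printf("So, cities are on the surface of the earth.\n");
    return 0;
}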

From,
Nice coder
Quote:Original post by Max_Payne
I don't believe such a program would ever work. Why? Because consciousness is more than a program. Consciousness requires permanent learning. We are conscious because of the nature of our brain, that is, the fact that it's a neural network of a large magnitude. I believe no other structure than a neural network could really meet the requirements for consciousness. Please note, I'm not going with the "magic" theory here. I just think you're going in the wrong direction. I would rather simulate a large neural network made of smaller neural networks... and try to implement something that behaves like a basic animal with that, and eventually evolve the system up to the point where it becomes conscious.

A huge knowledge bank would just be a huge knowledge bank. You could implement as many rules as you want; it still wouldn't work. Just think about this: how do you write an algorithm to understand the meaning of a sentence? How do you encode the meaning of a sentence? How do you use the meaning of this sentence? No algorithm could ever do this, because it's not the kind of work that can be performed by an algorithm.

You can't really divide consciousness into a set of tasks and simple processes that repeat themselves. It just doesn't work that way. Because consciousness is natural, and your approach is not.


1. Sorry about the double post.

2. Why do you think that the neural net is the only way?

3. The meaning of sentences would be quite simple, because this bot would only converse in a language which was made so that it can understand everything that is said. It would know how the words were put together, what the words themselves were, etc. (See the sketch below.)

4. That doesn't quite make sense. How can natural things be different from non-natural things? How can using a non-natural approach stop consciousness from forming?
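A minimal sketch of parsing such a made-for-the-bot language, assuming (my assumption, not the post's) a rigid subject-verb-object word order so that every sentence decomposes the same way:

#include <stdio.h>
#include <string.h>

/* In a rigid subject-verb-object language, finding the structure of
 * a sentence is just splitting it into three slots. */
int main(void)
{
    char sentence[] = "snow covers mountains";
    char *subject = strtok(sentence, " ");
    char *verb    = strtok(NULL, " ");
    char *object  = strtok(NULL, " ");

    if (subject && verb && object)
        printf("subject=%s verb=%s object=%s\n", subject, verb, object);
    else
        printf("Not a valid sentence in the controlled language.\n");
    return 0;
}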

Zielfreig: What difference would parallel processing make to consciousness?

From,
Nice coder

This topic is closed to new replies.
