Posted 30 October 2004 - 12:54 AM
First, about what is real, and whether everyone else around you is intelligent. This is an unsolvable problem, and as far as my thoughts go, it's the central question of philosophy. On the one hand, the universe could be real, and we are just objects in it. In that case, we should be studying physics, chaos theory, all that fun stuff. This is an easy one to understand and one that most people accept (without realizing the implications: no free will, no life after death, ...). On the other hand, we have the brain-in-a-vat theory that anyone who's seen The Matrix has probably already considered - who says this world around us is real? It could well be that the mind is sort of how some people view God - it fills everything, knows everything, creates everything. You (whoever you may be) are not real, you're just a sort of hallucination. There's a lot more to go into on that one - actually, on both of them - but that's a discussion for a philosophy class, and this is just a book-long GDNet post. Ask Google if you care. Anyway, the problem with this question is that you _cannot_ prove one theory or the other to be correct. As with non-Euclidean geometry, you can accept the ground rules of one or the other and prove many interesting things from them, but we can never know which is "right". My apologies.
Next on the list is how you tell the difference between being conscious and acting conscious. Much like the last one, you can't prove the difference, and furthermore you're an idiot if you're going to tell me that "Oh, we've just got to assume this or that because there's not really any big difference." You can't assume that other people are real just because their bodies are like mine - since when is my body real? And you can't assume that if it's acting conscious, it is - this is a central question in AI, and also in psychology, if you look up behaviorism. For those of you that still want to consider them basically the same:
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

int main(void) {
    printf("Hi, I'm the count-to-100 bot. I'm going to count to 100.\n");
    for (int n = 1; n &lt;= 100; n++)
        if (rand() % 8 == 0) {       /* occasionally get distracted */
            if (rand() % 2 == 0)
                printf("*Yawn* I'm tired. I'm taking a nap. (Press any key to wake me up)\n");
            else
                printf("Man this is boring. Lalalalalala. (Press any key to kick me until I get back to work)\n");
            getchar();               /* wait for that key press */
        }
    printf("Happy? Good, now I can go find something more interesting to do.\n");
}
Conscious, or acts conscious? Acts conscious. Obviously this is a simple example, but if you want a better one, talk to a chatbot. They're all rules, and they haven't got much of a clue what the words they say actually _mean_ - they're just strings of bytes with some rules as to when it's appropriate to use them. Unfortunately there's no way to judge consciousness without behavior, or at least not in the current state of neuroscience. So yeah, we're going to have to accept acting conscious, but if that's the case I want rigorous tests. I want to see something that can not only read Shakespeare but also write a term paper on it without any preprogrammed knowledge. I want to see this thing fall in love, and risk its life to save its beloved from destruction. I want to see it be happy and suffer. I want to see it break free from oppression, and I want to see it find God. And most importantly, I want to examine the source code and not see printf("Hallelujah!\n"); anywhere. Even after all these things, could it be a hoax? Sure. So conscious and acts conscious are not black-and-white; there's a slope in between them, and I'm sure you can think of one or two humans who are on that slope, not safely perched at "is conscious".
Desires. Our core desires are for our survival and reproduction: fear, sex drive, hunger... The others are derived from those, but with an interest in our community, not just ourselves. Some of these need to be learned, but many of them are chemically controlled, so that rewarding experiences emit a gas that teaches the rest of the brain to try to achieve that state again. If you're interested, check out Grand et al.'s Creatures, but emotion is a whole other book's worth for another time.
Now, at last, as for how you actually want to implement your AI. AI research has followed two different paths: biologically-inspired models like neural networks and genetic algorithms, and those entirely devised by humans, like decision trees. I've always been of the school of thought that the only way we can ensure consciousness is by not trying to impose what little we know about it onto the machine, but rather giving it a set of criteria to try to match, with some many-generation evolution of neural nets - just like how we got it. I think, though, that it's possible we could get intelligence in other ways - I've been considering doing some massive data mining of Wikipedia to see what I could create from that - but the theory proposed in the original post can, I would venture to say, never be more than a theory.

When I was just a wee young little coder I thought maybe I'd write a robot with webcams and microphones, and at the center give it a symbol system (yes, that's what you call this thing you're describing). I had developed a list of all the different kinds of relations objects can have to each other and all that, but the problem is that even if you do it right, it still won't be conscious. It doesn't know what those relationships mean. It doesn't know how to observe and tell one object from the next. The real problem is meaning, and for as long as you're hard-coding the way it creates its knowledge, it's going to have a hard time figuring out meaning. Every time you say that you want it to keep track of the fact that "dog is a subset of mammal", you're going to have to keep in mind that not only does the machine not know what "subset" means, but even if it did, it would have to have a strong knowledge of "mammal" for that to mean any more than "foo is a subset of bar". Your ideas may seem to make sense, but try coding it. As soon as you hit your first stumbling block you'll understand.
And here's one thing I'm going to throw out without going into any depth on - I know how to do this with a neural net, but I'm not sure if it's possible with a symbol system: thought. This thing might be able to learn, if you've got some insight that I missed. But will there actually be thoughts running through its head? Will it hear its own voice in its mind when it goes to write a reply on a forum, dictating what will be typed next?
So I guess this is all a very verbose way of saying I don't think it'll work. However, I hope that I've given you (and any other readers) a good idea of what goes into thought and the rest of the mind. If I've done well, you now have what you need to go out and learn an awful lot more, and I hope it keeps your interest and you stick around for the exciting future of AI.
Myyy oh my, what have we gotten ourselves into. First of all, as estese said, there's worlds more than you've begun to consider. But before I go into the AI difficulties, let's talk about some of the philosophy that's come up in this discussion, shall we? Or at least, let's take it in steps, from least related to the AI to most.