Creating A Conscious Entity

130 comments, last by Nice Coder 19 years, 2 months ago
The problem is not so much that your bot would be incapable of doing anything, because if you look at the chatbots out there, they certainly can have impressive conversations. The problem is that you used the C-word. While it's slightly less ambiguous than other words you encounter doing this sort of stuff (try "soul" some time), it still *basically* doesn't mean anything. We can't define consciousness; we merely presume we would know it if we saw it - which makes it quite difficult to code. If you just want a program that knows it exists, try this: create a file, conscious.bat, with the following text

dir conscious

and run that program. What do you find? This program knows of its own existence! The difficulties in getting real consciousness are way beyond the scope of any post I could write, even in three books' worth. Yes, you can gather a lot of information, plug it into a hardcoded grammar engine, and have something that speaks to you in English and even does a pretty good job.

But as long as you're aiming for consciousness, you're going to have a hard time doing this with a symbol system. The brain plays by the rules, but the mind doesn't. Neural nets can capture that; this approach - and believe me, I've tried - just can't. If you want to figure out why, try heading over to Amazon and picking up a copy of Gödel, Escher, Bach: An Eternal Golden Braid. Even if you don't want to figure out why, read that book. It'll show you the scope of the problem you're trying to tackle, and then maybe you can figure out your own way of dealing with it.
Quote:Original post by jtrask
The problem is not so much that your bot would be incapable of doing anything, because if you look at the chatbots out there, they certainly can have impressive conversations. The problem is that you used the C-word. While it's slightly less ambiguous than other words you encounter doing this sort of stuff (try "soul" some time), it still *basically* doesn't mean anything. We can't define consciousness; we merely presume we would know it if we saw it - which makes it quite difficult to code. If you just want a program that knows it exists, try this: create a file, conscious.bat, with the following text

dir conscious

and run that program. What do you find? This program knows of its own existence! The difficulties in getting real consciousness are way beyond the scope of any post I could write, even in three books' worth. Yes, you can gather a lot of information, plug it into a hardcoded grammar engine, and have something that speaks to you in English and even does a pretty good job.

But as long as you're aiming for consciousness, you're going to have a hard time doing this with a symbol system. The brain plays by the rules, but the mind doesn't. Neural nets can capture that; this approach - and believe me, I've tried - just can't. If you want to figure out why, try heading over to Amazon and picking up a copy of Gödel, Escher, Bach: An Eternal Golden Braid. Even if you don't want to figure out why, read that book. It'll show you the scope of the problem you're trying to tackle, and then maybe you can figure out your own way of dealing with it.


Consciousness would be a very good thing (tm). But I'd be really impressed if I could find/make a self-aware or intelligent entity.

What I really want to know is: what makes neural nets so special?
All they do is a couple of multiplies and a sigmoid function... not all that much (see the sketch below).
Maybe self-growing ANNs? Now how would that work?
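
(For what it's worth, a single unit in a plain feed-forward net really is just that: a few multiplies, a sum, and a sigmoid. Here's a minimal Python sketch; the weights and numbers are made up purely for illustration.)

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One unit: a weighted sum (the 'couple of mults') squashed by a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Made-up numbers, just to show the computation.
print(neuron([0.5, 0.2, 0.9], [1.5, -2.0, 0.4], 0.1))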

From,
Nice coder
Simple neural nets like you've described really aren't that special, except that their knowledge isn't hard-coded; it's... more or less... learned. What's more impressive are recurrent nets, which can trace patterns over time, and GasNets (and others with similar intentions), which can have, in some rudimentary form, "desires" and, debatably, "emotions". The other cool thing about GasNets is that they can meta-learn, since they can make decisions on their own about what training signals to give off. Simple feed-forward backprop nets are nice, but you're right, they don't change the world. Get out there and see what other kinds of nets there are... try finding Grand's explanation of the brains in Creatures; they're really quite cool.
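
To show what "tracing patterns over time" means in the simplest possible terms, here is a one-neuron recurrent sketch in Python (my own toy illustration; the weights are arbitrary). The hidden state carries over from step to step, so the same input can produce different outputs depending on what came before it:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_sequence(inputs, w_in=0.8, w_rec=1.5, w_out=2.0):
    """A one-neuron recurrent net: the hidden state h persists between steps."""
    h = 0.0
    outputs = []
    for x in inputs:
        h = sigmoid(w_in * x + w_rec * h)   # new state mixes the input with the old state
        outputs.append(sigmoid(w_out * h))
    return outputs

# The two 0.0 inputs get different outputs, because the state remembers the 1.0s in between.
print(run_sequence([0.0, 1.0, 1.0, 0.0]))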
Quote:Original post by Nice Coder

1. Yes it would, and it would do so quickly (compared to millions of years) if it had access to the internet (Google on steroids).

2. It would use the information to change the links between nodes. This would be like reinterpreting something, because with the extra information a rule could be made, another changed in weighting, and nodes could be made and destroyed.

It would know the data, it would know about the data, it could understand the data!


OK, let's say your bot knew about geology, and it knows that geology is the study of the Earth. It also knows some data that would fall under the label of geology.
If I were to ask it: "Do you think that geology is different than biology?" would it answer: "Geology is the study of the Earth and its components, and biology is the study of living things; therefore they are two different things." or would it answer: "Geology is different than biology."?

Also, let's say that it did not know about, say, the process of melting down aluminum, and you were to ask it: "Do you know if geology has anything to do with the melting temperature of aluminum?" Would it be able to answer that question with the response "I think that it may not," or with "I don't know anything about that"? If it were sentient it would most likely not say that it does not know about the information; it would try to give its best guess, because intelligent things can create new ideas on their own.

Also, would it be capable of abstract thinking?
Meaning: would I be able to ask it what would happen if I were to stomp my foot on the ground, and have it not pay attention or return an answer about geology - where the answer is not the product of an error, just the AI not listening to what I'm talking about or asking it about?
Quote:Original post by Zero-51
Quote:Original post by Nice Coder

1. Yes it would, and it would do so quickly (compared to millions of years) if it had access to the internet (Google on steroids).

2. It would use the information to change the links between nodes. This would be like reinterpreting something, because with the extra information a rule could be made, another changed in weighting, and nodes could be made and destroyed.

It would know the data, it would know about the data, it could understand the data!


OK, let's say your bot knew about geology, and it knows that geology is the study of the Earth. It also knows some data that would fall under the label of geology.
If I were to ask it: "Do you think that geology is different than biology?" would it answer: "Geology is the study of the Earth and its components, and biology is the study of living things; therefore they are two different things." or would it answer: "Geology is different than biology."?

Also, let's say that it did not know about, say, the process of melting down aluminum, and you were to ask it: "Do you know if geology has anything to do with the melting temperature of aluminum?" Would it be able to answer that question with the response "I think that it may not," or with "I don't know anything about that"? If it were sentient it would most likely not say that it does not know about the information; it would try to give its best guess, because intelligent things can create new ideas on their own.

Also, would it be capable of abstract thinking?
Meaning: would I be able to ask it what would happen if I were to stomp my foot on the ground, and have it not pay attention or return an answer about geology - where the answer is not the product of an error, just the AI not listening to what I'm talking about or asking it about?


ok...

The third thing (not listening) would be a bit counterproductive (because it is a chatbot, and it chats to humans).

Getting it to randomly change the subject could be implemented (but it probably won't be good for it).

As for #1, it would probably answer "They are two different things". Then if you asked it why, it would tell you.

As for #2, it would probably answer "No, they are unrelated", because it would probably know what aluminium is (bauxite ore -> aluminium) and know that it has no connection to "geology", which would just be a node. Isn't data mining great? [grin]
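
To illustrate the nodes-and-links idea concretely (this is only a minimal Python sketch of the general approach, not the actual bot): if two concepts sit in separate link clusters of the database, a simple graph search reports them as unconnected, which is where an answer like "No, they are unrelated" could come from.

from collections import deque

# Toy knowledge base in the node-and-link style: each node maps to the nodes it links to.
links = {
    "geology": {"earth", "rocks"},
    "biology": {"living things"},
    "bauxite ore": {"aluminium"},
    "aluminium": {"melting temperature"},
}

def related(a, b):
    """Breadth-first search over the links: is there any path from node a to node b?"""
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return True
        for nxt in links.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(related("bauxite ore", "melting temperature"))  # True: bauxite ore -> aluminium -> ...
print(related("geology", "melting temperature"))      # False: no link chain joins them
print(related("geology", "biology"))                  # False: two separate clusters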

From,
Nice coder
Quote:Original post by Max_Payne
I don't believe such a program would ever work. Why? Because consciousness is more than a program. Consciousness requires permanent learning. We are conscious because of the nature of our brain - that is, the fact that it's a neural network of a large magnitude. I believe no other structure than a neural network could really meet the requirements for consciousness. Please note, I'm not going with the "magic" theory here. I just think you're going in the wrong direction. I would rather simulate a large neural network made of smaller neural networks, try to implement something that behaves like a basic animal with that, and eventually evolve the system up to the point where it becomes conscious.

A huge knowledge bank would just be a huge knowledge bank. You could implement as many rules as you want; it still wouldn't work. Just think about this: how do you write an algorithm to understand the meaning of a sentence? How do you encode the meaning of a sentence? How do you use the meaning of this sentence? No algorithm could ever do this, because it's not the kind of work that can be performed by an algorithm.

You can't really divide consciousness into a set of tasks and simple processes that repeat themselves. It just doesn't work that way, because consciousness is natural and your approach is not.
I think neural nets are not the only way to create an intelligent entity - another way is to create a "logic machine" which keeps track of the "action potentials" for doing different things, and follows the course with the highest one. I'm not sure how that would work, but I think if AI is done in our generation, it will not be done with neural nets. Once you have a logic machine, it's easy for it to know that it exists, almost trivial actually.
“[The clergy] believe that any portion of power confided to me, will be exerted in opposition to their schemes. And they believe rightly: for I have sworn upon the altar of God, eternal hostility against every form of tyranny over the mind of man” - Thomas Jefferson
Quote:Original post by Tron3k
I think neural nets are not the only way to create an intelligent entity - another way is to create a "logic machine" which keeps track of the "action potentials" for doing different things, and follows the course with the highest one. I'm not sure how that would work, but I think if AI is done in our generation, it will not be done with neural nets. Once you have a logic machine, it's easy for it to know that it exists, almost trivial actually.
Neural nets are logic machines. The theory behind them is based (partially) on a model of the human brain. Neurons interact by building up a potential to fire and then firing if the potential exceeds a threshold. This isn't exactly what you're talking about, but is similar.
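
For comparison, here is a minimal Python sketch of both ideas (my own toy code, not anyone's actual design): a unit that fires once its accumulated potential crosses a threshold, next to a "logic machine" that simply follows whichever action currently has the highest potential.

def fires(inputs, weights, threshold=1.0):
    """A threshold unit: build up a potential from weighted inputs and fire if it exceeds the threshold."""
    potential = sum(i * w for i, w in zip(inputs, weights))
    return potential > threshold

print(fires([0.9, 0.7], [1.0, 0.8]))   # potential 1.46 > 1.0, so it fires
print(fires([0.2, 0.1], [1.0, 0.8]))   # potential 0.28, it stays quiet

# The "logic machine" idea reduced to its core: keep a potential per possible
# action and follow the course with the highest one. The numbers are invented.
action_potentials = {
    "answer the question": 0.8,
    "ask for clarification": 0.3,
    "change the subject": 0.1,
}
print(max(action_potentials, key=action_potentials.get))   # -> answer the question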

How are the actions specified? By humans? That isn't viable; there are too many possible actions.

Plus, how is it trivial for a logic machine to know it exists?
Well, this is (sort of) a logic machine.

It mines data (thinks, makes new connections)
It collects data
It outputs data
It has rules (Bayesian, from 0.0 to 1.0, which allows fuzzy rules)
It has a database (nodes, links, and rules)

I would say that it is a logical machine.
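
Just to pin down what that could look like, here is a rough Python sketch of my own (not the actual implementation): nodes and links stored with weights between 0.0 and 1.0, plus one fuzzy rule that "mines" a new connection.

# A toy version of the "nodes, links, and rules" database; weights run from 0.0 to 1.0.
links = {
    ("geology", "earth"): 0.9,    # geology is (strongly) about the earth
    ("earth", "rocks"): 0.8,      # the earth is (fairly strongly) about rocks
    ("biology", "living things"): 0.95,
}

# One fuzzy rule: if A links to B and B links to C, infer a link from A to C
# with a confidence equal to the product of the two link weights.
def infer(a, b, c):
    return links.get((a, b), 0.0) * links.get((b, c), 0.0)

# "Mining" step: add the new connection if the inferred confidence is high enough.
confidence = infer("geology", "earth", "rocks")
if confidence > 0.5:
    links[("geology", "rocks")] = confidence   # a new connection was "mined"

print(links[("geology", "rocks")])   # roughly 0.72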

Also, why would something need to be a neural net to be conscious?
What are the reasons behind that line of thought?

From,
Nice coder
Quote:Original post by Nice Coder
Well, this is (sort of) a logic machine.

It mines data (thinks, makes new connections)
It collects data
It outputs data
It has rules (Bayesian, from 0.0 to 1.0, which allows fuzzy rules)
It has a database (nodes, links, and rules)

I would say that it is a logical machine.

Also, why would something need to be a neural net to be conscious?
What are the reasons behind that line of thought?

From,
Nice coder
Anything based on logical rules could be called a logic machine :)

I haven't read all the posts in this thread but I'm not sure why anyone would say that neural nets are the only possible way to evolve/create consciousness. I'm not sure that anyone has a firm enough grasp on the essence of consciousness to make such a bold statement.

Consciousness Explained, by Daniel C. Dennett
http://en.wikipedia.org/wiki/Consciousness_Explained
http://www.amazon.com/exec/obidos/tg/detail/-/0316180661/104-6934701-6782354

Orczillabot

This topic is closed to new replies.
