Creating A Conscious Entity

Started by
130 comments, last by Nice Coder 19 years, 2 months ago
This is the hard part.

Luckily we have data mining to help us :)

We basically use statistics to help us.

We calculate what things happen, and the things that precede them.
We then calculate how much of a difference each individual thing made to causing what happened.

We should get some peaks; those are our major causes.

We could basically take the average and the standard deviation, and find the things which deviate by more than one standard deviation.

When something does, and is more than one standard deviation on the positive side, we have our causes. On the other side of the scale, we have our inhibitors.
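The cause/inhibitor split described above can be sketched roughly as follows. This is a minimal illustration, not a real data-mining pass: the `Candidate` struct, its `name`, and the precomputed `influence` scores are assumptions, standing in for whatever co-occurrence statistics the knowledge base actually collects.

```cpp
#include <cassert>
#include <cmath>
#include <string>
#include <vector>

// Hypothetical record: how strongly each candidate event that preceded
// the outcome appears to affect it (assumed precomputed elsewhere).
struct Candidate {
    std::string name;
    double influence;
};

struct Split {
    std::vector<std::string> causes;      // > mean + 1 std dev
    std::vector<std::string> inhibitors;  // < mean - 1 std dev
};

// Flag candidates more than one standard deviation above the mean as
// causes, and more than one below as inhibitors, as the post describes.
Split split_by_std_dev(const std::vector<Candidate>& cs) {
    Split out;
    if (cs.empty()) return out;

    double mean = 0.0;
    for (const auto& c : cs) mean += c.influence;
    mean /= cs.size();

    double var = 0.0;
    for (const auto& c : cs) var += (c.influence - mean) * (c.influence - mean);
    double sd = std::sqrt(var / cs.size());

    for (const auto& c : cs) {
        if (c.influence > mean + sd) out.causes.push_back(c.name);
        else if (c.influence < mean - sd) out.inhibitors.push_back(c.name);
    }
    return out;
}
```

The one-standard-deviation cutoff is just the threshold the post suggests; a real miner would likely tune it, or use a significance test instead.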

Of course, everything in the base would have to be fuzzified, so there would be no learned "hard-and-fast" rules, which are just ordinary logic.

Although we could take rules which are very probable (as in 99.99999...% probable) and turn them into normal logical rules (to speed up processing).

We should look into other data mining algorithms to find one which would be useful.
From,
Nice coder
Click here to patch the mozilla IDN exploit, or click Here then type in Network.enableidn and set its value to false. Restart the browser for the patches to work.
Quote:Original post by Nice Coder
perhaps a lowering of the bar would be nice?

How about creating an entity which believes that it is conscious?

From,
Nice coder


I see too many people get hung up on "real" vs "it only appears to".
Is the sky blue? Well, not really. It appears blue, but there are other colors there that we cannot see (below the blue frequency). (Analogy warning! The sky may not be blue at all in your area.)

So if it's not... does it really matter? What would be the difference between something that is conscious and something that by all possible tests seems to be conscious?

About those statistics... they look a lot like what I suggested at the beginning of the thread :S (except for the cause deduction... that's interesting and could be improved and generalised)

edit: reading that, I realised a proper interface for my markov model (that had me stumped)
how about this?
list<list<token>> predict(float threshold = 0.75);

The ugly list<list<token>> is better understood if you use a basic token, like a char. Then list<token> is a string, and list<list<token>> is a list of strings. Tada! So this would return a list of strings that are predicted to follow, all with a probability over the 0.75 default, which is overridable.
Pretty comfy, methinks, and a similar interface could be used for deduct, which is about the same thing but not quite (conditionals!).
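A toy stand-in for that predict interface might look like the sketch below. Everything here is hypothetical: the `ToyModel` class is made up, probabilities are hand-fed rather than learned from transition counts, and `std::vector<std::string>` plays the role of `list<list<token>>` with `token = char`.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Toy stand-in for the thread's markov model. A real model would
// estimate each continuation's probability from observed transitions;
// here they are simply supplied.
class ToyModel {
public:
    void observe(const std::string& continuation, double probability) {
        table_[continuation] = probability;
    }

    // Return every predicted continuation whose probability clears the
    // threshold (default 0.75, overridable, as the proposed interface has it).
    std::vector<std::string> predict(double threshold = 0.75) const {
        std::vector<std::string> out;
        for (const auto& [cont, p] : table_)
            if (p >= threshold) out.push_back(cont);
        return out;
    }

private:
    std::map<std::string, double> table_;
};
```

Using `std::vector` instead of `std::list` is just idiomatic modern C++; the shape of the interface (strings in, thresholded strings out) is the point.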

So anyways.. this is a pretty long edit. I'll get back to the templatized model this weekend. Probably. Don't quote me on that.
Working on a fully self-funded project
To make any true AI, you really need to give it a goal. I've come to this conclusion after many years of contemplating this topic. The goal that seems to work best for anything that displays some degree of consciousness is survival. An excellent example of this is a new field of robotics called BEAM robotics (can't remember what BEAM actually stands for).

These robots' intelligence was accidentally created. The story goes that the creator of these robots got sick of his cat tricking his programmed robots (designed for cleaning his house) into thinking it was furniture, so he made supposedly dumb robots with no programming besides capacitors and one or two gates, strapped a solar panel to their backs to power them, and placed a grill under his desk where the robots would clean their feet (they were meant to walk around randomly). After several days he found that the robots hadn't been on the tray yet, and on closer inspection found that only the areas with a bright light source were cleaned. Even though there was no programming, the robots wanted to "live", and so they stuck to the places with bright lighting, which produced the strongest current in their solar panels. Since all they needed to live was energy, they never went into the dark areas.

I'm not sure how you'd implement survival in the system you were planning, but often some of the simplest things can produce surprising results (as the example above demonstrates so well).
In BEAM robots, survival means keeping the juice flowing. Since they're directly wired to the outputs, if you wire it correctly, it just works.

In an AI, I'd make survival mean keeping some hidden value over a certain level. This would provide the AI's main drive. The question remains of what should drain or replenish the value; it depends on what you want the AI to do.
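The hidden survival value could be sketched as below. This is a minimal illustration under assumptions: the class and method names (`Agent`, `drain`, `replenish`, `needs_energy`) are invented, and the critical level is a designer-chosen constant; what actually drains or replenishes the value is left open, just as the post says.

```cpp
#include <cassert>

// Sketch of the hidden "survival" value: the agent's drive is to keep
// `energy` above a critical level. The sources and sinks of energy are
// up to the designer and depend on what you want the AI to do.
class Agent {
public:
    explicit Agent(double energy = 1.0, double critical = 0.2)
        : energy_(energy), critical_(critical) {}

    void drain(double amount)     { energy_ -= amount; }
    void replenish(double amount) { energy_ += amount; }

    // The main drive: when this is true, the agent should act
    // to restore its energy before doing anything else.
    bool needs_energy() const { return energy_ < critical_; }

    double energy() const { return energy_; }

private:
    double energy_;
    double critical_;
};
```

Like the BEAM robots' solar panels, the interesting behavior would come from wiring `drain` and `replenish` to the agent's actual inputs and outputs.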
Why not just have an off button xD
I'm sorry, Dave, but I can't let you do that...

:) It is all about motivation. If we program this bot with a very very strong motivation to learn, then it will do everything to try and learn new things.

From,
Nice coder

Just read a great book on the subject, called On Intelligence by Jeff Hawkins. Hopefully someone didn't already suggest this.

~SPH

AIM ME: aGaBoOgAmOnGeR
Nobody did. What were the interesting things in it?

From,
Nice coder
Oook. A Big Iron AI.

For your information, this kind of project is roughly being done.
There is a project that is attempting to gain the sum of human knowledge.

I forget the link, it's been a year since I saw the project. But the idea is people online are asked to put in a piece of common sense data and the logic engine tries to make a connection.


I'm looking into NLP right now, in preparation for doing a Master's in it, so I feel comfortable contributing here. I'm also taking a senior-level AI course.
What I know about big AI: the theory really isn't there.
We can't do much beyond simple document analysis. How are you going to handle non-domain-specific topics?

Now, here's my idea.
Create a "baby". Much research has gone into making adult AIs. Has someone started with a "blank slate" and attempted to train elementary facts into a neural net?


Why are neural nets considered better than logic engines?
Because they are i) nat-cher-al, and ii) a great way to represent fuzzy logic engines. They are also very simple, unlike logic structures.
Of course, training non-trivial neural nets is fairly difficult. (I did research last summer on recurrent classical neural nets.)

Let me ask you guys this question:
Situation: Hospital where a man has just died. He has enough chemicals and poky things in him that the body will not deteriorate, but his heart just gave out after that last McFat burger. It is evident that Mind Has Left The House.

Why won't he come alive if a doctor puts a fake heart in?

This is the reverse question, really.

What is death, and how do we go from death to life?
~V'lionBugle4d
Hi. This is my first post (yeah, weird place to jump in, but what the heck, eh?), so as seems to be the general viewpoint here on your own posts: it's here, it's queer, get used to it. Don't assume anything I say is fact, but realize that I might get lucky on something and have it be.

This post is strictly about AI, based on the only current knowledge base of intelligence that we have (ourselves). I believe that a bottom-up form of AI creation is the only plausible way to create something in our own image. However, as has already been stated, the hardware is nowhere near ready for such an undertaking yet, so all we can do is rub our hands together and plan out how we think it will go when it is.

I agree with the idea that all living things operate on a function basis. All living things have a specific function that they are hard-coded to do. That's why (imo) one can't answer the question "Why are we here?" (existentially). We are simply here. It is what is. Following that line of thought, I believe that the primary drive of living things is to be. Not necessarily as a singular entity, but as a whole entity. That's why bees sting you, even though they die soon after. Or why a person might sacrifice their life to save their offspring.

If we assume that this is the primary drive of a living thing, we can also draw secondary drives from this: the need to stay alive, the need to procreate, etc. And from these we derive third-level drives: the need to keep our environment in the best possible state for our survival (this does not exclude our physical body, which is the inner environment of our consciousness; this will include hunger/thirst/sleep drives, etc.). This branching tree grows in this way to encompass the need to get up and go to class on time, or the need to bring your girlfriend flowers when you screw up. Yes, imo I do classify that as a need, however insignificant.

So, to weigh in on the ongoing debate over something like an answering machine, which reacts to a prespecified change in its environment with a prespecified action: I would agree that all behavior works this way. In my opinion, all action is only a reaction to the action before it, so that from some unspecified point at time=0, action->reaction->reaction->reaction->reaction->reaction->... to infinity.
This includes thought. All thoughts are, on a basic level, internal reactions to external input via our sensory perceptions. So, on a basic level, I think that is why no one can say whether or not they themselves are the only true intelligent being, and everyone and everything else is only a high-level bot. Because we all have the common denominator that everything we know has been taught to us. My newborn son doesn't know yet that removing skin cells from his arm using his fingernails (scratching) can cause an unpleasant chemical to be released into his body, which he would perceive as pain. He has yet to experience it, so he doesn't mind scratching anything. However, anyone reading this post will undoubtedly agree that while removing a few cells is rewarding (which my son knows, since he does it now), removing a lot at a time hurts. The same scenario can be drawn from the analogy of falling. My newborn doesn't know that falling off the couch hurts, until he does it enough.

So from all this rigamarole that you at best only skimmed through, I draw my proposal that all knowledge is learned, and that all drives stem from a primary drive.

Finally, regarding language, I ask this: can an entity be intelligent with no way of another entity perceiving it? The obvious answer is yes, though it wouldn't be much good. If a star implodes somewhere too far away for us to know about it, did it actually happen? Yes, but we don't care, and have no reason to care, nor any basis on which to ask the question if we didn't know about it in the first place, lol.

Thus, I would propose that all living things have a form of input and output. Some have more than others. In humans, our input is our senses. So, our only means of output can be what is able to be input via our senses. So, our means of communication are learned, not preprogrammed into us. We experiment with various ways to get our environment to a certain desired state by doing things that we can interpret via our own senses. So, any particular output means something to us. If it achieves our desired change in the environment, we assume a connection between this output and the idea we wanted to get across.

Language, for example, follows this trend, if we remember that spoken words are only various sound waves that our brain interprets to mean something based on what it learned that particular sound to mean. So we learn how to manipulate our environment. We learn to associate how light is reflected into our eyes with objects we assume from extensive testing to be there, and where these objects are relative to our bodies. Etc., etc...
So I conclude that language is only i/o, and not necessarily an efficient measure of intelligence. However, what good is an AI if it can't interact with us? Not a lot, so it needs ways to receive information about its environment and ways to send out information.

In regards to AI, I think it is erroneous to believe that the only form of intelligence classifiable as such is one greater than or equal to our own. All knowledge must be learned, but from what source? Why is learning from a source within its own environment any better than learning from a source outside it? However, it does seem a bit cheaty to hard-code the answers in, lol. But as any programmer will tell you, it matters not how something works, but whether it works and how well.

Kudos, on an exceptional forum you guys have going here. I love this sort of rhetoric, and don't have anyone around to debate with. So it has been fun reading the ongoing idea so far. Keep it rollin'!

Imho, it is not the end result that matters, but what you gain from getting there. This topic may or may not be of any consequence now, but it is still fun and interesting to talk about. Besides, how else will new ideas come about, except from talking and thinking about current ones in a way that no one has thought yet.

