Creating a Conscious Entity

Started by
130 comments, last by Nice Coder 19 years, 2 months ago
Whoa, I thought this thread was dead, and here it is =D

To the angry AP back there and those who come after: if you don't like it, don't read it. The web is not push media, and even if it was, you can always change the channel.

Quote:Original post by Xior
For curiosity, you could just encourage it to "like" positive results: have it put more power into researching positive results, and give it an initial programmed "curiosity" that would branch out based on positive results.

Interesting bit about creativity. Yeah, I consider it emergent behavior too; it could stem from both positive reinforcement and needs, and should rely heavily on the knowledge model.

For example, when doing something, if it gets boring (too much of the same known patterns; this is measurable), then spice it up with other known stuff, for variety. Then evaluate whether the result is pleasant (contains a decent amount of predictable info, but some surprise).
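
One way to make "boring" measurable, as a quick sketch (the function names, thresholds, and toy model here are my own illustration, not anything from the thread): score a sequence by its average surprisal under a simple frequency model of what the agent already knows. Too little surprisal means the result is entirely predictable (boring); too much means it's just noise; "pleasant" sits in between.

```python
import math
from collections import Counter

def avg_surprisal(tokens, model_counts, total):
    """Average -log2 P(token) under a unigram frequency model."""
    eps = 1e-9  # smoothing so unseen tokens don't blow up
    bits = 0.0
    for t in tokens:
        p = (model_counts.get(t, 0) + eps) / (total + eps)
        bits += -math.log2(p)
    return bits / len(tokens)

def is_pleasant(tokens, model_counts, total, low=1.0, high=6.0):
    """'Pleasant' = mostly predictable but with some surprise:
    average surprisal sits between a boredom floor and a noise ceiling."""
    s = avg_surprisal(tokens, model_counts, total)
    return low < s < high

# Build the model from what the agent already "knows".
known = "the cat sat on the mat the cat sat on the mat".split()
counts = Counter(known)
total = sum(counts.values())

print(is_pleasant("the cat sat on the mat".split(), counts, total))  # familiar mix
print(is_pleasant("zebra quark nebula".split(), counts, total))      # pure noise
```

The `low`/`high` band is the tunable part: widening it makes the agent tolerate more novelty before calling a result unpleasant.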

Oh, I started to code a small template-based Markov model toolbox, but I had to stop for lack of time =/ I hope I can pick it up again soon.
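
The toolbox itself was never posted ("template-based" suggests C++), but the core of an order-1 Markov model is small; here's a minimal sketch of my own in Python, not the poster's code:

```python
import random
from collections import Counter, defaultdict

class MarkovChain:
    """Minimal order-1 Markov model over word tokens."""
    def __init__(self):
        # transitions[current][next] = how often `next` followed `current`
        self.transitions = defaultdict(Counter)

    def train(self, tokens):
        for cur, nxt in zip(tokens, tokens[1:]):
            self.transitions[cur][nxt] += 1

    def sample_next(self, token, rng=random):
        choices = self.transitions.get(token)
        if not choices:
            return None
        words = list(choices)
        weights = [choices[w] for w in words]
        return rng.choices(words, weights=weights, k=1)[0]

    def generate(self, start, length=10, rng=random):
        out = [start]
        while len(out) < length:
            nxt = self.sample_next(out[-1], rng)
            if nxt is None:  # dead end: no observed successor
                break
            out.append(nxt)
        return out

m = MarkovChain()
m.train("the cat sat on the mat and the cat slept".split())
print(" ".join(m.generate("the", length=6)))
```

Higher-order models just key the transition table on tuples of the last N tokens instead of a single token.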

Cheers, keep them brainwheels turning!
Working on a fully self-funded project
Nice idea, madster.

Perhaps we could, for example, let it be curious about things that seem to be related to things it already knows about? That way, it gets to learn adjacent information quickly, and it can start generalising away parts of the data.

From,
Nice coder
Click here to patch the mozilla IDN exploit, or click Here then type in Network.enableidn and set its value to false. Restart the browser for the patches to work.
I was thinking again. If we just did the motivation/curiosity thing based on positive results, we need to define "positive results" further. If a positive result were just anything that increases its knowledge, the machine would eventually kill everything and/or have no regard for other life in the pursuit of knowledge. We'd have to give it initial programming, something like Asimov's three laws, with a fourth law that it should improve itself based on positive results. If we give it free will to think like that, it could get out of hand; we need some control mechanism.
I was thinking something like that too.

We would need some sort of internal control system; what I was thinking of is something on the order of Asimovian robots.

We start with a goal:
To prevent human beings from coming to harm.

Now, that gets rid of the first clause in the first law. (You can't prevent humans from coming to harm when you're harming them, now can you?)

And the second law (with appropriate facts known, like that people don't like robots which don't follow orders, and that this causes a small amount of harm).

The third law would follow: if there is no robot to help humans, then that is a most definite harm to them.

Now, with the appropriate machinery in place (i.e. the learning algorithms, to allow it to form facts and make decisions), it should be possible to make one of these robots (in the chatbot sense).

Now, the major stumbling block is how to stop the rulebase from making irrelevant conclusions...

Maybe assign objects a priority?
So the heart support monitor would have a high priority, but the sixth petal of the flower next door would have a low priority.

Priorities would mostly be supplied by humans, but could be calculated from prior knowledge of what was important.

When updating the rulebase, it makes rules and analyses objects which have a high priority before looking at those with a low priority.

This should happen continuously. (I.e. there should always be some rule to worry about...)
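
The update loop above maps naturally onto a max-priority queue; here's a bare sketch (class and method names are mine, and "deriving a rule" is stubbed out, since the thread hasn't settled on a rule format):

```python
import heapq

class RuleBase:
    """Analyse high-priority objects before low-priority ones.
    heapq is a min-heap, so priorities are stored negated."""
    def __init__(self):
        self.queue = []

    def add(self, obj, priority):
        heapq.heappush(self.queue, (-priority, obj))

    def update_step(self):
        """One step of the continuous loop: take the most important
        pending object and (here, trivially) derive a 'rule' about it."""
        if not self.queue:
            return None
        neg_p, obj = heapq.heappop(self.queue)
        return f"rule about {obj} (priority {-neg_p})"

rb = RuleBase()
rb.add("heart support monitor", 100)                 # human-supplied priority
rb.add("sixth petal of the flower next door", 1)
rb.add("front door lock", 50)

print(rb.update_step())  # heart support monitor comes out first
print(rb.update_step())
```

In a real continuous loop, analysing an object would push newly noticed objects back onto the queue, so there is always "some rule to worry about".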

Also, there should be generalisations for closely related objects.

For example, if cockroach I-24 lives in the cupboard under the stairs (because the robot doesn't know that they are bad), it should assume that cockroach I-25 lives there too, because it doesn't know any better.

It should also be corrected as soon as it learns better (for example, by seeing I-25), without the usual conflicting-rules resolution.
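
The cockroach example amounts to default inheritance with exceptions: facts about an individual fall back to what's known about its class, and a direct observation simply shadows the default, so no conflict-resolution pass is needed. A sketch (the data model is my own, illustrative only):

```python
class KnowledgeBase:
    """Facts about individuals fall back to class-level defaults;
    a direct observation shadows the default without any conflict step."""
    def __init__(self):
        self.class_defaults = {}   # (class, attribute) -> value
        self.observations = {}     # (individual, attribute) -> value
        self.classes = {}          # individual -> class

    def generalise(self, cls, attr, value):
        self.class_defaults[(cls, attr)] = value

    def observe(self, individual, attr, value):
        self.observations[(individual, attr)] = value

    def lookup(self, individual, attr):
        # A concrete observation always wins over the generalisation.
        if (individual, attr) in self.observations:
            return self.observations[(individual, attr)]
        cls = self.classes.get(individual)
        return self.class_defaults.get((cls, attr))

kb = KnowledgeBase()
kb.classes["cockroach I-24"] = "cockroach"
kb.classes["cockroach I-25"] = "cockroach"
kb.observe("cockroach I-24", "lives in", "cupboard under the stairs")
kb.generalise("cockroach", "lives in", "cupboard under the stairs")

print(kb.lookup("cockroach I-25", "lives in"))       # inherited default
kb.observe("cockroach I-25", "lives in", "kitchen")  # it learns better
print(kb.lookup("cockroach I-25", "lives in"))       # observation wins
```

Correction is just another `observe` call; the default stays in place for every other individual of the class.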

From,
Nice coder
Maybe it would calculate how many possible applications it has for each thing, and the things with more possible applications [not including generalized ones] (that you could apply it to, do with it, or associate with it) are more relevant, and therefore should have a higher priority.

It could further play with the more/most relevant results it gets, plugging them into each other to come up with even more results; that's part of how the prioritizing would work / go into effect, I think.

In fact, it could sort out its own prioritization system based on frequently occurring words, frequently occurring sentences, their interaction, etc. It depends on the database you give it, I guess, plus the interactions it can determine and how significant each thing is: it could do rough calculations to determine the significance of the ripple effect of each item, if you know what I mean, then plug that back in for each item (or just for the ones that were more significant in the first place). It would then research the more significant ones further, and you see how it works, I think.
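
The frequency-derived part of that could be sketched very simply: count word frequencies across whatever database/corpus you hand it and use the counts as priorities (this is my simplification; it ignores the ripple-effect feedback step):

```python
from collections import Counter

def derive_priorities(corpus_sentences):
    """Rank words by frequency across the corpus; more frequent
    (and so more interconnected) words get a higher priority."""
    counts = Counter()
    for sentence in corpus_sentences:
        counts.update(sentence.lower().split())
    # Priority = raw frequency; research order = descending priority.
    return dict(counts.most_common())

corpus = [
    "the robot charges the battery",
    "the battery powers the robot",
    "a flower grows outside",
]
priorities = derive_priorities(corpus)
print(priorities["the"], priorities["robot"], priorities["flower"])  # 4 2 1
```

The feedback idea from the post would then re-run this after each round of research, so newly generated results shift the priorities for the next round.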

Now, what is it going to test, and how is it going to test it? Those are the other hard questions. For example: how does it select the right data to do the right test in the right way, and how does it know how to do that?

[Edited by - Xior on January 14, 2005 7:41:16 AM]
It could also cross-reference the data that it sorts out in its database with the data that it interacts with in the real world: have two different databases, etc., and determine a new relevancy resulting from both. One determines relevant things in the real world, the other determines relevant/important/high-priority things in its database, and another device matches up the two somehow.
I was also thinking of cross-referencing the internal data with the external data.

Perhaps the simpler system, ranking based on the sum of the weights of the links, would work?
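
On a small knowledge graph, "sum of the weights of the links" ranking might look like this (the graph structure and weights below are purely illustrative):

```python
def rank_by_link_weight(links):
    """links: list of (node_a, node_b, weight) edges.
    A node's score is the sum of the weights of all its links."""
    scores = {}
    for a, b, w in links:
        scores[a] = scores.get(a, 0) + w
        scores[b] = scores.get(b, 0) + w
    # Highest total link weight first.
    return sorted(scores, key=scores.get, reverse=True)

links = [
    ("human", "harm", 5),
    ("human", "robot", 3),
    ("robot", "order", 2),
    ("flower", "petal", 1),
]
print(rank_by_link_weight(links))  # "human" ranks first with weight 8
```

It's a crude centrality measure, but it is cheap to keep updated incrementally as links are added, which matters if the ranking runs continuously.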

From,
Nice coder
I'm taking the hard line here: the only thing that can really know that something is conscious is itself.

From our perspective, we cannot create a digital conscious entity: there is no absolute definition of sentience, so we cannot accurately claim something to be sentient, and there is no way to absolutely prove it.

If we could do it, then we would all essentially be gods, given all the things we could do to pause, copy, modify, delete, and replay the sentience, etc. From our perspective it could never be sentient, not if we could directly manipulate or control its behaviour.

For all intents and purposes, there is only, and will only ever be, simulation.

Furthermore, Asimov's three laws are inadequate, as shown in the film "I, Robot" and in his books. There is no exact set of rules to define the correct response for every situation, because there are an infinite number of situations.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
Perhaps a lowering of the bar would be nice?

How about creating an entity which believes that it is conscious?

From,
Nice coder
Quote:Original post by Nice Coder
I was also thinking of cross-referencing the internal data with the external data.

Perhaps the simpler system, ranking based on the sum of the weights of the links, would work?

From,
Nice coder


Right. We'd put learning algorithms etc. into it, and it would try to learn things; to test whether some of them work, it would try them in the real world as a final test. Now it's just a matter of determining the exact learning algorithms it will use to learn/process the data we give it, how that interacts with how it will test things, and how it can improve its own learning process as well. It's going to be hard for it to come up with its own complicated algorithms... that's the only stumbling block that I see.

This topic is closed to new replies.
