Creating a Conscious Entity

Started by
130 comments, last by Nice Coder 19 years, 2 months ago
Bump.

There seem to be many threads starting that would fit in here nicely.
First time back at this thread in a while. Was just reading over my blog (joshstens.blogspot.com if you're interested), where I made mention of some of my earlier responses as further reading for my philosophical discussions (also, though I declared at the time that it "wouldn't work", that was while the goal was to create actual consciousness from a symbol system, which has changed since then; I wouldn't be so critical any more), and now that I'm back I've read over the past few things and wanted to talk again.

NC's algorithm interested me. The idea of making random temporary nodes isn't really scalable, but at least the general rule is approaching something worthwhile in AI: a simple algorithm that can do big things, and that hasn't been researched a million times in the past. It needs some work, but definitely keep thinking about it.

As for the questions of whether we have the words to describe how the mind works, whether we would have consciousness if we simulated the brain down to the particle level, etc., I'm going to make a whole slew of assumptions and tell you that the answer to all of them is "yes". Assuming the general minimum-faith-required scientific view of the world, which says that there are laws of physics governing exactly how every particle in the universe moves (no "soul", no separate world in which the mind exists, and, on a different note, no brain-in-a-vat a la The Matrix), this is entirely possible, and it's been done. Some of the most impressive AI out there has been done that way. Generally it's called a neural net, though most of the time those are written to learn a specific pattern, not to be "conscious". Check the reference to Creatures that I made a long time ago.

Here's how the mind works, as succinctly as I can put it (I make no promises about word count; I have habits of verbosity):

1. There is no "mind". What we perceive to be our mind is an overall phenomenon created by the lower levels of our mind. See emergence, Conway's Game of Life. It's just like how a memory allocation module, a file system, process management, etc. can all unite to make an operating system. Add Solitaire, if you want.

2. The brain is basically a one-directional processing mechanism. Information flows into the inputs, is processed (this is the only place things can move backwards, since the processing goes on in parallel and can basically loop), and then flows out the outputs. The inputs are your senses (sight, sound, etc.). The outputs are, for the most part, muscles, including speech.

3. The processing unit between the inputs and outputs of your brain is your mind. Your senses are converted into neural signals, chemical and electrical pulses, that, after modification, are converted to outputs. Need evidence of this? Do something right now. Anything. Done? There: you received inputs (reading what I wrote, through your eyes; seeing), thought about what to do, and then did it. That thinking is, obviously, the hard part.

4. Thinking is emergent. It's just a higher result of a simple lower-level mechanism (see Heinlein's The Moon is a Harsh Mistress, though it's not a very good reference; best I could do). That mechanism? Neural functioning. A neuron, or "brain cell", is a self-contained computing unit. It works as follows: sum the weighted inputs (weighted as in, this input counts as 0.7 and this one counts as 0.3 and this one counts as 0.5, so when they all fire at 1.0 you multiply them by 0.7, 0.3, and 0.5 and then add those numbers together). Squash the sum into the range 0 to 1, or -1 to 1, some standard; a good function for that is the sigmoid, y = 1/(1 + e^(-x)). If the number you get as the result of squashing is higher than some preset threshold, fire (output a 1, for a short amount of time, to any neuron hooked up to this one). Then train. In simulation this is usually backpropagation, some multivariable calculus based on how different the output values are from the ideal output values. That obviously doesn't work in all cases, since we don't always have "ideal output values". In biological brains it's some form of Hebbian learning, "those that fire together, wire together", meaning that when good things happen, you strengthen the connection between any two neurons that fired together recently, since they're the most likely ones to have had something to do with the output. Optionally, some neurons can release gases (simulated on the computer, of course) that do things like increase the learning rate or lower the firing threshold.
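The neuron described in point 4 can be sketched in a few lines. This is a minimal illustration, not a faithful biological model: the weights, threshold, and learning rate below are made-up numbers, and the Hebbian rule is reduced to "if I fired, strengthen the weights on the inputs that were active".

```python
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

class Neuron:
    def __init__(self, weights, threshold=0.5):
        self.weights = list(weights)
        self.threshold = threshold

    def fire(self, inputs):
        """Weighted sum -> squash -> fire (1) if above threshold, else 0."""
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if sigmoid(total) > self.threshold else 0

    def hebbian_update(self, inputs, rate=0.1):
        """'Fire together, wire together': if this neuron fired,
        strengthen the weights on the inputs that were active."""
        if self.fire(inputs) == 1:
            self.weights = [w + rate * x
                            for w, x in zip(self.weights, inputs)]

n = Neuron([0.7, 0.3, 0.5])
print(n.fire([1.0, 1.0, 1.0]))  # sum = 1.5, sigmoid(1.5) ~ 0.82 > 0.5 -> 1
print(n.fire([0.0, 0.0, 0.0]))  # sigmoid(0) = 0.5, not above threshold -> 0
```

Backprop would replace `hebbian_update` with a gradient step against ideal outputs; the point of the sketch is only that each unit is self-contained and dumb, and anything interesting has to emerge from many of them wired together.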

There. A brain/mind. Now, can this all be done on a computer? Well, yes. But it would take years of patience (try raising a child; now try raising one with no instincts) and a hell of a computer. Are there probably better higher-level rules that could create intelligence? Of course. But do we know them? No. And do we know how to create neural nets? Yes. So while I'm interested in hearing attempts at higher-level rules, if you want to create "real" consciousness, know what you're up against. Don't just try to replicate how we think, and don't use symbols whose _meaning_ the machine could never learn; you can hardcode the word "God" and still be lightyears away from making a machine that believes. Or, alternatively, write a neural net.
Nice, jtrask.

But I ask you this: how do we understand something?
How do we know what something is?

When you ask someone what paper is, what do they do? They hold out a sheet of it in front of you and say, "That's what paper is."

So, all the word "paper" is, is a pointer which references all the different types of paper into the general. With no specifics, paper is just your ordinary white paper with nothing special about it. Why is this so? Because it is the most commonly used type of paper.

The word "paper" also refers to objects made of paper, which are in the outside world. It's this connection to the outside world which makes you "understand" it.

Now what happens with other objects that we cannot see, or haven't seen yet?
You describe them, using objects that you do know as a reference.
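The "word as pointer" idea above can be sketched concretely. Everything here is hypothetical and made up for illustration (the `Concept` class, the `lexicon`, the property names): a word resolves to its most common referent by default, and an unseen object is described by copying a known one and overriding the differences.

```python
class Concept:
    """A known object: a name plus its properties."""
    def __init__(self, name, properties):
        self.name = name
        self.properties = properties

# The word "paper" is a pointer into a list of known referents;
# the most common type is listed first and acts as the default.
lexicon = {
    "paper": [
        Concept("plain white paper", {"color": "white", "flammable": True}),
        Concept("newspaper", {"color": "grey", "flammable": True}),
    ],
}

def resolve(word):
    """With no specifics, a word means its most common referent."""
    return lexicon[word][0]

def describe_unknown(name, like, differences):
    """Describe an object you've never seen in terms of one you know."""
    props = dict(resolve(like).properties)
    props.update(differences)
    return Concept(name, props)

# Never seen papyrus? It's like paper, but tan.
papyrus = describe_unknown("papyrus", like="paper", differences={"color": "tan"})
print(papyrus.properties)  # {'color': 'tan', 'flammable': True}
```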

What happens when you understand something?
You know how to manipulate its inputs to perform what you want it to do, or
You know how to use it, or
You know how it works.

Just something to think about.
From,
Nice coder
Click here to patch the mozilla IDN exploit, or click Here then type in Network.enableidn and set its value to false. Restart the browser for the patches to work.
We need to get more specifics. There is a lot of general stuff about "nodes" and "algorithms" (which algorithm? see what I mean). If we can get specific from this point we might be able to make some progress. Like, how would a computer link its visual interface with its knowledge database? How will it be able to analyse what it sees in a verbal and mathematical fashion? How will it compare this data to other relevant data?
Let's get specific.

Its world view is a 2D matrix of vectors.

In those vectors there are pointers to objects in its memory, or else a default placeholder object with its properties filled in, e.g. flammable, killable, eatable, etc.
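A minimal sketch of that world view, under my own assumptions about the details (the grid size, the property names, and the class names are all invented): each cell either points at an object in memory or falls back to the default placeholder.

```python
# Default placeholder: the properties an unknown object is assumed to have.
DEFAULT = {"flammable": False, "killable": False, "eatable": False}

class WorldView:
    def __init__(self, width, height):
        # 2D matrix; each cell holds a "pointer" (a name) into memory, or None.
        self.grid = [[None] * width for _ in range(height)]
        self.memory = {}  # name -> property dict

    def learn(self, name, **props):
        """Add an object to memory, filling unstated properties from DEFAULT."""
        self.memory[name] = {**DEFAULT, **props}

    def place(self, x, y, name):
        self.grid[y][x] = name

    def at(self, x, y):
        """Dereference a cell; unknown cells resolve to the placeholder."""
        name = self.grid[y][x]
        return self.memory.get(name, DEFAULT)

world = WorldView(4, 4)
world.learn("chicken", killable=True, eatable=True)
world.place(1, 2, "chicken")
print(world.at(1, 2)["eatable"])    # True
print(world.at(0, 0)["flammable"])  # False (default placeholder)
```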

The input module looks at this, and whenever something happens, it notes what happened, how it happened, what caused it, and what the result was. It can also add objects to its database, including some references.

It then calculates probabilities, which are then fed into the processing module.

The processing module uses relevant information to generate inferences and logical rules.

Relevancy is determined by time and by closeness. So, for example, if you see a lot of chickens and eggs together, at around the same time, you would say that chickens and eggs are connected. Now, with a bit more reasoning, you figure out that a chicken must precede an egg, and that after an egg hatches, a chicken is formed.

So now all of those are relevant to chickens and eggs.

Simple, effective.
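The time-and-closeness rule above can be sketched as a co-occurrence counter. The class name, the window size, and the toy observations are my own inventions for illustration: anything seen within a few ticks of something else gets its association strengthened.

```python
from collections import defaultdict

class RelevanceTracker:
    """Relevancy from co-occurrence in time."""
    def __init__(self, window=2):
        self.window = window           # how many ticks apart still "counts"
        self.sightings = []            # (time, object) pairs seen so far
        self.links = defaultdict(int)  # frozenset({a, b}) -> strength

    def observe(self, t, obj):
        # Link this object to everything seen recently enough.
        for (t2, other) in self.sightings:
            if other != obj and t - t2 <= self.window:
                self.links[frozenset((obj, other))] += 1
        self.sightings.append((t, obj))

    def relevance(self, a, b):
        return self.links[frozenset((a, b))]

r = RelevanceTracker()
for t, obj in [(0, "chicken"), (1, "egg"), (2, "chicken"), (9, "rock")]:
    r.observe(t, obj)
print(r.relevance("chicken", "egg"))  # 2: seen close together twice
print(r.relevance("egg", "rock"))     # 0: too far apart in time
```

Working out that the chicken precedes the egg would need the input module's cause/result records on top of this; the counter only says the two are connected at all.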

Now for the questions and answers that you're going to ask it: the linguistic module takes questions, then asks the processing module for answers. It then turns those into AICL (AI common language) for output.

You can also tell it things through the linguistic module: new objects, new relationships.
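A sketch of that question/answer loop, with everything invented for illustration: the fact store, the "relation" question format, and the AICL rendering (the thread never defines AICL, so a plain string stands in for it).

```python
class ProcessingModule:
    """Toy fact store standing in for the processing module."""
    def __init__(self):
        self.facts = {}  # (object, relation) -> value

    def tell(self, obj, relation, value):
        self.facts[(obj, relation)] = value

    def ask(self, obj, relation):
        return self.facts.get((obj, relation), "unknown")

class LinguisticModule:
    """Parses questions, queries the processor, renders AICL output."""
    def __init__(self, processor):
        self.processor = processor

    def handle(self, question):
        # Expect questions shaped like "<object> <relation>?", e.g. "egg precedes?"
        obj, relation = question.rstrip("?").split(" ")
        answer = self.processor.ask(obj, relation)
        return f"AICL({obj} {relation} {answer})"

p = ProcessingModule()
p.tell("egg", "precedes", "chicken")   # telling it a new relationship
lm = LinguisticModule(p)
print(lm.handle("egg precedes?"))      # AICL(egg precedes chicken)
print(lm.handle("egg color?"))         # AICL(egg color unknown)
```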

From,
Nice coder
Quote: Original post by Nice Coder

You can also tell it things from the linguistic module, new objects, relationships.

From,
Nice coder


I guess you could also have it check to see if what it sees corresponds to what it understands already, and come to its own conclusions. If you can tell it new relationships and objects verbally, then I guess it could learn by itself.
And we could assist in its oil change and it'll start to get fond of us. As you said Xior, get more specific.
I have neurons in my butt!!!
It could use models to see if new things it sees could possibly work in those models. Like, it knows all the things it can do with a cat in one category; it tries switching objects or something to see if the new item works, sees where it went wrong, tries again, varies the formulas. A few basic models of how things work, with links from those models to potentially everything else, and it could learn by itself. Allow it to make new models if those models work as well.

Can I throw a human like I can throw a cat? Well, I'd have to change this and that variable in order to do it, since this and that variable changes... (one example of many models). And real life can always be a final test for the robot, to see if what it did actually worked.
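That model-substitution idea can be sketched as a precondition check. The model, the objects, and every number here are invented for illustration: substitute a new object into a known action model and report which variable would have to change for it to fit.

```python
# A "model" of an action: preconditions learned from objects it worked on.
models = {
    "throw": {"max_weight_kg": 10},  # learned from cats, say
}

objects = {
    "cat":   {"weight_kg": 4},
    "human": {"weight_kg": 70},
}

def try_model(action, obj):
    """Substitute obj into the action model; report what fails."""
    limit = models[action]["max_weight_kg"]
    weight = objects[obj]["weight_kg"]
    if weight <= limit:
        return f"{action} {obj}: works as-is"
    return f"{action} {obj}: fails, would need max_weight_kg >= {weight}"

print(try_model("throw", "cat"))    # throw cat: works as-is
print(try_model("throw", "human"))  # throw human: fails, would need ...
```

Real life as the final test would mean updating the model's limits whenever an attempt succeeds or fails, rather than trusting the stored numbers.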
How is it going to tell if something is "interesting" to itself? How do you tell if something is interesting to you?

You are all a bunch of idiots. You can't make a conscious AI. But go ahead and keep this up, and sooner or later you will realize that you are WAY out of your intelligence level.

~Guy not fond of idiots [grin]

P.S. Look at Xior's other posts...you guys are doomed from the start.
Hey, Mr. Anonymous Poster. Please don't call people idiots. It does not apply, and even though the problem at hand may seem impossible at present, it won't get solved if nobody talks about it. How about we all forget about artificial consciousness, and maybe in a couple hundred years it will pop up all by itself? Have you ever solved anything without exploring it fully? I'm not joining the discussion, it's over my head, but I'm really enjoying this thread and everyone's input. Keep working, everybody, keep reaching. If you don't try you can't succeed (corny but true), and don't listen to this anonymous poster. He/she's the real "idiot".
I currently only use C# and DirectX 9, in case it's relevant and I didn't mention it in the post.

This topic is closed to new replies.
