Thought experiment

15 comments, last by Scrambles 20 years, 9 months ago
I was amusing myself the other day thinking about the relationship between intelligence and the complexity of inputs to a system. Animals (like us) learn from birth, to some extent by recognising complex patterns in the world around them, and classifying newly experienced patterns against old ones. Our input is a continuous sense of touch across (almost) every inch of our surface; sensitivity to sound to an extraordinary degree; sensitivity to a serviceable chunk of the electromagnetic spectrum; as well as taste and smell, which as far as I can tell provide special responses for a select range of chemicals. Machines that we want to make intelligent generally have a discrete or simple range of inputs: a string of characters (Eliza), a mouse and keyboard (AI on your average computer), or, with more sophisticated robots, limited light/heat/sound sensors, laser depth guides, or processed digital images.

So my question is -- if we ever developed a system capable of attaining a reasonable level of intelligence -- how would we ever know?

Imagine a human baby, with an unformed mind, that has only ever been able to perceive all black or all white. No touch, hearing, taste or smell, or any variety in vision. It normally sees only black; you can press a button wired to its brain that makes it see white instead, while the button is depressed. The baby is also unable to move, and can only respond through an LED connected to its brain that is on or off.

Would this mind ever grow to be what we call intelligent on that complexity of input? It would never be able to know it is in a world, or even that it is human. Could it possibly have needs, or desires? If, when adult, its full senses were completely restored, would it ever be able to attain even the mental capabilities of a simple stimulus-response agent? If the mind could become intelligent on that limited input, would you ever be able to tell by pressing the button and recording the LED activity?
The point is, I think it is impossible to make a machine even comparably intelligent to animals, let alone humans, without it having as complex a set of inputs as they do. Human and animal reasoning is not done purely with clear-cut symbols or syllogisms, but involves complex patterns and nuances in our observed environment. I'm sure there are no end of flaws and holes in this line of reasoning; I'd love to hear anyone else's thoughts on the issue, or any references to related books or articles. Apologies for the length of the post. Happy brain-racking, Ken Scambler
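To make the thought experiment concrete: from the outside, the baby's entire universe is a function from one input bit (button) to one output bit (LED). A purely hypothetical sketch of how little an observer would have to work with (the class and its names are my own invention, obviously not a real model of a mind):

```python
import random

class OneBitMind:
    """A hypothetical mind whose only sense is a single bit (button
    pressed or not) and whose only action is a single bit (LED on/off)."""

    def __init__(self):
        self.history = []  # every experience this mind can ever have

    def perceive(self, button_pressed: bool) -> bool:
        self.history.append(button_pressed)
        # Whatever goes on inside, the observer sees only this one bit.
        return random.choice([True, False])

mind = OneBitMind()
led = mind.perceive(button_pressed=True)
print(led)  # True or False -- and that's all we can ever learn about it
```

However rich the internal state might become, the observable channel carries one bit per button press, which is the crux of the "how would we ever know?" question.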
Ken Scambler
"So my question is -- if we ever developed a system capable of attaining a reasonable level of intelligence -- how would we ever know?"

I think we would know by the same means by which we can say, with a fair degree of certainty, that some species of animal are intelligent....

We'd observe it, communicate with it, and judge the relevance and nature of its responses.

I think this question was addressed, if not answered, in a more formal method by Alan Turing.

Why not take a look at

http://www.abelard.org/turpap/turpap.htm

I LOVE Mind Experiments.... the Universe has just disappeared!




Stevie

Don't follow me, I'm lost.
Yeah, I agree the machines are handicapped by limited sensors. If you put the same limitation on a human baby I doubt he/she would develop into an intelligent adult. In fact, I'd guess that a human limited to the sensors of a machine is not going to become even as intelligent as some machines that have been built so far.

We don't have the tech to remove the handicap from the machines, not yet. But I don't think we should call some machine intelligent just because it outperforms a human with the same handicap. I'd rather say that humans aren't really intelligent until they have developed intelligence. A human vegetable doesn't get the title "intelligent" just because he's made of human meat.

-solo (my site)
For the last 4 years i've been an AI student, and quite a while back i came to this conclusion: you don't study computer science to learn how intelligence works. Computer guys tend to be math and engineering people, and that's what they've studied. If you want to know how the human mind works, you ask the people who study that, which i believe are the psychologists.

So over the last year i've been trying to read up on psych. Problem is, i've got 12 years of experience with computers, 4 with AI and only a couple of months reading intro psych books, so i don't know much about it. But is that going to keep me from making some comments? Heck no!

quote:
Imagine a human baby, with an unformed mind... No touch, hearing, taste or smell, or any variety in vision...
Would this mind ever grow to be what we call intelligent... Could it possibly have needs, or desires? If, when adult, its full senses were completely restored, would it ever be able to attain even the mental capabilities of a simple stimulus-response agent?


i sort of have an answer to this

First, i used to be a firm believer that intelligence is completely independent of perception. i still mostly believe that, but after doing some study of visual and aural perception i admit intelligence isn't one thing; it's distributed all over, and part of that is in the perception modules. What you're talking about is a kid with no perception. Is he still intelligent? My personal opinion is absolutely. Just like someone born deaf, dumb and blind is intelligent.

Second, will it have desires and needs? That one's easy. Yes. There's a lot of argument about DNA versus environment, but we seem to know that certain things are hard coded. One of those is goals and desires. Those are hard wired in. Low-level ones like not starving and not getting poked with a stick, at least. Sex and pregnancy are in there as well. Many of the other goals we have are just our strategies to be safe and procreate. Our goals of getting rich, a tan, boob jobs, fame, learning pickpocketing, telling jokes, etc. are ways to get in someone's pants and then grab a pizza afterwards. None of which is really implemented as "a goal" as if it were some STRIPS operator or integer or something. We actually operate off of pain and certain chemical reactions (getting horny, biological clocks, etc.). We don't eat to stay alive, we eat because if we don't it hurts. Ditto touching a hot burner or shooting yourself in the eye with a BB gun. So really there are just a couple of hard-wired goals/actions, but it leads to complex behavior. And while it's worked out well enough for evolution, it can be bypassed. Procreation can be avoided by birth control (from simple masturbation and condoms all the way up to surgical vasectomy). And the pleasure/pain thing is easy to manipulate with heroin, and to a lesser degree with things like coke, caffeine, nicotine, ritalin, etc.
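The "we eat because if we don't it hurts" idea is basically a drive model, and it's simple enough to sketch. Everything here is a toy of my own making, not any standard architecture: no explicit goal of "staying alive" is programmed in, just a pain signal and a reflex that relieves it.

```python
class Creature:
    """Toy drive model: survival-looking behavior emerges from a
    hard-wired pain signal, not from any explicit 'survive' goal."""

    def __init__(self):
        self.hunger = 0  # rises every tick; high hunger = pain

    def tick(self):
        self.hunger += 1
        # The creature doesn't 'decide to survive'; it just flees pain.
        if self.hunger > 3:
            self.eat()

    def eat(self):
        self.hunger = 0

c = Creature()
for _ in range(10):
    c.tick()
print(c.hunger)  # 2 -- hunger never climbs far, yet no goal was ever stated
```

An outside observer might describe this creature as "trying to stay fed", but the code contains no such goal, which is the point being made above.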

Also hard coded in the brain appears to be knowledge of physics (with the exception of pivot points and balance), object recognition (trace the edges, which can be done by checking color) and a fear of snakes (that one i found odd). Also hard coded is a lot of blueprint/building instructions for building a brain. Oh, and there's a theory popular with some famous cognitive linguists that language is also hard coded. Not a specific language, but the basic constructs that allow us to speak and which result in all languages being similar on some level (don't know much about this one; i'm not a big fan of language).

Third, we have big brains for a variety of reasons (many of which have nothing to do with intelligence), but a baby's brain is goofy for a couple of reasons. First, it's not big enough to really do much. Unlike other animals, human babies are born kinda worthless. Can't talk, can't walk, can't get a job making shoes. Worthless. But despite that, the brain is so freakin' big it kills many moms trying to squeeze through (and because of the size, women tend to only have one kid at a time; had women been programmed to have 10 kids at a time they'd either all die in childbirth or have hips so wide they'd be physically slow and awkward and would get eaten by lions). So the way humans' brains are created is that you just start with instructions for how to build a brain. Once you get out of the womb and have more space, you start to build the different functions of your brain. The building instructions appear to be of the form "gather this input, analyze it to figure out which neurons to connect together, gather more input, repeat and tweak". Most of the wiring and (i assume) pretty much all of the perceptual/sensory wiring happens in the first few years. The point? If you do not give a baby lots of things to look at, his visual center doesn't get built in the brain. If you take the kid out of the deprivation chamber at age 20 he'd have physically working eyes, but i don't think he'd be able to see - the eyes wouldn't be hooked up to the brain, so the brain wouldn't be able to process the inputs.

Fourth, i don't believe intelligence has to be a stimulus-response thing. For example, dreams don't have sensory inputs. Talking to yourself in your head is the same. And a phone conversation might have sounds, but the intelligent parts of it (knowing what to say to cheer someone up, answer a calculus question or make a funny story) are purely internal. i think. So yeah, i'd say such a person would be intelligent. Might be less functional than a stimulus-response robot, but it'd still be intelligent.

Oh yeah, if i remember correctly, much of this knowledge came from some sicko sadists who do actually slice up baby kittens (sever sensory nerves) or put them in sensory deprivation chambers and then see what happens to them


quote:
Machines, that we want to make intelligent, generally have a discrete or simple range of inputs: a string of characters (Eliza), a mouse and keyboard...
The point is, I think it is impossible to make a machine even comparably intelligent to animals, let alone humans, without it having as complex a set of inputs as they do. Human and animal reasoning is not done purely with clear-cut symbols or syllogisms, but involves complex patterns and nuances in our observed environment.


The core of intelligence (pattern recognition, learning through repetition and conscious decision making through being conscious) is pretty small. The complex patterns we exhibit are all the add-ons that come from processing lots of patterns. But the ability to process patterns and the behaviors we get from actually processing tons of them are pretty different. Which is why some people are considered intelligent or stupid in a given topic based on experience. i'm smart when it comes to reading and writing (which i do a lot of) but clueless and an idiot when it comes to math (which i avoid like the plague) and music (which my brother is wonderful at creating and which i've never tried). A caveman might seem awfully dumb walking in front of cars, being scared of elevators, not bringing roses on the first date, etc. because he hasn't had the quantity of different experiences needed to operate in those domains.
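The "smart where experienced, clueless where not" observation falls naturally out of any exposure-based learner. A crude sketch of my own (not a model from the psych literature): competence per domain is just a function of how many examples have been seen there.

```python
from collections import Counter

class Learner:
    """Competence as a function of exposure: the more examples seen in a
    domain, the better the guesses. Purely illustrative."""

    def __init__(self):
        self.seen = Counter()

    def experience(self, domain, example):
        self.seen[(domain, example)] += 1

    def best_guess(self, domain):
        examples = {ex: n for (d, ex), n in self.seen.items() if d == domain}
        return max(examples, key=examples.get) if examples else None

learner = Learner()
for _ in range(100):
    learner.experience("writing", "active voice")  # lots of exposure here
learner.experience("music", "C major")             # almost none here

print(learner.best_guess("writing"))  # 'active voice' -- confident
print(learner.best_guess("math"))     # None -- an 'idiot' in that domain
```

The same machinery produces both the "smart" and the "stupid" behavior; only the quantity and variety of experience differ.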

Speaking of which, a resume/hiring pet peeve of mine - years of experience. If you have 10 years of C++ experience but it's just doing the same 2 or 3 tasks over and over, you're not likely to know more than someone with 5 years of experience who's done 300 different things for a variety of different people and projects. We learn by getting different experiences (good and bad) and comparing them to figure out what the important differences were. Variety counts.

As for the input we give computers, it's actually pretty complex. Certainly it can be made to be so. We already have vision and speech processing systems that get music and sight. But even cutting off its senses, many games are fed lots of generated polygons and light sources and vog orbises or whatever. Plus analog joystick and fancy joystick inputs, network data, keyboard info (which might only be 200 characters, but DNA has just 4 and it's pretty complex; the patterns formed are the complex inputs, not the discrete elements) and mouse movements and all sorts of stuff.
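The DNA aside is worth making concrete: complexity lives in the combinations, not in the size of the alphabet. Even four symbols give an astronomical number of distinct sequences once length grows a little:

```python
# A 4-symbol alphabet (like DNA bases) already explodes with length:
# the number of distinct length-n strings is 4 ** n.
dna_patterns_len_20 = 4 ** 20

print(dna_patterns_len_20)            # 1099511627776 -- over a trillion
print(dna_patterns_len_20 > 10**12)   # True
```

So a "mere" 200-character keyboard is not the bottleneck; the pattern space over sequences of keystrokes is effectively unbounded, just as the post argues.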

Oh, a nifty thing i learned about sensory processing. It happens in stages and with multiple filters. In vision you have a one-second-long "iconic memory" buffer which stores all the raw vision info. A scanner thingy then looks for important information and passes that to the next level of processing (which level is next depends on what you're looking for).

So what is important information? It's information that is important to you, which means it's information that helps you satisfy one of your current goals. That includes knowing about unexpected things, which is interesting (i think) because it means that before you look at something you predict what you're going to see. The things you expected to see are thrown away (not processed, not passed up) unless you have need of them for some reason related to a goal. Unexpected things mean you don't know what situation you're in, and thus can't be sure you know how safe you are, so you'd better do an analysis to figure out what gives.
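That "throw away what you expected" filter is essentially prediction-based attention, and the mechanism is simple to sketch. This is my own toy version, with the naive prediction "the next thing will be the same as the last thing":

```python
class AttentionFilter:
    """Passes along only inputs that differ from the prediction.
    Prediction here is naively 'the same as last time'."""

    def __init__(self):
        self.expected = None

    def observe(self, stimulus):
        surprising = stimulus != self.expected
        self.expected = stimulus  # next prediction: more of the same
        return stimulus if surprising else None  # expected input is dropped

f = AttentionFilter()
stream = ["wall", "wall", "wall", "snake", "snake", "wall"]
noticed = [s for s in (f.observe(x) for x in stream) if s is not None]
print(noticed)  # ['wall', 'snake', 'wall'] -- only the changes get through
```

Six stimuli in, three noticed: the repeats are discarded before any higher-level processing would ever see them, which is the filtering behavior described above.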

The whole point of this is that passing complex patterns to a computer or human or ferret really doesn't matter unless that thingy has a goal/desire and is looking for something specific. Meaning you don't really perceive without having goals and expectations. Which is what computers certainly don't have today.

OK, so on to the whole point of this post (i think):
quote:
Machines, that we want to make intelligent


Question: what would it take to make a machine intelligent? Intelligence is one of those not-agreed-upon terms, so i'm going to define it as "you think it's mentally just like you".

i think machines don't appear intelligent today because a) machines don't have machine-specific goals (we give them goals like "do what i tell you") and b) we don't let machines make self-interested decisions. Basically, we don't give machines much freedom. We open notepad or whatever and we go in and tell them exactly what to want, what they can look at, how they look at it, what choices they can make and what they'll do with the info. If you tell a computer "add these numbers together and print the sum over here" we don't really give the computer a chance to exercise any intelligence. And most people don't want to program a computer with instructions like "don't worry about me, do what's best for you". If you did, poor Sparky would get awful testy when you upgraded him 18 months later with a faster model. And the last thing we need is to give a Windows computer more reason to crash.

Anyway, them's my thoughts

-baylor
Humans are just "living machines": we handle input and respond.

Some of our input types: sound, vision, touch, smell, balance.

And we produce an output, deciding based on the input.
I believe it's important to correct a few errors in baylor's post before others respond and the discussion heads off on a wrong tangent...

quote:Original post by baylor
So the way humans' brains are created is that you just start with instructions for how to build a brain. Once you get out of the womb and have more space, you start to build the different functions of your brain. The building instructions appear to be of the form "gather this input, analyze it to figure out which neurons to connect together, gather more input, repeat and tweak". Most of the wiring and (i assume) pretty much all of the perceptual/sensory wiring happens in the first few years. The point? If you do not give a baby lots of things to look at, his visual center doesn't get built in the brain. If you take the kid out of the deprivation chamber at age 20 he'd have physically working eyes but i don't think he'd be able to see - the eyes wouldn't be hooked up to the brain so the brain wouldn't be able to process the inputs


Core elements of the brain are among the first systems to develop in the embryo, and by the end of the first trimester the brain is functionally developed... in that all of the morphological structures are in place and are working at basic levels. So, for example, a second-trimester baby can suck its thumb in the womb (which requires coordinated action to move the arm, extend the thumb, sense that it is in the mouth, stop moving the arm and kick in the autonomic suckling function). Certainly, the baby doesn't yet have higher cognitive functions like being able to read a book, order french fries or chat about the weather, but it does have an active brain that is developing synapses continuously. By the third trimester the baby can respond to external stimuli; it can hear, touch, taste (and hence smell) and see, although sight within the womb is limited to intensity changes. The retinae are fully developed and provide full sensory information to the occipital lobe... it's just that the information is quite simple because of the environment.

We continue to actively grow new synapses up until about 4-5 years of age... and we are still increasing myelination up until about age 40. After that we suffer active atrophy of our white matter (and not just from unnatural causes like alcohol).

It is certainly not true that we only learn outside the womb, nor is it true that we only develop our ability to grow synapses outside the womb... although there is certainly a correlation between learning rate and synapse growth... and sensory inflow, which is higher once we leave the womb.

Regards,

Timkin
Thanks for your responses everyone.

stevie56: I was going to mention the Turing Test, but decided that the post was already long enough, and it didn't really affect the point much. The reason I asked...

"if we ever developed a system capable of attaining a reasonable level of intelligence -- how would we ever know?"

...was to reinforce the point that it is probably impossible to do so with trivial or limited inputs.

5010: I agree completely.

Baylor:
Hmmm. Some interesting points here.
I disagree with one point: I don't think the baby could possibly have needs or desires, at least in the sense we know them. I can't see how even primal needs such as food, warmth, love and sex are possible when the impulses are not "grounded" in tangible inputs. The baby could not possibly be aware of these issues consciously, and without any grounding, the subconscious impulses would be impotent and meaningless.
Also I'd be interested in any links to actual research done in that area, by sickos or otherwise, if you can remember where you heard that.

Timkin: Thanks, I didn't know that. I think the biggest weak point in my argument is my lack of biological knowledge -- the answer to my hypothetical questions probably lies as much in biology as anything else.

These points notwithstanding, I still think the thought experiment is a neat illustration of how little chance we have of achieving strong AI with the I/O limitations of current hardware and software used for agents and robots.

Today's advanced symbolic reasoning software typically uses some or all of production systems, frames, semantic nets, predicate calculus and classification trees -- all useful tools -- but all so clean-cut, so limited. They use vastly oversimplified symbols representing real-world concepts, yet with absolutely no real-world grounding for the agent. How can an agent build its own patterns and beliefs without any complex real-world grounding to learn from? Currently they regurgitate derivative data in a narrow scope fed to them by a programmer.
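For readers who haven't met one, a production system of the kind mentioned here is just condition-action rules firing over a working memory of facts. A bare-bones sketch (the rules and symbols are invented for illustration) shows both how clean-cut the machinery is and how ungrounded the symbols are:

```python
# Minimal forward-chaining production system. Note that the 'symbols'
# ('bird', 'can_fly') mean nothing to the machine itself -- the
# grounding problem in miniature.
rules = [
    ({"bird"}, "can_fly"),
    ({"bird", "penguin"}, "cannot_fly"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions are all present,
    adding its conclusion, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"bird"}))  # {'bird', 'can_fly'}
```

The inference is crisp and correct within its tiny world, but "bird" is just an opaque token: nothing connects it to feathers, flight, or anything the system has ever perceived, which is exactly the limitation being argued.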

As for neural nets -- not that I want to get baylor started on them again -- Matt Buckland's site mentions a researcher who unsuccessfully tried to hardwire 2 million neurons together to get the intelligence of a cat. I haven't read anything else about this, but I'm willing to bet his project didn't even make a token stab at simulating the vast range and complexity of inputs that a cat has.

You feed it peanuts, you get monkeys, so to speak.

Except monkeys are smart.

Ken Scambler
"So my question is -- if we ever developed a system capable of attaining a reasonable level of intelligence -- how would we ever know?"

So... the consensus seems to be that sensory input is everything (or at least a very large part) in developing intelligence?

On that basis, then, a person born deaf and blind cannot be, or become, intelligent?

A person born without limbs cannot wield tools, so cannot be intelligent?

A person unable to speak cannot answer questions vocally, therefore cannot be interviewed to assess their intelligence?

I think the Turing Test (formulated in the 50s) is as near as we're going to get to testing whether a machine is intelligent. The point is that if, after a conversation, we cannot tell from the responses (no matter how elicited) whether we are addressing a human or a machine, then we may have a machine that is intelligent within the domain of the discourse.

After all, we test our kids at school, don't we? All those tests are limited and specialised, intended to produce limited and specialised results which may have long-lasting effects on the child's future development.

Seems to me if it's good enough for a child, it's good enough for a machine?

And, just to follow the logic a step further, a professor is deemed to be intelligent when he's awake, but not when he's asleep or in a coma?

Leads me to ask at what instant does the light click on or off?



Stevie

Don't follow me, I'm lost.
First, let me just say that this is a fascinating thread!

The usual disclaimer: I am not a psychologist, but...

Firstly, I disagree with the whole premise that an AI would be limited by the fact that the inputs (mouse, keyboard etc.) are so limited and/or primitive. An AI that could only interpret keyboard input could theoretically master a vast number of languages and dialects, and movements of the mouse could be interpreted as a language. What's the difference between a subtle mouse movement and a violent one? When are subtle movements used, and in what direction?

Also, the AI is a computer simulation of intelligence, so why can't the stimuli that it feels also be a simulation? I can construct my own world around it, with its own set of physical and mathematical laws, and let it explore. Or I could feed it Project Gutenberg. Or I could give it knowledge of HTTP protocols and point it at google.

In addition, the whole cause-and-effect nature of synapses in the brain means that one synapse firing and affecting others counts as an input. The more connections in the brain, the more inputs, and also the more feedback from those inputs. This is what makes us different from animals - we've got more brain cells. I'm not saying that you don't need input, but even simple inputs can be used.

Baylor: your post is really interesting, and I love the idea of the early brain being a set of instructions on how to build a better brain, with the capability to reason, have emotion, speak and build microprocessors. I disagree that people have an inbuilt knowledge of physics; rather, they have the ability to learn about the physical world around them _very_ quickly (mixed with what we call common sense). If we had an inbuilt knowledge of physics, Galileo wouldn't have had to tell us that heavy objects fall at the same speed as light ones, and it would make the topics of Quantum Mechanics and General Relativity less mind-bending. Same with snakes, I suppose: less an inbuilt fear, more the foundations of a general fear of things that can hurt us, and we "learn" to fear certain things, even if it seems irrational later on.

Having said that, it makes sense that things like sex and food are inbuilt. Without these core impulses the species would die out pretty quickly. The baby may not realise that it wants food, but it feels pain because it's hungry. When it feeds it feels less pain, so it associates hunger with food. In the same way it feels sexual urges when looking at someone, and associates those feelings together.

Oooh. Long post.

[teamonkey]
quote:On that basis, then, a person born deaf and blind cannot be, or become, intelligent?

A person born without limbs cannot wield tools, so cannot be intelligent?

A dumb person cannot answer questions vocally, therefore cannot be interviewed to assess their intelligence?


Good point. I was thinking about that one actually.

A good example there is Helen Keller, who lost both her sight and hearing in infancy, and still learned to read (braille), write and speak, and went on to become a famous social campaigner and rights activist.

However, even someone with her formidable handicap still has a huge range of complex inputs in touch, smell and taste, far more than even the most sophisticated machine today, and certainly enough for the brain to develop causal reasoning faculties.

I don't see any reason why Helen Keller would be any less intelligent than anyone else because of her handicap; so I doubt the relationship between I/O complexity and intelligence is linear -- but I expect there is a lower limit to I/O complexity below which intelligence cannot develop fully, even given otherwise favourable circumstances. Mankind just hasn't built anything above that limit -- yet.

Having said that, I agree that I/O wouldn't be the only, or even the biggest, factor in developing intelligence. It just seems to be an important factor that always gets overlooked.

Ken Scambler

This topic is closed to new replies.
