
Scrambles

Thought experiment


I was amusing myself the other day thinking about the relationship between intelligence and the complexity of inputs to a system. Animals (like us) learn from birth, to some extent by recognising complex patterns in the world around them, and classifying newly experienced patterns against old ones. Our input is a continuous sense of touch over (almost) every inch of our surface; sensitivity to sound to an extraordinary degree; sensitivity to a serviceable chunk of the visible spectrum; as well as taste and smell, which as far as I can tell provide special responses for a select range of chemicals.

Machines that we want to make intelligent generally have a discrete or simple range of inputs: a string of characters (Eliza), a mouse and keyboard (AI on your average computer), or perhaps, with more sophisticated robots, limited light/heat/sound sensors, laser depth guides, or processed digital images. So my question is -- if we ever developed a system capable of attaining a reasonable level of intelligence -- how would we ever know?

Imagine a human baby, with an unformed mind, that has only ever been able to perceive all black or all white. No touch, hearing, taste or smell, or any variety in vision. It normally sees only black; you can press a button wired to its brain that makes it see white instead, while the button is depressed. The baby is also unable to move, and can only respond through an LED connected to its brain that is on or off.

Would this mind ever grow to be what we call intelligent on that complexity of input? It would never be able to know it is in a world, or even that it is human. Could it possibly have needs or desires? If, when adult, its full senses were completely restored, would it ever be able to attain even the mental capabilities of a simple stimulus-response agent? If the mind could become intelligent on that limited input, would you ever be able to tell by pressing the button and recording the LED activity?

The point is, I think it is impossible to make a machine even comparably intelligent to animals, let alone humans, without it having as complex a set of inputs as they do. Human and animal reasoning is not done purely with clear-cut symbols or syllogisms, but with complex patterns and nuances in our observed environment.

I'm sure there are no end of flaws and holes in this line of reasoning; I'd love to hear anyone else's thoughts on the issue, or any references to related books or articles. Apologies for the length of the post.

Happy brain-racking,
Ken Scambler

"So my question is -- if we ever developed a system capable of attaining a reasonable level of intelligence -- how would we ever know?"

I think we would learn by the same means by which we can say, with a fair degree of certainty, that some species of animal are intelligent...

We'd observe it, communicate with it, and judge the relevance and nature of the responses.

I think this question was addressed, if not answered, more formally by Alan Turing.

Why not take a look at

http://www.abelard.org/turpap/turpap.htm

I LOVE Mind Experiments.... the Universe has just disappeared!




Stevie

Don't follow me, I'm lost.

Yeah, I agree the machines are handicapped by limited sensors. If you put the same limitation on a human baby I doubt he/she would develop into an intelligent adult. In fact, I'd guess that a human limited to the sensors of a machine wouldn't even become as intelligent as some of the machines that have been built so far.

We don't have the tech to remove the handicap from the machines, not yet. But I don't think we should call a machine intelligent just because it outperforms a human with the same handicap. I'd rather say that humans aren't really intelligent until they have developed intelligence. A human vegetable doesn't get the title "intelligent" just because he's made of human meat.

-solo (my site)

For the last 4 years i've been an AI student and quite a while back came to this conclusion - you don't study computer science to learn how intelligence works. Computer guys tend to be math and engineering people and that's what they've studied. If you want to know how the human mind works, you ask the people who study that, which i believe are the psychologists.

So over the last year i've been trying to read up on psych. Problem is, i've got 12 years of experience with computers, 4 with AI and only a couple of months reading intro psych books, so i don't know much about it. But is that going to keep me from making some comments? Heck no!

quote:

Imagine a human baby, with an unformed mind... No touch, hearing, taste or smell, or any variety in vision...
Would this mind ever grow to be what we call intelligent... Could it possibly have needs, or desires? If, when adult, its full senses are completely returned, would it ever be able to even attain the mental capabilities of even a simple stimulus-response agent?



i sort of have an answer to this

First, i used to be a firm believer that intelligence is completely independent of perception. i still mostly believe that, but after doing some study of visual and aural perception i admit intelligence isn't one thing; it's distributed all over, and part of that is in the perception modules. What you're talking about is a kid with no perception. Is he still intelligent? My personal opinion is absolutely. Just like someone born deaf, dumb and blind is intelligent.

Second, will it have desires and needs? That one's easy. Yes. There's a lot of argument about DNA versus environment, but we seem to know that certain things are hard coded. One of those is goals and desires. Those are hard wired in. Low-level ones at least, like not starving and not getting poked with a stick. Sex and pregnancy are in there as well. Many of the other goals we learn are just our strategies to be safe and procreate. Our goals of getting rich, a tan, boob jobs, famous, learning pickpocketing, telling jokes, etc. are ways to get in someone's pants and then grab a pizza afterwards. None of which is really implemented as "a goal" as if it were some STRIPS operator or integer or something. We actually operate off of pain and certain chemical reactions (getting horny, biological clocks, etc.). We don't eat to stay alive, we eat because if we don't it hurts. Ditto touching a hot burner or shooting yourself in the eye with a BB gun. So really there's just a couple of hard-wired goals/actions, but it leads to complex behavior. And while it's worked out well enough for evolution, it can be bypassed. Procreation can be avoided by birth control (from simple masturbation and condoms all the way up to surgical vasectomy). And the pleasure/pain thing is easy to manipulate with heroin and, to a lesser degree, with things like coke, caffeine, nicotine, Ritalin, etc.

Also hard coded in the brain appears to be knowledge of physics (with the exception of pivot points and balance), object recognition (trace the edges, which can be done by checking color) and a fear of snakes (that one i found odd). Also hard coded is a lot of blueprint/building instruction for building a brain. Oh, and there's a theory popular with some famous cognitive linguists that language is also hard coded. Not a specific language, but the basic constructs that allow us to speak and which result in all languages being similar on some level (don't know much about this one; i'm not a big fan of language).

Third, we have big brains for a variety of reasons (many of which have nothing to do with intelligence), but a baby's brain is goofy for a couple of reasons. First, it's not big enough to really do much. Unlike other animals, human babies are born kinda worthless. Can't talk, can't walk, can't get a job making shoes. Worthless. But despite that, the brain is so freakin' big it kills many moms trying to squeeze through (and because of the size, women tend to only have one kid at a time; had women been programmed to have 10 kids at a time they'd either all die in childbirth or have hips so wide they'd be physically slow and awkward and would get eaten by lions). So the way humans' brains are created is that you just start with instructions for how to build a brain. Once you get out of the womb and have more space, you start to build the different functions of your brain. The building instructions appear to be of the form "gather this input, analyze it to figure out which neurons to connect together, gather more input, repeat and tweak". Most of the wiring and (i assume) pretty much all of the perceptual/sensory wiring happens in the first few years. The point? If you do not give a baby lots of things to look at, his visual center doesn't get built in the brain. If you take the kid out of the deprivation chamber at age 20 he'd have physically working eyes but i don't think he'd be able to see - the eyes wouldn't be hooked up to the brain, so the brain wouldn't be able to process the inputs.
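That "gather input, figure out which neurons to connect, repeat and tweak" loop is roughly the territory of Hebbian-style learning rules. Here is a deliberately tiny sketch of the idea in Python; the update rule, decay factor and sizes are invented for illustration, not a model of real neural development:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16                       # toy "neurons"
weights = np.zeros((N, N))   # start unwired: structure is built from input
lr = 0.1

def sense_environment():
    """Stand-in for sensory input: random sparse activity patterns."""
    return (rng.random(N) < 0.3).astype(float)

# "gather this input, figure out which neurons to connect, repeat and tweak"
for _ in range(1000):
    x = sense_environment()
    weights += lr * np.outer(x, x)   # units that are active together get wired together
    weights *= 0.99                  # connections that aren't reinforced slowly decay
    np.fill_diagonal(weights, 0.0)

# If sense_environment() returned all zeros (the deprivation chamber),
# no structure would ever form: weights would stay at zero.
```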

Fourth, i don't believe intelligence has to be a stimulus-response thing. For example, dreams don't have sensory inputs. Talking to yourself in your head is the same. And a phone conversation might have sounds, but the intelligent parts of it (knowing what to say to cheer someone up, answer a calculus question or make a funny story) are purely internal. i think. So yeah, i'd say such a person would be intelligent. Might be less functional than a stimulus-response robot but it'd still be intelligent.

Oh yeah, if i remember correctly, much of this knowledge came from some sicko sadists who do actually slice up baby kittens (sever sensory nerves) or put them in sensory deprivation chambers and then see what happens to them


quote:

Machines, that we want to make intelligent, generally have a discrete or simple range of inputs: a string of characters (Eliza), a mouse and keyboard...
The point is, I think it is impossible to make a machine even comparably intelligent to animals, let alone humans, without it having as complex a set of inputs as them. Human and animal reasoning is not done purely with clear-cut symbols or syllogisms, but involving complex patterns and nuances in our observed environment.



The core of intelligence (pattern recognition, learning through repetition and conscious decision making through being conscious) is pretty small. The complex patterns we exhibit are all the add-ons that come from processing lots of patterns. But the ability to process patterns and the behaviors we get from actually processing tons of them are pretty different. Which is why some people are considered intelligent or stupid in a given topic based on experience. i'm smart when it comes to reading and writing (which i do a lot of) but clueless and an idiot when it comes to math (which i avoid like the plague) and music (which my brother is wonderful at creating and which i've never tried). A caveman might seem awfully dumb walking in front of cars, being scared of elevators, not bringing roses on the first date, etc. because he hasn't had the quantity of different experiences needed to operate in those domains.

Speaking of which, a resume/hiring pet peeve of mine - years of experience. If you have 10 years of C++ experience but it's just doing the same 2 or 3 tasks over and over, you're not likely to know more than someone with 5 years' experience who's done 300 different things for a variety of different people and projects. We learn by getting different experiences (good and bad) and comparing them to figure out what the important differences were. Variety counts.

As for the input we give computers, it's actually pretty complex. Certainly it can be made to be so. We already have vision and speech processing systems that get music and sight. But even cutting off its senses, many games are fed lots of generated polygons and light sources and vog orbises or whatever. Plus analog (?) joystick and fancy joystick inputs, network data, keyboard info (which might only be 200 characters, but DNA has just 4 and it's pretty complex; the patterns formed are the complex inputs, not the discrete elements) and mouse movements and all sorts of stuff.
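The DNA aside is easy to make concrete: the richness of an input channel comes from the combinatorics of sequences over time, not from the size of the alphabet. A quick back-of-the-envelope check in Python (the sequence length of 100 is an arbitrary choice):

```python
# Number of distinct sequences of length n over an alphabet of size k is k**n.
n = 100  # a modest sequence length, chosen arbitrarily
for name, k in [("LED on/off", 2), ("DNA bases", 4), ("keyboard symbols", 200)]:
    digits = len(str(k ** n))
    print(f"{name}: alphabet of {k} gives a {digits}-digit count of length-{n} sequences")
```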

Oh, a nifty thing i learned about sensory processing. It happens in stages and with multiple filters. In vision you have a one-second-long "iconic memory" buffer which stores all the raw vision info. A scanner thingy then looks for important information and passes that to the next level of processing (which level is next depends on what you're looking for).

So what is important information? It's information that is important to you, which means it's information that helps you satisfy one of your current goals. That includes knowing about unexpected things, which is interesting (i think) because it means that before you look at something you predict what you're going to see. The things you expected to see are thrown away (not processed, not passed up) unless you have need of them for some reason related to a goal. Unexpected things mean you don't know what situation you're in and thus can't be sure you know how safe you are, so you'd better do an analysis to figure out what gives.

The whole point of this is that passing complex patterns to a computer or human or ferret really doesn't matter unless that thingy has a goal/desire and is looking for something specific. Meaning you don't really perceive without having goals and expectations. Which is what computers certainly don't have today.
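That "predict what you'll see, throw away the expected, pass up the surprises (and anything a goal cares about)" filter can be sketched as a tiny pipeline. This is a hypothetical illustration of the idea only; the Goal class and the tolerance threshold are made up, and no real perceptual system works at this level of simplicity:

```python
class Goal:
    """Hypothetical goal object: it just knows which features matter to it."""
    def __init__(self, relevant_features):
        self.relevant = set(relevant_features)

    def cares_about(self, feature):
        return feature in self.relevant

def perceive(raw_frame, predicted_frame, goals, tolerance=0.1):
    """Pass up only what is unexpected or relevant to a current goal;
    expected, irrelevant input is discarded without further processing."""
    passed_up = {}
    for feature, value in raw_frame.items():
        expected = predicted_frame.get(feature)
        surprising = expected is None or abs(value - expected) > tolerance
        relevant = any(goal.cares_about(feature) for goal in goals)
        if surprising or relevant:
            passed_up[feature] = value
    return passed_up

frame = {"brightness": 0.8, "motion": 0.0}

# Perfect predictions and no goals: nothing gets passed up at all,
# which is the point -- perception without goals or expectations is inert.
print(perceive(frame, {"brightness": 0.8, "motion": 0.0}, goals=[]))           # {}

# A wrong prediction plus a goal that cares about motion: both features pass.
print(perceive(frame, {"brightness": 0.2, "motion": 0.0}, [Goal(["motion"])]))
```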

OK, so on to the whole point of this post (i think):
quote:

Machines, that we want to make intelligent



Question: what would it take to make a machine intelligent? Intelligence is one of those not-agreed-upon terms, so i'm going to define it as "you think it's mentally just like you".

i think machines don't appear intelligent today because a) machines don't have machine-specific goals (we give them goals like "do what i tell you") and b) we don't let machines make self-interested decisions. Basically, we don't give machines much freedom. We open Notepad or whatever and we go in and tell them exactly what they want, what they can look at, how they look at it, what choices they can make and what they'll do with the info. If you tell a computer "add these numbers together and print the sum over here" we don't really give the computer a chance to exercise any intelligence. And most people don't want to program a computer with instructions like "don't worry about me, do what's best for you". If you did, poor Sparky would get awful testy when you upgraded him 18 months later with a faster model. And the last thing we need is to give a Windows computer more reason to crash.
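The contrast being drawn here fits in a few lines. The first function is the usual "do exactly what i tell you" program; the second is a hypothetical goal-driven agent that scores its options against its own objective. The action names and utility numbers are invented for illustration, and nothing this simple is actually intelligent:

```python
# The usual way: the machine is told exactly what to do -- no decisions left to make.
def add_and_print(numbers):
    print(sum(numbers))

# A goal-driven version: the agent ranks possible actions by how well
# each one serves its own (here, completely made-up) interests.
def goal_driven_step(agent_state, possible_actions, utility):
    return max(possible_actions, key=lambda action: utility(agent_state, action))

actions = ["do what the user asked", "defragment myself", "idle to save power"]

def utility(state, action):
    # Hypothetical self-interested scoring: conserve power when the battery is low.
    if state == "low_battery" and action == "idle to save power":
        return 2
    return 1

add_and_print([1, 2, 3])                                  # prints 6, no intelligence required
print(goal_driven_step("low_battery", actions, utility))  # picks "idle to save power"
```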

Anyway, them's my thoughts.

-baylor

Guest Anonymous Poster
Humans are only "living machines": we handle input and respond.

Some of our input types: sound, vision, touch, smell, balance.

And we produce an output, deciding based on the input.

I believe it's important to correct a few errors in baylor's post before others respond and the discussion heads off on a wrong tangent...

quote:
Original post by baylor
So the way humans' brains are created is that you just start with instructions for how to build a brain. Once you get out of the womb and have more space, you start to build the different functions of your brain. The building instructions appear to be of the form "gather this input, analyze it to figure out which neurons to connect together, gather more input, repeat and tweak". Most of the wiring and (i assume) pretty much all of the perceptual/sensory wiring happens in the first few years. The point? If you do not give a baby lots of things to look at, his visual center doesn't get built in the brain. If you take the kid out of the deprivation chamber at age 20 he'd have physically working eyes but i don't think he'd be able to see - the eyes wouldn't be hooked up to the brain, so the brain wouldn't be able to process the inputs.



Core elements of the brain are among the first systems to develop in the embryo, and by the end of the first trimester the brain is functionally developed... in that all of the morphological structures are in place and working at basic levels. So, for example, a second-trimester baby can suck its thumb in the womb (which requires coordinated action to move the arm, extend the thumb, sense that it is in the mouth, stop moving the arm and kick in the autonomic suckling function). Certainly, the baby doesn't yet have higher cognitive functions like being able to read a book, order french fries or chat about the weather, but it does have an active brain that is developing synapses continuously. By the third trimester the baby can respond to external stimulus; it can hear, touch, taste (and hence smell) and see, although sight within the womb is limited to intensity changes. The retinae are fully developed and provide full sensory information to the occipital lobe... it's just that the information is quite simple because of the environment.

We continue to actively grow new synapses up until about 4-5 years of age... and we are still increasing myelination up until about age 40. After that we suffer active atrophy of our white matter (and not just from unnatural causes like alcohol).

It is certainly not true that we only learn outside the womb, nor is it true that we only develop our ability to grow synapses outside the womb... although there is certainly a correlation between learning rate, synapse growth and sensory inflow, which is higher once we leave the womb.

Regards,

Timkin

Thanks for your responses everyone.

stevie56: I was going to mention the Turing Test, but decided that the post was already long enough, and it didn't really affect the point much. The reason I asked...

"if we ever developed a system capable of attaining a reasonable level of intelligence -- how would we ever know?"

...was to reinforce the point that it is probably impossible to do so with trivial or limited inputs.

5010: I agree completely.

Baylor:
Hmmm. Some interesting points here.
I disagree with one point: I don't think the baby could possibly have needs or desires, at least in the sense we know them. I can't see how even primal needs such as food, warmth, love and sex are possible when the impulses are not "grounded" in tangible inputs. The baby could not possibly be aware of these issues consciously, and without any grounding, the subconscious impulses would be impotent and meaningless.
Also I'd be interested in any links to actual research done in that area, by sickos or otherwise, if you can remember where you heard that.

Timkin: Thanks, I didn't know that. I think the biggest weak point in my argument is my lack of biological knowledge -- the answer to my hypothetical questions probably lies as much in biology as anything else.

These points notwithstanding, I still think the thought experiment is a neat illustration of how little chance we have of achieving strong AI with the I/O limitations of current hardware and software used for agents and robots.

Today's advanced symbolic reasoning software typically uses some or all of production systems, frames, semantic nets, predicate calculus and classification trees -- all useful tools -- but all so clean-cut, so limited. They use vastly oversimplified symbols representing real-world concepts, yet with absolutely no real-world grounding for the agent. How can an agent build its own patterns and beliefs without any complex real-world grounding to learn from? Currently they regurgitate derivative data in a narrow scope fed to them by a programmer.
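To make the "clean-cut, ungrounded symbols" point concrete, here is roughly what a production-system rule amounts to, as a toy Python sketch rather than any particular engine's syntax. Note that "hungry" and "has_food" are just strings; nothing in the system connects them to anything the agent has ever sensed:

```python
# A toy forward-chaining production system: condition set -> action.
rules = [
    ({"hungry", "has_food"}, "eat"),
    ({"hungry"}, "seek_food"),
]

def fire(working_memory):
    """Return the action of the first rule whose conditions all hold."""
    for conditions, action in rules:
        if conditions <= working_memory:   # subset test: all conditions present
            return action
    return None

print(fire({"hungry"}))               # -> "seek_food"
print(fire({"hungry", "has_food"}))   # -> "eat"
```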

As for neural nets -- not that I want to get baylor started on them again -- Matt Buckland's site mentions a researcher who unsuccessfully tried to hardwire 2 million neurons together to get the intelligence of a cat. I haven't read anything else about this, but I'm willing to bet his project didn't even make a token stab at simulating the vast range and complexity of inputs that a cat has.

You feed it peanuts, you get monkeys, so to speak.

Except monkeys are smart.

Ken Scambler




"So my question is -- if we ever developed a system capable of attaining a reasonable level of intelligence -- how would we ever know?"

So... the consensus seems to be that sensory input is everything (or at least a very large part) in developing intelligence?

On that basis, then, a person born deaf and blind cannot be, or become, intelligent?

A person born without limbs cannot wield tools, so cannot be intelligent?

A dumb person cannot answer questions vocally, therefore cannot be interviewed to assess their intelligence?

I think the Turing Test (formulated in the '50s) is as near as we're going to get to testing whether a machine is intelligent. The point is that if, after a conversation, we cannot tell from the responses (no matter how elicited) whether we are addressing a human or a machine, then we may have a machine that is intelligent within the domain of the discourse.

After all, we test our kids at school, don't we? All those tests are limited and specialised, intended to produce limited and specialised results which may have long-lasting effects on the child's future development.

Seems to me if it's good enough for a child, it's good enough for a machine?

And, just to follow the logic a step further, a professor is deemed to be intelligent when he's awake, but not when he's asleep or in a coma?

Leads me to ask at what instant does the light click on or off?



Stevie

Don't follow me, I'm lost.

First, let me just say that this is a fascinating thread!

The usual disclaimer: I am not a psychologist, but...

Firstly, I disagree with the whole premise that an AI would be limited by the fact that the inputs (mouse, keyboard, etc.) are so limited and/or primitive. An AI that could only interpret keyboard input could theoretically master a vast number of languages and dialects, and movements of the mouse could be interpreted as a language. What's the difference between a subtle mouse movement and a violent one? When are subtle movements used, and in what direction?

Also, the AI is a computer simulation of intelligence, so why can't the stimuli that it feels also be a simulation? I can construct my own world around it, with its own set of physical and mathematical laws, and let it explore. Or I could feed it Project Gutenberg. Or I could give it knowledge of the HTTP protocol and point it at Google.

In addition, the whole cause-and-effect nature of synapses in the brain means that one synapse firing and affecting others counts as an input. The more connections in the brain, the more inputs, and also the more feedback from these inputs. This is what makes us different from animals - we've got more brain cells. I'm not saying that you don't need input, but even simple inputs can be used.
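The "internal activity counts as input" point is essentially what a recurrent network does: the previous state is fed back in alongside (or instead of) the external signal. A minimal sketch, with sizes and scaling picked arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_external, n_internal = 4, 32
W_in = rng.normal(size=(n_internal, n_external)) * 0.5
# Recurrent weights scaled so internal activity neither dies out instantly nor explodes.
W_rec = rng.normal(size=(n_internal, n_internal)) / np.sqrt(n_internal)

state = np.zeros(n_internal)
for t in range(50):
    # One pulse of external input at t = 0, then silence.
    external = np.ones(n_external) if t == 0 else np.zeros(n_external)
    state = np.tanh(W_in @ external + W_rec @ state)
    # The state keeps evolving after the input stops, because the previous
    # state is itself an input -- the feedback described above.

print(np.abs(state).mean())   # still non-zero long after the external input ended
```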

Baylor: your post is really interesting, and I love the idea of the early brain being a set of instructions on how to build a better brain, with the capability to reason, have emotion, speak and build microprocessors. I disagree that people have an inbuilt knowledge of physics; rather, they have the ability to learn about the physical world around them _very_ quickly (mixed with what we call common sense). If we had an inbuilt knowledge of physics, Galileo wouldn't have had to tell us that heavy objects fall at the same speed as light ones, and it would make the topics of quantum mechanics and general relativity less mind-bending. Same with snakes, I suppose: less an inbuilt fear, more the foundations of a general fear of things that can hurt us, and we "learn" to fear certain things, even if it seems irrational later on.

Having said that, it makes sense that things like sex and food are inbuilt. Without these core impulses the species might die out pretty quickly. The baby may not realise that it wants food, but it feels pain because it's hungry. When it feeds it feels less pain, so it associates hunger with food. In the same way it feels sexual urges when looking at someone, and associates these feelings together.

Oooh. Long post.

[teamonkey]

quote:
On that basis, then, a person born deaf and blind cannot be, or become, intelligent?

A person born without limbs cannot wield tools, so cannot be intelligent?

A dumb person cannot answer questions vocally, therefore cannot be interviewed to assess their intelligence?


Good point. I was thinking about that one actually.

A good example there is Helen Keller, who became deaf and blind in early childhood, and still learned to read (Braille), write and speak, and went on to become a famous social campaigner and rights activist.

However, even someone with her formidable handicap still has a huge range of complex inputs in touch, smell and taste, far more than even the most sophisticated machine today, and certainly enough for the brain to develop causal reasoning faculties.

I don't see any reason why Helen Keller would be any less intelligent than anyone else because of her handicap; so I doubt the relationship between I/O complexity and intelligence is linear -- but I expect there is a lower limit to I/O complexity below which intelligence cannot develop fully, even given otherwise favourable circumstances. Mankind just hasn't built anything above that limit -- yet.

Having said that, I agree that I/O wouldn't be the only, or even the biggest, factor in developing intelligence. It just seems to be an important factor that always gets overlooked.

Ken Scambler

quote:

Baylor: your post is really interesting, and I love the idea of the early brain being a set of instructions on how to build a better brain... I disagree that people have an inbuilt knowledge of physics, but rather that they have the ability to learn about the physical world around them _very_ quickly



i'm not a psychologist either, although i'm going to start taking some classes in the fall in hopes of becoming one.

As for the inborn fear of snakes and knowledge of physics, it sounded pretty goofy to me too. i think i read about that in How the Mind Works by MIT psychologist Steven Pinker (i know someone earlier in the thread asked where i found some of this stuff). The first 1/3 of the book is wonderful (the last 1/3 is pretty mediocre). The stuff on physics was done by testing babies (3 months old?) to see if they were amazed or unfazed by certain things. Example - imagine a board/piece of paper in the middle of your view. On the left, roll a ball behind it. If the ball comes out on the right, the baby doesn't care. If the ball goes in on the left and DOESN'T come out on the right, the baby is shocked. Ditto if a red stick showing on the right and the left of the paper moves in tandem (indicating the two ends are connected/one piece); the baby is surprised if they don't move together. If they do move together and then you lift the paper and show that they're two separate sticks, they're also surprised (something about assuming that objects that move together and look similar are the same object; like how you see a guy in a suit's head, hand and suit and you can't physically see that the hand and head are connected but you just "know" that they are). And some other experiments on some physics stuff. Don't ask me, i don't know physics.

Anyway, the Pinker book and a few other books i've read cover the same stuff. Pinker also has a book called "The Blank Slate" or something like that that supposedly talks about which things are hard wired and which are learned.

As for learning out of the womb, you should probably go with what Timkin said since he's been doing this longer than i have. But what i got from reading the books is that a lot of what we're born with is instructions on how to build the brain (well, the basic structures are all there but not hooked up/tuned) and that the instructions ask you to get input from the environment (sight, sound, etc.) to tune those connections. Sounds like some things like thumb sucking are learned in the womb (didn't know that).

Again, i forget exactly where i got most of that info (and if i'm quoting it right) but the Pinker book is the one i like most. i've also been reading an Intro to Cog Psych textbook written by some UNLV professor. It's a standard college survey textbook but i'm halfway through and think it's pretty interesting and well written. i haven't read too many advanced books (except maybe Anderson's Architecture of Cognition, on how to build a mind in Lisp) but the intro/pop stuff i've read has been pretty interesting. To me at least. And not a single word on neural networks.

-baylor

Guest Anonymous Poster
I love posts like these. We need more AI people with minors in psych or philosophy.
quote:
Original post by baylor
For the last 4 years i've been an AI student and quite a while back came to this conclusion - you don't study computer science to learn how intelligence works. Computer guys tend to be math and engineering people and that's what they've studied. If you want to know how the human mind works, you ask the people who study that, which i believe are the psychologists.

Their insights are helpful, and you've definitely been around the engineering types, but what's more fun is asking *yourself* 'what is the human mind?' and 'why does it work?'. If you can't tell, I'm a big fan of the paradigm of one genius coming around every once in a while and turning everything on its head (revolution vs evolution).

quote:
So over the last year i've been trying to read up on psych. Problem is, i've got 12 years of experience with computers, 4 with AI and only a couple of months reading intro psych books, so i don't know much about it. But is that going to keep me from making some comments? Heck no!

Doesn't stop me either; you'd be surprised how little I officially know (which I won't tell anyone; I prefer being heard, still working on being listened to).

quote:
Fourth, i don't believe intelligence has to be a stimulus-response thing. For example, dreams don't have sensory inputs. Talking to yourself in your head is the same.

Ah, it all starts with assumptions, so keep the belief but justify it differently. There are sensory inputs here; you provide them. In fact, that's what all of our intelligence as humans does: we sense the world, interpret it, create a model, and operate on that model.

Baylor - I've had an idea, and I think you could agree that a really smart AI program might be only a few megs yet be tremendously brilliant thanks to its much larger data files (kind of like games today).

quote:
Original post by teamonkey
In addition, the whole cause-and-effect nature of synapses in the brain means that one synapse firing and affecting others counts as an input. The more connections in the brain, the more inputs, and also the more feedback from these inputs. This is what makes us different from animals - we've got more brain cells.



Not true. I verified this with one of the researchers in my lab (who works on brain volume studies in humans and changes during the human life span): the elephant has the largest brain, with certain whales coming a very close second... adjusted for body mass, dolphins probably have the largest brains of all creatures. Humans stand out only insofar as they have a high white-to-grey-matter ratio... but again, not so much greater as to explain our supposed difference in intellect from animals.

My own personal opinion is that we aren't as different from certain animal species as we would like to think we are... the problem in accepting this lies in our inability to assess intelligence objectively and our anthropomorphic and sociological bias as the 'dominant' species of the planet.

Timkin

Intelligence is a very abstract concept. It is a human-made conception and therefore hard to apply to non-human things. Intelligence is almost impossible to measure, which makes it hard to do serious, valid studies and comparisons of it. Is it a sum of accumulated knowledge? Is it how we interpret the knowledge we have? Is it both? Certainly, most would agree Albert Einstein was intelligent. Was he just as intelligent when he was 1 or 2 years old? Again, it depends on what you define intelligence to be. I would say yes, as I define intelligence to be 'productive/innovative ways of using the knowledge you have, not the raw retention of information'. An encyclopedia has a lot of information, but does nothing with it, and is not intelligent. Also, the social definitions of intelligence change, which makes it hard to know what is real or not. A few hundred years ago, you were considered a complete idiot for thinking the world was round. So in that situation, what is more intelligent: to preach that the Earth is round (which it is), or not to buck the system, save yourself being ridiculed or executed, and just say it is flat? Fuzzy example, but clearly more than one option for the same choice can be considered intelligent. So merely defining the word is a chore all in itself.

How to tell when a machine is intelligent is an even trickier problem. First, get out of the mind-set that the computer needs to be 'human-like' in behavior and thought to be intelligent. As many of you mentioned, unless the machine has the same sensory inputs we have, the same societal influences, the same weather, time of day, heat, cold, etc. that we have, it will never interpret the data the same way. A machine will walk into a blizzard with no coat on. Is that dumb? For a machine, no; it won't care about the cold, but a human would. So understanding the context you want this intelligence to function in is crucial to determining its very existence. Defining that context is really the most important step. Is my tic-tac-toe AI intelligent? It can play a game and never lose, so in its context, it is totally genius. But no, it will never cure cancer. =)
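For what it's worth, the never-losing tic-tac-toe player is a nice concrete case: plain minimax search over the whole game tree is one standard way to get one. A generic Python sketch (not claiming this is how that particular AI works):

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """board is a 9-character string of 'X', 'O' or ' '."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s point of view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                      # board full: draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, opponent)
        score = -score                      # the opponent's best case is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# Within its context this player is "totally genius": it never loses.
print(minimax(" " * 9, "X"))   # perfect play from an empty board is a draw: (0, 0)
```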





quote:
Original post by Timkin
Not true. After verifying this with one of the researchers in my lab (who works on brain volume studies in humans



OK, this is completely off topic, but i didn't know you could measure brain volume. More specifically, i've been trying to play with a theory i made up that autism is caused by an equalization of the hemispheres, but to even begin to test that i'd need a way to measure left and right hemisphere size. i called around a couple of autism researchers with access to MRI but was told they didn't know any way to measure hemispheric volume. i would love to know how your friend does it. Preferably while leaving the subject alive (her mom would kill me if i cracked open her cranium).

Hey, i said it was off topic


quote:

My own personal opinion is that we aren't as different from certain animal species as we would like to think we are



That was always my assumption; then i had lunch with a lady at MIT who wrote a book on parrot intelligence (The Alex Studies, i think) who told me i was a bozo. Another book i read confirmed this oft-believed opinion.

Here's my understanding, once more based on something i read once.

Your brain has several duties:
- thinking
- storage
- perception

So not all of those neurons do thinking. Some are like a big, wet hard drive or a whole bunch of Post-it notes. Bigger for them = longer memory. So someone with a smaller brain could theoretically (i assume) be just as smart as you but not remember his 3rd-grade teacher as well as you do.

Another good chunk of your brain is I/O. For example, pick a point on your skin. There are probably two neurons dedicated to it (one input, one output). The more skin you have (or more senses, more smells you can detect, more visual processing you do, etc.), the more neurons and brain mass you need.

So this book i read mentioned something really fascinating. Brain cells are dedicated to functions. It's not one big mass of intelligence, it's a bunch of modules (sorta), one for each function you have. We have a much smaller brain area than pigs when it comes to smelling. Eels have a sense we don't (the ability to sense electrical fields) and thus a chunk of their brain goes into that. We have an incredibly large chunk of our brain dedicated to vision - 1/3 i believe. A third of our brain to do nothing but look at stuff (which gets used by other systems in other ways for things like dreaming, remembering and thinking; co-option). But it's actually decreased from papa monkey - chimps have 1/2 their brain dedicated to vision. Why? Because they need better eyesight than us. According to the Pinker book, monkeys running and jumping through trees need to be able to quickly pick out branches and evaluate their weight capacity so they know where to jump to next. Human brains aren't fast enough to swing through dense forest.

So rather than being just a big honkin' general processing unit like a CPU, a brain's mass is divided between I/O, storage and a series of specialized modules that do specific things.

Which raises the completely non-game-AI-related question: what would happen if our brains were bigger or smaller?

If smaller, maybe we wouldn't be able to remember something that happened 40 years ago. Babies and kids have smaller brains and thus maybe they can't remember something that long ago. Good thing you don't have to when you're 4.

If it was smaller, maybe it would take us a little longer to find Waldo. Maybe we'd be more susceptible to optical illusions (like M.C. Escher drawings). Maybe we'd enjoy pro wrestling.

With bigger brains, maybe we could drive faster through parking lots. Why? Your brain (hopefully) has the goal "don't hit anyone". So your attention center tells your eyes to look for possible places children could dart out from. In a parking lot, that's a lot of places. You get overwhelmed if you try to pay attention to/keep track of all of them. So you drive slower to compensate. Maybe with a bigger brain we could listen to two people talk at the same time. We could do three non-procedural tasks (tasks that have not yet turned into automatic behaviors/reflexes) at the same time. Maybe we could even talk on cell phones and be safe drivers.

Or maybe we'd be just the same, only we'd be able to handle another sense like infravision or sonar or sensing the magnetic poles.

Maybe we'd look 5 moves ahead in chess rather than just 4. Maybe we'd be able to add 10 numbers together without using paper or creating intermediate sums (i.e., no problem decomposition). Maybe we'd remember names more easily, never ask someone to repeat a phone number, or be able to keep a 20-item shopping list in our head.

Anyway, the point is that you can't compare two brains by just volume. It'd be like comparing the weight of a carburetor to a Mr. Coffee. If things were better just because they were bigger, my mom would be the greatest thing in the world. But sometimes size doesn't matter. Much.

-baylor, who is so off topic his head will explode

Warning: OFF TOPIC

Ignore this post if you aren't interested in brain volume.

quote:
Original post by baylor
OK, this is completely off topic, but i didn't know you could measure brain volume. More specifically, i've been trying to play with a theory i made up that autism is caused by an equalization of the hemispheres, but to even begin to test that i'd need a way to measure left and right hemisphere size. i called around a couple of autism researchers with access to MRI but was told they didn't know any way to measure hemispheric volume. i would love to know how your friend does it. Preferably while leaving the subject alive (her mom would kill me if i cracked open her cranium).



Wow, I'm really surprised that they didn't know how to measure brain volume from an MRI. It's trivial. The scan protocol dictates the number of slices, slice thickness, slice separation and the voxel resolution per slice... so the volume is a trivial computation: basically a summation of voxels of known volume.
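For anyone curious, that "summation of voxels of known volume" really is a couple of lines once you have a segmented scan. A hypothetical sketch using the nibabel library (the file name and the assumption of an already-computed binary brain mask are mine, not Timkin's):

```python
import nibabel as nib   # common Python library for reading MRI volumes

# Hypothetical input: a scan already segmented into a binary brain mask.
img = nib.load("brain_mask.nii.gz")
mask = img.get_fdata() > 0.5                  # True where the voxel is labelled brain
dx, dy, dz = img.header.get_zooms()[:3]       # voxel dimensions in mm, from the header
voxel_volume_mm3 = dx * dy * dz

brain_volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0   # 1 ml = 1000 mm^3
print(f"brain volume is roughly {brain_volume_ml:.0f} ml")

# A per-hemisphere volume is the same sum restricted to a left/right label map;
# the hard part is the segmentation and registration, not this arithmetic.
```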

As to comparison of brain volumes, the scans are aligned in a standard space wherein volume comparisons can be made. My boss, for example, has an exceptionally large brain! I'm still waiting for an opportunity to get a free scan... basically I have to wait until we do another study and they need a 'normal'. I won't tell 'em I'm not normal if they don't ask!

quote:
Original post by baylor
That was always my assumption; then i had lunch with a lady at MIT who wrote a book on parrot intelligence (The Alex Studies, i think) who told me i was a bozo. Another book i read confirmed this oft-believed opinion.



There are certainly many animal species that don't display the same sorts of intelligence as the primates, cetaceans, etc. That doesn't mean you (or I) are bozos for believing that human intelligence is similar to animal intelligence... indeed, we're not alone in this thought. I've forgotten the name of the researcher (although I could dig it up if need be), but there was an excellent study done of orangutans kept in captivity. One particular subject displayed many of the traits you would expect of a human. For example, he (I think it was a he) found a small piece of wire in his enclosure and hid it so that the keepers wouldn't find it. When they were gone, he tried to open his cage with it. He varied its shape by bending it and kept working on the lock. When they would return he would hide the 'lockpick' again and return to his break-out attempts when they left. At some point he was transferred to another zoo for some mating fun and came back some time later (I think it was around 2 years, but I'm not certain). At the first opportunity when he was alone, he went back to his hiding place, retrieved his tool and started picking his lock again. He succeeded and broke out. He had a short bout of freedom before being captured again. His scheme was found out when the keepers searched his enclosure and found the lockpick. They checked recent video footage of his enclosure and saw what he had been doing. I believe they even verified it by leaving another piece of wire lying around for him to find and monitoring his activities.

Now THAT is one smart primate. He displays abstract thinking and reasoning (that the wire would be useful as a tool, that the tool could be used on the lock, that this would bring freedom), long-term memory, subterfuge and deception, etc. I was completely amazed when I first read about this.

There are many other stories like this that cannot be put down to mimicking humans. Unfortunately, the mindset of the 18th and 19th centuries still pervades modern thought, in that only humans could be intelligent. Heck, only 100 years ago some researchers thought that anyone who didn't have white skin wasn't intelligent and was a lesser species than humans.

quote:
Original post by baylor
Here's my understanding, once more based on something i read once.

Your brain has several duties:
- thinking
- storage
- perception

So not all of those neurons do thinking.



I think we'd be hard-pressed to find any neurons that do thinking! By that, I'm expressing my belief that higher cognitive functions are an emergent property of lower-order functional operations.

As to the different parts of the brain... yes, there are many functionally different parts of the brain. The research area of fMRI (functional Magnetic Resonance Imaging) is concerned with mapping the regions of the cortex that correlate with functional activities of the body (speech, sight, hearing, movement, etc.). In addition to this, there are groups of tissue within the brain that are structurally different as well as functionally different from each other. These are given names to identify them, for example the hippocampus, amygdala, thalamus, hypothalamus, cerebellum, pons, etc. These different regions are known to be responsible for certain aspects of the brain's and body's functions. For example, the cerebellum is generally thought to be where repetitive actions are turned into conditioned responses and stored. The hippocampus is a centre for memory (particularly short-term memory). The thalamus is the communications hub of the brain. Nearly all signals from the body are routed through the thalamus, and many spatially isolated areas of the cortex are connected through thalamo-cortical neuronal loops. It is the activity of all of these different areas of the brain - sometimes working in unison, sometimes working in isolation - that gives us life. As to how consciousness arises from that... well, I have my theories, but if I truly knew the answer, I'm sure I'd be quite famous!

quote:
Original post by baylor
Anyway, the point is that you can't compare two brains by just volume. It'd be like comparing the weight of a carburetor to a Mr. Coffee. If things were better just because they were bigger, my mom would be the greatest thing in the world. But sometimes size doesn't matter. Much.



True. Although there is a correlation between relative brain size and perceived intelligence (i.e., as related to intelligence testing... but not just IQ). Of course, being intelligent and/or having a larger brain doesn't mean you're successful, happy or well liked. It's how you use what you're given that's important!

Cheers,

Timkin

"OK, this is completely off topic, but i didn''t know you could measure brain volume"

Take one head, preferably dead, a hacksaw and a bucket of water...
