human intelligence

I think the thread name answers the question. Machines will never be human; a machine may have almost all the attributes of a human, but it will never be human.

That in no way answers the question about whether it is possible to design a machine that surpasses human intelligence.

Try recreating the function the eye and brain perform in less than a microsecond. Could you squeeze up to 1,000,000 microchips into a plate thinner than a millimeter?
You could try making a machine that's smarter than a child.
It takes an 83,000-processor supercomputer 40 minutes to simulate 1% of the brain (over 220,000 PCs to do what a single, small organ does in less than a second, every day for the rest of its life).

Yes, a human can't solve mathematical problems as fast as a computer, but that's because the human brain is a massive general-purpose organ. It efficiently, effectively, and intelligently manages all of the body's numerous systems. You could try simulating in one second what the brain does in a second with any single computer (super or otherwise) and see what happens.

If the brain were to do just mathematical problems, or any other single task, then comparing it to any computer would be like comparing a Bugatti Veyron to a snail.

You've never seen a human suffer from Low Memory Syndrome (LMS). Soak a computer in water for 30 minutes and see.

Whenever you do make a machine that can do all of this, try making it human-like as well. It shouldn't suffer memory overload, overheating, runtime errors, hanging/crashing, system shutdowns, and so on because it's using too many parts. Then it may be possible to think about making something better.
From 2050 to 2100 to whenever.


Machines with human intelligence are not going to be based on current technology. A human being *is* a machine, created by random evolution. If nature can come up with human intelligence by accident, we can eventually improve on it by design, if enough resources are spent on the problem (i.e., if we actually want machines that can think).

That does require working out exactly how the brain functions though, which is where the difficulty lies.


Soak a computer in water for 30 minutes and see.

What does that have to do with anything? Try running 240V through a human for 10 days straight and see how they function. Blathering on about transistors and human eye function adds nothing to the conversation. It's just repeating what's already been said.


You could try simulating in one second what the brain does in a second with any single computer (super or otherwise) and see what happens.

The only reason we can't simulate the brain effectively is because we don't understand it.

Agree. If I wanted to calculate 1000000!, I wouldn't ask a mathematician to start sharpening his pencil; I'd write a library, and I'd still beat the human.

EDIT: Even if the library was inefficient.
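Just to make that concrete, here is a minimal Python sketch (nothing assumed beyond the standard library). The exact value of 1000000! runs to roughly 5.5 million digits, yet the built-in routine produces it on the order of seconds, and even a deliberately naive loop still beats any human by an absurd margin:

```python
import math
import time

def naive_factorial(n: int) -> int:
    """The 'inefficient library' of the post: a plain loop over big integers."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

assert naive_factorial(10) == math.factorial(10)  # sanity check on a small input

start = time.time()
value = math.factorial(1_000_000)                 # exact, arbitrary-precision result
elapsed = time.time() - start
approx_digits = int(value.bit_length() * math.log10(2)) + 1
print(f"~{approx_digits:,} digits computed in {elapsed:.1f} s")
```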


You are the human. And yes, "you" (a human) would have to write it.

They call me the Tutorial Doctor.

LennyLen, he does this regularly. It seems like he just reads the first sentence and then bases his whole reply on that. He comes across as a borderline troll sometimes.

As for AI in machines surpassing humans: all I have to say is that what was science fiction in the past became science fact. I wouldn't knock anything. The problem of a Skynet scenario would become very real, though, because if you make the AI smart enough to learn, it would learn how to bypass its safety protocols (become self-aware). As for whether a machine can be made that surpasses a human's intelligence... it isn't a matter of if or can, but rather when.

Can you? That's impressive! Tell me, what was the exact shade of color exactly 1/8th from the top of your eyesight radius? What were the exact dimensions of the blades of grass around you?

Sigh. This had to come.

Sadly, it only shows that you didn't read my post properly, nor do you understand (or you deliberately pretend not to understand) how human perception or the human mind works in any way.

It is obvious that even a below-average human's intelligence is superior to very advanced artificial intelligence, but it is also obvious that the human ability to memorize quantifiable data is negligible compared to a computer's. I would most certainly fail trying to memorize the first 100,000 primes even if you gave me three months of time. To a cellphone-sized computer, this is no challenge. However, computers likewise fail at pathetically trivial tasks. Such a comparison is largely meaningless.
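To put a number on the "no challenge" part, here is a minimal sketch (plain Python, nothing beyond the standard library) that generates and stores the first 100,000 primes in a fraction of a second, a memorization feat no human will ever match:

```python
# Sieve of Eratosthenes. The 100,000th prime is 1,299,709, so sieving
# up to 1.3 million is enough to capture the first 100,000 primes.
def primes_up_to(limit: int) -> list[int]:
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            # Mark every multiple of p starting at p*p as composite.
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

first_100k = primes_up_to(1_300_000)[:100_000]
print(len(first_100k), first_100k[-1])   # 100000 1299709
```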

My post said that a memory-prowess challenge in which the human beats a laptop is very unsuitable for demonstrating the superiority of one over the other. The challenge is so simple that it is tempting to "prove" something that is obviously wrong: that humans are better at remembering things (as "proven" by my example).

The "proof" by child memories still stands.

No, I don't remember the exact shade of some pixels on my retina 30 years ago, or the number of grass blades anywhere. The reason is that my visual organs have no concept of pixels, nor of exact shades, and neither does my brain. Besides, a computer is not able to reliably answer the question "how many grass blades are in this image" either, even without having to remember the number, even when it explicitly tries to (unlike me, who explicitly tries not to remember that information).

My brain, like the vast majority of human brains, receives a pre-integrated, contrast-enhanced (horizontal/bipolar cells) and modulated (ganglion cells), fault-corrected signal coming from a very non-uniform sample grid with a very non-uniform color reception and a very non-objective automatic luminance regulation. Plus, superposition of two images from different viewpoints combined in one.

The brain somehow transforms this... stuff... into something, which it selectively filters for information that is important for the present situation. That is what I "see". It is not an array of pixels of some particular shade, not even remotely.

This is a key to survival and to managing everyday situations. The brain then selects what part of this information (and other information) is important for the situation and how much of it, if any, is important to remember. This involves several circular propagations through a more or less hardwired system, attenuated or amplified by some metric which somehow involves emotions and olfactory senses and some "recipe" which so far nobody understands. There are several "layers" of storage (not just short-term and long-term memory) as well. That is what I "remember".

It works the same for all "properly working" humans.

Trying to compare this process to image data as picked up by a camera and stored in a computer is meaningless. It's like comparing a cow's ability to fly an airplane to a pair of scissors' ability to produce eggs.

No, I probably can't remember 4,000 events either, though maybe I could; who knows. My memories are not stored in an array, and I am not counting them, so it is hard to tell how many there are. However, it is also meaningless to try to find out. Human memory, just like perception, is highly selective in what is stored (at least in "properly working" humans; there exist a few individuals for whom this isn't the case, and they are seriously troubled every moment of their everyday lives). This selectivity is essential for survival. The brain is supposed not to store all information; this is by design.

On the other hand, it is also highly fault-tolerant. You are still able to properly identify most things almost all the time if you acquire a retinal defect later in life (provided it's not a 100% defect). Humans can still perform this task rather trivially, and with a very low error rate, having lost one eye completely and having lost upwards of 50% of the remaining eye. Try to make a computer match data with a noise ratio upwards of 75%. Or try Google's "similar images" search and see what you get, for that matter.

It is however meaningless how much of my eyesight I could lose, whether or not I can remember 400 or 4,000 or 40,271 events in my life, or whether I can remember some particular shade of some color. A computer is entirely unable to reproduce most of this kind of memory either way, so there is no base for comparison in the first place.

A computer could, however, conceivably reproduce a memory (or a ruleset, or other information) such as "fire is hot", "hot not good for your hands", or "things you drop fall to the ground", or "eggs only have limited support for microwaving", or "you can put a sphere into a circular hole".

These basic rules/patterns/facts are all things which most people learn in childhood. Also, they are things that not only the most advanced human, but even humans which are of quite sub-average intelligence reliably remember to the end of their lives.

Like most children, I had to learn multiplication tables in school. Unfortunately, all present-day computers have arithmetic hardwired, so this isn't very suitable for a "memory" comparison (but maybe you can still find a functional Z80?). Even if it were, my grandfather would still win, since there is no 85-year-old computer in service (and certainly there are, worldwide, less than a handful of computers older than 20-25 years in uninterrupted service without replaced hard disks, etc.).

Being able to remember a single event/fact/ruleset over 40/80/100 years will show "superiority" over the computer according to the given challenge, since 1 > 0, and so far hardly any computer can remember anything from 40 years ago (if at all), and none can remember anything from 60, 80, or 100 years ago. But even leaving aside the fact that computers haven't existed that long, the most advanced computer isn't nearly as capable as a very much sub-average human, and it definitely has not been and will not remain functional nearly as long as the average human (not without replacing the "brain" and restoring data from backup anyway, which is cheating).

To go deeper into detail why such comparisons are meaningless, consider the following:

In A.J. Hanson's book Visualizing Quaternions, there is an example he refers to as the "urban legend" of an upside-down F-16. According to the legend, the on-board computer would turn the airplane upside down when crossing the equator because the sign of the latitude flipped. The author says he could not find a reference as to whether this actually happened, or whether it only happened in simulations (hence "legend").

It makes no difference whether it happened for real or only in a simulation (it's the same thing for the computer!). The point is that an intelligent being would be immediately aware that turning the airplane upside down for no apparent reason (and in defiance of the visible horizon and the gyroscope) is a nonsensical decision, and that something must be wrong.
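Purely as a hypothetical illustration (this is not taken from Hanson's book or from any real flight software), the kind of logic behind such behavior could be as simple as a roll command keyed to the sign of the latitude:

```python
# Toy sketch, entirely hypothetical: a roll command derived from the sign of
# the latitude is perfectly consistent in one hemisphere and flips the
# aircraft inverted the instant the sign changes. The calculation is
# "correct", yet the decision is obviously nonsensical.
def desired_roll_deg(latitude_deg: float) -> float:
    # Naive assumption baked in: "positive latitude means fly upright".
    return 0.0 if latitude_deg >= 0.0 else 180.0

print(desired_roll_deg(0.0001))    # 0.0   (upright, just north of the equator)
print(desired_roll_deg(-0.0001))   # 180.0 (inverted, just south of it)
```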

This legend is very similar to an actual event in which a civil airliner left a trench a couple of hundred meters long in a forest at the aircraft's first public demonstration.
The initial story was that the pilot performed a show-off maneuver which went slightly over the allowed tolerances, and when he pulled the stick, nothing happened. The on-board computer had deemed that the maneuver wasn't good for the airplane (of course, crashing into a forest isn't precisely good either, but the computer failed to see that). This was later settled in an official statement, backed by the (presumably well-paid) pilot, which said it was a mere "piloting error".

Both events are examples of how being able to perform calculations and being intelligent are not the same thing.

Similar can be said about pattern matching. Computers are much better at finding a fingerprint in a database than a human would be. They are also much better at identifying a person's face in a crowd.

However, the police still have every "hit" verified by a human, and biometric passports need to be sourced with photographs with a very specific layout and very exact placement. Why is this the case?

The reason is simple: Computers are not better at the job. They are faster at doing calculations. They are thus better at finding some statistical match out of a large number of samples, given a precise human-generated metric and well-chosen comparison patterns. Their results may or may not correlate with an actual match.

Every so often (and often enough to be significant), the computer will report a match where the human reviewer will immediately see that the match is total bollocks. Similarly, the computer only achieves reasonably good outputs if given high-quality, standardized patterns to match against.
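A toy base-rate calculation (the numbers below are made up for illustration) shows why the human reviewer is indispensable: even a matcher with a seemingly excellent per-comparison error rate returns a steady trickle of bogus hits when run against a large database.

```python
# Toy numbers, purely illustrative: a matcher with a 1-in-a-million
# false-positive rate, run against a database of 50 million faces, is still
# expected to return about 50 spurious "hits" for every single query.
false_positive_rate = 1e-6      # assumed per-comparison error rate
database_size = 50_000_000      # assumed number of enrolled faces

expected_false_hits = false_positive_rate * database_size
print(expected_false_hits)      # 50.0, and each one needs a human to discard it
```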

Average humans are not able to match ten thousand faces per second, but they are able to identify/recognize another human from very deficient input patterns with a surprisingly low error rate. Women especially are exceedingly good at face-matching (don't ask me why; someone might come up with a hunter-vs-breeder evolutionary theory, but since sex depends merely on one chromosome, I'd wager that unless face recognition is coded on the X chromosome, there's hardly a way this could be the reason).

Either way, try to have a computer recognize a face from a 30° angle above front view when it has only seen that person from the side before. Or try to get a positive recognition of someone looking away in almost the opposite direction. Women still get it 99% right even in absurdly bad conditions (and they do it without having trained on the task in particular, and without someone else writing a specialized "program" for them to handle that border case).

Good post, Samoth. I acknowledge computers are faster and more efficient at some things than humans are. But as far as being "more intelligent" goes, they just don't come close. Perhaps we should first have sorted out a definition of "intelligence" we can all agree on.

They call me the Tutorial Doctor.

