lomateron

human intelligence

84 posts in this topic

I think the thread name answers the question. Machines will never be human; they may have almost all the attributes of a human, but they will never be human.

I think the thread name answers the question. Machines will never be human; they may have almost all the attributes of a human, but they will never be human.

 

That in no way answers the question about whether it is possible to design a machine that surpasses human intelligence.


I think the thread name answers the question. Machines will never be human; they may have almost all the attributes of a human, but they will never be human.


That in no way answers the question about whether it is possible to design a machine that surpasses human intelligence.
Try recreating the function the eye and brain perform in less than a microsecond. Could you squeeze up to 1,000,000 microchips into a plate thinner than a millimetre?
You could try making a machine that's smarter than a child.
It took a supercomputer with roughly 83,000 processors 40 minutes to simulate one second of just 1% of the brain's activity (over 220,000 PCs to do what a single, small organ does in less than a second, every day for as long as it's in use).

Yes, a human can't solve mathematical problems as fast as a computer, but that's because the human brain is a massive general-purpose organ: it efficiently, effectively, and intelligently manages all of the body's numerous systems. Try simulating in one second what the brain does in a second with any single computer (super or otherwise) and see what happens.

If the brain were dedicated to just mathematical problems, or any other single task, then comparing it to any computer would be like comparing a Bugatti Veyron to a snail.

You've never seen a human suffer from Low Memory Syndrome (LMS). Soak a computer in water for 30 minutes and see what happens.

Whenever you do make a machine that can do all of this, try making it as human-like as possible: it shouldn't suffer from memory overload, overheating, runtime errors, hanging/crashing, or system shutdowns because it's using too many parts. Only then might it be possible to think about making something better.
From 2050 to 2100 to whenever.

Machines with human intelligence are not going to be based on current technology. A human being *is* a machine, created by random evolution. If nature can come up with human intelligence by accident, we can eventually improve on it by design, provided enough resources are spent on the problem (i.e., do we actually want machines that can think?).

 

That does require working out exactly how the brain functions though, which is where the difficulty lies. 

 


Soak a computer in water for 30 minutes and see.

 

What does that have to do with anything?  Try running 240V through a human for 10 days straight and see how they function.  Blathering on about transistors and human eye function adds nothing to the conversation. It's just repeating what's already been said.

 


You could try simulating in one second what the brain does in a second with any single computer (super or otherwise) and see what happens.

 

The only reason we can't simulate the brain effectively is that we don't understand it.


Does anyone here think human intelligence is overrated, and that it's just a matter of months until someone finds the right algorithm, and that with just an Intel i7 and a few gigabytes of memory we could surpass human intelligence after running the algorithm for some months?

 

prove me wrong

  1. It would take more than a few months to program the algorithm, especially if it required gigabytes of code to start running. Although, if you're just asking whether one person could put together the correct idea, then yes, one person is enough.
  2. Programs that benefit from months of human computation are the closest thing we have to a program with human intelligence, however incomplete. There is no measuring the amount of human intelligence required to work out the logical calculations, solve the problems the program doesn't know the answers to, and correct the mistakes that were inevitably introduced along the way. That seems underrated.
  3. Something I read while randomly wandering around as a total noob at programming: if a program rewrote its own kernel, that program would eventually stop working. Sooner or later it would make a permanent change that caused bugs, and even if it could debug itself, the debugger would have bugs.

 

Pre-empted by a month in my journal: http://www.gamedev.net/blog/1780-theoretical-games-that-evolve-from-player-input/


Agreed. If I wanted to calculate 1,000,000! I wouldn't ask a mathematician to start sharpening his pencil; I'd write a library, and I'd still beat the human.

EDIT: Even if the library were inefficient.
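For what it's worth, a minimal sketch of that point, assuming Python and its built-in arbitrary-precision integers (the timing is indicative only): the standard library computes 1,000,000! in a few seconds.

```python
# Sketch: math.factorial works on arbitrary-precision integers, so even
# 1,000,000! (an integer of roughly 18.5 million bits) comes back in a few
# seconds on ordinary hardware; no pencil sharpening required.
import math
import time

start = time.perf_counter()
result = math.factorial(1_000_000)
print(f"done in {time.perf_counter() - start:.2f} s, "
      f"result has {result.bit_length():,} bits")
```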


You are the human. And yes, "you" (a human) would have to write it.

LennyLen, he does this regularly. It seems like he just reads the first sentence and then bases his whole reply on that. He comes across as a borderline troll sometimes.

 

As for AI in machines surpassing humans: all I have to say is that what was science fiction in the past has become science fact, so I wouldn't dismiss anything. Though the problem of a Skynet scenario would become very real, because if you make the AI smart enough to learn, it could learn how to bypass its safety protocols (become self-aware). As for whether a machine can be made that surpasses a human's intelligence... it isn't a matter of if or can, but rather when.


Can you? That's impressive! Tell me, what was the exact shade of color exactly 1/8th from the top of your eyesight radius? What were the exact dimensions of the blades of grass around you?

 

Sigh. This had to come.

 

Sadly, it only shows that you didn't read my post properly, nor do you understand (or you deliberately pretend not to understand) how human perception or the human mind works in any way.

 

It is obvious that even a below-average human's intelligence is superior to very advanced artificial intelligence, but it is also obvious that the human ability to memorize quantifiable data is negligible compared to a computer's. I would most certainly fail at memorizing the first 100,000 primes even if you gave me three months. To a cellphone-sized computer, this is no challenge. However, computers likewise fail at pathetically trivial tasks. Such a comparison is largely meaningless.
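As an aside, here is a rough sketch of why the prime-memorization half of that comparison is trivial for a machine (Python assumed; 1,299,709 is the 100,000th prime): a plain sieve of Eratosthenes produces the entire list in a fraction of a second.

```python
# Sieve of Eratosthenes up to the 100,000th prime (1,299,709), i.e. the whole
# "memorization" task a human would struggle with for months.
def primes_up_to(limit):
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, limit + 1, p)))
    return [n for n, is_prime in enumerate(sieve) if is_prime]

primes = primes_up_to(1_299_709)
print(len(primes), primes[:5], primes[-1])   # 100000 [2, 3, 5, 7, 11] 1299709
```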

 

My post said that a memory-prowess challenge where the human beats a laptop is very unsuitable for demonstrating the superiority of one over the other. The challenge is so simple that it is tempting to "prove" something obviously wrong: that humans are better at remembering things (as "proven" by my example).

 

The "proof" by child memories still stands.

 

No, I don't remember the exact shade of some pixels on my retina 30 years ago, or the number of grass blades anywhere. The reason is that my visual organs do not have a concept of pixels, nor of exact shades, and neither does my brain. Besides, a computer is not able to reliably answer the question "how many grass blades are in this image" either, even without having to remember the number, even when it explicitly tries to (unlike me, who explicitly tries not to remember that information).

 

My brain, like the vast majority of human brains, receives a pre-integrated, contrast-enhanced (horizontal/bipolar cells) and modulated (ganglion cells), fault-corrected signal coming from a very non-uniform sample grid with a very non-uniform color reception and a very non-objective automatic luminance regulation. Plus, superposition of two images from different viewpoints combined in one.

The brain somehow transforms this... stuff... into something, which it selectively filters for information that is important for the present situation. That is what I "see". It is not an array of pixels of some particular shade, not even remotely.

 

This is a key to survival and to managing everyday situations. The brain then selects what part of this information (and other information) is important for the situation and how much of it, if any, is important to be remembered. This involves several circular propagations on a more or less hardwired system, attenuated or amplified by some metric which somehow involves emotions and olfactoric senses and some "recipe" which so far nobody can understand. There are several "layers" of storage (not just short-term and long-term memory) as well. That is what I "remember".

 

It works the same for all "properly working" humans.

 

Trying to compare this process to image data as picked up by a camera and stored in a computer is meaningless. It's like comparing a cow's ability to fly an airplane with a pair of scissors' ability to produce eggs.

 

No, I probably can't remember 4,000 events either, though maybe I could, who knows. My memories are not stored in an array, and I am not counting them, so it is hard to tell how many there are. However, it is also meaningless to try to find out. Human memory, just like perception, is highly selective in what is stored (at least in "properly working" humans; there exist a few individuals for whom this isn't the case, and they are seriously troubled every moment of their everyday lives). This is a property that is essential for survival. The brain is supposed not to store all information; this is by design.

On the other hand, it is also highly fault-tolerant. You are still able to properly identify most things almost all the time if you acquire a retina defect later in life (provided it's not a 100% defect). Humans are still able to perform this task rather trivially, and with a very low error rate, having lost one eye completely and upwards of 50% of the remaining eye. Try and make a computer match data with a noise ratio upwards of 75%. Or try Google's "similar images" search and see what you get, for that matter.

 

It is however meaningless how much of my eyesight I could lose, whether or not I can remember 400 or 4,000 or 40,271 events in my life, or whether I can remember some particular shade of some color. A computer is entirely unable to reproduce most of this kind of memory either way, so there is no base for comparison in the first place.

 

A computer could, however, conceivably reproduce a memory (or a ruleset, or other information) such as "fire is hot", "hot not good for your hands", or "things you drop fall to the ground", or "eggs only have limited support for microwaving", or "you can put a sphere into a circular hole".

 

These basic rules/patterns/facts are all things which most people learn in childhood. Also, they are things that not only the most advanced human, but even humans which are of quite sub-average intelligence reliably remember to the end of their lives.

Like most children, I had to learn multiplication tables in school. Unluckily, all present-day computers have arithmetic hardwired, so that isn't very suitable for a "memory" comparison (but maybe you can still find a functional Z80?). If it were, my grandfather would still win, since there is no 85-year-old computer in service (and certainly there are, worldwide, fewer than a handful of computers older than 20-25 years in uninterrupted service without replacing hard disks, etc.).

 

Being able to remember a single event/fact/ruleset over 40/80/100 years will show "superiority" over the computer according to the given challenge, since 1 > 0, and so far hardly any computer can remember anything from 40 years ago (if at all), and none can remember anything from 60, 80, or 100 years ago. But even leaving aside the fact that computers haven't existed for that long, the most advanced computer isn't nearly as capable as a very much sub-average human, and it definitely has not been, and will not remain, functional nearly as long as the average human (not without replacing the "brain" and restoring data from backup anyway, which is cheating).

Edited by samoth

To go deeper into detail why such comparisons are meaningless, consider the following:

 

In A.J. Hanson's Visualizing Quaternions book, there is an example which he refers to as the "urban legend" of an upside-down F-16. According to the legend, the on-board computer would turn the airplane upside down when crossing the equator because the sign of the latitude flipped. The author says he could not find a reference as to whether this actually happened, or whether it only happened in simulations (hence "legend").

It makes no difference whether it happened for real or only in a simulation (it's the same thing for the computer!). The point is that an intelligent being would be immediately aware that turning the airplane upside down for no apparent reason (and in defiance of the visible horizon and the gyroscope) is a nonsensical decision, and that something must be wrong.
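Purely as a hypothetical illustration of how such a sign-convention bug could arise (toy code, not anything from real avionics): if a routine naively folds the sign of the latitude into the commanded attitude, the roll flips by 180 degrees the instant the aircraft crosses the equator.

```python
# Hypothetical toy example only, not real flight software: using the sign of
# the latitude as an orientation factor commands a nonsensical 180-degree roll
# as soon as the latitude goes negative.
import math

def commanded_roll(latitude_deg, desired_roll_deg=0.0):
    hemisphere = math.copysign(1.0, latitude_deg)   # +1 north, -1 south
    # Buggy convention: treat the southern hemisphere as "inverted".
    return desired_roll_deg if hemisphere > 0 else desired_roll_deg + 180.0

print(commanded_roll(+0.001))   # 0.0   -> wings level, fine
print(commanded_roll(-0.001))   # 180.0 -> commanded upside down
```

A pilot would reject such a command on sight; the computer simply executes it.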

This legend is very similar to an actual event in which a civil airliner left a trench a couple of hundred meters long in a forest at the new airplane's first public demonstration.
The initial story was that the pilot performed a show-off maneuver which went slightly over the allowed tolerances, and when he pulled the stick, nothing happened. The on-board computer had deemed that the maneuver wasn't good for the airplane (of course, crashing into a forest isn't precisely good either, but the computer failed to see that). This was later settled in an official statement, backed by the (presumably well-paid) pilot, which said it was a mere "piloting error".

Both events are examples how being able to perform calculations and intelligence are not the same things.

 

Something similar can be said about pattern matching. Computers are much better at finding a fingerprint in a database than a human would be. They are also much better at identifying a person's face in a crowd.

However, the police still has every "hit" verified by a human, and biometric passports need to be sourced with photographs with a very specific layout and very exact placement. Why is this the case?

 

The reason is simple: Computers are not better at the job. They are faster at doing calculations. They are thus better at finding some statistical match out of a large number of samples, given a precise human-generated metric and well-chosen comparison patterns. Their results may or may not correlate with an actual match.

Every so often (and often enough to be significant), the computer will report a match where the human reviewer immediately sees that it is total bollocks. Similarly, the computer only achieves reasonably good output if it is given high-quality, standardized patterns to match against.

 

Average humans are not able to match ten thousand faces per second, but they are able to identify/recognize another human from very deficient input patterns with a surprisingly low error rate. Women especially are exceedingly good at face-matching (don't ask me why; someone might come up with a hunter-vs-breeder evolutionary theory, but since gender depends on just one chromosome, I'd wager that unless face recognition is coded on the X chromosome, there's hardly a way this could be the reason).

Either way, try to have a computer recognize a face from a 30° angle above front view when it has only seen that person from the side before. Or try to get a positive recognition on someone looking away in almost the opposite direction. Women still get it 99% right even in absurdly bad conditions (and they do that without having trained on the task in particular, and without someone else writing a specialized "program" for them to handle that border case).

Edited by samoth

Good post, samoth. I acknowledge computers are faster and more efficient at things humans are not. But as far as being "more intelligent" goes, they just don't come close. Perhaps we should have sorted out a definition of "intelligence" we can all agree on first.

Edited by Tutorial Doctor


Perhaps we should have sorted out a definition of "intelligence" we can all agree on first.

 

This is a very good idea.  I would say the definition of intelligence is the ability to apply existing knowledge to new situations and to be able to acquire new knowledge.

 

Is just giving a definition enough though?  We can obviously use that definition to show that computers as we know them are not intelligent.  While they can be made to look like they are learning, this is purely as a result of clever programming, using tools such as conditionals and lookup tables.  But what about us?  While we may feel that we are not programmed and that we really are learning from our experiences, we don't know enough about how the brain works to definitively say that any decisions we make are not just a result of predetermined arrangements of neurons.  
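To make that concrete, here is a minimal sketch of "learning" by lookup table (Python assumed; the class and method names are invented for illustration): the program appears to improve with experience, yet it only stores and replays observations according to rules the programmer fixed in advance.

```python
# "Learning" via a plain lookup table: looks adaptive, but every behaviour is
# either a stored observation or a hard-coded default.
class LookupLearner:
    def __init__(self):
        self.table = {}                     # stimulus -> last observed response

    def observe(self, stimulus, response):
        self.table[stimulus] = response     # memorize, nothing more

    def react(self, stimulus):
        return self.table.get(stimulus, "no idea")   # fixed fallback

bot = LookupLearner()
bot.observe("hot stove", "pull hand away")
print(bot.react("hot stove"))   # "pull hand away": looks learned
print(bot.react("wet paint"))   # "no idea": nothing to generalize from
```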



I would say the definition of intelligence is the ability to apply existing knowledge to new situations and to be able to acquire new knowledge.

 

I really like that definition, but it seems too short to be accurate. What I mean is that, with a little thought, someone could perhaps find a case in which such a definition would describe not intelligence but some other quality.

 

I also like the phrase "maximizing future freedom of action." But again, I think there is also a scenario in which such a definition would describe something other than intelligence.

 

When I thought "intelligence" I thought "the ability to gather information." 

 

Now, what term might define the ability to apply information gathered? Or what is knowledge? Or what is wisdom?

 

I always considered wisdom to be the tactical use of knowledge to make the best decision in a given situation. I considered knowledge to be stored information; information that is not stored would not be part of your "knowledge database."

 

Yet, one can "forget," and then later "recall" and then that information is retrieved from some dormant part of the brain. So was that knowledge lost or forgotten?

 

Now, I am not being philosophical, but these things seem like they would be important in this conversation. Do computers forget? How much info can they retain? 



Or what is knowledge?

 

I would describe knowledge as a set of data that describes a specific scenario.

 


I always considered wisdom to be the tactical use of knowledge to make the best decision in a given situation. I considered knowledge to be stored information. Information that is not stored would not be a part of your "knowledge database"

 

I've always thought of wisdom as being an accumulation of knowledge.  If you consider someone like Albert Einstein, it's very evident that he was an extremely intelligent individual.  Now consider him as an infant - while he would have been just as intelligent, he would not have been wise as he did not have enough knowledge to use his intelligence.  Likewise you can have people who know a lot of things but who lack the intelligence to take what they know and apply it to new situations.

 

 



Now, I am not being philosophical, but these things seem like they would be important in this conversation. Do computers forget? How much info can they retain? 

 

I don't think the ability to recall information should be considered intelligence, though it does obviously affect the decisions an intelligence will make.


Finding a precise definition that completely captures the concept of general intelligence will be as hard as finding a precise definition of beauty. It might be interesting from a philosophical standpoint, but from an engineering standpoint, narrow definitions are usually easier to work with, even if they only capture limited aspects of the underlying concept. It's perfectly fine to have several definitions or tests of intelligence and realize they capture some but not all aspects.

 

The problem with defining intelligence in terms of knowledge or wisdom is that you should be able to determine intelligence with black-box testing. Internal structure shouldn't matter.


I think this topic is flawed from the very beginning, because each one of us has their own definition of what human intelligence is.

For some, it's the ability (the state) of being able to accomplish a given task (playing chess, whatever). For others, it is the ability to learn something... (being able, given degrees of freedom, to learn to walk, for instance). A function.

 

Before answering the OP's question, we have to be on the same wavelength about what intelligence is. And this is a philosophical, and still open, question...

Edited by Tournicoti

Right, Tournicoti. Even looking at the entry on Wikipedia leaves it rather open-ended.

I had just considered "overall" at first, and I do think that "overall" human intelligence is far more advanced, mainly because of the many diverse situations it can handle, even from birth, autonomously.

The reason I said a computer would have to be able to debug itself in order to be considered close to our intelligence is that we can solve new and unfamiliar problems much more easily using non-standard ideas. We can create new plans for unfamiliar situations, etc.

It is sort of strange, though, that so many things have open-ended definitions, perhaps because definitions change over time. But that runs contrary to the purpose of "defining," which is to mark something off as different from something else. So we have to ask the questions:

What is intelligence?
What is knowledge?
What is Wisdom?
What is forgetting?
What is recalling?
What is remembering?
What is ignorance?
What is uneducated?
What is educated?
What is assumption?

Etc.
Are they all the same? No. So what defines each term as different from the others? Even in this simple scenario, computers just aren't there.

I would summarize it roughly like this:
  • Understand causalities.
    • When I send out this nerve impulse, my head turns.
    • When my head moves, funny signals come from my inner ear. These are always the same for identical moves. Also, the world always moves in the opposite direction.
    • When I drop my teacup, it falls to the ground. When I drop my cell phone, it falls to the ground.
  • Combine related and unrelated causalities, and make decisions based on that combined knowledge.
    • Most things I tried, including teacups and cellphones, fell to the ground when I dropped them. I expect a sausage to fall to the ground as well, without having to try.
    • Often, things break or become dirty or otherwise unusable when they fall to the ground.
    • I don't like that happening (also my back hurts from picking them up) and therefore will not voluntarily drop things.
  • Perform sanity checks on decisions before and while taking action.
    • The GPS tells me to turn right. Into the river. Screw that cheap piece of junk.
    • The traffic light just switched to "Don't Walk". I don't think that I'm supposed to stop right here in the middle of the street.
  • Validate decisions, as their outcome becomes apparent.
    • I really should have dropped that darn cellphone and reached for the escape ladder instead.
    • Giving that tramp a ride was a stupid idea.
  • Improve future decisions based on the outcome validation.
    • Last time I got beaten up bad after pushing that guy. This time I'll pretend to give in and kick him in the balls from behind.
    • Next time someone asks you if you're a god, you say "Yes!".
  • Do all of the above without someone having to tell you, and without someone telling you how.
  • Adjust decision parameters by observing the outcomes of other people's decisions.

It is also possible to skip most of the philosophical aspects of the matter by realizing that compression is very closely related to intelligence (http://prize.hutter1.net/). I know it sounds odd, but it is a deep and interesting idea.


It is also possible to skip most of the philosophical aspects of the matter by realizing that compression is very closely related to intelligence. I know it sounds odd, but it is a deep and interesting idea.

Although it may seem that way, I think the idea is somewhat flawed. What compressors do is squeeze out redundancy. If they work well, that is.

 

Dictionary-based compressors replace parts of the input with things they have stored in their dictionary, if that takes fewer bits, according to a scheme that the programmer has decided. Maybe the programmer is intelligent, but the program doesn't do anything special.

 

Statistical compressors, on the other hand, estimate the likelihood of the next incoming symbol from what they have seen before. You could much more easily associate this with "intelligence," even though they still only follow a very rigid pattern that the human programmer has devised.
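In its simplest form, that idea looks something like the sketch below (Python assumed; real compressors such as PPM use much richer contexts and arithmetic coding, but the principle of predicting the next symbol from observed history is the same).

```python
# Minimal order-1 statistical model: count which symbol follows which,
# then predict the most frequently observed successor of the current symbol.
from collections import Counter, defaultdict

def train(text):
    counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1
    return counts

def predict(counts, current):
    if current not in counts:
        return None                               # context never seen
    return counts[current].most_common(1)[0][0]   # most likely next symbol

model = train("the theory of the thing")
print(predict(model, "t"))   # 'h', since 't' was always followed by 'h'
print(predict(model, " "))   # 't', since spaces were mostly followed by 't'
```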

 

But if you think about it, this is a behavior which is very much akin to a gambler playing roulette, who, upon having seen 17 three times, puts his money on 17. After all, 17 seems to be the lucky number. Would you deem this intelligent? Secret tip: A horse with a name like "Lightning" cannot lose.

Maybe PPMD uses a somewhat more sophisticated algorithm, but in the end it is the exact same thing. Looking into the crystal ball.

 

The only difference (and the difference that decides the outcome!) when compressing enwik8 is that the input is different. Characters in enwik8 are not random but highly correlated, and there is a huge amount of redundancy in that text. This is why compressors are more successful than our roulette gambler. Still, they do more or less the same thing: they use some statistical model, and if they are lucky, this allows them to correctly predict the coming symbols.

 

What would be more impressive would be compression in the sense of showing an image to a computer and having it output "fat guy in funny clothes making a sad face, bent over a dead girl: that's Pavarotti as Rigoletto." That is much more like the way a human would "compress" that photo.


How is it possible that a human brain is not a finite-state machine? How can the human brain possibly work any differently, at an atomic/cellular/electrical level, than a computer does?

 

Genetic algorithms show that a computer can iterate over candidate solutions in a way that helps it arrive at a correct conclusion. Further, the fact that humans can sometimes be stumped is very analogous to a genetic algorithm that fails to arrive at a correct solution.
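For concreteness, a toy genetic algorithm (Python assumed; the target string, population size, and mutation rate are arbitrary choices for illustration): candidates are scored, the fittest are kept, and mutated copies fill the next generation until a solution emerges without anyone spelling it out in advance.

```python
# Toy genetic algorithm: evolve a string toward a target by mutation and
# selection alone. Real GAs add crossover and problem-specific encodings.
import random
import string

TARGET = "machines can search"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:20]
    # Keep the parents (elitism) and fill the rest with mutated copies.
    population = parents + [mutate(random.choice(parents)) for _ in range(180)]

print(f"generation {generation}: {max(population, key=fitness)!r}")
```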

 

A couple of years may be too soon... but saying that we won't have the power to do it in 100 years is like the old claim that nobody would ever need more than 640 KB of memory in a personal computer.

 

A computer can perform calculations faster than MOST human brains... however, there are people who have beaten computers at performing ridiculous numeric calculations (http://en.wikipedia.org/wiki/Shakuntala_Devi). In fact, I believe that if the human brain were "programmed" to do calculations it would outperform current computers easily... the difference being that the "thinking" algorithm we have is not optimized for numeric calculations; instead it is optimized for learning, recalling, idea association, and shortcut finding/adjusting. Further, I believe that the first true AI will have intelligence on par with that of the original single-celled organism... barely anything that would actually be considered AI by "sci-fi" standards, and it will gradually evolve itself to higher and higher levels of intelligence until it eventually surpasses that of humans. Although, I suppose, it may also be possible that humans are already "learning" and gaining intelligence at the "most optimized" speed, so perhaps AI will always be X steps behind.

I think the notion that someone will eventually write some code, hit F5, and see "Hello Dave, I think, therefore I am." is extremely far-fetched... Infants take a month to learn how to roll over, and a year or more to say a single word that has any relevance to the outside world... It seems pretty likely that AI will model true intelligence, and I think the initial learning phase (an infant is not at the initial learning phase; evolution has been working on it for millennia) is one that won't be able to be skipped.


A computer can perform calculations faster than MOST human brains... however, there are people who have beaten computers at performing ridiculous numeric calculations


That statement is absurd. Shakuntala Devi took 28 seconds to compute 7686369774870 * 2465099745779. That's quite a feat, no doubt. It would be hard for me to measure how long it takes my laptop to make that computation, but the order of magnitude is 0.00000001 seconds.
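For scale, a rough timing sketch (Python assumed; interpreter overhead dominates here, so compiled code would be faster still):

```python
# Time the multiplication Devi famously did in 28 seconds. Variables are used
# instead of literals so the interpreter cannot constant-fold the product away.
import timeit

a, b = 7686369774870, 2465099745779
runs = 1_000_000
seconds = timeit.timeit("a * b", globals={"a": a, "b": b}, number=runs)
print(a * b)
print(f"about {seconds / runs * 1e9:.0f} ns per multiplication")
```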

 

 

Examples of the problems presented to Devi included calculating the cube root of 61,629,875, and the seventh root of 170,859,375.[3][4] Jensen reported that Devi provided the solution to the aforementioned problems (395 and 15, respectively) before Jensen could copy them down in his notebook

 

Right, I suppose it was an exaggeration to claim she beat a computer, but the point was that if a human brain were to be wired to be used exclusively for calculations, I'm sure it could beat a computer.


Right, I suppose it was an exaggeration to claim she beat a computer, but the point was that if a human brain were to be wired to be used exclusively for calculations, I'm sure it could beat a computer.


Well, if a computer processor were to be wired to think, I'm sure it could beat a human. That makes about as much sense.

Which was exactly my argument... I was saying precisely that a human brain is no more than an extremely powerful computer. Our brains are made of the same matter and run on the same electricity, so it stands to reason that either 1) a synthetic brain could be designed to do the same thing more efficiently, or 2) our brains are completely optimized, such that a synthetic brain could at best match our thinking ability.

