Helter Skelter

Our brains really DON'T work like computers!


Very interesting reading, but too short if you ask me. From Slashdot:
Quote:
Roland Piquepaille writes "We're using computers for so long now that I guess that many of you think that our brains are working like clusters of computers. Like them, we can do several things 'simultaneously' with our 'processors.' But each of these processors, in our brain or in a cluster of computers, is supposed to act sequentially. Not so fast! According to a new study from Cornell University, this is not true, and our mental processing is continuous. By tracking mouse movements of students working with their computers, the researchers found that our learning process was similar to other biological organisms: we're not learning through a series of 0's and 1's. Instead, our brain is cascading through shades of grey."
The full article can be read here: http://www.news.cornell.edu/stories/June05/new.mind.model.ssl.html

Shades of grey? "Whatever!", as I hear the young folks say these days...
What they fail to understand is that computers can simulate continuous values quite nicely. For instance, you can simulate continuous-time, continuous-valued differential equations. These can model things like falling bricks... which I hope live in a continuous world. This leaves some hope for simulating the brain. One thing that might be a problem is that a computer has a finite amount of storage... is a brain's capacity finite? Who knows?
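
For what it's worth, here's a minimal sketch of what I mean, in Python (my own toy code; the parameters and names are made up). It integrates the continuous-time dynamics of a falling brick with a fixed time step:

```python
# Toy sketch: forward-Euler integration of a falling brick's continuous
# dynamics. Illustrative only; a real simulation would likely use a
# better integrator (RK4, etc.).

g = -9.81    # gravitational acceleration (m/s^2)
dt = 0.001   # time step (s); the smaller it is, the closer to "continuous"

def simulate_fall(height, duration):
    """Integrate position and velocity of a brick dropped from `height`."""
    y, v, t = height, 0.0, 0.0
    while t < duration and y > 0.0:
        v += g * dt   # dv/dt = g
        y += v * dt   # dy/dt = v
        t += dt
    return y, v, t

print(simulate_fall(height=10.0, duration=5.0))
```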

We make logical decisions all the time. Should I eat that? Should I play tennis? Do I walk to the grocery store? Sounds like a lot of 1's and 0's to me. In essence, we do not know how the brain works in terms of intelligence. We can't even measure it very well.

If I were to try to model the brain, I would use a hybrid model: a combination of discrete events and continuous-time dynamics. This, of course, can be simulated on a computer. Coming up with the model itself, well... errr... ummm...
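
To make the hybrid idea a bit more concrete, here is a toy sketch (again Python, my own made-up example, nothing brain-specific): continuous integration of a bouncing ball's motion, punctuated by a discrete "impact" event that resets its velocity.

```python
# Toy hybrid system: continuous-time dynamics (free fall) plus a discrete
# event (the bounce). Purely illustrative.

g, dt, restitution = -9.81, 0.001, 0.8

def bouncing_ball(y, v, duration):
    t, bounces = 0.0, 0
    while t < duration:
        v += g * dt
        y += v * dt
        if y <= 0.0 and v < 0.0:   # discrete event: impact detected
            y = 0.0
            v = -restitution * v   # instantaneous state reset
            bounces += 1
        t += dt
    return y, v, bounces

print(bouncing_ball(y=1.0, v=0.0, duration=3.0))
```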

Groundbreaking Discovery, eh, muhuahaha.

Quote:
Original post by DrEvil
So we need to upgrade computers from binary to base 256! Then we'll be good to go? hehe

I second that [grin]

Seriously, a computer-simulated neural network also works in "shades of gray", and so does pretty much any machine-learning software. (And if someone wants to claim that signals in the brain are "truly continuous", they aren't either: you can count every molecule of neurotransmitter released, and that's a discrete number.)
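
For instance, here's a minimal sketch of a single simulated neuron (my own toy code, not from the article): the output is a graded value between 0 and 1, a "shade of gray", rather than a hard 0 or 1.

```python
import math

# Toy artificial neuron: weighted sum of inputs squashed through a sigmoid.
# The output is a graded value in (0, 1), not a binary decision.

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

print(neuron([0.2, 0.7, 0.1], [0.5, -1.2, 2.0], bias=0.3))  # roughly 0.44
```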


Note that everyone says, "simulates" shades of grey.

No matter how good a simulation is, it is still a simulation, especially since we're working on an inherently binary, discrete platform. Continuity and randomness are things that a computer really can't do. There will always be rounding errors and such. Yes, we can simulate continuous numbers, but the real numbers cannot be represented exactly on a computer with finite storage. So, in the end, all we're left with are "simulations" of the real thing. And as most math people know, no matter how small the margin of error is, as computational complexity increases, the error will eventually compound to the point that it becomes significant.
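
A quick illustration of that compounding, as a throwaway Python snippet (nothing deeper intended): 0.1 has no exact binary representation, so repeatedly accumulating it drifts away from the true value.

```python
# Rounding error compounding: 0.1 is not exactly representable in binary
# floating point, so a long accumulation drifts away from the exact answer.

total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)             # not exactly 100000.0
print(total - 100000.0)  # the accumulated error: small, but nonzero
```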

Changing systems from base 2 to base 256 won't really solve the problem completely, and it presents a whole new set of problems of its own. There's a reason why computing is fundamentally base 2.

So, in every aspect, comparing a computer and a human brain is like comparing apples and oranges. The two just inherently process different things. They work on completely different levels. So this research, though it may not be a totally new idea, goes some way toward reinforcing what many already know.

It seems as if you come from a mathematical background. I agree with a lot of what you have to say, especially when it comes to errors. A lot of this has to do with the fact that we do not know how to model the brain. Just because it's a brain doesn't mean there is an nth-order chaotic system involved. It bothers me that people assume the brain is complicated. We may just not have the tools to understand it very well at the moment. Therefore, I think it would be unfair to count out computers just yet. Doing so is like putting the cart before the horse and throwing our hands up.

Quote:
Original post by WeirdoFu
Note that everyone says, "simulates" shades of grey.

And as most math people know, no matter how small the margin of error is, as computational complexity increases, the error will eventually compound to the point that it becomes significant.



To operate a brain simulation (or a chemical simulation, etc.) there may be a minimal required precision. I'm not saying there is, just that it's a possibility. There is no law stating that to simulate a convincing brain we need a quantum-level model.

I'd also bet that the brain is capable of ignoring its own noise, so it must have some concept of a margin of error.

Still, I agree with you, mostly. :)

Will

Personally, I feel that building a believable model of parts of the brain is not very hard. For example, building a model of memory may actually be pretty simple. The hardest part, however, is the cognitive model.

For memory, the human brain is similar to a relational database. However, what is actually stored is fairly interesting. Human minds go through a process of abstraction when it comes to storage and interpretation. So, let's say you saw a vase with a specific design; you may start out, in short-term memory, remembering exactly what the vase looks like. However, due to the limited storage of short-term memory, the memory is either eventually dumped or, if reinforced enough, abstracted and compressed. So, if short-term memory is the uncompressed data buffer, then long-term memory is the abstracted relational database where information is broken into fundamental pieces and stored separately. When data is required, an entry point is found and the data is reconstructed incrementally. This is why we're good at describing to people what things look like, but many times can't paint an exact image: some details were lost in the abstraction process. Since long-term memory is also limited, over time memory pieces get further abstracted and merged with other entries, or completely discarded. This is what I think the model of human memory is like, in general. Can a computer simulate it? Maybe. The catch is all in the representation and data abstraction, since these operations aren't just discrete, but symbolic. And interestingly enough, people don't seem to abstract things in the same way either.
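
Just to make my rambling slightly more concrete, here's a toy sketch of that buffer-then-abstract idea (entirely made up, and certainly not a claim about real neuroscience): a small short-term buffer, where items that get reinforced enough are reduced to a few key features and moved into a long-term store.

```python
# Toy sketch of the idea above: a small short-term buffer; reinforced items
# are "abstracted" (compressed to a few key features) into long-term store.
# Entirely illustrative.

STM_CAPACITY = 7
REINFORCEMENT_THRESHOLD = 3

short_term = []   # list of (item, reinforcement_count)
long_term = {}    # name -> abstracted features

def abstract(item):
    """Keep only a few 'important' features; detail is deliberately lost."""
    return {k: v for k, v in item.items() if k in ("name", "color", "shape")}

def perceive(item):
    for i, (stored, count) in enumerate(short_term):
        if stored["name"] == item["name"]:
            short_term[i] = (stored, count + 1)
            if count + 1 >= REINFORCEMENT_THRESHOLD:
                long_term[stored["name"]] = abstract(stored)   # consolidate
            return
    short_term.append((item, 1))
    if len(short_term) > STM_CAPACITY:
        short_term.pop(0)   # the oldest unreinforced memory is simply dumped

vase = {"name": "vase", "color": "blue", "shape": "tall", "pattern": "floral swirls"}
for _ in range(3):
    perceive(vase)
print(long_term)   # the pattern detail is gone; only the abstracted features remain
```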

I guess now that I've rambled around a bit, I finally realize what I really wanted to say. The problem with simulating a real brain is that it's neither discrete nor continuous. The human brain works on a symbolic level, which can be discrete, continuous, or neither.

Sounds reasonable to me, Fu, though I'm a little bit fuzzy on what you mean by symbolic. I believe, like you said earlier, that it really depends on what parts you want to model and how accurately (what level of abstraction), whether cognitive, memory, or otherwise.

Physically speaking, it seems more and more that our world is not continuous! It seems continuous to us, but it really is not. Just as in quantum mechanics the energy levels are not continuous but discrete, the new physical theories suggest that our world is discrete: space and time are not continuous but discrete. There is a smallest unit of space, and as a result there is a smallest unit of time; there is no smaller unit! So everything around us, and we too, is discrete. Maybe not in terms of 0 and 1, but if we choose 0 and 1 to represent these smallest units it will work fine, because it does not matter which system one uses to represent something.

He is right about the phenomena that have been observed thus far. But we still haven't reached the bottom yet....

Sure, the brain can be simulated. But how can you say it is not complex? Ever try to write code that figures out what the brain figures out? I'm sure you'll say "Duh, look at the forum name". But think again. AI thus far has simulated intelligence in confined, simplified, trivial models of reality (i.e., games). Try to perform some of the continuous operations the brain performs outside of that game, in reality. It would take a physical modeler just to figure out how to apply forces to limbs just to walk, and if you haven't done any physics engine coding, you have no idea how monstrous the math and algorithms can get.
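
Even a tiny piece of that problem balloons quickly. Here's a crude sketch (toy numbers, one joint, no contact, no muscle model, nothing resembling real biomechanics) of merely driving a single joint angle toward a target with a PD controller:

```python
import math

# Crude sketch: drive one joint toward a target angle with a PD controller
# and integrate the rotation. Toy numbers; real walking involves many
# coupled joints, ground contact, and balance.

dt, inertia = 0.001, 0.5   # time step (s), joint moment of inertia
kp, kd = 40.0, 6.0         # proportional / derivative gains

def drive_joint(theta, omega, target, steps):
    for _ in range(steps):
        torque = kp * (target - theta) - kd * omega   # PD control law
        omega += (torque / inertia) * dt              # d(omega)/dt = torque / I
        theta += omega * dt                           # d(theta)/dt = omega
    return theta, omega

print(drive_joint(theta=0.0, omega=0.0, target=math.pi / 4, steps=2000))
# ends up near pi/4 (about 0.785 rad) with a small residual velocity
```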

You can eventually understand the brain, but you must learn to look for the perfection in the techniques and optimizations it employs. Treating it as poorly laid out garbage is not going to prepare you for the rigors of simulating this masterpiece.

We can simulate flying pretty well, and as far as I know that's more complex than we are. Anyway, I have thought that myself at times. I don't think it's true, but maybe it is, depending on how we define complexity.

And as for discrete units of real things, I can't think of a counterexample.

Continuous or discrete, it's more a problem of perspective. Our brain is capable of directly "processing" analog, continuous information through our senses. HOWEVER, all storage is done in a discrete form to save space. The "sampling rate" for storage actually varies depending on how important the thing is. The reconstruction process then interpolates the discrete info back into continuous info.
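
Something like this, perhaps (a toy Python sketch of coarse sampling plus interpolation on reconstruction; nothing neurological about it):

```python
import math

# Toy sketch: "store" a continuous signal at a coarse sampling rate, then
# reconstruct in-between values by linear interpolation. Illustrative only.

def signal(t):
    return math.sin(2 * math.pi * t)

# Discrete storage: one sample every 0.1 s over one second
samples = [(i * 0.1, signal(i * 0.1)) for i in range(11)]

def reconstruct(t):
    """Linearly interpolate between the two nearest stored samples."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t is outside the stored range")

print(signal(0.33), reconstruct(0.33))   # true value vs. interpolated guess
```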

A computer, on the other hand, deals completely in discrete information. All information going in is discretized before processing. Then storage is discrete as usual. The reconstruction process is done in completely discrete form and then output to a discrete medium. Yes, all monitors are discrete as well; it's just that most of the time the information is refreshed fast enough that we can't tell (our senses can't keep up). Case in point: some people still see scan lines on CRT monitors even at refresh rates of up to 120Hz.

As for my earlier post about the brain being symbolic in nature, I was referring to the general storage unit in the brain. The fact is, the brain is capable of storing all sorts of information as one storage unit. A color, a number, a letter, an idea, an object can all fall into one storage unit. And then those units become the things we work with and manipulate. So, say someone shows you a room. Instead of remembering exactly what the room looks like photographically, we tend to remember where specific objects were located, their relative size and color, and maybe a rough estimate of the dimensions. So, when you're redecorating, you may not know what the whole room will look like in the end; you start out by moving the pieces around mentally until you get something you like. So, fundamentally, the unit of storage and manipulation for the brain is very symbolic and piecewise, not to mention slightly random.

As for the brain being an optimized system, well, I have to disagree. Have you tried to keep a coherent, continuous stream of thought about one single topic, without thinking of something else, for 5-10 minutes? It takes quite some concentration. It's called an attention span, which, by the last research I saw, is about 6 seconds for the average internet user (the equivalent of a goldfish, as they say). The brain is neither optimized nor efficient, but it works pretty well.

Off topic but slightly related: has anyone else downloaded the Human Genome "chr#.fa" files and tried converting them to optimal binary storage (taking the ASCII A, C, G, T and converting each quartet into a byte)?

Try dumping the beginning of such a conversion out on the screen in ASCII, old-school DOS style. Now do the same thing for a .ZIP, .RAR or any other binary compressed data file.
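
For anyone who wants to try it, here's a rough sketch of the packing step (my own throwaway Python; it ignores FASTA headers, 'N' bases, and other ambiguity codes that a real converter would have to handle):

```python
# Rough sketch: pack A/C/G/T into bytes, 2 bits per base, 4 bases per byte.

CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(bases):
    """Pack a string of A/C/G/T into bytes, four bases per byte."""
    out = bytearray()
    usable = len(bases) - len(bases) % 4     # drop any trailing partial quartet
    for i in range(0, usable, 4):
        b = 0
        for ch in bases[i:i + 4]:
            b = (b << 2) | CODE[ch]
        out.append(b)
    return bytes(out)

print(pack("ACGTACGT"))   # b'\x1b\x1b'
```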

Hi.

Maybe you'll find the following link useful as well.
It is a paper that describes "A Theory of Visual Attention" and a mathematical implementation of TVA.

http://www.psy.ku.dk/cvc/TVA/Theory/index.htm

(best read with a freshly brewed cup of coffee...milk and sugar optional)

Guest Anonymous Poster
That study is stupid.

Of course our brains can anticipate appropriate words if we hear the syllable "can-". Even Google can do it; check out Google Suggest: http://www.google.com/webhp?complete=1&hl=en

And all the signals from our senses are transmitted to the brain as electric current, where individual electrons can be counted. Totally discrete.

Quote:
Original post by Anonymous Poster
And all the signals from our senses are transmitted to the brain as electric current, where individual electrons can be counted. Totally discrete.


Wrong. Signals from our senses are sent to the brain as cascading shifts in the ionic concentrations across the axonal membrane, with a few purely chemical relays along the way. There are no single electrons involved in the movement (only ions), and even then, the only movement they ever make is traversing the axonal membrane.

Also, it's wrong to say that if you can count the elements of an object, it must be discrete. Any analog object around (microphones, phonographs, oscilloscopes, wrist watches, CRTs) is comprised of a finite number of particles, and yet it is not discrete: this is because, in addition to the discreteness of particles, there is also the continuum of spatial positions.

The most interesting thing is that the human brain is microscopically continuous in its functioning, while the computer is, by design, microscopically discrete.

Before moving on, I want to mention that I am not using a concept of absolute continuity here, but rather of relative continuity. For instance, to the naked eye, a high-resolution digital photograph is continuous, but it is discrete if you look at it more closely. A good definition of continuity is therefore that a function is continuous with regard to a means of observation if it does not appear discrete to the observer, as limited by the resolution of their senses.

Therefore, both brain and high-level computer operations can appear continuous to unequipped humans: we lack the capacity to distinguish pixels from a given distance, or to determine that sounds played by a computer are not real, just as much as we lack the ability to distinguish discrete steps in the evolution of someone's personality or behavior.

However, on a microscopic scale, the brain is still continuous, while computers are discrete: they handle a limited number of bits of information, each of which can be either on or off, and only process them at fixed intervals of time. Brain cells rely very heavily on the spatial and temporal continuity of action potentials: while there is only a finite number of action potentials moving around at any given moment, it is impossible at that scale to distinguish discrete steps in their spatial positions or their times of arrival.

On a fundamental scale, though, both space and time become discrete: objects only occur at discrete positions in space, and things only happen at discrete positions in time.

What is really important to us then? Obviously, we want computers to appear MORE continuous on a high-level scale. How? Discreteness is all about "holes" in the set of observed properties. For computers, there is no possible value between "0" and "1" for any given bit. However, it is possible, by adding more discrete data, to make the holes smaller. For instance, a fixed-point value of 32 bits only has holes of size (1 / 4,000,000,000) if used to represent numbers between 0 and 1. If the size of a hole becomes smaller than the precision of the observation tool, the function will appear continuous.
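
To put a number on that hole size, a quick back-of-the-envelope sketch (my own toy encoding, nothing standard):

```python
# Back-of-the-envelope: representing [0, 1] with a 32-bit fixed-point code.
# The "hole" between adjacent representable values is roughly 1 / 2**32.

BITS = 32
hole = 1 / 2**BITS

def encode(x):      # real number in [0, 1] -> nearest 32-bit code
    return round(x * (2**BITS - 1))

def decode(code):   # code -> the real number it stands for
    return code / (2**BITS - 1)

x = 0.123456789
print(hole)                        # ~2.3e-10
print(abs(decode(encode(x)) - x))  # quantization error, at most about half a hole
```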

- Above 60Hz, the human eye cannot distinguish discreteness in movement. Computers can show us moving things!
- Below a hundredth of a millimeter, the human eye cannot distinguish discreteness in texture. Add giant screens and high-resolution projectors, and computers can really show us movies.
- Below (insert correct figure) Hz and Watt, the human ear cannot distinguish frequencies. Therefore, if the standard error in replaying a sound is higher in frequency and lower in power than these figures, we cannot distinguish a real sound from a computer-played one.

Due to the very low precision of our sensory organs, computers can ALREADY appear continuous to us, even though through precise enough tools (computers, books, knowledge) we can determine that they are in fact discrete.

When will we be able to simulate human behavior, then? (Note, I do not mean simulating the brain, with artificial neurons and such; I only mean passing the Turing test on a significant scale.) Why is it so difficult to imitate a human? Remember what I said above about making discreteness appear continuous? Everything comes from making holes smaller. While holes in movement, color or sound are easy to measure and diminish, holes in personality, knowledge, reasoning or emotion are very hard to define, let alone isolate and shrink! Making computers act like humans would require us first to understand how humans react (not necessarily what their internal mechanism of reaction is: only external observations), and to find ways to measure objectively and quantitatively the difference between a human's reactions and computer-simulated ones. Trying to simulate human behavior without first being able to understand it is like a painter trying to create a faithful likeness while blindfolded.

And this is where the main difficulty of this approach appears: by becoming able to measure the difference between simulated and natural behavior, we acquire a new observation tool, to which the computer's reactions will appear discrete, just as a modern-day camera can distinguish between real images and images on a monitor (the latter flicker, because the "simulation" is good enough for human eyes only). In the same way, the human behavior simulation would only appear continuous to humans, but we'd still know, when we're told, that everything in there is still discrete.

Of course, there are other approaches: one could create a human simulator iteratively, so that each step of the sequence would be able to tell that the next one is discrete, but only the last step would be known to humans, and to us it would appear continuous. But still, how do we evaluate, even with human means, that the sequence is "getting close"? Or we could dump the idea altogether and go with continuous, analog computers...

</rant>

I don't think it was a rant. I thought it was darn good. However, I am still skeptical that space is continuous. There are a lot of assumptions in this world we take for granted. Anyway, good post.

I believe that the world is predictable. Firstly, there is no such thing as random... Everything happens as a result of millions of little variables interacting with each other to produce the result. So if you were able to somehow "universe save", or get the base variables at the time the universe was created, chuck them into your computer, and let it grow/evolve just as if it were the real world, based on the rules of the universe, then, if you got every little thing correct, it would turn out exactly as things actually have. Everything is just based on numbers, and even the human brain can be accurately represented; we just don't have the power to do so yet. So what I'm saying is that it would be hard to model just ONE brain... our computer brain will always be limited until we are able to set it up in a virtual world it can explore. Uhh, pardon me if my statement is a little obscure or foolish, as I haven't attended university, nor am I doing science in high school. But feedback is welcome.

Quote:
So what I'm saying is that it would be hard to model just ONE brain... our computer brain will always be limited until we are able to set it up in a virtual world it can explore.


And even the way it would explore that virtual world would depend on prior experience. And what about upbringing? How should that be simulated?
Perhaps with a bit of motherly soft computing? :P

Quote:

Uhh, pardon me if my statement is a little obscure or foolish, as I haven't attended university, nor am I doing science in high school. But feedback is welcome.


LOL
As we all know, only people who have attended a university can have an original idea/thought... not!! ;)

Quote:
Original post by Jemburula
Firstly, there is no such thing as random... Everything happens as a result of millions of little variables interacting with each other to produce the result. So if you were able to somehow "universe save", or get the base variables at the time the universe was created, chuck them into your computer, and let it grow/evolve just as if it were the real world, based on the rules of the universe, then, if you got every little thing correct, it would turn out exactly as things actually have.


The very basis of quantum mechanics says this is not true. According to classical physics alone (relativity), this is how the universe works, but modern physics says that this is not the case after all.

Quote:
Original post by Promit
The very basis of quantum mechanics says this is not true. According to classical physics alone (relativity), this is how the universe works, but modern physics says that this is not the case after all.


You could easily describe the quantum model in a way that says invisible fairies, with no mass and no energy, push around particles and other quanta, and it could still satisfy the quantum model. Fairies != random.

To say these things appear random would be more accurate.

There is a great quote by Hawking on this matter; if my memory were a bit better I'd recite it. lol.






