My Simple Theory On Modelling A Brain..

Quote:Original post by Senses777
Although I see it commonly passed off as a matter of the "facts of the universe" here, I highly doubt creating a simulated sentient being actually creates a sentient being.

And how would you define the difference between the two? What's to say that *we* don't merely simulate sentience through the flow of electrons in our brains?

Quote:Original post by Senses777
If so, at what point does it become inhumane? At what series of electronic signals in a computer built by men does the machine, or program, or section of a program become alive enough that it would be cruel to terminate it, just as we created it?

Now *that's* the doozie, and unfortunately I don't think it's something we will ever be able to define without reaching and exceeding that point in AI technology. Of course, by this time it will already be too late.

Quote:Original post by Senses777
Perhaps you don't like the idea that we are special, and that what we have in our self awareness cannot be duplicated by mere simulations: wake up to reality.

Well that was a bad way to end a good post. This is a discussion of the technology involved in improving AI, don't try and contort it into something it isn't.
"Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a bygone vexation stands vivified, and has vowed to vanquish these venal and virulent vermin vanguarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V.".....V
To reply to some of the above topics:

1) "I heard or read somewhere that the average person only uses 10% of their brain." Some people do use more than 10% of their brain at a time ... it's called epilepsy (seizure). Only having electrical signals pass through 10% of the brain is a *good thing* ... indeed, if every neuron fired at once, we would have little capability to process information. This is an example of a misinterpreted research result.

2) Neurons encode information in three basic ways: temporally (time), spatially (location), and with "amplitude" (the average number of neurons firing at once in a specific location). When we touch our arm, many mechanoreceptors are tied to a single nerve cell, so the signal is averaged and then sent, using the above three means, through the thalamus to the somatosensory cortex, which runs roughly parallel to the plane of the face across the top middle of the brain. This area is mapped so predictably that scientists can stimulate a particular patch of brain tissue and evoke sensation in a predictable area of skin.

3) Quantum computing is only faster than traditional computers at a few kinds of operations. Unbelievably, only several dozen algorithms have been discovered for it that outperform their counterparts on modern computers (such as the factoring behind much of cryptography, and unstructured search)!

4) Neural networks are *terrible* approximations of neurons. They are so "dumbed down" from actual neuron models that they don't really have anything to do with how an actual brain processes data. Actual neuron models such as Hodgkin-Huxley or FCM (Fohlmeister et al.) are systems of differential equations that take into account the applied voltage and the concentrations of ions (although these are still imperfect due to the system's complexity). A minimal sketch of the Hodgkin-Huxley equations appears at the end of this list.

5) Neurons are *much* slower than electrical circuits. It takes approximately 2 ms to propagate an action potential down an axon. A myelin sheath speeds this up, but chemical transmission is a very slow way of moving data. There are, however, direct electrical connections in the brain as well -- gap junctions, pores that directly connect adjacent neurons. Whether these pores are active, and how strong they are, is a tightly modulated process that is not well understood. Further, gray matter is largely *unmyelinated*; myelin is mostly found on long axons, in the brain's white-matter tracts and in the peripheral nervous system.

6) Entirely new models must be constructed to interpret these signals accurately: for instance, instead of analyzing the traditional amplitude-versus-time signal of electrical engineering, we need to analyze the frequency of spike trains over time (see the sketch below). Current research in biomedical engineering is working to address this.
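
To make points 4 and 6 a little more concrete, here is a minimal sketch -- my own illustration in Python/NumPy, not code from any published model -- of the classic Hodgkin-Huxley squid-axon equations integrated with plain forward Euler, with a crude spike-count firing-rate estimate tacked on at the end. The constants are the standard textbook values; a model such as FCM layers additional currents and ion dynamics on top of this.

```python
import numpy as np

# Classic Hodgkin-Huxley squid-axon constants (textbook values).
# Units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2.
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening/closing rates of the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 200.0                     # time step and total simulated time (ms)
steps = int(T / dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting membrane voltage and gate states
I_ext = 10.0                            # constant injected current (uA/cm^2)

trace = np.empty(steps)
for i in range(steps):
    # Ionic currents at the present membrane voltage.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Forward-Euler step of the four coupled differential equations.
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    trace[i] = V

# Point 6: reduce the raw voltage-vs-time trace to a spike-train firing rate
# by counting upward threshold crossings over the simulated interval.
threshold = 0.0  # mV
spikes = np.sum((trace[1:] >= threshold) & (trace[:-1] < threshold))
print(f"{spikes} spikes in {T:.0f} ms -> {1000.0 * spikes / T:.1f} Hz")
```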

h20, member of WFG 0 A.D.
ehm... can't you begin with something "easy" like an ant's brain or something?
Quote:Original post by joanusdmentia
And how would you define the difference between the two? What's to say that *we* don't merely simulate sentience through the flow of electrons in our brains?

And how can you say that sentience is truly created, when there is no known way of testing it, nor will there likely ever be?

Perhaps I was too biased in my opinion, and I should change my argument to "We can never know".

Quote:Now *that's* the doozie, and unfortunately I don't think it's something we will ever be able to define without reaching and exceeding that point in AI technology. Of course, by this time it will already be too late.


Interesting point. Not easy to solve either, but you can't just give up hope if you really think this.

If you and others truly believe that we can make sentient AI, then it is immoral of you to stand by, or even promote AI research, without a serious effort to spread the idea that AI can become sentient and must be given some rights.

Quote:Well that was a bad way to end a good post. This is a discussion of the technology involved in improving AI, don't try and contort it into something it isn't.


I was only bringing up the general feeling that I got from this thread and others that I have seen before. It frustrates me that people so quickly jump to the conclusion that we can make truly sentient beings, yet they have no evidence to back it up. It is just assumed. I'd like to throw the alternative idea out there. Hopefully this tangent, which is related to game AI, can be tolerated at least for now while it is civil.

You're right though, I was being an ass, and I apologize; I definitely could have brought up my alternative views without insulting those with opposing views. Hopefully I did that in this post. Although I do not share it, I do respect the opposing opinion, and you definitely made some good points. :)
"I want to make a simple MMORPG first" - Fenryl
Quote:Original post by Senses777
And how can you say that sentience is truly created, when there is no known way of testing it, nor will there likely ever be?

Perhaps I was too biased in my opinion, and I should change my argument to "We can never know".

Now that I fully agree with, and it has a huge bearing on the morality arguments related to sentient AI. The fact that we can probably never know whether an AI is genuinely sentient is going to be one of the biggest arguments against 'robot rights', as it were, in the future (I really watch too much anime [smile]).

Quote:Original post by Senses777
Interesting point. Not easy to solve either, but you can't just give up hope if you really think this.

If you and others truly believe that we can make sentient AI, then it is immoral of you to stand by, or even promote AI research, without a serious effort to spread the idea that AI can become sentient and must be given some rights.

I'm actually undecided about whether or not we can achieve a sentient AI, although I do lean more towards 'can' than 'can't'. However, I don't think that's really the question we should be asking ourselves, but rather whether or not we *should*. Sure, there are oodles of benefits for *us*, but at the same time scenarios such as the one seen in I, Robot really aren't all that unbelievable assuming the technology is in place (albeit more extreme than what could happen in real life, it being a movie and all). The basic idea that a sentient AI could rebel against its creators is far too believable for my liking.

Having said that, you're absolutely right that the idea of giving rights to sentient AI should be planted well before the tech is developed, and I think that most people would agree. Just what those rights should be, however, is much more difficult, as you will always have the "but it's only a machine" contingent fed by the in-built superiority complex known as being human.

Quote:Original post by Senses777
I was only bringing up the general feeling that I got from this thread and others that I have seen before. It frustrates me that people so quickly jump to the conclusion that we can make truly sentient beings, yet they have no evidence to back it up. It is just assumed. I'd like to throw the alternative idea out there. Hopefully this tangent, which is related to game AI, can be tolerated at least for now while it is civil.

Much better put (is that even a sentence!?). I think that in order to make advances towards a sentient AI we have to assume that we can; otherwise we'd just give up because we wouldn't be able to see the point in trying. However, the idea that we can't is simply another viewpoint, and it can and must be tolerated in order to keep people's feet on the ground.
"Voilà! In view, a humble vaudevillian veteran, cast vicariously as both victim and villain by the vicissitudes of Fate. This visage, no mere veneer of vanity, is a vestige of the vox populi, now vacant, vanished. However, this valorous visitation of a bygone vexation stands vivified, and has vowed to vanquish these venal and virulent vermin vanguarding vice and vouchsafing the violently vicious and voracious violation of volition. The only verdict is vengeance; a vendetta held as a votive, not in vain, for the value and veracity of such shall one day vindicate the vigilant and the virtuous. Verily, this vichyssoise of verbiage veers most verbose, so let me simply add that it's my very good honor to meet you and you may call me V.".....V
Quote:Original post by mnansgar
To reply to some of the above topics:

1) "I heard or read somewhere that the average person only uses 10% of their brain." Some people do use more than 10% of their brain at a time ... it's called epilepsy (seizure).


One should also ask the question, '10% of what'? Processing power? Active neurons? When people hear this statement they think it means only 10% of the matter in our brains is used for our current functionality. This is false. As best as can be determined through modern imaging (particularly fMRI and SPECT), we use all of our brains to achieve our current functionality. It's simply that we don't need to use all of it at the same time, since we don't do every possible processing action continuously.

As for epilepsy, it's not necessarily the case that more neurons are active during a seizure... it depends on the seizure type (focal or generalised) and the scale at which the seizure is considered. Certainly, in generalised epilepsies, entrainment of many areas of the brain occurs and the total energy usage of the brain is increased. Focal epilepsies are quite often so localised that they don't cause an overall increase in energy usage... they typically increase it only in and around the seizure focus.

Quote:Original post by mnansgar
2) Neurons encode information in three basic ways: temporally (time), spatially (location), and with "amplitude" (average number of neurons firing at once in a specific location).


That's a little misleading... domain information is encoded by spike frequency and count (duration). Neuronal clusters can encode slight perturbations of the same information set (think of it as a localised density function in information space) and the number of neurons involved in a computation within a cluster can encode information (which is what I think you meant by 'amplitude'). Further to this though, in both the occipital and frontal lobes it has been demonstrated that domain information is also stored in the average phase synchronisation between clusters of neurons.

It is quite conceivable that there are other mechanisms for encoding information within neuronal populations that we are not yet aware of.
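
Since phase synchronisation came up: a common way to quantify it between two signals is the phase-locking value, computed from the instantaneous phases given by the Hilbert transform. Below is a toy sketch of that calculation -- my own, in Python with NumPy/SciPy, run on synthetic oscillations rather than any real recording; in practice you would band-pass filter around the band of interest first.

```python
import numpy as np
from scipy.signal import hilbert

# Two synthetic signals standing in for recordings from two neuronal clusters:
# a shared 10 Hz rhythm (with a fixed phase offset) buried in independent noise.
fs = 1000.0                           # sampling rate (Hz)
t = np.arange(0.0, 5.0, 1.0 / fs)     # 5 seconds of "data"
rng = np.random.default_rng(0)
x = np.sin(2.0 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2.0 * np.pi * 10.0 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

# Instantaneous phase of each signal from its analytic (Hilbert) representation.
phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))

# Phase-locking value: magnitude of the mean unit phasor of the phase difference.
# 1.0 means the phases stay perfectly locked; values near 0 mean no relationship.
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"phase-locking value: {plv:.2f}")
```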


As for the comment made by joanusdmentia about memory being distributed throughout the brain: this is not true. Indeed, one of the key issues in performing surgery as a treatment for focal epilepsy (particularly where the seizure focus is located in the temporal lobe) is the effect that removal of some or all of the temporal lobe will have on memory and language skills.

It is certainly the case that removal of the dominant lobe causes memory loss (I think the proportion is something like 50% of patients suffer severe memory loss of certain types). Patients can usually cope with removal of the less dominant lobe, and function has been shown (through fMRI studies) to be taken up by other parts of the brain (particularly the dominant lobe).

One of the big reasons for this is that the hippocampus is located in the temporal lobe, and there is a lot of evidence to support the thesis that the hippocampus is a dominant structure for the encoding and decoding of memories (in conjunction with other parts of the brain - particularly in the frontal lobe). So, it's not true that if you chop out a part of the brain, memories are completely unaffected.

Cheers,

Timkin
Quote:Original post by Timkin
As for the comment made by joanusdmentia about distributed memory throughout the brain. This is not true. Indeed, one of the key issues in performing surgery as a treatment for focal epilepsy (particularly where the seizure focus is located in the temporal lobe) is the effect that removal of some or all of the temporal lobe will have on memory and language skills. It is certainly the case that removal of the dominant lobe causes memory loss (I think the proportion is something like 50% of patients suffer severe memory loss of certain types).
Timkin


My father smashed his head open and had to have extensive brain surgery. When he recovered a few years later, he discovered he'd lost almost all his knowledge of computers (he'd spent 30 years working in the industry in big corporates on everything from building computers to selling them).

He was quite amused when I pointed out that what he'd lost was worthless anyway - it was all that knowledge about computers that don't even exist any more, and OSes that few people today have even heard of. I mean, FFS, I can still remember the syntax for MS-DOS 3.22 and how to use Windows 2.0. And how to run RA (a BBS system) multi-threaded on Windows 3.1. What chance is there I'll ever need *that* info again?!
Quote:Original post by mnansgar

3) Quantum computing is only faster than traditional computers at a few operations. Unbelievably, there have only been several dozen algorithms discovered for it that operate faster than modern computers (such as cryptography and search)!


As my head of studies never tired of saying (he was *really* into this stuff...), the power of quantum computing is limited to situations where the answer is non-deterministic: i.e. the only way of discovering the answer is to guess, but once you have the right answer you *know*.

This scares the heck out of crypto people because that describes perfectly the process of cracking a key...

...but then again, other people in the dept used to say that this was based on a fundamental misunderstanding of the nature of quantum computing, and since I didn't major in Physics I take it with a pinch of salt :).
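
For what it's worth, the textbook example of that guess-then-verify structure is Grover's search algorithm. Here is a toy classical simulation of its mechanics -- my own sketch in Python/NumPy, which obviously gains none of the actual quantum speedup -- showing that the marked item dominates the amplitudes after only about sqrt(N) oracle queries, versus roughly N/2 expected classical guesses.

```python
import numpy as np

# Classical statevector simulation of Grover's search over N = 2^n items.
n_qubits = 10
N = 2 ** n_qubits
marked = 717                    # the single item only the oracle can recognise

# Start in the uniform superposition: every item equally likely.
state = np.full(N, 1.0 / np.sqrt(N))

# Roughly (pi/4) * sqrt(N) Grover iterations are optimal.
iterations = int(np.floor(np.pi / 4.0 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1.0               # oracle: flip the sign of the marked amplitude
    state = 2.0 * state.mean() - state  # diffusion: inversion about the mean amplitude

best = int(np.argmax(np.abs(state)))
print(f"N = {N}, {iterations} iterations, best guess = {best} "
      f"(marked = {marked}), probability = {state[best] ** 2:.3f}")
```
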
Quote:Original post by joanusdmentia
As for memory, it's not nearly as simple as RAM. If you pull a stick of RAM out of your machine then that memory is lost. However, that's not true of the brain. Memory is dispersed over the entire brain (or is it over an entire section of the brain?) such that if you cut a chunk out, you don't actually lose any memory.


This is how data CDs work. If you're interested, look up group theory. It's a bit like RAID5, only cleverer :) (RAID5 is rather primitive, only working in binary; group theory explains how to achieve the same result in an arbitrary radix).

I've seen someone demo this by cutting holes in a CD with a pair of scissors, and not losing the data...at least, of course, until the holes get too big ;).
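
Strictly speaking, CDs use Reed-Solomon codes rather than simple parity, but the binary RAID5-style case mentioned above is easy to demo. Here is a toy sketch of it -- my own, in Python -- where one block of a message is thrown away and then rebuilt from the surviving blocks plus a single XOR parity block.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Split a message into fixed-size blocks and compute one parity block (RAID5-style).
message = b"memories, like parity, can survive losing a piece"
block_size = 8
padded = message.ljust(-(-len(message) // block_size) * block_size, b"\x00")
blocks = [padded[i:i + block_size] for i in range(0, len(padded), block_size)]
parity = xor_blocks(blocks)

# "Cut a hole in the disc": discard one block entirely...
lost = 2
survivors = blocks[:lost] + blocks[lost + 1:]

# ...then rebuild it by XORing the survivors with the parity block.
recovered = xor_blocks(survivors + [parity])
assert recovered == blocks[lost]
print("recovered block:", recovered)
```
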
Quote:Original post by Extrarius
I have read about a 'genetic programming' type simulation (that sounded at least somewhat physically accurate)


There are two things that share that name. One of them I coincidentally just started a topic on in this forum.

The other, IIRC, is a process of making a computer using biochemistry - you can use certain primitive organisms to simulate a simple binary-based computer (a von Neumann machine, IIRC).

What I find ironic is that the one invented by computer scientists has more in common with biology, and the one invented by biologists has more in common with CS ;).
