Quick question on neural networks

4 comments, last by cypherx 19 years, 2 months ago
Hi, just to let you know, I hardly know anything about neural networks. What I'm wondering is: with each neuron, I don't get how the decision making is done. I.e. are they all the same decisions, or slightly randomized, or do you have to manually give each neuron the rules to make the decision? Thanks
I think you're a bit confused about neural networks. Try reading some of the fine intro articles on www.generation5.org

Artificial neurons don't make decisions; they accumulate numerical inputs and use some function to generate outputs. If you're asking about the accumulation or output functions... those are specific to the type of neural network you're using, the most common pair being a weighted-sum accumulator (add each input times its weight) and a sigmoidal output function, 1/(1+e^(-x)).
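
For concreteness, here is a minimal sketch of that pair in Python (the function names and example numbers are purely illustrative):

import math

def sigmoid(x):
    # logistic function: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # weighted-sum accumulator: add each input times its weight, plus a bias
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # sigmoidal output function turns that sum into the neuron's output
    return sigmoid(total)

# example: a neuron with two inputs
print(neuron_output([1.0, 0.5], [0.8, -0.3], 0.1))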


-Alex
Neurons don't make the decisions.

They take the inputs, do some mathematical operations on them, then output the result to other neurons... and so on.

Now, the reason a network can be said to have "learned" something is that the weights inside the neurons have been altered so that the network responds in the way it was meant to.

Because of the way the neurons work, they generalise pretty well; that's why they're used.

IIRC, the formula is something like:
Tw = (sum from j = 0 to N of Ij * Wj) + bias
W = 1 / (1 + e^(-Tw))
Ow = W, if W >= Thr
Ow = 0, if W < Thr

That's basically it.

Thr = the threshold of the neuron
Ij = input number j
Wj = weight number j
Bias = the neuron's bias
Ow = the output, which is what gets passed to the other neurons that link to this one
W = the weighting, after the sigmoid
Tw = the total weighting, before the sigmoid
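
To make that concrete, here is a small Python sketch of the same formula (the AND-gate weights below are made up purely for illustration):

import math

def neuron(inputs, weights, bias, thr):
    # Tw: total weighted input, before the sigmoid
    tw = sum(i * w for i, w in zip(inputs, weights)) + bias
    # W: the weighting, after the sigmoid
    w = 1.0 / (1.0 + math.exp(-tw))
    # Ow: the output passed on to connected neurons (0 if below the threshold)
    return w if w >= thr else 0.0

# with these weights the neuron behaves roughly like an AND gate:
# it only produces a non-zero output when both inputs are 1
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, neuron([a, b], [10.0, 10.0], -15.0, 0.5))

Learning is then just a matter of adjusting the weights and bias until the outputs match what you want.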

Look into some gen5 tutorials.

From,
Nice coder
Actually, a perceptron can be considered to make a kind of decision we call a classification. This is not the same sort of decision an agent makes when it takes its own values into account; it is, rather, a decision hard-coded in either a logical or functional form that maps inputs onto a partitioned output space.
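
As a sketch of that idea (the weights below are made up), a single hard-threshold perceptron maps each input to one of two classes, depending on which side of a line it falls on:

def classify(x, y, w1=1.0, w2=1.0, bias=-1.0):
    # the line w1*x + w2*y + bias = 0 partitions the (x, y) plane;
    # points on one side map to class 1, everything else to class 0
    return 1 if (w1 * x + w2 * y + bias) >= 0 else 0

print(classify(0.2, 0.3))  # 0: below the line x + y = 1
print(classify(0.9, 0.8))  # 1: above the line x + y = 1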

When we talk about a decision, though, we are usually referring to an event in which an 'intelligent' thing weighed up two or more options and chose one or more of them according to a system of beliefs and values held by that thing. So in that sense, no, a perceptron does not make a decision.

Cheers,

Timkin
It is interesting to look into how a biological neuron works.

The Biological Neuron

A neuron's dendritic tree is connected to a thousand neighbouring neurons. When one of those neurons fires, a positive or negative charge is received by one of the dendrites. The strengths of all the received charges are added together through the processes of spatial and temporal summation. Spatial summation occurs when several weak signals are converted into a single large one, while temporal summation converts a rapid series of weak pulses from one source into one large signal. The aggregate input is then passed to the soma (cell body).

The soma and the enclosed nucleus don't play a significant role in the processing of incoming and outgoing data. Their primary function is to perform the continuous maintenance required to keep the neuron functional. The part of the soma that does concern itself with the signal is the axon hillock. If the aggregate input is greater than the axon hillock's threshold value, then the neuron fires, and an output signal is transmitted down the axon.

The strength of the output is constant, regardless of whether the input was just above the threshold or a hundred times as great. The output strength is unaffected by the many divisions in the axon; it reaches each terminal button with the same intensity it had at the axon hillock. This uniformity is critical in an analogue device such as a brain, where small errors can snowball and where error correction is more difficult than in a digital system.

Remember, what we are trying to do here is imitate nature. Current models of neural nets are based on layers, but this is clearly not how nature works. Also, current models don't run the neurons in real time or let them integrate the incoming signal spatially and temporally. Perhaps by considering these ideas, you will be able to make a better neural network.
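
A very rough sketch of that kind of real-time, temporally integrating behaviour is a leaky integrate-and-fire model (every constant here is made up, purely to illustrate temporal summation and the all-or-nothing output):

def integrate_and_fire(input_currents, threshold=1.0, leak=0.9):
    # the membrane potential accumulates incoming charge over time
    # (temporal summation) but leaks away a little between inputs
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)   # all-or-nothing: the output strength is constant
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# a rapid series of weak pulses adds up to a spike on the third step
print(integrate_and_fire([0.4, 0.4, 0.4, 0.0, 0.4]))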
" Current models of neural nets are based on layers, but this is clearly not how nature works. "

Tron3k: I'm confused about what you mean by that. The paragraph you quoted talks about the organization and function of a single neuron (which doesn't contain layers, in either biology or computer models).

Layered groups of neurons occur in nature quite frequently though.

Look at the human visual system, where features of increasing complexity are extracted at each layer of processing (retina to primary visual cortex to other brain regions). Or look at the layers of the hippocampus (I'm not sure about their specific function, but structurally the hippocampus looks like a three-layer artificial neural network with recurrent connections).

About temporal and spatial summation: both are important aspects of how the brain processes information. Their absence from perceptron networks is one of the many reasons NOT to use perceptrons if you want to model the brain. Perceptrons are for pattern recognition; any project even trying to resemble biology should use some sort of spiking neuron. (Check out http://homepages.cwi.nl/~sbohte/pub_thesis.htm)

-Alex

