Weights in an Artificial Neural Network


My first question is: how does the neuron know whether to increase or decrease its weights when it makes an error? I just tell the network whether it is right or wrong. My second question is: if I have, say, 10 inputs per neuron and want 3 or 4 different outputs, is that possible with only one hidden layer? Thanks in advance! Sagar Indurkhya

quote:
Original post by Sagar_Indurkhya
how does the neuron know whether to increase or decrease its weights when it makes an error? I just tell the network whether it is right or wrong.



To be able to tell the network it was wrong, you must know what the correct output was that corresponded to the input. Thus, you have a set of input-output pairs for training. Given an input, the network produces a predicted output. Typically, some function of the difference between the predicted output and the desired (correct) output is used to adjust the weights of the network; the sign of that difference is what tells each weight whether to increase or decrease.
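Here is a minimal sketch of that idea for a single neuron, using the classic delta rule (the names and the learning rate below are illustrative, not something from this thread):

    # Minimal sketch of the delta rule for a single neuron.
    # All names and the learning rate are illustrative.
    def update_weights(weights, inputs, predicted, desired, learning_rate=0.1):
        error = desired - predicted  # signed error: direction of the adjustment
        for i in range(len(weights)):
            # If the output was too low (positive error), weights move up
            # along the input; if too high, they move down.
            weights[i] += learning_rate * error * inputs[i]
        return weights

The point is that the network is never adjusted from a bare right/wrong flag alone; the signed difference between desired and predicted output supplies the direction for each weight.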

quote:
Original post by Sagar_Indurkhya
if I have, say, 10 inputs per neuron and want 3 or 4 different outputs, is that possible with only one hidden layer?



The short answer is yes and no. It actually depends on the functional mapping from inputs to outputs that you are trying to learn. Many functions can be learned by a three-layer network (input, hidden, output). The proviso is that the number of nodes in each layer must be sufficient to identify all of the classification boundaries in the input space and associate an output state with each. In other words, your network has to be the right size: too small and it will over-generalise; too large and it will overfit the data. While there are good heuristics for network structure design, the general principle is empirical investigation until you get a decent answer.
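For concreteness, here is a rough Python sketch of the kind of three-layer network being described, with 10 inputs, one hidden layer, and 3 outputs. The hidden-layer size of 6 is an arbitrary placeholder you would tune empirically, as described above:

    import math
    import random

    # Illustrative sizes: 10 inputs, one hidden layer of 6 nodes, 3 outputs.
    n_in, n_hidden, n_out = 10, 6, 3
    w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def forward(inputs):
        # One pass: inputs -> hidden activations -> output activations.
        hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
        return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_out]

    print(forward([0.5] * 10))  # three outputs from ten inputs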

Cheers,

Timkin


quote:
Original post by Sagar_Indurkhya
if I have, say, 10 inputs per neuron and want 3 or 4 different outputs, is that possible with only one hidden layer?




It may very well work, though I would recommend separate, single-output models. Most MLPs are trained via an iterative process, and it is unlikely that all output variables will achieve a good fit simultaneously. While multiple-output models are certainly economical, I would worry about one output variable achieving a good fit and going on to overfit while another is still underfit.
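As a rough illustration of what separate, single-output models look like in practice, assuming you already have some routine that fits a single-output network (the train_fn helper below is hypothetical):

    # Hypothetical sketch: one single-output network per target variable,
    # so each can be trained (and stopped) independently. train_fn stands
    # in for whatever routine fits a single-output model.
    def train_separate_models(train_fn, inputs, multi_targets):
        models = []
        n_outputs = len(multi_targets[0])
        for k in range(n_outputs):
            # Slice out the k-th target column and fit its own model.
            targets_k = [row[k] for row in multi_targets]
            models.append(train_fn(inputs, targets_k))
        return models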

-Predictor
http://will.dwinnell.com






quote:
Original post by Timkin
...the number of nodes in each layer must be sufficient to identify all of the classification boundaries in the input space and associate an output state with each. In other words, your network has to be the right size: too small and it will over-generalise; too large and it will overfit the data. While there are good heuristics for network structure design, the general principle is empirical investigation until you get a decent answer.


I agree that the network needs to be "large enough" to solve the problem (a network that is too small can only ever underfit the data), but overfitting in networks which are "too large" can be effectively controlled by early stopping.
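A rough sketch of early stopping: train until the error on a held-out validation set stops improving, then keep the best weights seen. The train_one_epoch, validation_error, and copy_weights helpers below are hypothetical stand-ins for your own training code:

    # Sketch of early stopping. train_one_epoch, validation_error and
    # copy_weights are hypothetical stand-ins, not a real library's API.
    def train_with_early_stopping(net, train_one_epoch, validation_error,
                                  patience=10, max_epochs=1000):
        best_err = float("inf")
        best_weights = None
        epochs_since_best = 0
        for epoch in range(max_epochs):
            train_one_epoch(net)
            err = validation_error(net)  # error on a held-out validation set
            if err < best_err:
                best_err = err
                best_weights = net.copy_weights()
                epochs_since_best = 0
            else:
                epochs_since_best += 1
                if epochs_since_best >= patience:
                    break  # validation error has stalled; stop training
        return best_weights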

-Predictor
http://will.dwinnell.com



