
# Alteration of Weight


## Recommended Posts

Hi folks, I just started reading some material on AI. I get the concept of receptors, input nodes, and weights. However, I do not understand how to alter the weights of an input node to get the right result. Let's say we want to perform an AND operation, so we have two input nodes, each initialized with a weight and value of 0. Now, how do I alter the weights if I feed in an input, get a result, and compare it to the correct result? Regards, AL

Success is not the position you stand but the direction in which you look

The basic technique (details may vary according to the actual implementation):

- You have your weights W0..Wn.
- Define an error function F over the network's output.
- Take its derivative with respect to each weight Wk: dF/dWk.
- Offset each weight Wk by a fraction of -dF/dWk.
- Lather. Rinse. Repeat as necessary.
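The steps above can be sketched numerically. This is a minimal example, assuming a one-weight "network" whose output is w*x and a squared-error function F(w) = (w*x - t)^2; the names x, t, and lr are illustrative, not from the post.

```python
# Gradient descent on a single weight.
# Error function:    F(w)  = (w*x - t)^2
# Its derivative:    dF/dw = 2*x*(w*x - t)

def train(x, t, lr=0.1, steps=100):
    w = 0.0                           # initial weight
    for _ in range(steps):
        grad = 2 * x * (w * x - t)    # dF/dw at the current weight
        w -= lr * grad                # offset w by a fraction of -dF/dw
    return w

w = train(x=2.0, t=1.0)
print(w * 2.0)  # the output w*x approaches the target t
```

The "fraction" is the learning rate: too large and the weight overshoots and oscillates, too small and convergence is slow.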

Fruny is correct, but I'm sure you would like a deeper answer.

This is for a feedforward, backpropagating net.

1. Initialize the weights to small random values.
2. While the error > some small threshold, repeat steps 3-6:
3. Feedforward:
   a. Each input unit reads its value from the input vector and sends it to the hidden layer.
   b. Each hidden unit sums its weighted input signals:
      in += Input[x] * weight[x]
   c. Apply the activation function (which one depends on the output range you need: 0 to 1, or -1 to 1).
   d. Send the hidden outputs to the output layer.
   e. Each output unit sums its weighted inputs and applies the activation function, just like the hidden layer. This is the output of the net.
4. Compute the error (output units):
   a. For each output unit:
      outputerror = (train[x] - output[x]) * activation function derivative
   b. Change in the weights:
      outputweightchange = learningrate * outputerror * outputsinput
5. Compute the error (hidden units):
   a. For each hidden unit, sum over the output units it feeds:
      hiddenerror += outputweights * outputerror
   b. Then:
      hiddenerror *= activation function derivative
   c. hiddenweightchange = learningrate * hiddenerror * hiddensinput
6. Update the weights:
   a. For each output unit:
      newweight = oldweight + outputweightchange
   b. For each hidden unit:
      newweight = oldweight + hiddenweightchange
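Steps 1-6 can be put together as a small working sketch. This trains a 2-2-1 feedforward net on the AND function from the original question; the layer sizes, learning rate, epoch count, and sigmoid activation are illustrative choices, not prescribed above.

```python
import math
import random

random.seed(0)
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

# Step 1: init the weights to small random values (last entry of each row is a bias).
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
w_output = [random.uniform(-0.5, 0.5) for _ in range(3)]

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table
lr = 0.5

for epoch in range(5000):           # step 2: repeat until the error is small
    for inputs, target in data:
        # Step 3: feedforward through hidden and output layers.
        hidden = [sigmoid(sum(w[i] * x for i, x in enumerate(inputs)) + w[2])
                  for w in w_hidden]
        out = sigmoid(sum(w_output[i] * h for i, h in enumerate(hidden))
                      + w_output[2])

        # Step 4: output error, scaled by the sigmoid derivative out*(1-out).
        out_err = (target - out) * out * (1 - out)

        # Step 5: hidden errors, backpropagated through the output weights.
        hid_err = [out_err * w_output[i] * hidden[i] * (1 - hidden[i])
                   for i in range(2)]

        # Step 6: update the weights (the bias sees a constant input of 1).
        for i in range(2):
            w_output[i] += lr * out_err * hidden[i]
        w_output[2] += lr * out_err
        for j in range(2):
            for i in range(2):
                w_hidden[j][i] += lr * hid_err[j] * inputs[i]
            w_hidden[j][2] += lr * hid_err[j]

# After training, the rounded outputs should reproduce the AND truth table.
for inputs, target in data:
    hidden = [sigmoid(sum(w[i] * x for i, x in enumerate(inputs)) + w[2])
              for w in w_hidden]
    out = sigmoid(sum(w_output[i] * h for i, h in enumerate(hidden)) + w_output[2])
    print(inputs, round(out))
```

Note that AND is linearly separable, so even a single unit could learn it; the hidden layer is only there to exercise step 5.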

I hope that helps. It might not be clear at first, but keep working on it. Also, if you really want to use NNs, invest a little money in a book. I recommend "Fundamentals of Neural Networks" by Laurene Fausett. It is a great book for beginners: very well laid out, with many different types of nets, and it shows you step by step (like above) how to do everything.

Take it easy,
Bob
