chadjohnson

Neural Network Backpropagation - Details

Recommended Posts

I've been searching Google and I've read a dozen or so pages on feed-forward neural networks, and I even wrote a program to test the concepts, but I'm still unclear on something. One web site says the following (read e_k as "e sub k"):

-----
First the input is propagated through the ANN to the output. After this the error e_k on a single output neuron k can be calculated as:

e_k = d_k - y_k

where y_k is the calculated output and d_k is the desired output of neuron k.
-----

It says pretty much exactly the same thing on every site I've been to. What I don't understand is: how am I supposed to know what the desired output for each neuron is? If I have a network with three layers - input, hidden, and output - with, say, 10 neurons in the hidden layer, and I'm trying to get the network to learn to approximate any function I throw at it, how do I know what the output of each neuron should be? Or am I reading this wrong, and it's only talking about the output values of the neurons in the output layer?
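For reference, here's what I coded up for the quoted error calculation, in case my reading of it matters (the variable names are mine, not from the article):

```python
# Per the quoted formula, the error is computed only for the output layer:
# e_k = d_k - y_k for each output neuron k.
desired = [0.0, 1.0]  # d_k: target values for two output neurons
actual = [0.2, 0.7]   # y_k: what the network actually produced

errors = [d - y for d, y in zip(desired, actual)]
```

This only needs targets for the output neurons, which is part of what confuses me about the hidden layer.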

(I'll assume you're using the backpropagation algorithm to train your ANN; it's the most common way to do it.)

The only layer you know the desired output for is the output layer; those target values are what you need while training your neural net. The "desired outputs" for the layers behind the output layer are derived by propagating the error from the output layer backwards through the other layers (thus the name backpropagation).

The general idea is that the output at layer n is caused by the input to layer n, which in turn is caused by the output at layer n-1. This means that errors at layer n are caused by errors at layer n-1, so each hidden neuron gets blamed for a share of the output error proportional to its contribution. Replace "n" with "output" and "n-1" with "hidden" and you'll see how it works in a 3-layered feed-forward network like the one you're describing.
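In code, that idea might look something like this: a minimal sketch of one training step for a 2-input, 10-hidden, 1-output net with sigmoid activations. The network sizes, learning rate, and variable names are all made up for illustration, not taken from the linked reference:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
n_in, n_hid, n_out = 2, 10, 1
W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
x = [0.5, -0.3]   # one training input
d = [1.0]         # desired output for that input (known only at the output layer)
lr = 0.1

def forward():
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(n_in))) for j in range(n_hid)]
    y = [sigmoid(sum(W2[k][j] * h[j] for j in range(n_hid))) for k in range(n_out)]
    return h, y

h, y = forward()
loss_before = 0.5 * sum((d[k] - y[k]) ** 2 for k in range(n_out))

# Output-layer deltas: (desired - actual), scaled by the sigmoid derivative.
delta_out = [(d[k] - y[k]) * y[k] * (1 - y[k]) for k in range(n_out)]

# Hidden-layer deltas: output deltas pushed backwards through W2 -- this is the
# "propagated desired output" for the hidden layer. No explicit targets needed.
delta_hid = [sum(W2[k][j] * delta_out[k] for k in range(n_out)) * h[j] * (1 - h[j])
             for j in range(n_hid)]

# Weight updates: each weight moves along its neuron's delta times its input.
for k in range(n_out):
    for j in range(n_hid):
        W2[k][j] += lr * delta_out[k] * h[j]
for j in range(n_hid):
    for i in range(n_in):
        W1[j][i] += lr * delta_hid[j] * x[i]

h, y = forward()
loss_after = 0.5 * sum((d[k] - y[k]) ** 2 for k in range(n_out))
```

The key line is the delta_hid computation: the only place a target value ever appears is the output layer; everything upstream is derived from it.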

This link looks like a pretty good reference on the backpropagation algorithm.

Thanks for the link.

I've been researching this the past few days, and I've actually come across a couple of pages with seemingly different backpropagation algorithms. They're either like the one on the page you showed me or like this one on this page (see the section "Weight Change Equation"). I don't know which to use.

Also, what is the advantage of using a threshold value and only firing when a neuron reaches a certain value?

*EDIT: wait, now that I look at it, it looks like the two methods are basically the same. The only thing is that one uses the equation e = z(1 - z)(y - z) to calculate the error, and the other uses the difference between the desired output and the actual output. Is one better?
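Playing with the numbers, the two forms seem related: z(1 - z) looks like it's just the derivative of the sigmoid, so the first formula appears to be the plain error scaled by that derivative. A quick numerical check (my own notation, with z = actual output and y = desired output as in the quoted equation):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

a = 0.4         # net input to the output neuron
z = sigmoid(a)  # actual output
y = 1.0         # desired output

raw_error = y - z              # plain "desired minus actual"
delta = z * (1 - z) * (y - z)  # the other page's formula

# Numerically estimate the sigmoid's derivative at net input a.
eps = 1e-6
sigmoid_deriv = (sigmoid(a + eps) - sigmoid(a - eps)) / (2 * eps)
```

If delta equals raw_error times sigmoid_deriv, then the two pages agree and just fold the derivative term in at different points.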

Lastly, on your page it says, "for each output neuron, calculate its error e, and then modify its threshold and weights using the formulas above." Why does it call this a threshold?

Thanks for your help!

[Edited by - chadjohnson on August 10, 2005 8:46:17 AM]
