johnnyBravo

Question about feed forward backpropagation networks


Hi, I'm currently learning about feed-forward backpropagation neural networks. The book I'm reading seems to say you must have a desiredOutput vector for each layer in order to calculate how much to change each set of weights. But I thought it would be pretty hard to know what the desired output is for each layer; it would be much easier to just know the desiredOutputs for the final output layer. So is the network trained with a desiredOutput vector for every layer, or just for the output layer? Thanks

Yeah, when you first start the training, you only know the "desired output" for the output layer. You don't really care how the intermediate layers work, you just care about what the network spits out.

I haven't implemented backprop in a while, but I think the way it works is this: for every layer, it *calculates* the desiredOutput based on the current state of the network and on the desiredOutput for the output layer.

So say we have layers 1 and 2. Updating layer 2 is easy: we just take the input that layer 2 got, compare layer 2's output with the desiredOutput (which we already know), and adjust layer 2's weights so that it will spit out something closer to desiredOutput.

But how do we modify layer 1? We take the desiredOutput of layer 2 and layer 2's weights, and use them to work backwards and calculate an input vector that would have caused layer 2 to spit out the correct output. It's kind of like layer 2 is saying, "Hey, it's not my fault I was wrong, layer 1 gave me the wrong numbers!". That vector, the input layer 2 *should* have received, becomes the desiredOutput for layer 1, and you adjust layer 1's weights to spit out something closer to it, the same way we adjusted layer 2's weights.
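The idea above can be sketched in code. In the standard formulation backprop propagates error terms ("deltas") backwards rather than explicit per-layer desiredOutput vectors, but the effect matches the description: only the output layer is compared against a target, and the hidden layer's error is derived from it. This is just a minimal sketch, not anyone's library code; the 2-2-1 layer sizes, sigmoid activations, learning rate, and the OR training task are all arbitrary choices for illustration.

```python
import math
import random

random.seed(0)  # deterministic weights for the example

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny 2-2-1 network: x -> layer 1 (hidden) -> layer 2 (output).
# Each neuron is a list [w0, w1, bias].
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)]

def forward(x):
    h = [sigmoid(n[0] * x[0] + n[1] * x[1] + n[2]) for n in w1]
    o = [sigmoid(n[0] * h[0] + n[1] * h[1] + n[2]) for n in w2]
    return h, o

def train_step(x, desired, lr=0.5):
    h, o = forward(x)
    # Output layer: the only place a desiredOutput is actually given.
    d2 = [(desired[k] - o[k]) * o[k] * (1 - o[k]) for k in range(len(o))]
    # Hidden layer: its "error" is computed by pushing the output
    # deltas backwards through layer 2's weights.
    d1 = [h[j] * (1 - h[j]) * sum(d2[k] * w2[k][j] for k in range(len(o)))
          for j in range(len(h))]
    for k in range(len(o)):
        for j in range(len(h)):
            w2[k][j] += lr * d2[k] * h[j]
        w2[k][2] += lr * d2[k]
    for j in range(len(h)):
        for i in range(2):
            w1[j][i] += lr * d1[j] * x[i]
        w1[j][2] += lr * d1[j]

# Learn OR; targets exist only for the final output.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [1])]

def total_error():
    return sum((forward(x)[1][0] - t[0]) ** 2 for x, t in data)

err_before = total_error()
for _ in range(3000):
    for x, t in data:
        train_step(x, t)
err_after = total_error()
```

After a few thousand passes the total squared error drops well below its starting value, even though we never specified a desired output for the hidden layer.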

If I am understanding your question correctly:

This is usually how it's done in backprop training:
1. You calculate the output-layer errors (you know what you expect the output to be for a given input).
2. You use that information to calculate the "errors" in the hidden layers, working backwards one layer at a time.
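Those two steps can be written out for sigmoid units. The formulas below are the standard delta rules (the sigmoid derivative is o * (1 - o)); the specific numbers 0.8, 0.6, and 0.5 are made up purely for the example.

```python
def output_delta(target, o):
    # Step 1: output-layer error term, with the sigmoid
    # derivative o * (1 - o) folded in.
    return (target - o) * o * (1 - o)

def hidden_delta(h, downstream_weights, downstream_deltas):
    # Step 2: a hidden unit's "error" is the weighted sum of the
    # deltas of the units it feeds into, times its own derivative.
    back = sum(w * d for w, d in zip(downstream_weights, downstream_deltas))
    return h * (1 - h) * back

d_out = output_delta(1.0, 0.8)             # (1 - 0.8) * 0.8 * 0.2 ≈ 0.032
d_hid = hidden_delta(0.6, [0.5], [d_out])  # 0.6 * 0.4 * (0.5 * 0.032) ≈ 0.00384
```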

Hope this wasn't too vague. I have C# code for a single-hidden-layer network in my journal.

