Neural Networks: Hidden Layer Bias

9 comments, last by kirkd 18 years, 5 months ago
Nick,

What are your thoughts on the need for bias nodes in hidden layers? Intuitively, I'm certain they should be there for exactly the same reason we want them in the input layer, but I can't come up with any supporting evidence. Everywhere I've found a discussion of bias nodes, they are included in all the hidden layers, but I've never seen a discussion of what would happen if they weren't there. Again, I can intuitively see that they provide the same flexibility as they do in the input layer: they move the hyperplane away from the origin. Is it possible, however, that without a bias node in the hidden layer, the transform that occurs in the hidden layer would simply adjust itself through the input-hidden layer weights? Does a bias node in the hidden layer merely allow us to find "a" solution rather than "a specific" solution?
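
To make the question concrete, here's a tiny sketch of what I mean (just my own illustration, with made-up weights, not anything from a reference): I'm treating a bias "node" as a constant 1.0 appended to a layer's outputs, so each unit in the next layer picks up an intercept term from its weight to that node. Without the hidden layer's bias node, the output unit's weighted sum has no intercept, so its hyperplane in hidden-activation space is forced through the origin.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

// Weighted sum of the inputs, optionally adding one extra weight that
// connects to a constant-1 bias node in the previous layer.
double unit(const std::vector<double>& in, const std::vector<double>& w, bool useBias)
{
    double sum = 0.0;
    for (size_t i = 0; i < in.size(); ++i) sum += w[i] * in[i];
    if (useBias) sum += w[in.size()];        // weight from the bias node
    return sigmoid(sum);
}

int main()
{
    std::vector<double> x = { 0.3, -0.7 };   // example input

    // Two hidden units; each weight vector has two input weights plus one
    // weight for the input layer's bias node (arbitrary example numbers).
    std::vector<double> wH1 = { 0.5, -0.2, 0.1 };
    std::vector<double> wH2 = { -0.4, 0.8, -0.3 };
    std::vector<double> h = { unit(x, wH1, true), unit(x, wH2, true) };

    // One output unit, evaluated with and without a bias node in the hidden
    // layer. The last entry of wOut is the weight from that bias node.
    std::vector<double> wOut = { 1.2, -0.9, 0.4 };
    double yWithBias    = unit(h, wOut, true);   // intercept available
    double yWithoutBias = unit(h, wOut, false);  // no intercept: hyperplane through the origin of hidden space

    std::printf("with bias: %f  without bias: %f\n", yWithBias, yWithoutBias);
    return 0;
}
```

The difference is only that one extra weight on the output unit, which is exactly what I can't see the input-hidden weights making up for on their own.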

Hmmmm.....


-Kirk

