# Implementing a Neural Network


## Recommended Posts

Well, I understand the theory all right. The implementation seems simple enough as far as the network itself. I can create a class representing a neuron, with an std::map of pointers to inputs in other layers, where each entry is a pair containing a pointer to the neuron and a float for the weight. In fact, I could probably even get rid of the pointer if I just assume a fixed indexing order (although keeping it may be practical if I delete links with very small weights after training). What I find difficult to understand is how to program the algorithm to train the network. I found no tutorial that explained this clearly enough. I'm not really looking for source code either, but for a *clear* description of how the algorithm works. Clear enough to be able to implement it.

##### Share on other sites
Which algorithm do you want to use, backpropagation?

##### Share on other sites
Quote:
 Original post by NicoDeLuciferi
 Which algorithm do you want to use, backpropagation?

Yes, indeed. But I could not find a clear explanation of how the algorithm works. The tutorials I found basically just said:

"Well, it's the process you use to train your network, I hope you can learn it from somewhere else folks, since I've never implemented it and I just wanted to write an incomplete tutorial about neural networks because I think they're cool".

##### Share on other sites
There's an excellent book called "Neural Networks in Computer Intelligence" which explains it very clearly. One additional thing I found that helps, though, is to have a bias value for each and every neuron, not just for the network as a whole.

I'll send a scan from the book to the e-mail address on your profile.

##### Share on other sites
Quote:
 Original post by chadjohnson
 There's an excellent book called "Neural Networks in Computer Intelligence" which explains it very clearly. One additional thing I found that helps, though, is to have a bias value for each and every neuron, not just for the network as a whole. I'll send a scan from the book to the e-mail address on your profile.

Thank you :)

##### Share on other sites
Basically the backpropagation algorithm is just a form of the gradient descent method. Here's MathWorld on gradient descent: http://mathworld.wolfram.com/MethodofSteepestDescent.html

##### Share on other sites
Backpropagation works as follows:
- Determine the output of the network for a certain input. For training you also need the correct output for this sample;
- Compare the network's output to the known correct output to determine the error. The error is usually just a number between 0 and 1 for every output neuron;
- 'Backpropagate' this error to every neuron as if the direction of the network were reversed: reverse all connections and treat the error as input.
- This gives you an error for every neuron in your network, which is a measure of that neuron's part in the overall error. Now adjust all connection weights in proportion to their contribution to the error.

I hope this is clear enough; otherwise I think you should look up some of those links above for the exact formulas etc.

Quote:
 Original post by chadjohnson
 One additional thing I found that helps, though, is to have a bias value for each and every neuron, not just for the network as a whole.

What we did when implementing a neural network was give every neuron a connection to the bias. This way the bias can be constant over the entire network, functioning as a kind of neuron with constant output, but every neuron's connection weight to the bias can change independently during training.
