Scrut

Bizarre Artificial Neural Network


I have a neural network that will not fire the correct outputs. The network output is approximately the same value regardless of the network input values, varying only by a small amount (around ~0.001). The network has an input layer with 2 neurons, 2 hidden layers with 4 neurons, and an output layer with 1 neuron. I have tried two different implementations of the backpropagation training algorithm: the first adjusted only weight values, while the second adjusted both weight and bias values. Both have the same issue. I am attempting to train the bitwise exclusive or, the XOR operator.

You can download my source code here:
Click here for Source Code

Here is my training set. I ran through each sample in the set 10 times, so the network generation is 40.

| First Input Neuron | Second Input Neuron | Correct Output |
|--------------------|---------------------|----------------|
|        0.0         |         0.0         |      0.0       |
|        1.0         |         0.0         |      1.0       |
|        0.0         |         1.0         |      1.0       |
|        1.0         |         1.0         |      0.0       |
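
To make the "generation" count concrete, here is a simplified sketch of the epoch loop over that set (the array layout, function name, and `iTimesPerSample` parameter are illustrative, not taken from my actual source):

```cpp
// Illustrative sketch: presenting each of the 4 XOR samples once per pass,
// where every single presentation counts as one "generation".
const int iSampleCount = 4;
double adbTrainingSet[iSampleCount][3] = {
    // in1,  in2,  target
    {  0.0,  0.0,  0.0 },
    {  1.0,  0.0,  1.0 },
    {  0.0,  1.0,  1.0 },
    {  1.0,  1.0,  0.0 }
};

int RunEpochs( int iTimesPerSample ) {
    int iGeneration = 0;
    for ( int t = 0; t < iTimesPerSample; t++ ) {
        for ( int s = 0; s < iSampleCount; s++ ) {
            // train on adbTrainingSet[s] here;
            // each presentation is one generation
            iGeneration++;
        }
    }
    return iGeneration;
}
```

Running each sample 10 times gives the 40 generations mentioned above.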


And here is my code implementing the backpropagation training algorithm. I knew zero calculus going in, so I suspect the problem is in the training algorithm.

void BackPropagation( CNeuralNetwork *pcNeuralNetwork, double *pdbExample, double dbMomentum, double dbLearningRate ) {
    int iOutputLayerID = pcNeuralNetwork->ReturnNeuralNetworkLayerCounter() - 1;
    int i, k, j = pcNeuralNetwork->ReturnNeuralNetworkLayer( iOutputLayerID ).ReturnNeuronCounter();

    for ( i = 0; i < pcNeuralNetwork->ReturnNeuralNetworkLayer( iOutputLayerID ).ReturnNeuronCounter(); i++ ) {
        pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).SetDelta( pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).ReturnOutput() * ( 1.0 - pcNeuralNetwork->ReturnNeuron( j, i ).ReturnOutput() ) * ( 1 - pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).ReturnOutput() ) * ( pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).ReturnOutput() - pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).ReturnOutput() ) );
        for ( j = iOutputLayerID - 1; j > 2; i-- ) {
            for ( k = 1; k <= iOutputLayerID; k++ ) {
                pcNeuralNetwork->ReturnNeuron( j, k ).SetDelta( pcNeuralNetwork->ReturnNeuron( j, k ).ReturnOutput() * ( 1 - pcNeuralNetwork->ReturnNeuron( j, k ).ReturnOutput() ) * pcNeuralNetwork->ReturnNeuron( j + 1, i ).ReturnWeight( k ) * pcNeuralNetwork->ReturnNeuron( j + 1, i ).ReturnDelta() );
            }
        }
    }

    for ( i = iOutputLayerID; i > 2; i-- ) {
        for ( j = 1; j < pcNeuralNetwork->ReturnNeuralNetworkLayer( i ).ReturnNeuronCounter(); j++ ) {
            pcNeuralNetwork->ReturnNeuron( i, j ).SetBias( pcNeuralNetwork->ReturnNeuron( i, j ).ReturnBias() + ( dbLearningRate * 1.0 * pcNeuralNetwork->ReturnNeuron( i, j ).ReturnDelta() ) );
            for ( k = 1; k < pcNeuralNetwork->ReturnNeuralNetworkLayer( i - 1 ).ReturnNeuronCounter(); k++ ) {
                pcNeuralNetwork->ReturnNeuron( i, j ).SetWeight( k, pcNeuralNetwork->ReturnNeuron( i, j ).ReturnWeight( k ) + ( dbLearningRate * pcNeuralNetwork->ReturnNeuron( i - 1, k ).ReturnOutput() * pcNeuralNetwork->ReturnNeuron( i, j ).ReturnDelta() ) );
            }
        }
    }
}




I will respond with any additional info. Thanks for your help.

That is one huge unreadable line there :>

I can't spot anything obvious, but I wrote a set of neural net classes 6 years ago using the same concept (neurons in layers and training through backpropagation). If you wish, you can compare the code to see if you can find any differences:

neuralnet.cpp, neuronlayer.cpp, neuron.cpp

The relevant methods are NeuralNet::train(), NeuronLayer::backpropagateError() and Neuron::computeError().

Good luck!

You should replace your identifiers with shorter, more readable ones. `ReturnNeuronCounter()' could simply be `num_neurons()' or even `size()' and everyone would understand what it's doing.

You should also store parts of your expressions in local variables, especially those that appear several times and can be given reasonable names.

When I started doing those two things in your code, some problems became obvious. For instance, this line:
    pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).SetDelta( pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).ReturnOutput() * ( 1.0 - pcNeuralNetwork->ReturnNeuron( j, i ).ReturnOutput() ) * ( 1 - pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).ReturnOutput() ) * ( pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).ReturnOutput() - pcNeuralNetwork->ReturnNeuron( iOutputLayerID, i ).ReturnOutput() ) );


became
    Neuron & output_neuron = nn->neuron(output_layer_id, i);
    double output = output_neuron.output();
    output_neuron.SetDelta(output * (1.0 - nn->neuron(j, i).output()) * (1 - output) * (output - output));


Since you are multiplying by `(output - output)', this is a convoluted way of setting delta to 0. Probably not what you wanted.

You are also initializing `j' to a value that never gets used, you never use two of your parameters, the loop that initializes and checks `j' decrements `i'...

There are too many things wrong with the code. You can't just write a whole bunch of code and then complain about the highest level behavior not working, when you haven't verified that any of the intermediate things are being done correctly. You have to test each little thing individually, and not write any more code until you got that right.

Also, learn how to use a debugger. It would have told you that delta is being set to 0, for instance, and it may have made you realize that the looping is messed up.
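
For reference, here is a standalone sketch of what the two delta computations are supposed to evaluate to for sigmoid units (these free functions are made up for illustration; they are not part of your CNeuralNetwork API):

```cpp
// Output layer, sigmoid activation:
//   delta = output * (1 - output) * (target - output)
// Note the last factor uses the *target*, not the output again.
double OutputDelta( double dbOutput, double dbTarget ) {
    return dbOutput * ( 1.0 - dbOutput ) * ( dbTarget - dbOutput );
}

// Hidden layer:
//   delta_j = o_j * (1 - o_j) * sum_k( w_jk * delta_k )
// where the sum runs over the neurons k of the next layer.
double HiddenDelta( double dbOutput, const double *pdbWeights,
                    const double *pdbNextDeltas, int iNextCount ) {
    double dbSum = 0.0;
    for ( int k = 0; k < iNextCount; k++ )
        dbSum += pdbWeights[k] * pdbNextDeltas[k];
    return dbOutput * ( 1.0 - dbOutput ) * dbSum;
}
```

Compare that against each factor in your one-liner and the `(output - output)` mistake jumps out immediately.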

Since the graphs that represent ANNs are typically so dense, I wouldn't even bother with all this OO stuff for representing them; the only object you really need to keep around and manipulate is a weighted adjacency matrix. I.e.,

const int nNeurons = 10;
double w[nNeurons][nNeurons];
//...


or the equivalent with nicer array classes.

In other words, my personal opinion is that there's no point extending the (already strained) biological metaphor to your code when all you really need is a matrix (which I would probably make a private member of an ANN class, which would contain evaluation and training methods).

Plus, think about what you're doing in your current representation: You've got pointers and indirection everywhere, taking up 4 bytes of memory at a pop and scattered all over the place through a billion calls to new. The matrix representation, by comparison, leaves all connectivity information implicit, and takes up just one nice contiguous block of memory. Seems better all-around for a dense graph.
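
A minimal sketch of that idea (all names here are illustrative, and I'm assuming a sigmoid activation): the whole network is just the matrix plus an activation vector, and one update step is a matrix-vector product.

```cpp
#include <cmath>

const int nNeurons = 4;

// The entire network state: a weighted adjacency matrix w, where w[i][j] is
// the weight of the edge from neuron i to neuron j, plus current activations.
// No per-neuron objects, no pointers, one contiguous block of memory.
struct DenseNet {
    double w[nNeurons][nNeurons];
    double activation[nNeurons];

    // One synchronous update: each neuron's new activation is the sigmoid
    // of the weighted sum of all current activations.
    void Step() {
        double next[nNeurons];
        for ( int j = 0; j < nNeurons; j++ ) {
            double sum = 0.0;
            for ( int i = 0; i < nNeurons; i++ )
                sum += w[i][j] * activation[i];
            next[j] = 1.0 / ( 1.0 + std::exp( -sum ) );
        }
        for ( int j = 0; j < nNeurons; j++ )
            activation[j] = next[j];
    }
};
```

A layered feed-forward net is just the special case where w is block-structured (nonzero only between adjacent layers).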

Yeah, I am making two more: one with simpler OOP and one with as few function calls as possible (thus fewer jumps), then I'll clump it together and optimize it; I'll probably do forward propagation in asm, which should be easy. Anyway, I know the OOP design is badly overcomplicated, to say the least.

FINALLY, my first neural network trained! I cannot begin to describe the number of errors. Thanks for your responses, everyone. Cygon, thanks for the code you posted; it explained really well what was happening in the backpropagation algorithm. I really can't believe I got it working knowing as little as I do about neural networks. Later.

[Edited by - Scrut on July 14, 2010 12:36:45 AM]

Here is the code for the neural network that finally completed training. It performs multiplication, though not as exactly as I would like; I am still experimenting with the learning rate and momentum, which unexpectedly (but obviously, really) have dramatic effects on the overall result.

Simple Artificial Neural Network
