Back Propagation Error


I'm trying to write a generic neural network library and things aren't working out too well. I must be missing something--I understand the concept, but my output is way off. I'm just trying to teach the network XOR.

Here is how I understand the algorithm:

1. Start with an input and the desired output.
2. Feed each input into each neuron in the first layer.
3. For each neuron, solve e = sum( x_i * w_i ) for i = 1 to the number of inputs, where the w_i are the connection weights.
4. For each neuron, solve y = s(e), where s is a sigmoid function.
5. Repeat 3 and 4 for each subsequent layer, passing the y values on as the next layer's inputs.
6. Solve for the output error E = desired output - y, where y is the final layer's output.
7. For each neuron, starting with the output layer, solve for its error term as s'(e) * E, where E is the error propagated to that neuron (for the output layer this is just the output error from step 6).
8. For each of the layer's inputs, propagate sum( error_i * w_i ) for i = 1 to the number of the layer's neurons back to the previous layer.
9. Repeat 7 and 8 for all previous layers.
10. For each layer, update all weights per w_ij += error_i * a * y_j, where a is the learning rate.
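To make the numbers concrete, here's steps 6, 7 and 10 run by hand for a single output neuron (my own made-up values): with learning rate a = 0.5, if the neuron outputs y = 0.6 against a desired output of 1.0, then E = 1.0 - 0.6 = 0.4, the error term is s'(e) * E = 0.6 * ( 1 - 0.6 ) * 0.4 = 0.096 (using s'(e) = y * ( 1 - y ) for the sigmoid), and a weight on an input of 1.0 gets nudged by 0.096 * 0.5 * 1.0 = 0.048.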

OK, that's a pretty rough description; hopefully someone's followed along!

It's kind of a lot of code, so I'm only posting the core of it; maybe someone will spot some glaring errors. Any help is appreciated.

First, the train method on the NeuralNetwork class:

void NeuralNetwork::train( const vector<TrainingPoint>& set, uint iterations )
{
    //dumbly iterate x times over the whole training set
    for( uint i = 0; i < iterations; i++ )
    {
        cout << "==================================" << endl;

        //iterate over each input/output pair in the training set
        for( size_t j = 0; j < set.size(); j++ )
        {
            //forward step--training input goes into the first layer,
            //each previous layer's output goes into the subsequent layer
            for( size_t k = 0; k < layers.size(); k++ )
            {
                layers[ k ].update( k == 0 ? set[ j ].inputs : layers[ k - 1 ].output );
            }

            //calculate output error: e = desired - actual
            vector<double> signal;
            for( size_t k = 0; k < set[ j ].outputs.size(); k++ )
            {
                signal.push_back( set[ j ].outputs[ k ] - layers[ layers.size() - 1 ].output[ k ] );
            }

            //propagate the error back through the network;
            //each layer calculates all of its neurons' errors
            for( size_t k = layers.size(); k-- > 0; )
            {
                layers[ k ].propagateError( k == layers.size() - 1 ? signal : layers[ k + 1 ].signal );
            }

            //step forward through the layers again and update the weights
            for( size_t k = 0; k < layers.size(); k++ )
            {
                layers[ k ].updateWeights( k == 0 ? set[ j ].inputs : layers[ k - 1 ].output );
            }

            cout << "target: " << set[ j ].outputs[ 0 ]
                 << " output: " << layers[ layers.size() - 1 ].output[ 0 ] << endl;
        }
    }
}
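In case it matters, here's roughly how I drive train() on XOR. This is a minimal sketch rather than my exact harness: the TrainingPoint below is a stand-in with just the inputs/outputs members that train() reads, and the commented-out constructor call is hypothetical since I haven't posted that part.

#include <vector>
using namespace std;

//stand-in definition; the real TrainingPoint just needs the
//`inputs` and `outputs` members that train() reads above
struct TrainingPoint
{
    vector<double> inputs;
    vector<double> outputs;
};

int main()
{
    //the four XOR input/output pairs
    vector<TrainingPoint> xorSet =
    {
        { { 0.0, 0.0 }, { 0.0 } },
        { { 0.0, 1.0 }, { 1.0 } },
        { { 1.0, 0.0 }, { 1.0 } },
        { { 1.0, 1.0 }, { 0.0 } },
    };

    //hypothetical construction--a 2-input network with one hidden
    //layer of 2 neurons and 1 output; the real constructor may differ
    //NeuralNetwork net( { 2, 2, 1 } );
    //net.train( xorSet, 10000 );

    return 0;
}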



Next, the update, propagateError and updateWeights methods on the NeuralLayer class:


//solve for all neuron activations;
//nn is the number of neurons, ni is the number of inputs
//(equal to the previous layer's number of neurons)
void NeuralLayer::update( const vector<double>& input )
{
    for( int i = 0; i < nn; i++ )
    {
        //weighted sum of inputs: e = sum( x_j * w_ij )
        double x = 0.0;
        for( int j = 0; j < ni; j++ )
        {
            x += input[ j ] * weights[ i ][ j ];
        }

        //sigmoid activation: y = 1 / ( 1 + e^-x )
        output[ i ] = 1.0 / ( 1.0 + exp( -x ) );
    }
}
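(An aside on the math here: 1.0 / ( 1.0 + exp( -x ) ) is the logistic sigmoid, and its derivative can be written purely in terms of its output, s'(x) = s(x) * ( 1 - s(x) ), which is where the y * ( 1 - y ) factor in propagateError below comes from. A tiny self-contained check, with helper names of my own invention:)

#include <cmath>
#include <cstdio>

//logistic sigmoid, as used in update() above
double sigmoid( double x ) { return 1.0 / ( 1.0 + exp( -x ) ); }

//its derivative expressed via the output: s'(x) = s(x) * ( 1 - s(x) )
double sigmoidDeriv( double y ) { return y * ( 1.0 - y ); }

int main()
{
    double y = sigmoid( 0.0 );                  //0.5
    printf( "%f %f\n", y, sigmoidDeriv( y ) );  //prints 0.5 0.25
    return 0;
}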

//solve for each neuron's error given the following layer's propagated
//error terms, and produce those same terms for the previous layer
void NeuralLayer::propagateError( const vector<double>& propagation )
{
    //error term: s'(e) * propagated error, with s'(e) = y * ( 1 - y )
    //for the sigmoid
    for( int i = 0; i < nn; i++ )
    {
        error[ i ] = output[ i ] * ( 1.0 - output[ i ] ) * propagation[ i ];
    }

    //signal for input i is the weighted sum of this layer's errors,
    //i.e. the error propagated back to the previous layer's neuron i
    for( int i = 0; i < ni; i++ )
    {
        signal[ i ] = 0.0;
        for( int j = 0; j < nn; j++ )
        {
            signal[ i ] += weights[ j ][ i ] * error[ j ];
        }
    }
}
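A throwaway, self-contained hand-check of that math (my own example values): with identity weights each input connects to exactly one neuron, so the propagated signal should come out equal to the error terms.

#include <cstdio>
#include <vector>
using namespace std;

int main()
{
    //2 neurons, 2 inputs, identity weights
    vector<double> output = { 0.6, 0.4 };
    vector<double> prop   = { 0.5, -0.5 };
    vector<vector<double>> weights = { { 1.0, 0.0 }, { 0.0, 1.0 } };

    vector<double> error( 2 ), signal( 2, 0.0 );

    //error = { 0.6*0.4*0.5, 0.4*0.6*-0.5 } = { 0.12, -0.12 }
    for( int i = 0; i < 2; i++ )
        error[ i ] = output[ i ] * ( 1.0 - output[ i ] ) * prop[ i ];

    //with identity weights, signal should equal error
    for( int i = 0; i < 2; i++ )
        for( int j = 0; j < 2; j++ )
            signal[ i ] += weights[ j ][ i ] * error[ j ];

    printf( "error: %f %f\nsignal: %f %f\n",
            error[ 0 ], error[ 1 ], signal[ 0 ], signal[ 1 ] );
    return 0;
}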

//nudge each weight along the gradient: w_ij += error_i * rate * input_j
void NeuralLayer::updateWeights( const vector<double>& input )
{
    for( int i = 0; i < nn; i++ )
    {
        for( int j = 0; j < ni; j++ )
        {
            weights[ i ][ j ] += error[ i ] * rate * input[ j ];
        }
    }
}
