Bogus Neural Network Output

Started by Trillian
1 comment, last by Trillian 15 years, 8 months ago
Hello! I've just finished recoding my old neural network code, but seeing the results of my first tests, I must have done something wrong. I'm currently using three layers: 12 input neurons, 30 hidden neurons and 12 output neurons. I use the following set of input values: [1 0 0 0 1 0 0 1 0 0 0 0]. I randomize all the neurons' output weights between 0.0f and 1.0f and use the following code to do the forward propagation:

// Weighted sum of this layer's activations into each neuron
// of the next layer, squashed by the sigmoid below.
for (int i = 0; i < next.NeuronCount; ++i)
{
    float sum = 0.0f;
    for (int j = 0; j < NeuronCount; ++j)
        sum += values[j] * outputWeights[j, i];
    next.values[i] = normalization(sum);
}

// ...where "normalization" is currently the following sigmoid function:
public static float Sigmoid(float value)
{
    // Standard logistic function: maps any real input into (0, 1).
    return (float)(1.0 / (1.0 + (System.Math.Exp((double)-value))));
}
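
For context, the loop above lives in a method I'll call Propagate here (the name is just illustrative; it wraps the loop shown above), and I call it once per layer transition:

// Forward pass: the sigmoid is applied once per transition,
// first to the hidden sums, then to the output sums.
input.Propagate(hidden);   // input layer -> hidden layer
hidden.Propagate(output);  // hidden layer -> output layer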


The values I get in the output layer are all 0.99999924598f or something stupid like that. I don't know if the sigmoid function should be applied at each propagation step, but that's how it was done in a sample I found. Does anyone have an idea of what I'm doing wrong?
Your weights are all positive, which results in some huge values in the output layer. You should probably initialize your weights to be random numbers between -1.0 and 1.0, or something like that. With your setup, each output neuron's sum ends up somewhere around 12, and Sigmoid(12) is already about 0.99999, so every output saturates near 1.
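
For instance, here is a minimal sketch of symmetric initialization, reusing the outputWeights and NeuronCount names from your snippet (the rng instance is an assumption, not something from your code):

// Draw each weight uniformly from [-1, 1) so positive and negative
// contributions roughly cancel, and layer sums stay in the sigmoid's
// responsive range instead of saturating it.
var rng = new System.Random();
for (int j = 0; j < NeuronCount; ++j)
    for (int i = 0; i < next.NeuronCount; ++i)
        outputWeights[j, i] = (float)(rng.NextDouble() * 2.0 - 1.0);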

Quote: Original post by alvaro
Your weights are all positive, which results in some huge values in the output layer. You should probably initialize your weights to be random numbers between -1.0 and 1.0, or something like that.


Aaahhh, of course it had to be so simple. Thanks a lot for pointing that out. It now produces much more realistic results!

