Quite funny, actually: I made the simplest neural net imaginable, two inputs and one output, with the goal of learning XOR. However, after around 20,000 training runs, it had learned this:
1 ^ 1 = 1
1 ^ 0 = 0
0 ^ 1 = 0
0 ^ 0 = 1
Not quite what was intended; that's XNOR, the exact negation. :P
I set up the initial weights like this:
Neuron::Neuron() {
    srand( (unsigned)time( NULL ) );
    // random initial weights in roughly [-1, 1)
    w1 = (float)((rand() % 2000) - 1000) / 1000;
    w2 = (float)((rand() % 2000) - 1000) / 1000;
    wb = (float)((rand() % 2000) - 1000) / 1000; // bias weight
    cout << w1 << endl << w2 << endl << wb << endl;
}
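One side note on the setup: I seed rand() inside the constructor. With a single Neuron that's harmless, but if several were constructed in the same second, time(NULL) would hand them all identical weights. The usual pattern is to seed once at program start, roughly like this (just a sketch, not how my code currently looks):

#include <cstdlib>
#include <ctime>

int main() {
    srand( (unsigned)time( NULL ) ); // seed the RNG once for the whole run
    Neuron Net;                      // the constructor then only draws weights
    // ... training loop as below ...
}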
I train it like this:
int Neuron::Train(int i1, int i2, int correctOutput) {
    // weighted sum of the inputs plus the bias
    float output = (i1 * w1) + (i2 * w2) + (1 * wb);
    output = hardlimiter(output); // squash to 0 or 1
    // training: standard perceptron rule, nudge each weight by error * input
    int error = correctOutput - output;
    //cout << error << endl;
    w1 = w1 + (LEARNRATE * error * i1);
    w2 = w2 + (LEARNRATE * error * i2);
    wb = wb + (LEARNRATE * error * 1);
    return output;
}
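In case it matters, hardlimiter is just a step function, something along these lines (with the threshold at 0):

float Neuron::hardlimiter(float x) {
    // step function: fire 1 if the weighted sum is positive, otherwise 0
    return (x > 0.0f) ? 1.0f : 0.0f;
}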
LEARNRATE is defined as 0.0001.
Then in main I do:
for (int i = 0; i < TRAINTIMES; i++) {
    cout << "(1, 1) --- " << Net.Train(1, 1, 0) << endl;
    cout << "(1, 0) --- " << Net.Train(1, 0, 1) << endl;
    cout << "(0, 1) --- " << Net.Train(0, 1, 1) << endl;
    cout << "(0, 0) --- " << Net.Train(0, 0, 0) << endl;
}
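One thing I notice while posting: I read the outputs straight from Train, which also updates the weights on every call. To check the final behavior without touching the weights, something like this forward-pass-only method would do (Predict is hypothetical, it's not in my code yet):

int Neuron::Predict(int i1, int i2) {
    // same weighted sum and step function as Train, but no weight update
    return (int)hardlimiter((i1 * w1) + (i2 * w2) + (1 * wb));
}

// after the training loop:
cout << "(1, 1) -> " << Net.Predict(1, 1) << endl;
cout << "(1, 0) -> " << Net.Predict(1, 0) << endl;
cout << "(0, 1) -> " << Net.Predict(0, 1) << endl;
cout << "(0, 0) -> " << Net.Predict(0, 0) << endl;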
So... can anyone tell me why my net learned the exact opposite of XOR? :P There isn't much more to the code than what's here, but ask and I shall post the entire thing.