Strange backpropagation behaviour (picture inside)
Hi!
The picture below shows the output from my program while trying to teach a neural network the XOR function.
EDIT: what it shows is the error rate (sqrt((output - expected)**2), i.e. the absolute error) for the 4 different inputs of the XOR function.
The strange thing is that it only "learns" for one of the four input patterns... all the others don't adapt.
I know this is quite a shot in the dark, but does anybody have any idea about what might be going on?
I have checked and double-checked my code and I don't find anything wrong... any other suggestions about what I can test my ANN on?
thank you...
--Spencer
"Relax, this dragon is sleeping..."
[edited by - spencer on November 11, 2003 8:47:28 AM]
I am using one hidden layer with 2 neurons, and of course one output neuron and 2 inputs...
--Spencer
"Relax, this dragon is sleeping..."
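For reference, here is a minimal sketch of the 2-2-1 topology described above (2 inputs, 2 sigmoid hidden neurons with a bias input, 1 output neuron) trained on XOR with plain backprop. This is my own illustration, not Spencer's code; the learning rate, epoch count, and weight initialization are assumptions:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR truth table: ((input pair), expected output)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# 2 hidden neurons, each with 2 inputs + bias; 1 output neuron with 2 hidden inputs + bias
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_output = [random.uniform(-1, 1) for _ in range(3)]

LR = 0.5  # learning rate (an assumption; tune as needed)

def forward(x):
    # The extra "+ w[2] * 1.0" term is the bias input fixed to 1
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2] * 1.0) for w in w_hidden]
    o = sigmoid(w_output[0] * h[0] + w_output[1] * h[1] + w_output[2] * 1.0)
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

err_before = total_error()

for _ in range(5000):
    for x, t in DATA:
        h, o = forward(x)
        # Output delta; sigmoid derivative is o * (1 - o)
        d_o = (t - o) * o * (1 - o)
        # Hidden deltas: backpropagate the output delta through the output weights
        d_h = [d_o * w_output[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Update output weights (its inputs are h[0], h[1], and the bias 1)
        for j, inp in enumerate([h[0], h[1], 1.0]):
            w_output[j] += LR * d_o * inp
        # Update hidden weights (their inputs are x[0], x[1], and the bias 1)
        for j in range(2):
            for k, inp in enumerate([x[0], x[1], 1.0]):
                w_hidden[j][k] += LR * d_h[j] * inp

err_after = total_error()
print("error before:", err_before, "error after:", err_after)
```

One thing worth checking against this sketch: all four patterns must be trained each epoch, and the hidden deltas must use the *pre-update* output weights. Training only one pattern, or updating weights before computing the hidden deltas, can produce exactly the "only one output adapts" symptom.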
Are you using an extra input set to 1 in all neurons (I think this is known as "bias")? If not, try that.
Yes, I am doing that... and this is what really confuses me, since it _should_ work. :/
--Spencer
"Relax, this dragon is sleeping..."
You should try to find a Java applet that performs BP and compare your weights with the applet's. I remember that on Generation5 there was a worked step of the BP algorithm with weights and outputs; could be worth a look.
I will look at Generation5, thanks.
The software I use for plotting is gnuplot, by the way... very good.
--Spencer
"Relax, this dragon is sleeping..."