I am trying to train my NN with Levenberg-Marquardt, but instead of decreasing, the error increases. For the classic XOR problem I start with an error of e.g. 1.07 and end with 1.99975.
Classic backpropagation works just fine. I implemented LMA according to this paper: "Efficient algorithm for training neural networks with one hidden layer".
My code: http://pastebin.com/8LmDMpzU
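For reference, this is the Levenberg-Marquardt update and damping rule as I understand it, sketched on a toy linear least-squares fit rather than my actual network (the `lm_step` helper, the toy data, and the factor-of-10 damping schedule are my own assumptions, not taken from the paper):

```python
import numpy as np

def lm_step(params, residual_fn, jac_fn, lam):
    """One Levenberg-Marquardt step with damping adjustment.
    If the step reduces the error, accept it and shrink lambda;
    otherwise reject it and grow lambda. Skipping this
    accept/reject check is a common way the error ends up increasing."""
    r = residual_fn(params)
    J = jac_fn(params)
    err = 0.5 * r @ r
    # Solve (J^T J + lam*I) delta = J^T r, then try p <- p - delta
    A = J.T @ J + lam * np.eye(len(params))
    delta = np.linalg.solve(A, J.T @ r)
    new_params = params - delta
    new_r = residual_fn(new_params)
    new_err = 0.5 * new_r @ new_r
    if new_err < err:
        return new_params, lam / 10.0, new_err   # accept step, reduce damping
    return params, lam * 10.0, err               # reject step, increase damping

# Toy problem (assumption, not the XOR network): fit y = a*x + b.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])   # true a = 2, b = 1

def residual(p):
    return p[0] * x + p[1] - y

def jac(p):
    # dr/da = x, dr/db = 1
    return np.stack([x, np.ones_like(x)], axis=1)

p, lam = np.array([0.0, 0.0]), 1e-2
for _ in range(50):
    p, lam, err = lm_step(p, residual, jac, lam)

print(np.round(p, 3))
```

On this toy problem the parameters converge to roughly [2, 1]; the key point is that the error can only go down, because a step that would increase it is rejected and the damping is raised instead.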
Can anyone help me understand what's wrong?