Levenberg-Marquardt NN learning


Hi,

I am trying to train my NN with Levenberg-Marquardt, but my problem is that instead of the error decreasing, it is increasing. For the classic XOR problem I start with an error of e.g. 1.07 and end with 1.99975.

Classic BP works just fine. I implemented LMA according to this paper: Efficient algorithm for training neural networks with one hidden layer.
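
For reference, the standard LM update is w_new = w - (J^T J + lambda*I)^{-1} J^T e, where J is the Jacobian of the per-sample residuals e. One detail that produces exactly this symptom: a step that increases the error must be rejected and the damping lambda increased before retrying; if every step is accepted unconditionally, the error is free to climb. Below is a minimal sketch of one damped step in Python with NumPy (residuals_fn and jacobian_fn are hypothetical placeholders, not the paper's or the pastebin's API):

import numpy as np

def lm_step(w, residuals_fn, jacobian_fn, lam):
    """One Levenberg-Marquardt step with damping adaptation.
    residuals_fn(w) -> vector e of per-sample errors (output minus target)
    jacobian_fn(w)  -> matrix J with J[i, j] = d e[i] / d w[j]
    """
    e = residuals_fn(w)
    J = jacobian_fn(w)
    H = J.T @ J                      # Gauss-Newton approximation of the Hessian
    g = J.T @ e                      # gradient of 0.5 * ||e||^2
    # Solve (J^T J + lam * I) * delta = J^T e
    delta = np.linalg.solve(H + lam * np.eye(len(w)), g)
    w_new = w - delta
    e_new = residuals_fn(w_new)
    if e_new @ e_new < e @ e:
        return w_new, lam / 10.0     # error dropped: accept step, reduce damping
    else:
        return w, lam * 10.0         # error rose: reject step, increase damping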

My code: http://pastebin.com/8LmDMpzU

Can anyone help me understand what's wrong?

Thanks


I haven't looked at your code. However, the first thing I do whenever I write any local optimization routine is to check the derivatives with finite differences. Unless something more obvious pops out, I'd recommend starting with that. (And it's a useful test to have anyway.)
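
Concretely, such a check might look like this (a minimal sketch, reusing the same hypothetical residuals_fn/jacobian_fn interface as above):

import numpy as np

def check_jacobian(w, residuals_fn, jacobian_fn, eps=1e-6, tol=1e-4):
    """Compare an analytic Jacobian against central finite differences."""
    J = jacobian_fn(w)
    J_fd = np.zeros_like(J)
    for j in range(len(w)):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[j] += eps
        w_minus[j] -= eps
        J_fd[:, j] = (residuals_fn(w_plus) - residuals_fn(w_minus)) / (2 * eps)
    err = np.max(np.abs(J - J_fd))
    assert err < tol, f"Jacobian mismatch: max abs diff = {err}"
    return err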

Derivatives should be fine. The same ones are used in BP, and that works.
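
One caveat with that reasoning: BP only ever consumes the summed gradient g = J^T e, while LM needs the full Jacobian J with one row per residual, so a working BP does not by itself prove that J is assembled correctly. A small consistency check (again a sketch with hypothetical callables):

import numpy as np

def check_bp_vs_lm(w, bp_gradient_fn, residuals_fn, jacobian_fn, tol=1e-6):
    """The BP gradient of 0.5 * ||e||^2 must equal J^T e.
    Passing this validates the derivatives themselves, but LM still
    needs J row by row, so also verify J directly (see check above)."""
    g_bp = bp_gradient_fn(w)         # gradient from the working BP code
    e = residuals_fn(w)
    J = jacobian_fn(w)
    return np.allclose(g_bp, J.T @ e, atol=tol)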

