
Levenberg-Marquardt NN learning


Hi,

 

I am trying to train my NN with Levenberg-Marquardt, but my problem is that instead of the error decreasing, it's increasing. For the classic XOR problem I start with an error of e.g. 1.07 and end with 1.99975.

 

Classic BP works just fine. I implemented LMA according to this paper: Efficient algorithm for training neural networks with one hidden layer

 

My code: http://pastebin.com/8LmDMpzU
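For reference, the standard LM weight update is Δw = −(JᵀJ + λI)⁻¹Jᵀe. A frequent source of "error goes up" bugs is an inconsistent sign convention between the residual and the Jacobian. Below is a minimal NumPy sketch of one LM step, assuming e is defined as output − target and J[i, j] = ∂e[i]/∂w[j]; the function and parameter names (`lm_step`, `lam`) are illustrative, not from the poster's code:

```python
import numpy as np

def lm_step(w, residuals, jacobian, lam=1e-3):
    """One Levenberg-Marquardt update for a weight vector w.

    residuals: function w -> e, vector of (output - target) per sample
    jacobian:  function w -> J, where J[i, j] = d e[i] / d w[j]
    lam:       damping factor (larger -> closer to gradient descent)
    """
    e = residuals(w)
    J = jacobian(w)
    # Solve (J^T J + lam*I) dw = J^T e for the Gauss-Newton direction.
    A = J.T @ J + lam * np.eye(len(w))
    dw = np.linalg.solve(A, J.T @ e)
    # Minus sign: with e = output - target and J = de/dw, the descent
    # step is w - dw. Flipping either convention (but not both) makes
    # the error increase instead of decrease.
    return w - dw
```

If e were defined as target − output (as some papers do), J would pick up the opposite sign and the update would become w + dw; mixing the two conventions is exactly the kind of bug that turns LM into gradient *ascent*.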

Can anyone help me understand what's wrong?

 

Thanks


I haven't looked at your code.  However, the first thing I do whenever I write any local optimization routine is to check the derivatives with finite differences.  Unless something more obvious pops out, I'd recommend starting with that.  (And it's a useful test to have anyway.)
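To make that concrete: a central finite-difference check compares each analytic partial derivative against (f(w + εeᵢ) − f(w − εeᵢ)) / 2ε. Here is a minimal, hypothetical sketch (the helper name `check_gradient` and tolerances are my own, not from the poster's code):

```python
import numpy as np

def check_gradient(f, grad, w, eps=1e-6, tol=1e-4):
    """Compare an analytic gradient against central finite differences.

    f:    scalar error function of the weight vector w
    grad: function returning the analytic gradient at w
    Returns True if the relative difference is below tol.
    """
    g_analytic = grad(w)
    g_numeric = np.zeros_like(w)
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        # Central difference: accurate to O(eps^2).
        g_numeric[i] = (f(wp) - f(wm)) / (2 * eps)
    denom = np.linalg.norm(g_analytic) + np.linalg.norm(g_numeric) + 1e-12
    return np.linalg.norm(g_analytic - g_numeric) / denom < tol
```

Run it at a few random weight vectors; if it fails, the bug is in the derivative code (in LM's case, the Jacobian), not in the update rule itself.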
