
Levenberg-Marquardt NN learning



#1 Martin Perry   Members   -  Reputation: 1324

Posted 04 March 2013 - 06:49 AM

Hi,

 

I am trying to train my NN with Levenberg-Marquardt, but my problem is that instead of the error decreasing, it keeps increasing. For the classic XOR problem I start with an error of e.g. 1.07 and end with 1.99975.

 

Classic BP works just fine. I implemented LMA according to this paper: Efficient algorithm for training neural networks with one hidden layer
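
For reference, my understanding of the damped LM update is sketched below on a toy least-squares problem; the residuals and Jacobian here are stand-ins for the network's, not my actual code:

import numpy as np

# Toy least-squares problem standing in for the network: fit y = a*x + b.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

def residuals(w):
    a, b = w
    return a * x + b - y              # one residual per training sample

def jacobian(w):
    # dr_i/da = x_i, dr_i/db = 1 (constant here; weight-dependent for a NN)
    return np.column_stack([x, np.ones_like(x)])

w = np.zeros(2)
lam = 1e-3
for _ in range(50):
    r = residuals(w)
    J = jacobian(w)
    # Solve (J^T J + lam*I) delta = J^T r, then step w -> w - delta.
    delta = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), J.T @ r)
    if np.sum(residuals(w - delta) ** 2) < np.sum(r ** 2):
        w, lam = w - delta, lam / 10.0    # accept the step, relax damping
    else:
        lam *= 10.0                       # reject the step, damp harder

print(w)   # converges to approximately [2.0, 1.0]

The accept/reject test on the last lines is what keeps the error monotonically non-increasing: a step that raises the error is discarded and the damping lam is increased instead.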

 

My code: http://pastebin.com/8LmDMpzU
 

 

Can anyone help me understand what's wrong?

 

Thanks




#2 Emergent   Members   -  Reputation: 971

Posted 09 March 2013 - 12:33 AM

I haven't looked at your code.  However, the first thing I do whenever I write any local optimization routine is to check the derivatives with finite differences.  Unless something more obvious pops out, I'd recommend starting with that.  (And it's a useful test to have anyway.)
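
A minimal sketch of such a check, assuming a scalar error function f and its analytic gradient grad (both hypothetical stand-ins for your own routines):

import numpy as np

def check_gradient(f, grad, w, eps=1e-6):
    # Central finite differences: df/dw_i ~ (f(w + eps*e_i) - f(w - eps*e_i)) / (2*eps)
    g_fd = np.zeros_like(w)
    for i in range(len(w)):
        w_hi, w_lo = w.copy(), w.copy()
        w_hi[i] += eps
        w_lo[i] -= eps
        g_fd[i] = (f(w_hi) - f(w_lo)) / (2.0 * eps)
    g_an = np.asarray(grad(w))
    # Relative difference; anything much above ~1e-6 in double precision
    # usually means a bug in the analytic derivatives.
    denom = max(np.linalg.norm(g_an) + np.linalg.norm(g_fd), 1e-12)
    return np.linalg.norm(g_an - g_fd) / denom

# Example on a known function: f(w) = sum(w^2), gradient 2*w.
print(check_gradient(lambda w: np.sum(w ** 2),
                     lambda w: 2 * w,
                     np.array([1.0, -2.0, 0.5])))   # ~0: derivatives agree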



#3 Martin Perry   Members   -  Reputation: 1324

Posted 09 March 2013 - 02:00 AM

The derivatives should be fine. The same ones are used in BP, and there it works.
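
For completeness, the relation I am relying on: for E = 1/2 * sum(r_i^2), the gradient that BP computes should equal J^T r, so the BP derivatives and the LM Jacobian can be cross-checked directly (residuals, jacobian, and grad below are hypothetical stand-ins for my routines):

import numpy as np

def gradient_matches_jacobian(residuals, jacobian, grad, w, tol=1e-6):
    # For E(w) = 1/2 * sum(r_i^2), the gradient is exactly J^T r.
    # A sign, scaling, or sample-ordering mismatch between the BP
    # gradient and the LM Jacobian can make LM steps increase the
    # error even though BP itself converges.
    r = np.asarray(residuals(w))
    J = np.asarray(jacobian(w))
    g = np.asarray(grad(w))
    return np.linalg.norm(J.T @ r - g) <= tol * (np.linalg.norm(g) + 1.0)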





