Levenberg-Marquardt NN learning

2 replies to this topic

#1 Martin Perry (Members)


Posted 04 March 2013 - 06:49 AM

Hi,

 

I am trying to train my NN with Levenberg-Marquardt. My problem is that instead of the error decreasing, it is increasing. For the classic XOR problem I start with an error of e.g. 1.07 and end with 1.99975.

 

Classic BP is working just fine. I used LMA according to this paper: Efficient algorithm for training neural networks with one hidden layer
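For reference, here is a minimal sketch of a standard Levenberg-Marquardt loop on XOR with one hidden layer. This is not the code from the pastebin link; the network layout (2 sigmoid hidden units), the parameter packing, and the finite-difference Jacobian are only illustrative. The relevant part is the damping logic: a step is accepted only if it lowers the error, otherwise lambda is increased and the step is retried, so the error should never climb between accepted iterations.

# Minimal Levenberg-Marquardt sketch for XOR with one hidden layer (sigmoid units).
# Illustrative only: the Jacobian is built with finite differences to keep the
# sketch short; a real implementation would compute it analytically.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(w, x):
    # w packs: hidden weights (2 units x [2 inputs + bias]) and output weights (1 x [2 hidden + bias])
    W1 = w[:6].reshape(2, 3)
    W2 = w[6:].reshape(1, 3)
    h = sigmoid(W1 @ np.append(x, 1.0))
    return sigmoid(W2 @ np.append(h, 1.0))[0]

def residuals(w):
    return np.array([forward(w, x) for x in X]) - T

def jacobian(w, eps=1e-6):
    # finite-difference Jacobian of the residuals, one column per parameter
    r0 = residuals(w)
    J = np.zeros((len(r0), len(w)))
    for j in range(len(w)):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residuals(wp) - r0) / eps
    return J

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=9)
lam = 0.01
for it in range(200):
    r = residuals(w)
    err = 0.5 * r @ r
    J = jacobian(w)
    # LM step: solve (J^T J + lam*I) delta = J^T r
    delta = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), J.T @ r)
    w_new = w - delta
    r_new = residuals(w_new)
    if 0.5 * r_new @ r_new < err:
        w, lam = w_new, lam / 10.0   # accept the step, reduce damping
    else:
        lam *= 10.0                  # reject the step, increase damping

print("final error:", 0.5 * residuals(w) @ residuals(w))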

 

My code: http://pastebin.com/8LmDMpzU
 

 

Can anyone help me understand what's wrong?

 

Thanks



#2 Emergent (Members)


Posted 09 March 2013 - 12:33 AM

I haven't looked at your code.  However, the first thing I do whenever I write any local optimization routine is to check the derivatives with finite differences.  Unless something more obvious pops out, I'd recommend starting with that.  (And it's a useful test to have anyway.)
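For what it's worth, a minimal sketch of such a check, using central differences. The error and analytic_gradient function names are placeholders, not the poster's actual API; the example at the bottom uses a known function so the check can be run standalone.

# Sketch of a finite-difference check for an analytic gradient.
import numpy as np

def check_gradient(error, analytic_gradient, w, eps=1e-6, tol=1e-4):
    g = analytic_gradient(w)
    g_fd = np.zeros_like(w)
    for j in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[j] += eps
        wm[j] -= eps
        g_fd[j] = (error(wp) - error(wm)) / (2.0 * eps)   # central difference
    rel = np.abs(g - g_fd) / np.maximum(np.abs(g) + np.abs(g_fd), 1e-12)
    return rel.max() < tol, rel.max()

# Example with a known function: error(w) = 0.5 * ||w||^2, gradient(w) = w
ok, worst = check_gradient(lambda w: 0.5 * w @ w, lambda w: w,
                           np.array([0.3, -1.2, 2.0]))
print(ok, worst)

The same pattern works column by column for the Jacobian of the residuals that LM needs: difference each residual with respect to each parameter and compare against the analytic Jacobian.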



#3 Martin Perry (Members)


Posted 09 March 2013 - 02:00 AM

Derivatives should be fine. The same ones are used in BP, and that works.





