NN performance dependence on training method

2 comments, last by sidhantdash 18 years, 3 months ago
Different training techniques (like Conjugate Gradient, Simulated Annealing, GAs, etc.) help in avoiding local minima, but do they have any other effect on the performance of the NN? For example, if I use time-series data and compare the performance of nets trained using different methods, will I see any appreciable difference in their predictions? And no, this is not homework. This is something I plan to carry out as a UG project, and it would be very helpful to get some expert opinions before I take it to my prof. Thanks in advance. Sidhant
What do you mean by "performance"? How well the neural nets generalize to novel data? In my experience, getting a neural net to generalize well has been something of a black art: there are no hard rules to follow, since different training algorithms do better on different data sets.

The best way to find out (especially if this is your UG project), is to try all the algorithms and compare the results.
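For instance, a minimal sketch of that kind of comparison, assuming a toy noisy-sine time series, a tiny one-hidden-layer tanh net, and hand-rolled gradient descent and simulated annealing as the two training methods (all names, sizes, and schedules here are illustrative choices, not the only reasonable ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series: a noisy sine wave; predict x[t] from the previous 4 values.
series = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * rng.standard_normal(400)
LAG, HID = 4, 8
X = np.array([series[i:i + LAG] for i in range(len(series) - LAG)])
y = series[LAG:]
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

N_PARAMS = LAG * HID + HID + HID + 1  # W1, b1, W2, b2 flattened into one vector

def unpack(w):
    W1 = w[:LAG * HID].reshape(LAG, HID)
    b1 = w[LAG * HID:LAG * HID + HID]
    W2 = w[LAG * HID + HID:LAG * HID + 2 * HID]
    b2 = w[-1]
    return W1, b1, W2, b2

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y):
    return np.mean((predict(w, X) - y) ** 2)

def train_gd(steps=2000, lr=0.05):
    """Plain batch gradient descent (backprop worked out by hand)."""
    w = 0.1 * rng.standard_normal(N_PARAMS)
    for _ in range(steps):
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X_tr @ W1 + b1)            # hidden activations
        err = (h @ W2 + b2) - y_tr             # prediction error
        d_out = 2.0 * err / len(y_tr)          # dLoss/dOutput for MSE
        dW2 = h.T @ d_out
        db2 = d_out.sum()
        dz = np.outer(d_out, W2) * (1 - h ** 2)  # backprop through tanh
        dW1 = X_tr.T @ dz
        db1 = dz.sum(axis=0)
        w -= lr * np.concatenate([dW1.ravel(), db1, dW2, [db2]])
    return w

def train_sa(steps=5000, T0=0.1):
    """Simulated annealing: random perturbations, sometimes accept uphill moves."""
    w = 0.1 * rng.standard_normal(N_PARAMS)
    loss = mse(w, X_tr, y_tr)
    best, best_loss = w.copy(), loss
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-4        # linear cooling schedule
        cand = w + T * rng.standard_normal(N_PARAMS)
        cl = mse(cand, X_tr, y_tr)
        if cl < loss or rng.random() < np.exp((loss - cl) / T):
            w, loss = cand, cl
            if cl < best_loss:
                best, best_loss = cand.copy(), cl
    return best

w_gd, w_sa = train_gd(), train_sa()
print("gradient descent    test MSE:", mse(w_gd, X_te, y_te))
print("simulated annealing test MSE:", mse(w_sa, X_te, y_te))
```

The key point is that both methods train the *same* architecture on the *same* split, so any difference in held-out MSE is down to the optimizer. For a real project you would repeat each run over many random seeds and report mean and spread, since single runs of stochastic optimizers are noisy.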

-Alex
Well, sure they do, inasmuch as they are better or worse at finding global minima. As to which one is most likely to find a global minimum for a particular problem: "Who knows."
By 'performance' I meant the accuracy of the NN's predictions when it is trained on time-series data. But thanks, I think I got my answer. Given that the methods use different strategies to reach the global minimum, it makes sense that their abilities to fine-tune a particular net will vary. So I think investigating such algorithms should make an interesting project. Thanks a lot! I intend to use time-series data just to narrow down the application domain.

