How are the NNs used in NEAT trained?

Hi. Can someone please tell me how the NNs used in NEAT are trained? Just a pointer, please. You can find the paper here. They use a speciated GA to evolve a population of NNs. I am working on something similar to this and find it interesting, but I haven't been able to find out how such networks are trained. Is it recurrent BP?
Your link is broken and I don't know what paper you have read, but the home page of NEAT is at http://nn.cs.utexas.edu/keyword?neat

Omae Wa Mou Shindeiru

Oh, sorry. It's here now. I messed it up the last time!
It isn't recurrent BP, because NEAT doesn't use backpropagation at all. The genetic algorithm evolves the weights, although the paper doesn't go into much detail on this area. It just says that the weights can be perturbed, which probably means they are changed by a small amount when a mutation occurs.
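For illustration, here is a minimal sketch of that kind of weight-perturbation mutation. The rates and ranges are made-up illustrative values, not NEAT's actual parameters:

```python
import random
from dataclasses import dataclass

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool = True

def mutate_weights(genome, rate=0.8, power=0.5):
    """Perturb each connection weight with probability `rate`."""
    for conn in genome:
        if random.random() < rate:
            if random.random() < 0.1:
                # occasionally replace the weight outright
                conn.weight = random.uniform(-1.0, 1.0)
            else:
                # usually just nudge it by a small random amount
                conn.weight += random.uniform(-power, power)
```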
But do you think recurrent BP would be a good idea for training the individual nets in the population, using the GA only to evolve structure?
I don't believe any form of backpropagation is used in NEAT at all. All training of weights and structure of the NEAT networks comes from a very specialized genetic algorithm. The paper you mentioned does a fairly good job of explaining the details behind NEAT. Ken Stanley's homepage is http://www.cs.ucf.edu/~kstanley/ - he has a number of publications on NEAT, some of which go into more depth on the method itself. His dissertation probably gives you the most complete picture of NEAT.

-Kirk
Quote:Original post by sidhantdash
But do you think recurrent BP would be a good idea for training the individual nets in the population, using the GA only to evolve structure?

You are trying to combine two methods here: backpropagation to adjust the weights, and a genetic algorithm to evolve the structure. From the paper you linked, it doesn't seem that NEAT uses backpropagation at all. NEAT uses a genetic algorithm to adjust weights, weight connectivity, and structure.
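To make the structural side concrete, here's a simplified sketch of NEAT's add-node mutation, reusing `random` and the `ConnectionGene` from the earlier sketch and omitting the paper's innovation-number bookkeeping. The weight assignments (1.0 into the new node, the old weight out of it) are as described in the paper:

```python
import random

def add_node_mutation(genome, new_node_id):
    """Split a random enabled connection A->B into A->new and new->B."""
    enabled = [c for c in genome if c.enabled]
    if not enabled:
        return
    old = random.choice(enabled)
    old.enabled = False  # the old gene is disabled, not deleted
    # incoming connection gets weight 1.0; the outgoing one
    # inherits the old weight, per the NEAT paper
    genome.append(ConnectionGene(old.in_node, new_node_id, 1.0))
    genome.append(ConnectionGene(new_node_id, old.out_node, old.weight))
```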
So NEAT doesn't use BP at all; I really hadn't been able to work that out from the paper. Thanks a lot, everyone.

Nevertheless, assuming that I write my own version of the program (using ideas like speciation and minimal starting populations) and use recurrent BP to train the individual nets in the population, with the GA only evolving structure, do you think it will work? I am in the process of preparing a research proposal for a summer undergraduate research program and want to work on evolving neural nets. I have already done a course on NNs and am familiar with most forms of neural nets (except the evolving ones).
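As an aside on the speciation idea mentioned above: in the NEAT paper, species membership is decided by a compatibility distance built from excess genes, disjoint genes, and the average weight difference of matching genes. A rough sketch, assuming genomes are dicts mapping innovation number to weight, and lumping excess and disjoint genes together as some implementations do:

```python
def compatibility(a, b, c_mismatch=1.0, c_weight=0.4):
    """NEAT-style compatibility distance between two genomes (simplified)."""
    matching = a.keys() & b.keys()
    mismatched = len(a.keys() ^ b.keys())      # disjoint + excess genes
    n = max(len(a), len(b), 1)                 # normalizing genome size
    w_bar = (sum(abs(a[k] - b[k]) for k in matching) / len(matching)
             if matching else 0.0)
    return c_mismatch * mismatched / n + c_weight * w_bar
```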

My proposal as of now is to apply a neuro-genetic evolutionary process to predict currency exchange rates and compare the results with those obtained using standard statistical processes (like ARCH and GARCH). For the neuro-genetic method I referred to the NEAT paper, and because I will be writing my own code, I thought of this recurrent BP to train the individual nets (because I have already done some BP at college). Also, my project will be much simpler than NEAT (quite naturally, given that NEAT happens to be a PhD dissertation). Do you think it is a good (read: feasible) idea? Will it work, or does one have to stick to what the original paper says about training for the process to work?

Thanks!
You can definitely deviate from the NEAT methodology and attempt new methods. There are a number of publications out there that cover a wide variety of ways to evolve neural net structure and weights. NEAT is one of the newest methods out there, and Ken Stanley has gotten a lot of press lately, which is well deserved in my opinion. But this doesn't mean that other methods shouldn't be attempted.

If you're interested in other publications, you should look to Google Scholar (http://scholar.google.com/) and CiteSeer (http://citeseer.ist.psu.edu/). I've found numerous papers through CiteSeer on this topic. I haven't tried Google Scholar on this particular topic, but I've found it useful in many other cases.

-Kirk

Quote:Original post by sidhantdash
Nevertheless, assuming that I write my own version of the program (using ideas like speciation and minimal starting populations) and use recurrent BP to train the individual nets in the population, with the GA only evolving structure, do you think it will work? I am in the process of preparing a research proposal for a summer undergraduate research program and want to work on evolving neural nets. I have already done a course on NNs and am familiar with most forms of neural nets (except the evolving ones).

It's a huge research subject, worthy of a PhD. Think about it: if a neural network needs to be adjusted, and you have both backpropagation and a genetic algorithm, how should you adjust it? Should you adjust the weights, or should you adjust the structure and weight connectivity? How do you decide? Deciding which method to use is the tricky part. There might even be interaction between the two methods: how does BP affect the evolution, and vice versa? Chances are you might need to insert backpropagation into the evolving structure and let evolution decide which form of backpropagation works.
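To make the division of labor concrete, here's a minimal sketch of one way such a hybrid could be wired up, assuming a Lamarckian scheme in which BP-trained weights are written back into the genome before selection. Every helper named here (`build_net`, `train_bp`, `fitness`, `select_and_reproduce`) is hypothetical scaffolding, not NEAT's API:

```python
def evolve_hybrid(population, data, generations=100, bp_epochs=20):
    """Outer GA loop evolves structure; inner BP loop tunes weights."""
    for _ in range(generations):
        for genome in population:
            net = build_net(genome)                # decode genome -> network
            train_bp(net, data, epochs=bp_epochs)  # inner loop: gradient descent
            genome.weights = net.get_weights()     # Lamarckian write-back
            genome.fitness = fitness(net, data)
        # the GA applies only structural mutations (add node / add connection)
        population = select_and_reproduce(population, structural_only=True)
    return max(population, key=lambda g: g.fitness)
```

A Baldwinian variant would skip the write-back and let BP influence only the fitness evaluation; which of the two interacts better with the evolution is exactly the kind of open question mentioned above.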

