Neural nets plus Genetic algorithms

Can anyone explain how to combine these two, and more importantly, why it would be done? I got the impression that you can use a genetic algorithm to speed up the training of the neural nets, by breeding better weight sets. Is this right? Is taking some crossover of 2 neural nets that have each been through 1 epoch going to learn more quickly than going through 2 epochs with one net? [ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost ]
I think this type of technique is useful when you know what the neural net has to do (measured by a fitness function), but it is difficult to work out the numerical error in the output (which is what conventional training techniques need).
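A minimal sketch of that idea (hypothetical names, not from any post in this thread): flatten a small feed-forward net's weights into one vector so a GA can treat the whole net as a genome, and judge it only by a fitness score rather than by an output error.

#include <cmath>
#include <cstddef>
#include <vector>

// One hidden layer, one output -- just enough to show the encoding.
struct SimpleNet {
    int numInputs, numHidden;
    std::vector<double> weights;   // the "genome": every weight and bias, flattened

    SimpleNet(int in, int hid)
        : numInputs(in), numHidden(hid),
          weights((in + 1) * hid + hid + 1, 0.0) {}

    double feedForward(const std::vector<double>& input) const {
        std::size_t w = 0;
        std::vector<double> hidden(numHidden);
        for (int h = 0; h < numHidden; ++h) {
            double sum = 0.0;
            for (int i = 0; i < numInputs; ++i) sum += input[i] * weights[w++];
            sum += weights[w++];                       // hidden bias
            hidden[h] = std::tanh(sum);
        }
        double out = 0.0;
        for (int h = 0; h < numHidden; ++h) out += hidden[h] * weights[w++];
        out += weights[w++];                           // output bias
        return std::tanh(out);
    }
};

// The GA only ever asks "how well did this genome do?" -- no error gradient needed.
// What the score means (kills, lap time, profit...) is entirely problem-specific.
double evaluateFitness(const SimpleNet& net);   // supplied by whatever simulation the net acts in

The GA then just ranks genomes by that score and breeds the best ones, which is why it can work even when you cannot say what the "correct" output would have been.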

You would use a combination of NNs and GAs to evolve what is called 'artificial life'. You can use this technique whenever you know very little about the fitness function. For example, you might only know: live = good, die = bad.
A good example is the Quake bot. Each bot has its own neural net to determine its actions. At first these neural nets are random. You let a number of these bots battle for a while in some sort of level. After this you pick the best two bots (least damage taken, most kills), perform some sort of crossover/mutation on their neural nets, and let the bots battle again.

After a thousand generations or so you will have a bot that will shoot along the path you're walking and evade incoming missiles (well, at least it should).

This algorithm does, however, depend on numerous parameters.
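For what it's worth, here is a rough sketch of the "pick the two best, cross over and mutate" step described above, assuming each bot's net is stored as a flat weight vector (the parameter values are arbitrary examples, not anything specified in this thread):

#include <cstddef>
#include <random>
#include <vector>

// Breed one child weight vector from the two fittest parents.
std::vector<double> crossoverAndMutate(const std::vector<double>& mum,
                                       const std::vector<double>& dad,
                                       double mutationRate,      // e.g. 0.05
                                       double maxPerturbation,   // e.g. 0.3
                                       std::mt19937& rng)
{
    std::uniform_int_distribution<std::size_t> pickCut(0, mum.size() - 1);
    std::uniform_real_distribution<double> chance(0.0, 1.0);
    std::uniform_real_distribution<double> nudge(-maxPerturbation, maxPerturbation);

    // Single-point crossover: mum's genes up to the cut, dad's after it.
    std::size_t cut = pickCut(rng);
    std::vector<double> child(mum.size());
    for (std::size_t i = 0; i < child.size(); ++i)
        child[i] = (i < cut) ? mum[i] : dad[i];

    // Mutation: occasionally nudge a weight so the next generation keeps exploring.
    for (double& gene : child)
        if (chance(rng) < mutationRate)
            gene += nudge(rng);

    return child;
}

Those two numbers (mutation rate and perturbation size), plus population size and how many generations you run, are exactly the "numerous parameters" the algorithm depends on.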

Search for 'neuralbot' on Google.

Edo
My website is a good introduction to using GAs to optimize ANNs.

You can find it here:

http://www.btinternet.com/~fup/Stimulate.html

I've also just added a message board for discussion related to the tutorials.
quote:Original post by edotorpedo
A good example is the Quake bot. Each bot has its own neural net to determine its actions. At first these neural nets are random. You let a number of these bots battle for a while in some sort of level. After this you pick the best two bots (least damage taken, most kills), perform some sort of crossover/mutation on their neural nets, and let the bots battle again.

OK, so it could technically accelerate the rate of learning this way. But I still wonder whether taking the crossover of 2 neural nets is going to give you better performance than just evolving a single neural net for twice as long. (In other words, is there any point doing this when you're not just arbitrarily assigning 1 NN to each entity?) Are there any examples of why this might be so?

quote:Original post by Fup
My website is a good introduction to using GAs to optimize ANNs.

Why is it that nobody can seem to present a compelling example for the use of GAs? Not to criticise your site, as the tutorials themselves are very good... it's just that the 'sample application' for GAs is usually something very artificial, whereas the sample app for a NN is often something that makes sense, such as character recognition.

[ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost ]
Quote: it's just that the 'sample application' for GAs is usually something very artificial
:End Quote


Actually, I have found that GAs are used in more real situations. I have a friend who worked for a company that shipped several manufactured goods. The executives hired some CS GA professionals and had them engineer a more efficient way to make, move, store, and sell their goods. The process involved too many variables to be handled with standard algorithms, so a GA was used, and it worked perfectly, producing a "species" (a list of good settings for most of the variables in the system) that was 70% more effective than the system they were currently using. That seems pretty real to me.
GAs are often used in industry to solve large scale optimisation problems. Some examples: GAs have been used to schedule trains in many cities around the world (including here in Melbourne, Australia); for optimising traffic light timings for traffic density control; optimisation of flow control parameters for utility companies; and the list goes on.

Timkin
NN+GA is not necessarily anything to do with artificial life, and you can know as much or as little about the fitness function as you like: that doesn't matter.

GAs allow you to train the NN in a form of UNSUPERVISED learning: i.e. you don't know what the right behaviour is at each stage, but you can measure its result (short or long term). GAs are just one optimisation tool that allows this, among many other options.

Pure NN solutions are usually SUPERVISED: i.e. you know what the answer should be at each step -- that's the training data.
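To make the contrast concrete, here is a small illustrative sketch (the function names and the fitness weighting are made up for the example, not taken from any post): supervised training scores every individual answer against a known target, while a GA-trained net only ever receives one number summarising how well it did overall.

#include <cstddef>
#include <vector>

// SUPERVISED: a correct target exists for every training example, so the error
// can be measured per sample and fed back into the weights (e.g. by backprop).
double supervisedError(const std::vector<double>& outputs,
                       const std::vector<double>& targets)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < outputs.size(); ++i) {
        double diff = targets[i] - outputs[i];
        sum += diff * diff;
    }
    return outputs.empty() ? 0.0 : sum / outputs.size();   // mean squared error
}

// UNSUPERVISED via GA: no per-step target exists; the whole genome is judged by
// the long-term outcome of letting it act, however you choose to score that.
double gaFitness(int kills, int damageTaken)
{
    return 10.0 * kills - 0.5 * damageTaken;   // arbitrary example weighting
}

The GA never sees a per-step target; it only sees the final score, which is why it can train a net when all you can say is "live = good, die = bad".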



Artificial Intelligence Depot - Maybe it's not all about graphics...


Once you start using GAs you'll realize there are countless uses for them.


PS> How do you get the hyperlinks in your sigs guys? (I do not know the mysteries of html)
quote:Original post by fup
How do you get the hyperlinks in your sigs guys? (I do not know the mysteries of html)


Try pasting this:

/// first attempt - drat that parser!

Second attempt:

Use the quote option to the far right on this post and then cut out the snippet of html that follows and paste it into your sig.



Stimulate





"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man

