Genetic algorithms [RESOLVED]

Started by
6 comments, last by WeirdoFu 18 years, 4 months ago
Hello. I was looking into genetic algorithms and I think I get the general idea, but I am getting messed up on the cost function when I try to implement one, or something to do with mating also screws up some of my code. I was wondering if there is an article that has example genetic algorithm code in it as well as somewhat detailed explanations (above something like "this function does the mutation") :D. I looked on the net and I can't seem to find that. I got a lot of stuff on genetic algorithms, as well as code samples, but they were never explained, and in most cases I get lost around the cost function and selecting individuals to mate. Any tutorials that have some code in them (can be completely simple)? Thx [Edited by - rever on December 18, 2005 8:12:07 AM]
Quote:Original post by rever
I was looking into genetic algorithms and I think I get the general idea, but I am getting messed up on the cost function when I try to implement one, or something to do with mating also screws up some of my code.
Let me ask this first: what is the problem you are trying to solve and why are you thinking about genetic algorithms (GAs) specifically? (My experience is that GAs are rarely the best solution for a given problem.)
Well, I am just trying to learn GAs, and I do not have a problem right now, but I am thinking of something to do with chess. It doesn't have to be the best solution; I just want to see how the stuff works.
I heard that the 8-queens problem can be solved using GAs quite nicely, and I think there is even some source for it that I saw when I was doing research, but the source had really poor comments and it was not the best place to start with GAs.
Any problem that can be solved with GAs and has some explanation of how and why it works will suffice. After I learn GAs I think I will try the chess thing.....or maybe make a tic-tac-toe AI based on GAs.
Thanks for any references.
Genetic algorithms are just random searches with heuristics. You can repeatedly generate an individual, test it, then keep it if its score is higher than the previous best individual's. Continue looping for some number of iterations, or until the fitness score is higher than some value, or until no better solutions have been found for a while, or some combination.

Maybe it would be a good idea for beginners to implement this random search before trying genetic algorithms. The code for generating a random individual and rating its fitness carries over directly to a genetic algorithm, and the random search is much easier to implement and understand. The only parameter is the rule for deciding when to end the search, whereas genetic algorithms have additional parameters for mutation, crossover, selection, etc.
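As a minimal sketch of the random search described above, assuming a toy "one-max" problem (maximize the number of 1-bits in a bit string); the `fitness` function and bit-string representation are illustrative assumptions, not from the thread:

```python
import random

def fitness(individual):
    # Toy fitness: count of 1-bits ("one-max"). Stands in for any
    # problem-specific scoring function.
    return sum(individual)

def random_individual(length):
    # Generate a random bit string of the given length.
    return [random.randint(0, 1) for _ in range(length)]

def random_search(length=20, iterations=1000):
    # Keep the best individual seen so far; discard everything else.
    best = random_individual(length)
    best_score = fitness(best)
    for _ in range(iterations):
        candidate = random_individual(length)
        score = fitness(candidate)
        if score > best_score:  # keep only improvements
            best, best_score = candidate, score
    return best, best_score
```

The stopping rule here is a fixed iteration count; swapping in a fitness threshold or a "no improvement for N iterations" rule is a one-line change to the loop condition.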
Quote:Original post by Anonymous Poster
Genetic algorithms are just random searches with heuristics. You can repeatedly generate an individual, test it, then keep it if its score is higher than the previous best individual's. Continue looping for some number of iterations, or until the fitness score is higher than some value, or until no better solutions have been found for a while, or some combination.

Maybe it would be a good idea for beginners to implement this random search before trying genetic algorithms. The code for generating a random individual and rating its fitness carries over directly to a genetic algorithm, and the random search is much easier to implement and understand. The only parameter is the rule for deciding when to end the search, whereas genetic algorithms have additional parameters for mutation, crossover, selection, etc.


Now, that first statement will get you into a lot of trouble if you say it in front of anyone in the field of evolutionary computation. The random search approach you state is even worse than a hill-climber. To be exact, it's pretty much what people in DNA computing are doing with DNA molecules. If you really want something good and easy, beam search is much better than a totally random search, which almost doesn't qualify as a search in the first place.

For genetic algorithms, one of the best references would probably be the book "An Introduction to Genetic Algorithms" by Melanie Mitchell. It has solid explanations and examples of GAs and their implementation.
Thanks a lot, I will try to get my hands on that book.
Slight problem: it isn't sold around where I live, so I will have to wait a week or so for it to ship in.
Quote:Original post by WeirdoFu
The random search approach you state is even worse than a hill-climber. To be exact, it's pretty much what people in DNA computing are doing with DNA molecules. If you really want something good and easy, beam search is much better than a totally random search, which almost doesn't qualify as a search in the first place.


When sorting algorithms are taught, usually the first ones covered are the nonrecursive, easy-to-understand ones like selection sort and insertion sort. Selection sort is a terrible sorting algorithm, but it works, and it is possibly one of the easiest to understand.

Why should probabilistic optimization strategies be taught any differently? I can't think of a simpler, easier to understand probabilistic optimization strategy than the random search I described. Beam search is more complicated, and as far as I know it is usually presented as a graph search algorithm, not an optimization algorithm.
Quote:Original post by Anonymous Poster
Quote:Original post by WeirdoFu
The random search approach you state is even worse than a hill-climber. To be exact, it's pretty much what people in DNA computing are doing with DNA molecules. If you really want something good and easy, beam search is much better than a totally random search, which almost doesn't qualify as a search in the first place.


When sorting algorithms are taught, usually the first ones covered are the nonrecursive, easy-to-understand ones like selection sort and insertion sort. Selection sort is a terrible sorting algorithm, but it works, and it is possibly one of the easiest to understand.

Why should probabilistic optimization strategies be taught any differently? I can't think of a simpler, easier to understand probabilistic optimization strategy than the random search I described. Beam search is more complicated, and as far as I know it is usually presented as a graph search algorithm, not an optimization algorithm.


If you're comparing to sorting, then random search is more like bubble sort, because a random search, statistically, works just as well as an exhaustive iterative search.

Actually, I think you misunderstand the concept somewhere. There are no real probabilistic optimization strategies, to be exact. There are deterministic methods with stochastic elements. For a GA, if you are using a fixed single-point crossover, no mutation, and a selection rule of just taking the two best individuals to perform crossover and having the offspring replace the two worst individuals in the current population, then the only probabilistic element is when the population is initially generated. Everything after that is deterministic. Granted, it might not be the best GA possible, but it will work just like any other GA. So the proper teaching method is to start with something deterministic and then add more and more stochastic elements, not the other way round.
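The GA variant described above (fixed single-point crossover, no mutation, two best mate, offspring replace the two worst) can be sketched as follows; the one-max `fitness` function and bit-string encoding are my own illustrative assumptions:

```python
import random

def fitness(ind):
    # Toy "one-max" fitness; stands in for a problem-specific scorer.
    return sum(ind)

def single_point_crossover(a, b, point):
    # Fixed single-point crossover: swap tails at a fixed cut point.
    return a[:point] + b[point:], b[:point] + a[point:]

def run_ga(pop_size=10, length=20, generations=100):
    # The ONLY stochastic step: generating the initial population.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    point = length // 2  # fixed crossover point, so no randomness later
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        # Mate the two best individuals...
        child1, child2 = single_point_crossover(pop[0], pop[1], point)
        # ...and have the offspring replace the two worst.
        pop[-1], pop[-2] = child1, child2
    pop.sort(key=fitness, reverse=True)
    return pop[0], fitness(pop[0])
```

Note that because the two best individuals always survive, the best fitness in the population never decreases; and with no mutation the search can stall once the population loses diversity, which is exactly the kind of behavior that motivates adding stochastic elements afterward.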

Beam search is just a general idea. A discrete search space, for example, can easily be modeled as a graph if you want to, so beam search can easily be applied to discrete optimization. For a real-valued space, you have evolution strategies, which in many ways are a beam search in real space if you use a population.
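To make "beam search as discrete optimization" concrete, here is a small sketch on the same toy one-max problem (an assumption for illustration): keep the k best candidates, expand each by flipping one bit, keep the k best of everything, and repeat:

```python
import random

def fitness(ind):
    # Toy "one-max" fitness; an assumption for illustration.
    return sum(ind)

def neighbors(ind):
    # All candidates reachable by flipping a single bit; this is the
    # graph structure imposed on the discrete search space.
    for i in range(len(ind)):
        flipped = ind[:]
        flipped[i] ^= 1
        yield flipped

def beam_search(length=20, beam_width=3, steps=30):
    # Start from a few random candidates.
    beam = [[random.randint(0, 1) for _ in range(length)]
            for _ in range(beam_width)]
    for _ in range(steps):
        # Expand every beam member, then keep only the k best overall.
        candidates = beam + [n for ind in beam for n in neighbors(ind)]
        candidates.sort(key=fitness, reverse=True)
        beam = candidates[:beam_width]
    return beam[0], fitness(beam[0])
```

On one-max this converges quickly, since every step can flip one more 0-bit to 1; on harder landscapes the beam width trades off exploration against cost.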

....Well, we've completely gone off topic.....so I'll stop...

This topic is closed to new replies.
