Question about genetic algorithms

Hi,

Continuing my reading on AI, I've come across genetic programming. As I understand it, it works as follows:

a) Loop over n generations.
b) Each generation has m members. Each member of a generation has a single realisation of each of a number of characteristics.
c) Calculate a fitness function for each member; keep the top few members and evolve the next generation from pairs drawn from this set. Also allow for the possibility of mutation, to keep the search from getting stuck on local features of the fitness function's response surface.

The final 'best' members of the final generation provide our genetically superior, trained set. I would assume that if you wanted an 'easier' set, you could just draw realisations from an earlier generation.

This seems very similar to problems that are encountered in statistical analysis very frequently (i.e. a multi-dimensional characteristic set with an evaluable response). In statistical analysis, I would solve this problem using a response-surface analysis design (a linear model or generalised additive model, built using splines). Essentially you generate members with given characteristics and use them to build up a mathematical profile of the fitness function, allowing characterisation both of where the 'optimal' region is and of how changing individual (or sets of) characteristics changes the fitness function.

At this stage (and admittedly, given my still limited knowledge of the AI field), I can't see the advantages that a genetic algorithm gives - what am I missing? Is anyone aware of any literature comparing techniques? Would anyone be interested in results from this if I played with it for a bit?

Regards,
Jim.
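Purely as an illustrative sketch of the loop described above - the population size, mutation rate, crossover scheme, and toy fitness function here are all placeholder assumptions of mine, not taken from any particular source:

```python
import random

POP_SIZE, N_GENERATIONS, N_TRAITS, MUTATION_RATE = 30, 50, 5, 0.05

def fitness(member):
    # Placeholder fitness function: reward characteristics close to 0.5.
    return -sum((trait - 0.5) ** 2 for trait in member)

def crossover(parent_a, parent_b):
    # Each child trait comes from one parent or the other.
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(member):
    # Occasionally perturb a trait so the search can escape local optima.
    return [trait + random.gauss(0.0, 0.1) if random.random() < MUTATION_RATE else trait
            for trait in member]

# b) Each generation has m members, each with a realisation of every characteristic.
population = [[random.random() for _ in range(N_TRAITS)] for _ in range(POP_SIZE)]

# a) Loop over n generations.
for generation in range(N_GENERATIONS):
    # c) Rank by fitness, keep the top few, and breed the rest from pairs of them.
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[:POP_SIZE // 4]
    population = elite + [mutate(crossover(*random.sample(elite, 2)))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=fitness)
print("best member:", best, "fitness:", fitness(best))
```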
Genetic algorithms are a useful tool where the solution space is massive, or poorly understood, or known to be non-smooth with several local optima. They are also pretty good at finding solutions where the fitness function is noisy.

PS. genetic algorithms are not the same thing as genetic programming, although they do share similarities.
Look back one page of threads and there's a long thread on genetic programming.
OK - my apologies - I'll go back and look over the previous thread.

Thanks,
Jim.
Quote:Original post by JimPrice, regarding genetic algorithms
This seems very similar to problems that are encountered in statistical analysis very frequently (i.e. a multi-dimensional characteristic set with an evaluable response). In statistical analysis, I would solve this problem using a response-surface analysis design (a linear model or generalised additive model, built using splines). Essentially you generate members with given characteristics and use them to build up a mathematical profile of the fitness function, allowing characterisation both of where the 'optimal' region is and of how changing individual (or sets of) characteristics changes the fitness function.

At this stage (and admittedly, given my still limited knowledge of the AI field), I can't see the advantages that a genetic algorithm gives - what am I missing? Is anyone aware of any literature comparing techniques?



Genetic algorithms (GAs) and response-surface methods (and related design-of-experiments techniques) are optimizers. The difference boils down to this:


GA Pros:
- Can handle a wide variety of solution representations and data types
- Can handle a very large number of solution parameters
- Can deal with very complex objective functions
- Searches for a globally near-optimal solution


GA Cons:
- Being generic, can be very slow to operate


Conventional Optimizer Pros:
- For appropriate problems, are very efficient


Conventional Optimizer Cons:
- Often quite limited in the number of parameters which can be accommodated
- Often quite limited in the complexity of objective function which can be accommodated
- Many (most) search for a locally optimal solution


There is a substantial area of overlap, but response-surface techniques would certainly have a difficult (impossible?) time with the kind of highly deceptive problems which GAs are typically thrown at. For a solid introduction, see Genetic Algorithms + Data Structures = Evolution Programs by Zbigniew Michalewicz (ISBN: 3540606769).


-Will Dwinnell
http://will.dwinnell.com


[Edited by - Predictor on September 6, 2004 6:02:18 AM]
Thanks for the replies - very useful.


So, it seems the key points are the potential scale of the covariate space (probably the wrong terminology!) and the form of the outcome space. I think I was getting hung up on two points:

firstly, most of the examples I had looked at so far were relatively simple, and I hadn't extrapolated to more detailed ideas.
secondly, I was getting hung up on the lack-of-speed issue highlighted by Will.

I think one of the things that helped most was the example exercise on the ai-junkie website (the 'finding the largest circle' one) - I couldn't immediately see a solution to it using the techniques I have been using over the last few years, but a GA solution just jumped out at me straight away.

An analogy for me is probably from my own pet field. Many problems in Bayesian analysis can be solved analytically using conjugate families of distributions (at the cost of making simplifying distributional assumptions about the form of the data and parameters). Using MCMC (numerical) techniques, on the other hand, allows expansion to a much wider range of problems, but at the cost of speed.
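To make that analogy concrete, here's a minimal sketch (with made-up toy data) contrasting a closed-form conjugate Beta-Binomial update against a crude Metropolis sampler targeting the same posterior; the numbers and the sampler settings are illustrative assumptions only:

```python
import random, math

# Toy data: 7 successes in 10 Bernoulli trials.
successes, trials = 7, 10

# Conjugate route: with a Beta(a, b) prior on the success probability,
# the posterior is available in closed form as Beta(a + successes, b + failures).
a, b = 1.0, 1.0
post_a, post_b = a + successes, b + (trials - successes)
print("conjugate posterior: Beta(%.0f, %.0f)" % (post_a, post_b))

# MCMC route: a crude Metropolis sampler for the same posterior (flat prior).
# Slower and approximate, but it doesn't care whether the prior is conjugate.
def log_post(p):
    if not 0.0 < p < 1.0:
        return float("-inf")
    return successes * math.log(p) + (trials - successes) * math.log(1.0 - p)

samples, p = [], 0.5
for _ in range(20000):
    proposal = p + random.gauss(0.0, 0.1)
    delta = log_post(proposal) - log_post(p)
    if random.random() < math.exp(min(0.0, delta)):
        p = proposal
    samples.append(p)
print("MCMC posterior mean ~", sum(samples) / len(samples))
```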

So, basically, a few more tools to add to the toolbox.

I love learning new stuff!

Jim.
Quote:Original post by fup
PS. genetic algorithms are not the same thing as genetic programming, although they do share similarities.


Why not? What's the difference? IMHO, the difference is only in the fitness function. But any two tasks differ only by their fitness functions, so... the main ideas are the same... the main operations are the same... ;)
Quote:Original post by Yuri Burger
Quote:Original post by fup
PS. genetic algorithms are not the same thing as genetic programming, although they do share similarities.

Why not? What difference?

Your GA can't evolve complexity or simplicity to solve the problem at hand.
If a plant cannot live according to its nature, it dies; so a man.
Quote:Original post by Yuri Burger
Quote:Original post by fup
PS. genetic algorithms are not the same thing as genetic programming, although they do share similarities.


Why not? What's the difference? IMHO, the difference is only in the fitness function. But any two tasks differ only by their fitness functions, so... the main ideas are the same... the main operations are the same... ;)


Why is programming in assembler any different from programming in ML? It's the same hardware, and it all gets converted to the same bytes and runs on the same OS...

To offer some example answers ;) ...

- look at how complex high-level algorithms are implemented differently in different languages. There is very little similarity between the two implementations, because the languages are so dissimilar that you actually need literally different algorithms to achieve the same end result.

- look at how much change there is in the generated code when a small change is made in a GA compared to a GP (you'll need to sketch out example designs to see this). Sometimes the GA encoding is actually just a GP - in which case there's little difference - but the GA encoding could be radically different, and so a single bit-flip in the GA will have an effect that the equivalent GP could only achieve by doing, say, 4 complex mutations at precise points in the tree.

- (if I understand woodsman correctly, this is the one he's referring to) because GP works at a higher level (using prog-lang terminology: i.e. more abstracted, more structured) it's been easy for GP'ers to add modern high-level concepts as trivial features of the GP. For instance, mutators that create arbitrary non-terminals (IIRC called "Automatically Defined Functions (ADF)" in Koza and in the GADS=EP book) that didn't exist in the original set, by iterating over a tree, picking a random sub-tree, declaring it a new NT, and replacing the sub-tree with the symbol/reference for that new NT (a rough sketch of this follows below). Doing the same thing in GAs is a lot less trivial. In fact, it's pretty easy to take a standard OOP code-refactoring library and hook it up to a GP simulation directly, exposing all the types of modern refactoring that humans have identified as useful as high-level mutation techniques that the GP can utilise!
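For illustration, here is a rough sketch of that sub-tree-extraction idea; the tuple-based tree layout and the ADF naming are placeholder assumptions of mine, not how Koza or the GADS=EP book define it:

```python
import random

# Hypothetical GP tree: an internal node is (symbol, [children]); a terminal is (symbol, []).
def collect_internal_nodes(node, found=None):
    found = [] if found is None else found
    if node[1]:                      # only internal nodes are worth extracting
        found.append(node)
    for child in node[1]:
        collect_internal_nodes(child, found)
    return found

def extract_adf(tree, adf_library):
    """Pick a random sub-tree, register it as a new non-terminal (an 'ADF'),
    and replace the original sub-tree with a reference to that new non-terminal."""
    candidates = collect_internal_nodes(tree)
    if not candidates:
        return tree
    chosen = random.choice(candidates)
    name = "ADF%d" % len(adf_library)
    adf_library[name] = chosen                 # the sub-tree becomes a reusable function
    return _replace(tree, chosen, (name, []))  # the call site now refers to the ADF

def _replace(node, target, replacement):
    if node is target:
        return replacement
    return (node[0], [_replace(child, target, replacement) for child in node[1]])

# Example: extract an ADF from the tree representing (x * 2) + (5 - x).
library = {}
tree = ("add", [("mul", [("x", []), ("2", [])]), ("sub", [("5", []), ("x", [])])])
print(extract_adf(tree, library))
print(library)
```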

PS GADS=EP is a great starter book - it's nice and small but has a lot of info. It's a bit close to being an academic text, though - very, very terse on many things, and a single paragraph might contain concepts that other books would spread over two pages. A lot of stuff is skimmed over on the assumption that the reader is already well versed in related disciplines.
"Why not? What difference?"

Well, technically you're right. GP is just a form of GA. In practice, though, GP is a very different tool... which is why it has become a separate branch of evolutionary computing.

GAs typically encode solution candidates as strings of binary or floating-point numbers. GPs encode solution candidates as trees of operators consisting of function nodes and terminal nodes. Each tree represents a program, which may be run directly in a language such as Lisp or inside a virtual machine.
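To make that representational difference concrete, here is a minimal illustrative sketch; the node layout and evaluation scheme are assumptions of mine, not a reference implementation:

```python
import random, operator

# GA-style candidate: a flat, fixed-length string of numbers.
ga_candidate = [random.uniform(-1.0, 1.0) for _ in range(8)]

# GP-style candidate: a tree of function nodes and terminal nodes.
# Here a node is (function, children); a terminal is the input 'x' or a constant.
gp_candidate = (operator.add,
                [(operator.mul, ['x', 2.0]),
                 (operator.sub, [5.0, 'x'])])    # represents the program (x * 2) + (5 - x)

def evaluate(node, x):
    """Recursively run the GP tree as a program for a given input x."""
    if node == 'x':
        return x
    if isinstance(node, (int, float)):
        return node
    func, children = node
    return func(*(evaluate(child, x) for child in children))

print(ga_candidate)                  # just a string of numbers; meaning depends on decoding
print(evaluate(gp_candidate, 3.0))   # (3 * 2) + (5 - 3) = 8.0
```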

"Your GA can't evolve complexity or simplicity to solve the problem at hand."

They can actually. Here's an example:

http://www.cs.utexas.edu/users/kstanley/

