AndyTang

Genetic Algorithm and Sudoku


Quote:
Original post by RPGeezus
I do not understand how cross-over solves the hill-climbing dilemma. It seems just as locked in to this problem as mutation alone is.


Mutation produces a new solution fundamentally similar to the old solution. If it is near a local maximum, then it is likely to approach that local maximum and survive, or move away from it and die.

Crossover produces a new solution, which is - for some definition of distance appropriate to your representation - located somewhere between 2 existing solutions, which may be very distant in terms of search space. Both may be in local maxima, but if those maxima are distinct then the offspring may be in neither, meaning you've 'broken free'.

Now, you might say "but I can do that with large mutations too". The problem there is that large variations as a result of mutation make it hard to exploit a local maximum (which might actually be the global one). Imagine your whole population, or a large part of it, is already near the global maximum: wild mutation will send their offspring all over the place. Crossover won't do that. On top of that, crossover introduces changes that are generally good (because the changes come from good parent solutions), whereas mutation introduces changes that are neutral.
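A minimal sketch of the difference, assuming a plain bit-string encoding; the function names, the mutation rate, and the two example parents are purely illustrative:

import random

def mutate(parent, rate=0.05):
    # Flip each bit with a small probability: offspring stay close to the parent.
    return [bit ^ (1 if random.random() < rate else 0) for bit in parent]

def crossover(parent_a, parent_b):
    # One-point crossover: the child takes a block from each parent, so it can
    # land 'between' two solutions that are far apart in the search space.
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

a = [1, 1, 1, 1, 0, 0, 0, 0]   # imagine this sits near one local maximum
b = [0, 0, 0, 0, 1, 1, 1, 1]   # and this near another
child = crossover(a, b)         # may land in neither basin of attraction
nearby = mutate(a)              # almost certainly still in a's basin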

Quote:
What if, instead of doing 1 mutation, every so often you would mutate an entire 10% of a member. How would this differ from cross-over with regards to the above?


Although the implementation obviously can be tweaked to suit different circumstances, how you implement mutation makes no difference to the theory of it all. Crossover/recombination performs a distinct task from mutation, and in most cases both are useful.

Quote:

Sure, for the first few generations (100 isn't very many).... and after that???


After that, crossover may not benefit you so much as the solutions start to converge, but it will also make fewer 'mistakes' than mutation does because it is recombining good values with other good values. Imagine you have 2 solutions on the global maximum: crossover will give you another maximal solution, yet mutation cannot. (Obviously, the reverse applies when you're at the global minimum... but that can be eliminated in time for the 2nd generation by the selection process so it's not a factor.)

Quote:
i.e.
Is there a requirement to take the data from member B and put it in to member A? Could we, just as effectively, copy a section from member A and paste it in to member A? Would this be considered cross-over?


No, it's mutation.
Quote:
Original post by Kylotan
Crossover produces a new solution, which is - for some definition of distance appropriate to your representation - located somewhere between 2 existing solutions, which may be very distant in terms of search space. Both may be in local maxima, but if those maxima are distinct then the offspring may be in neither, meaning you've 'broken free'.


But only early on.... We agree? :)


Quote:
Original post by Kylotan
wild mutation will send their offspring all over the place. Crossover won't do that. On top of that, crossover introduces changes that are generally good (because the changes come from good parent solutions), whereas mutation introduces changes that are neutral.


That does not make sense to me. Adding too many copies of the same chromosome in an animal can be fatal. Biology aside, I don't see how randomly inserting duplicate copies of existing 'code' (code can mean anything) is somehow inclined to be a good thing. I understand that the 'code' segment itself may be proven to work, but what evidence is there to suggest that it will work in a different context (i.e. different location in the 'code')?

i.e. If I gave you a third arm, how would it be useful without changes to your nervous and circulatory systems? It would be a hindrance without a full reworking of your brain.



Quote:
Original post by Kylotan
Imagine you have 2 solutions on the global maximum: crossover will give you another maximal solution,


Do you mean 'Crossover _may_ give you another maximal solution'? In that case, so may mutation. Is there a higher chance of a cross-over giving you solution closer to the global maximum? If so, how does it know it's not just around a local maximum?

Quote:
Quote:
i.e.
Is there a requirement to take the data from member B and put it in to member A? Could we, just as effectively, copy a section from member A and paste it in to member A? Would this be considered cross-over?



No, it's mutation.


So pulling _identical_ pieces during cross-over is not crossover. Only pulling different pieces is cross-over? Does it matter if member A and member B are 99.9999% identical, and every piece I copy from member A already exists in member B?

I ask because this is exactly the case in a mature population. Members will be 99.9999% identical. I hate to use biology as an example, but, look at people.

I'm still not convinced. Sexual reproduction is very rare in biology.... Even in humans it's something that very rarely ever happens.


I won't pretend to know GAs in such detail. I only know their general purpose and a bit of how they work, so don't expect any inspirational AI ideas on my front.

I am a pretty big fan of Sudoku and I noticed something while looking at a Sudoku puzzle book. All the locations of the revealed starting numbers are symmetrical around the diagonal from bottom-left to top-right.

Therefore you only need to decide half of the numbers you want to show in order to generate the puzzle. Just have a look at Sudoku.com, since I can't post a copy of the Sudoku puzzle here (isn't that stupid!). The current puzzle is like this, but I'm not sure about the others.

It also doesn't seem 'logical' to say that the placement of the numbers has any effect on the difficulty. Then again, a lot of things aren't logical but still turn out to be true. Looking at the book you can see that as the difficulty increases, the number of visible numbers decreases (obviously), yet the number of different puzzles in each section seems to be the same.

The book I have is 'The Ultimate Book Of Sudoku' by Pete Sinden. I think he owns the SudokuMania website as well.
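A small sketch of that observation, assuming you already have a fully solved grid to hide cells from; the function names and the number of clue pairs are made up for illustration:

import random

def symmetric_clue_positions(pairs=15, size=9):
    # Choose clue positions and mirror each one across the bottom-left to
    # top-right diagonal, so only half the positions need to be decided.
    positions = set()
    while len(positions) < pairs * 2:
        r, c = random.randrange(size), random.randrange(size)
        positions.add((r, c))
        positions.add((size - 1 - c, size - 1 - r))  # reflection in the anti-diagonal
    return positions

def make_puzzle(solved, positions):
    # Blank out every cell that is not a clue; 0 marks a hidden cell.
    return [[solved[r][c] if (r, c) in positions else 0 for c in range(9)]
            for r in range(9)]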

Quote:
Original post by RPGeezus
But only early on.... We agree? :)

Agree on what? That crossover and mutation are the same? No, they're not. They are both search operators but they achieve different results in terms of the exploration of not only the state space but also of the objective function. Mutation guarantees that the solution generated by applying that operator to a current solution has a 'fitness' within bounds defined by the local gradient of the objective function and the mutation rate. For this reason, mutation can only ever perform gradient ascent/descent using a random walk. Crossover makes no such promises and is therefore not constrained to producing only a hill-climbing search of the objective space.

Quote:

Adding too many copies of the same chromosome in an animal can be fatal.


As with most search algorithms, premature convergence is a problem. The assumption that a single run of a search algorithm will give you a globally optimal solution for an objective function with unknown topology is naive. The point of GAs is that one can 'balance' the application of the canonical operators to minimise the likelihood of premature convergence, so that when the population becomes flooded with (near-) identical solutions, they are all within tight bounds of the optimal solution.
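For what it's worth, here is a minimal sketch of one generation of a canonical GA, with the crossover probability and mutation rate as the 'balancing' knobs; the fitness function and the two operators are placeholders supplied by the caller, and it assumes non-negative fitness values:

import random

def next_generation(population, fitness, crossover, mutate,
                    p_crossover=0.7, mutation_rate=0.05):
    # One generation of a canonical GA: fitness-proportionate selection,
    # crossover applied with probability p_crossover, then a low-rate mutation.
    scores = [fitness(ind) for ind in population]
    total = sum(scores) or 1.0  # guard against an all-zero population

    def select():
        r = random.uniform(0, total)
        for ind, s in zip(population, scores):
            r -= s
            if r <= 0:
                return ind
        return population[-1]

    children = []
    while len(children) < len(population):
        if random.random() < p_crossover:
            child = crossover(select(), select())
        else:
            child = list(select())  # plain copy of one parent
        children.append(mutate(child, mutation_rate))
    return children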

Quote:

Biology aside, I don't see how randomly inserting duplicate copies of existing 'code' (code can mean anything) is somehow inclined to be a good thing. I understand that the 'code' segment itself may be proven to work, but what evidence is there to suggest that it will work in a different context (i.e. different location in the 'code')?

No one is suggesting structural crossover. Think of crossover from its genetic counterpart. If I take two humans and from them produce an offspring, the usual genetic processes will select, for each gene, two of the 4 allele values (because we have diploid coding), one from the male parent and one from the female. Putting mutation aside, this is crossover performed at every site between genes. GAs simply limit this to one crossover site on the chromosome, to ensure that the search is not purely random. The effect of this is to preserve significant subsets of the schemata represented by the parents. The best way to understand GA operators is by their effect on the schemata that a population instantiates. If you haven't read Holland's original work, I highly recommend it.


Quote:

If so, how does it know it's not just around a local maximum?


No search algorithm (other than brute force) can unequivocally know this, unless you know the objective function topology... and in which case, why did you need to search in the first place?

Quote:

So pulling _identical_ pieces during cross-over is not crossover.
I ask because this is exactly the case in a mature population. Members will be 99.9999% identical. I hate to use biology as an example, but, look at people.


This is the cloning that I was talking about. Crossover produces clones in nearly-converged populations. Unfortunately, it also produces clones in initial populations. Clones do not harm the population, only the convergence speed of the algorithm. They are a wasted opportunity.

Cheers,

Timkin
Quote:
Original post by RPGeezus
Sexual reproduction is very rare in biology.... Even in humans it's something that very rarely ever happens.


Teh wat? I take "sexual reproduction" as the combination of chromosomes from two individuals to form a third.

As far as I know, most vertebrates, invertebrates and insects (even plants) reproduce that way.

What do you mean by saying "is very rare in biology" ??
Quote:
Original post by Timkin
Crossover makes no such promises and is therefore not constrained to producing only hill climbing search of the objective space.


This is simply untrue. The change you are making to the algorithm is essentially 'random', and you do not know what the outcome will be beforehand. The population is either moving towards the solution, or it's not... Just because the sequence of data happened to exist somewhere else within the population is of little consequence.

Quote:

As with most search algorithms, premature convergence is a problem. The assumption that a single run of a search algorithm will give you a globally optimal solution for an objective function with unknown topology is naive.


I don't recall seeing anyone say it wasn't.

Quote:

The point of GAs is that one can 'balance' the application of the canonical operators to minimise the likelihood of premature convergence, so that when the population becomes flooded with (near-) identical solutions, they are all within tight bounds of the optimal solution.


It doesn't take much to flood a population. I guess this is the crux of what I'm saying. As soon as the odds swing in one particular direction then they are likely to stay that way.

Quote:

No one is suggesting structural crossover. Think of crossover from its genetic counterpart.


Since you bring it up... Crossover in the real world is nothing like the cross-over being attempted here. Real cross-over is very specific as to what gets crossed, by whom, and where it happens. Cross-over is an evolved operation, not something that happens willy-nilly. Willy-nilly is how GA/EP applies it. And that is why I question its place as a problem-solving tool.


Quote:

No search algorithm (other than brute force) can unequivocally know this, unless you know the objective function topology... and in which case, why did you need to search in the first place?


I agree with that statement.


Quote:

So pulling _identical_ pieces during cross-over is not crossover.
I ask because this is exactly the case in a mature population. Members will be 99.9999% identical. I hate to use biology as an example, but, look at people.


Quote:

This is the cloning that I was talking about. Crossover produces clones in nearly-converged populations. Unfortunately, it also produces clones in initial populations. Clones do not harm the population, only the convergence speed of the algorithm. They are a wasted opportunity.


Why wasted? Why can't pure duplication be just as advantageous? As you mentioned earlier we cannot know the outcome until we try it.



OWL:
Quote:
Original post by Owl
Teh wat? I take "sexual reproduction" as the combination of chromosomes from two individuals to form a third.

As far as I know, most vertebrates, invertebrates and insects (even plants) reproduce that way.

What do you mean by saying "is very rare in biology" ??


Yes, it is very rare. Of all of the cells in your body, Owl, none of them reproduce sexually. YOU do not reproduce yourself sexually. You may facilitate sexual reproduction, but never do it yourself. Think how RARE it is for the sexual process to occur, compared to the number of times asexual reproduction occurs. How many times did you asexually create new sperm before sexually reproducing?

I know it's nice to think of yourself as ONE, but, you are a vast collection of single celled organisms. We are giant robotic biospheres.

Even if you don't buy that, there are _WAY_ more species that reproduce asexually than there are ones that reproduce sexually. Don't forget that we're really really big-- we cannot see th

So, yeah, it's quite rare.

I have to ask this: what are we trying to do? Simulate humans, or simulate DNA?
Just because the sequence of data happened to exist somewhere else within the population is of little consequence.

Not true.

The cool thing about crossover is that you can select the good attributes from an already successful member. The data didn't "happen to exist" (except for the first generation); it has been slowly changing and storing up the good genes while filtering out the bad ones. Maybe having green eyes and red hair is very successful, but only when they are found together. Maybe it's also important to be tall and heavy, but being just heavy or just tall is a bad thing. Crossover allows you to take chunks of correlated good data and combine them with other chunks of correlated good data.

With crossover you can have an Irish weakling and a blond-haired brute combine and have a decent chance of producing an Irish brute. Mutation would simply keep trying to mutate that Irish weakling to become both stronger and heavier, but he's so far away from that solution that it's very unlikely he'll ever get there - especially if there's some sub-optimal weight/height combination he's already achieved.

Micro-mutation is good for making something good even better, but crossover is good at taking the best genes from different population pools and finding a third, more dominant member.
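A toy illustration of that point; the traits, thresholds and fitness function here are invented purely for the example:

import random

def fitness(ind):
    # Toy objective: traits only score when the right pair occurs together.
    score = 0
    if ind["hair"] == "red" and ind["eyes"] == "green":
        score += 1
    if ind["height"] > 180 and ind["weight"] > 90:
        score += 1
    return score

def crossover(a, b, groups=(("hair", "eyes"), ("height", "weight"))):
    # Inherit each correlated group wholesale from one parent or the other,
    # so pairs of values that only work together travel together.
    child = {}
    for group in groups:
        parent = a if random.random() < 0.5 else b
        for key in group:
            child[key] = parent[key]
    return child

irish_weakling = {"hair": "red", "eyes": "green", "height": 160, "weight": 55}
blond_brute = {"hair": "blond", "eyes": "blue", "height": 195, "weight": 110}
# Roughly one child in four is the 'Irish brute' that scores 2; a single small
# mutation of either parent is very unlikely to produce it in one step.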

It doesn't take much to flood a population. I guess this is the crux of what I'm saying. As soon as the odds swing in one particular direction then they are likely to stay that way.

Depends on the problem and the algorithm you use - the equations I was working on for my graduate thesis would often form several dozen "population pools" that would quite happily sit there until one performed crossover with a member from another 'tribe'. I had to set the maximum mutation low since I wanted to get a very accurate answer, so I relied heavily on crossover and a large initial population to provide my genetic diversity.

Why [ is a clone ] wasted? Why can't pure duplication be just as advantageous? As you mentioned earlier we cannot know the outcome until we try it.

You've either never implemented a GA or have worked on very different problems than I have. A clone may be advantageous to the *individual*, I suppose, but we're looking for an answer to a problem! You run every individual through some sort of fitness test ... running the same vector through the same test is kind of redundant, don't you think? I'd rather have a random guess than run the same number twice.

Yes, it is very rare. Of all of the cells in your body, Owl, none of them reproduce sexually.

Trick statement. ;) Kind of like saying water is rare compared to the vastness of space where there is no water. Silly. :)

Quote:
Original post by RPGeezus
Even if you don't buy that, there are _WAY_ more species that reproduce asexually than there are ones that reproduce sexually.
So, yeah, it's quite rare.


When you say species I guess you're including simple organisms (such as sperm, viruses, bacteria). If not, then I have to disagree. I believe most (complex) animal life (which is what most people refer to when saying "species") reproduces sexually (crossing over DNA).

Quote:
Original post by RPGeezus
I have to ask this: what are we trying to do? Simulate humans, or simulate DNA?


These people are trying to imitate the reproduction of complex organisms in the hope of generating complex algorithms to solve problems, based on the premise that if nature can solve problems that way, it should be possible to replicate it computationally.

I believe the problem with GAs is the "random" element, which I don't think really occurs in nature.
Quote:
Original post by RPGeezus
This is simply untrue. The change you are making to the algorithm is essentially 'random', and you do not know what the outcome will be beforehand.


I think you're missing the point of what I'm saying. Of course the outcomes of mutation and crossover are both random, because they are stochastic operators; however, they are both selected from a finite set of possibilities. The possible outcomes of mutation are found within a small hypersphere of the state space, centered on the current chromosome. The radius of this sphere is a function of the encoding scheme. Mutation can only generate local perturbations to the solution hypothesis represented by the chromosome. The outcomes of crossover are not so constrained, since solutions will lie within a hypersphere that encompasses both parents. If these parents are distant within the state space, then the set of possible offspring spans more of the objective function than those of mutation alone. The possibility of finding new optima of the objective function within this offspring set is far higher than for the outcome set of mutation.
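A rough numeric sketch of that argument, assuming a real-valued encoding; the mutation radius and the per-gene exchange are illustrative choices, not anyone's actual implementation:

import random

def mutate(x, radius=0.1):
    # Offspring stay inside a small ball around the parent chromosome.
    return [xi + random.uniform(-radius, radius) for xi in x]

def crossover(a, b):
    # Each gene comes from one parent or the other, so offspring lie inside
    # the region spanned by the two parents, however far apart they are.
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

a = [0.0, 0.0, 0.0]
b = [5.0, 5.0, 5.0]
# mutate(a) is never more than 0.1 away from a in any coordinate, while
# crossover(a, b) can land on any corner of the box between the two parents.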

Quote:
The population is either moving towards the solution

The population (at least in a GA) is always moving toward a solution.

Quote:
Just because the sequence of data happened to exist somewhere else within the population is of little consequence.

I'm not sure what you're trying to say here... could you explain it please?


Quote:
I dont recall seeing anyone say it [ premature convergence ] wasn't.


No. But you made the comment about the fatality of adding too many copies of the same chromosome to a population, implying that we were arguing against this point. I was just making it clear that I wasn't.

Quote:

It doesn't take much to flood a population. I guess this is the crux of what I'm saying. As soon as the odds swing in one particular direction then they are likely to stay that way.


To a certain extent that is true; however, I can say (based on research analysis and experience) that this is very much problem- and encoding-dependent. Additionally, there are some very simple techniques that overcome this problem, and while they don't prevent premature convergence, they do ensure that the algorithm is optimally efficient in getting there by avoiding cloning and hitchhiking, so you have plenty of time to restart with another random population and try again. On those problems where ensembles of solutions are needed to find the global optima, a GA will work at least as well as other methods I have tried (e.g., simulated annealing, simplex, random walk).



Quote:

Since you bring it up.. Crossover in the real world is nothing like the cross-over being attempted here.


No one ever suggested they were the same. Indeed, I thought I was making it clear that they're not the same. Perhaps I wasn't.

Quote:
Why wasted? Why can't pure duplication be just as advantageous? As you mentioned earlier we cannot know the outcome until we try it.


Clones are a waste of an algorithmic iteration. The solutions already existed in the population so producing the clone offspring did nothing to change the state of the population. Hence, that iteration of reproduction was a waste for the search of the state space.

Quote:

I have to ask this: what are we trying to do? Simulate humans, or simulate DNA?


Neither. GAs are a blind search algorithm, inspired by genetic operators. They make no claims that they simulate anything. Anyone suggesting otherwise is ill-informed.

Cheers,

Timkin
Quote:
Original post by RPGeezus
Quote:
Original post by Timkin
Crossover makes no such promises and is therefore not constrained to producing only hill climbing search of the objective space.


This is simply untrue. The change you are making to the algorithm is essentially 'random', and you do not know what the outcome will be beforehand.


No, no, no... this is just a side-effect of the way you have implemented your candidate solutions. Don't confuse genetic/evolutionary programming, and in particular your implementation of it, with genetic algorithms, which is a more abstract concept. With a genetic algorithm, crossover is deliberately combining traits from two existing solutions. It's only random if your implementation of such does not enforce sensible constraints.

Quote:
Cross-over is an evolved operation, not something that happens willy-nilly. Willy-nilly is how GA/EP applies it.


No it's not...

Quote:
And that is why I question its place as a problem-solving tool.


...and that is why we're questioning your questioning.

Quote:
Why wasted? Why can't pure duplication be just as advantageous?


Because then you may as well just have a population size of 1. The principal value of genetic algorithms over other similar techniques is that the population shares information among itself. If you have no crossover, this information is not getting shared on any significant level. And if you have duplicate solutions, then there is no information to share anyway.

Quote:
Think how RARE it is for the sexual process to occur, compared to the number of times asexual reproduction occurs.


Just so you know, this isn't necessarily relevant to the discussion of genetic algorithms as a search method. They were inspired by biological parallels but, as with neural networks, they are just an analogy and not a simulation. The search method is not so much about genetics and biology as it is about emulating the effects of natural selection.

Quote:
Quote:
Original post by Kylotan
Imagine you have 2 solutions on the global maximum: crossover will give you another maximal solution


Do you mean 'Crossover _may_ give you another maximal solution'?


No, I mean will. The idea of crossover - in the typical context of GAs - is to exchange traits of the solutions. If both solutions are on the global maximum then any combination of their traits will be. By extension - and more relevant to the algorithm in general - given a continuous fitness function, the nearer both parent solutions are to the maximum, the nearer the child solution will be. It is guided where mutation is not. Yes, if you implement the solutions and crossover in such a way that there is little coherence before and after crossover, then it will be somewhat random and less effective. But that's down to the implementation, not the algorithm itself.
OP: you're right about the water bit. :) I still hold that most life forms are asexual. Complex lifeforms are a different matter....

Quote:

The cool thing about crossover is that you can select the good attributes from an already successful member. The data didn't "happen to exist" (except for the first generation), it has been slowly changing and storing up the good genes while filtering out the bad ones.


You can choose only the good sequences in their entirety, sure. Anything is possible. But you likely won't.

The term 'gene' is deceptive as it implies discrete blocks of self-contained function. It says nothing about 'gene' dependency, etc... I think in the case of an algorithm, code, or a Lego structure, we do not have knowledge of dependency. If we did then we would probably have designed it into the operators being used.

i.e.

a = 0
b = 1
j = 1
k = 10
for i = j to k
    a = a + 1
    b = b * 2
    if a > 5 then c = b / a
next i




How would you know which parts of the above code are required to keep the others working?

DNA isn't random, and neither is biological cross-over. Both are complex structures.

Maybe Kylotan has an answer. :)


Quote:
Original post by Timkin
I think you're missing the point of what I'm saying. Of course the outcomes of mutation and crossover are both random, because they are stochastic operators; however, they are both selected from a finite set of possibilities. The possible outcomes of mutation are found within a small hypersphere of the state space, centered on the current chromosome.


I agree with you.

Quote:
Original post by Timkin
The radius of this sphere is a function of the encoding scheme. Mutation can only generate local perturbations to the solution hypothesis represented by the chromosome. The outcomes of crossover are not so constrained, since solutions will lie within a hypersphere that encompasses both parents.


Two things:
There is no law that states that the actual solution space will encompass the area between both points. It may be that each solution is so drastically different, or sufficiently complex, that combining the two can only result in no solution. I realize this is an extreme, but saying that the combination of the two covers such a broad area is also extreme. I am not arguing that it is possible, because I believe it is, rather that it's the exception and not the rule.

That said, you don't have to restrict mutation to only one change per run. This way you have a method by which the sphere's radius is variable.

Quote:
Original post by Timkin
If these parents are distant within the state space, then the set of possible offspring spans more of the objective function than those of mutation alone. The possibility of finding new optima of the objective function within this offspring set is far higher than for the outcome set of mutation.


It's hard to argue with words like 'possible'. :) While it is possible you will get a perfect solution by chance, it is unlikely. I also find it unlikely that taking a chunk of 'code' from one member and pasting it into another will have much of an advantageous effect.

If I took half of kernel32.dll and mixed it with half of netapi32.dll, I'd get a crashing computer. :)

Quote:
Original post by Timkin
Quote:
Just because the sequence of data happened to exist somewhere else within the population is of little consequence.


I'm not sure what you're trying to say here... could you explain it please?


Whether I get a=b from random mutation, or randomly extract it from the 'code' of another member, is of little consequence. It got there randomly. a=b, via crossover, could just as easily have been a=c from a different member, or a=d from mutation.

I think I'm repeating myself here, but just because a pattern works well in one member, there is no guarantee that the same pattern will work well in a different member. In most cases (sorry I can't be more specific) 'crossover' will have the highest chance of working when crossing two nearly identical members. If this is the case then it's really not much different from mutation. The spheres that you talk about start taking on a much different shape...

i.e. Mixing a human with a banana gives us mush. Mixing a human with a human gives us (usually) a human, though slightly different from either parent.


Quote:
Original post by Timkin
Quote:

It doesn't take much to flood a population. I guess this is the crux of what I'm saying. As soon as the odds swing in one particular direction then they are likely to stay that way.


To a certain extent that is true, however I can say (based on research analysis and experience) that this is very much problem and encoding dependent.


Can you think of a way where you could get the same (or similar) results using only mutation?

I have no problems with seeing early convergence (I kind of encourage it), and have no problem with a temporarily stagnant population. I have methods which keep things moving.


Quote:
Original post by Timkin
Quote:
Why wasted? Why can't pure duplication be just as advantageous? As you mentioned earlier we cannot know the outcome until we try it.


Clones are a waste of an algorithmic iteration. The solutions already existed in the population so producing the clone offspring did nothing to change the state of the population. Hence, that iteration of reproduction was a waste for the search of the state space.


I do not agree.

i.e.


Member A:
i++;
i--;

crossed with itself:

i++;
i++;


Something new made from something old.. Maybe +2 is better than +0, maybe not.

When you say 'solution' I get the impression that you feel each 'chunk' of data in a member is a 'solution'. I don't feel that way. Each 'chunk' may be part of a solution, but on its own may be (and likely is) nothing.


Quote:
Original post by Kylotan
Quote:
Cross-over is an evolved operation, not something that happens willy-nilly. Willy-nilly is how GA/EP applies it.

No it's not...


How do you enforce cross-over so that it always makes sense? I assume this must be very specific to certain applications.

Quote:
Original post by Kylotan
Because then you may as well just have a population size of 1. The principal value of genetic algorithms over other similar techniques is that the population shares information among itself. If you have no crossover, this information is not getting shared on any significant level. And if you have duplicate solutions, then there is no information to share anyway.


Without cross-over you can still have variety, and the sharing of information. Cloning with mutation ensures sharing.

I can have 50 members, each one or two operations away from each other. 99% identical. If I prefer 'new over old', and I get into a flat spot, it is conceivable that I could have 50 members, all derived from the same ancestor, but 50 operations apart. By operation I mean a mutation (or cross-over). We can also increase the rate of mutation and modify the selection process if stagnation becomes a factor; this eliminates the need to reset the system.


Quote:
Original post by Kylotan
If both solutions are on the global maximum then any combination of their traits will be.


Maybe I misunderstand you.. Are you saying 'if two hands of poker are equally strong, any combination of the cards contained in those hands must be as strong'? I could have two flushes, but with different suits. Two full-houses with different cards, etc..

Tim, some of what you've been saying doesn't match up with some of what Kylotan has been saying. I understand neither of you agrees with my final view with regard to crossover, but you seem to be disagreeing for entirely different reasons.

Will
Good discussion guys... something to enjoy over morning coffee! 8)

Quote:
Original post by RPGeezus
The term 'gene' is deceptive as it implies discrete blocks of self-contained function. It says nothing about 'gene' dependency, etc...

In a good encoding scheme (for a GA) your attributes should be as orthogonal as possible. This has implications on how effectively you can search the objective function surface and how effective your operators are in the GA.

Quote:
DNA isn't random, and neither is biological cross-over. Both are complex structures.

...and to reiterate, GA crossover is not biological crossover.

Quote:

There is no law that states that the actual solution space will encompass the area between both points.


Correct. This is why the success of any search algorithm rests on whether or not the optimal state lies within a region of the state space accessible via the operators of the algorithm, given the initial guess at the solution. Thus, for a GA, it is important to seed the initial population with a selection of solution guesses that span the state space, just as, when training an ANN, you need a training data set that spans the expected operational set. You're right in that GAs cannot generalise (extrapolate), but they don't need to if you use them as they were intended.

Quote:
While it is possible you will get a perfect solution by chance, it is unlikely. I also find it unlikely that taking a chunk of 'code' from one member and pasting it into another will have much of an advantageous effect.

And yet it does! ;)

Quote:
If I took half of kernel32.dll and mixed it with half of netapi32.dll, I'd get a crashing computer. :)

That's not a valid example. If I took 10 DLLs, each of the same code length, and broke them up into n segments of code, then jumbled those segments to make m 'randomised' DLLs, I could still recover my original 10 DLLs using crossover alone. I could do it faster if I permitted selection and mutation as well.


Quote:
just because a pattern works well in one member, there is no guarantee that the same pattern will work well in a different member.

True. But then that is what selection is for. Even in nature there is no guarantee that an offspring is fit for its environment. Selection pressure ensures that, on average, the strong survive. If no weaklings survived to produce offspring, then the population would lose its genetic diversity and would fail if the optimal solution were changed. In a GA, we don't typically change the optimal solution, so we can afford to be more brutal with regards to culling off weaklings via strong selection pressure.


Quote:
Can you think of a way where you could get the same (or similar) results using only mutation?


Indeed. Mutation rates are typically so low in a GA (about 5% of total genes) that it has been shown unequivocally that mutation alone cannot move you out of an optimum once your population resides within it. Selection pressure will obliterate any favourable mutation before it can take hold in the population.

Quote:


Member A:
i++;
i--;

crossed with itself:

i++;
i++;


Something new made from something old.. Maybe +2 is better than +0, maybe not.


In a diploid coding scheme, you would never cross one chromosome with the other. GAs are not asexual.

Quote:
Tim, some of what you've been saying doesn't match up with some of what Kylotan has been saying.

I didn't realise I was supposed to agree with Kylotan! ;)

Cheers,

Timkin
Quote:
Original post by RPGeezus
How do you enforce cross-over so that it always makes sense? I assume this must be very specific to certain applications.


99% of GA applications will encode the solutions as homogeneous lists of values. Crossover will exchange values in one with the corresponding value in the other, which are appropriately constrained by definition.

Quote:
Without cross-over you can still have variety, and the sharing of information. Cloning with mutation ensures sharing.


1 solution, cloned and mutated, just carries information from 1 previous solution plus some random value. Whereas 1 solution based on crossover carries information from 2 previous solutions.

Quote:
Quote:
Original post by Kylotan
If both solutions are on the global maximum then any combination of their traits will be.


Maybe I misunderstand you.. Are you saying 'if two hands of poker are equally strong, any combination of the cards contained in those hands must be as strong'? I could have two flushes, but with different suits. Two full-houses with different cards, etc..


No, because 'the' global maximum implies the same position, which isn't actually possible with 1 set of playing cards. The situation you refer to would be two equally beneficial local maxima.

What I am saying however, is that the 'hand of poker' or 'bytecode algorithm' representations have very interdependent values that do not lend themselves very well to crossover. Change one card or one byte and the value changes dramatically. This is not an inherent problem with crossover itself, as you seemed to have been suggesting. In fact, the examples given don't lend themselves amazingly well to mutation or any other similar search operator since the search space is close to being discontinuous. If a small change in state is likely to lead to a large change in fitness, then you're approaching a situation where a random or linear search is just as good, if not better, than a directed search.

If the representations were instead 'x, y, and z coordinates', or 'expenditure on science, luxuries, and warfare', or 'list of (doctor/patient/time) tuples', then the values are interchangable and always meaningful. This is more common with GAs, and implementing a meaningful crossover is trivial.
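One way to picture that with the (doctor, patient, time) example; the roster contents below are invented, and a real scheduler would still need a repair or penalty step for conflicts:

import random

# Each gene is a complete (doctor, patient, time) assignment, so exchanging
# corresponding genes between two rosters always yields values that still mean
# something for that slot; the operator never produces half a value.
def positional_crossover(roster_a, roster_b):
    return [ga if random.random() < 0.5 else gb
            for ga, gb in zip(roster_a, roster_b)]

roster_a = [("Dr. Able", "Patient 1", "09:00"), ("Dr. Baker", "Patient 2", "10:00")]
roster_b = [("Dr. Baker", "Patient 1", "11:00"), ("Dr. Able", "Patient 3", "09:30")]
child = positional_crossover(roster_a, roster_b)
# e.g. the same doctor booked twice at one time would need repairing afterwards.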

Quote:
Tim, some of what you've been saying doesn't match up with some of what Kylotan has been saying.


My 2 paragraphs above are analogous to where Timkin said 'In a good encoding scheme (for a GA) your attributes should be as orthogonal as possible.' In fact, I don't see any significant disagreements between what the two of us have said over the last few posts, except in the very different ways in which we describe it.
This is a good discussion. :)

Quote:
Original post by Kylotan
99% of GA applications will encode the solutions as homogeneous lists of values. Crossover will exchange values in one with the corresponding value in the other, which are appropriately constrained by definition


Quote:
Original post by Timkin
In a good encoding scheme (for a GA) your attributes should be as orthogonal as possible. This has implications on how effectively you can search the objective function surface and how effective your operators are in the GA.


Would you both agree with this statement:
"The effectiveness of cross-over is directly related to how statistically unrelated the discreet components, that make up the solution, are."

or...

"The effectiveness of cross-over is directly related to how flexible the operators, that make up the solution, are with regards to order and placement within said solution".


Maybe this is an ideal to strive towards, but is it realistic? Growing Lego, or growing code, both unfortunately have the requirement of fitting into the 'context' of the solution.

I agree with what you both have said (except for the 99% part Kylotan), and am forced to change my tune. 'Cross-over is not effective for many problems, but is effective in a very limited set of cases'.

You have each moved me closer to your way of thinking, but I'm not a convert just yet. ;)

Quote:
Original post by Timkin
Quote:
While it is possible you will get a perfect solution by chance, it is unlikely. I also find it unlikely that taking a chunk of 'code' from one member and pasting it into another will have much of an advantageous effect.

And yet it does! ;)


And so does mutation. :D


Quote:
Original post by Timkin
Quote:
If I took half of kernel32.dll and mixed it with half of netapi32.dll, I'd get a crashing computer. :)

That's not a valid example. If I took 10 DLLs, each of the same code length, and broke them up into n segments of code, then jumbled those segments to make m 'randomised' DLLs, I could still recover my original 10 DLLs using crossover alone. I could do it faster if I permitted selection and mutation as well.


I think it's a valid example: two members of a population at opposite ends of an extreme; granted, you would try to prevent this from actually occurring.

Part of the requirement for Cross-over is diversity, yet diversity _can_ be the very thing that keeps it from working. The 'orthogonal' parts of the encoding scheme are just one way of restricting diversity.

If you don't like the word restrict, then we could say that 'the orthogonal encoding limits the area of solution space along an axis'. Do you agree with that statement?


Quote:
Original post by Timkin
Quote:
just because a pattern works well in one member, there is no guarantee that the same pattern will work well in a different member.

True. But then that is what selection is for. Even in nature there is no guarantee that an offspring is fit for its environment. Selection pressure ensures that, on average, the strong survive. If no weaklings survived to produce offspring, then the population would lose its genetic diversity and would fail if the optimal solution were changed. In a GA, we don't typically change the optimal solution, so we can afford to be more brutal with regards to culling off weaklings via strong selection pressure.


I agree. I don't think there was ever a disagreement in this regard. Your last sentence speaks the truth. :D


Quote:
Original post by Timkin
Quote:
Can you think of a way where you could get the same (or similar) results using only mutation?


Indeed. Mutation rates are typically so low in a GA (about 5% of total genes) that it has been shown unequivocally that mutation alone cannot move you out of an optimum once your population resides within it. Selection pressure will obliterate any favourable mutation before it can take hold in the population.


I wanted to know how you would achieve the same solution using mutation alone. Could you elaborate?

It seems you are suggesting that the system will get unavoidably stuck with just mutation, which is not true.

In the same way cross-over has 'hidden' requirements so that it will work, I suppose there are requirements for mutation as well. The implementation must provide ways in which stagnation and plateaus can be worked through (without a reset). Allowing mutations to be cumulative over generations is one such way, and having a dynamic fitness measurement is another. There are probably hundreds of methods that would work.

Of course, saying this does not make it impossible for cross-over to get stuck.


Quote:
Original post by Kylotan
1 solution, cloned and mutated, just carries information from 1 previous solution plus some random value. Whereas 1 solution based on crossover carries information from 2 previous solutions.


I agree cross-over gives you information from two immediate parents. That fact has never come into question. :)

I question the _utility_ of this. There are certain requirements that must be met for this to work properly, or even at all-- it would also seem (to me) that there is a relationship between the distance in solution space of the two parents, and the probability of a cross-over contributing to the solution. Mutation does not have this limitation.

If all of the operatives of the solution are 'orthogonal' then the solution space will be much more confined-- and so the distance between any two members 'laterally' will be smaller... Not all problems facilitate this.

There are definite constraints placed on cross-over... and therefore definite limitations. I'm saying mutation alone can achieve all of the same goals, with less constraints, in less cycles, and in more situations (is more flexible).


Quote:
Original post by Timkin
Quote:

Something new made from something old.. Maybe +2 is better than +0, maybe not.


In a diploid coding scheme, you would never cross one chromosome with the other. GAs are not asexual.


Who said anything about diploid? :) What if your GAs are of variable size? Do you restrict cross-over only to members of the same length? What if length is a variable for my members? If my environment is complex enough, I'm sure to have a few operands that require 'context'.


Quote:
Original post by Kylotan
No, because 'the' global maximum implies the same position, which isn't actually possible with 1 set of playing cards. The situation you refer to [poker] would be two equally beneficial local maxima.


Kylotan, poker hands are a fine example. Uniform size, limited operands, many local minima, and easily defined global maxima.

Come now, if a problem only had ONE best solution you probably wouldn't be using something like GAs to find it...

Within a population there will be competition for who gets to represent the solution, and not just for the solution itself. This is built into evolution.


Quote:
Original post by Timkin
Quote:
Tim, some of what you've been saying doesn't match up with some of what Kylotan has been saying.

I didn't realise I was supposed to agree with Kylotan! ;)


Quote:
Original post by Kylotan
My 2 paragraphs above are analogous to where Timkin said



It seems that there are some areas where you both see things differently. You may agree on the same ideas in the end, but for different reasons.

I was hoping the two of you might fire a few words at each other-- my posts are getting rather long. ;)

Will
Quote:
Original post by RPGeezus

Would you both agree with this statement:
"The effectiveness of cross-over is directly related to how statistically unrelated the discreet components, that make up the solution, are."

or...

"The effectiveness of cross-over is directly related to how flexible the operators, that make up the solution, are with regards to order and placement within said solution".

Yes to the first, maybe to the second. The quality of the operator goes hand in hand with the representation of the problem.

Quote:
I agree with what you both have said (except for the 99% part Kylotan), and am forced to change my tune. 'Cross-over is not effective for many problems, but is effective in a very limited set of cases'.


I would suggest you read more on how GAs have been used, because 99% may not be the correct figure but the answer is certainly between 90% and 100%. This is simply because that is the representation that best lends itself to being solved in this way. If your problem looks like such a representation, you might use GAs. If it doesn't, you would probably use something else.

Quote:
Part of the requirement for Cross-over is diversity, yet diversity _can_ be the very thing that keeps it from working.


Of course. Similarly, allowing mutation to have large effects can either be the situation that allows it to find a better solution which is remote in solution space, or the situation that keeps it looking too far afield instead of performing local optimisation. Both have their place and both work together. Neither can be optimal in all situations, and there are infinitely many examples where one could be optimal and the other not; see the 'No Free Lunch' theorem.

Quote:
It seems you are suggesting that the system will get unavoidably stuck with just mutation, which is not true.

In the same way cross-over has 'hidden' requirements so that it will work, I suppose there are requirements for mutation as well. The implementation must provide ways in which stagnation and plateaus can be worked through (without a reset).


Basically, if the domain of changes that mutation can bring is smaller than the size of your local plateau, then you're stuck. You fix this by decreasing the selection pressure, allowing worse solutions to survive in the hope that eventually they will yield better ones. But then you converge more slowly due to having a worse population. Again, no free lunch. :)

Quote:
I agree cross-over gives you information from two immediate parents. That fact has never come into question. :)

I question the _utility_ of this. There are certain requirements that must be met for this to work properly, or even at all-- it would also seem (to me) that there is a relationship between the distance in solution space of the two parents, and the probability of a cross-over contributing to the solution. Mutation does not have this limitation.


Of course, there are requirements. That is the way search algorithms work - you determine the requirements and then pick an algorithm that fulfils them. For example, A* is an optimal search algorithm for finding a path between 2 places, providing your paths are of monotonically increasing cost. That's a limitation or requirement, as you put it. Breadth first search does not have this requirement, but it's not better - it just suits a different type of problem. The same applies to crossover and mutation.

But, there is not so much of a difference between crossover and mutation as you seem to be implying. Both require that, in general, solutions that are 'close' to each other have fitness values that are close to each other, in order to be effective.

Quote:
If all of the operatives of the solution are 'orthogonal' then the solution space will be much more confined-- and so the distance between any two members 'laterally' will be smaller... Not all problems facilitate this.


Having the operands be orthogonal maximises the space rather than confines it. I'm not sure what you're trying to say here however. But the last point is valid; each problem requires a different solution. For some, crossover is not so good.

Quote:
There are definite constraints placed on cross-over... and therefore definite limitations. I'm saying mutation alone can achieve all of the same goals, with less constraints, in less cycles, and in more situations (is more flexible).


Mutation... can achieve all of the same goals? Yes.
... with less constraints? If defined that way, Yes.
... in less cycles? No.
... and in more situations (is more flexible)? Yes.

I say no to the 3rd because crossover - where applicable - makes better use of the information encoded in your population than mutation does. When you get 2 solutions which have evolved to be good, but in different ways, crossover provides the chance of yielding a solution that takes the best from both. Mutation does not do anything to explicitly facilitate that. It is like the difference between binary search and linear search.

Quote:
Quote:
Original post by Kylotan
No, because 'the' global maximum implies the same position, which isn't actually possible with 1 set of playing cards. The situation you refer to [poker] would be two equally beneficial local maxima.


Kylotan, poker hands are a fine example. Uniform size, limited operands, many local minima, and easily defined global maxima.


They're a poor example of what you'd use a GA for, because of the heavily discontinuous search space and impractical representation. Crossover would give you some benefit over random search, but not much. The same goes for mutation here.

Quote:
Come now, if a problem only had ONE best solution you probably wouldn't be using something like GAs to find it...


There's no reason why not. GAs don't care about how many best solutions there are, they care about the shape of the solution space.
Quote:
Original post by Kylotan
Quote:
Original post by RPGeezus

Would you both agree with this statement:
"The effectiveness of cross-over is directly related to how statistically unrelated the discreet components, that make up the solution, are."

Yes to the first, maybe to the second. The quality of the operator goes hand in hand with the representation of the problem.


And "Cross-over alone, by neccessity, excludes many types of problems from the mix as encoding is a limiting factor."? :D

Quote:
Original post by Kylotan
I say no to the 3rd [less cycles] because crossover - where applicable - makes better use of the information encoded in your population than mutation does.


_IF_ all other requirements are met.


Quote:
Original post by Kylotan
When you get 2 solutions which have evolved to be good, but in different ways, crossover provides the chance of yielding a solution that takes the best from both.


ONLY if the operands that make up the solution are not bound by order of appearance. Then yes, there is a chance.

Quote:
Original post by Kylotan
Mutation does not do anything to explicitly facilitate that.


Mutation works with a single line of ancestors. A family rope vs. a family tree. Conceivably you are using half the code (per ancestor) you could have used, BUT you are guaranteed that the code you do have is statistically more significant. Sound reasonable?

Why limit cross-over to two parents? Why not 3, 4, 5 or 100? Why two? It would stand to reason that two should be the least you would want, but not necessarily the optimum number.


Quote:
Original post by Kylotan
They're [poker hands] a poor example of what you'd use a GA for, because of the heavily discontinuous search space and impractical representation. Crossover would give you some benefit over random search, but not much. The same goes for mutation here.


You could make mutation work without having to worry about how 'orthogonal' the encoding was. If your GA evolved a compressed representation of a poker hand (which would later be expanded during the evaluation), or your evaluation function was dynamic, you could reach a global maximum from any starting position. The drawback would be wasted space in your compressed representation of the solution (like unreachable code).

For this to happen cross-over would not work so well.


Quote:
Original post by Kylotan
There's no reason why not. GAs don't care about how many best solutions there are, they care about the shape of the solution space.


Cross-over cares about how many best solutions there are.

Maybe we're in disagreement because of our point-of-view. It would seem a system designed to work best with mutation will be a poor system for cross-over. It only would seem to work one way though-- would a system designed to work with cross-over limit the effectiveness of mutation?

Will

Quote:
Original post by RPGeezus
would a system designed to work with cross-over limit the effectiveness of mutation?


Quite the opposite. Crossover enhances mutation by providing a rapid means of distributing beneficial mutations into the population and provides the mechanism to maximise the correlation between beneficial mutations and beneficial substrings (good quality partial solutions).

Will, I definitely think you should read up on Holland's schema theorem.

Cheers,

Timkin
Quote:
Original post by RPGeezus
And "Cross-over alone, by neccessity, excludes many types of problems from the mix as encoding is a limiting factor."? :D


Of course it does. But that applies to all methods. Some methods can be stretched to 'work' in any given situation at the expense of optimality. eg. Mutation can be broadened so that it becomes little better than a random search, then it is guaranteed to work...eventually..., but it is then no better than picking random solutions from a hat. All methods have situations they are good at and those they are bad at.

Quote:

Quote:
Original post by Kylotan
I say no to the 3rd [less cycles] because crossover - where applicable - makes better use of the information encoded in your population than mutation does.


_IF_ all other requirements are met.


Search is all about matching a method to the requirements. There is no one method that is statistically better than any other when measured over all possible problem types. Wolpert and Macready's work will elaborate on this if you're interested.

Quote:
You could make mutation work without having to worry about how 'orthogonal' the encoding was.


Yes, you could, but that's not the point. You can make a linear search work without worrying about the encoding either. Or a random search. But those are rarely optimal. Neither is relying purely on mutation in many situations. The question is, is this the best approach for the given problem? In your problem, with your representation, crossover is not much use. That's not a problem with crossover.

Quote:
Quote:
Original post by Kylotan
There's no reason why not. GAs don't care about how many best solutions there are, they care about the shape of the solution space.


Cross-over cares about how many best solutions there are.


No, it doesn't, and you've not produced evidence to back that assertion. The number of separate maxima is irrelevant. What is important is how the fitness landscape is shaped, as that dictates how the solutions will converge. As far as shape is concerned, it's assumed that there are numerous maxima (best or otherwise), each of which is handled identically. If there was only one maxima, and you knew this, then you'd just use hill-climbing or something simple like that.
Quote:
Original post by Timkin
Quote:
Original post by RPGeezus
would a system designed to work with cross-over limit the effectiveness of mutation?


Quite the opposite.


My question was rhetorical. I've been saying all along that mutated clones are king. ;)

Quote:
Original post by Timkin
Crossover enhances mutation by providing a rapid means of distributing beneficial mutations.


Sure, I agree with you. It can, under the right circumstances. I'm not arguing about whether it can work, just that the requirements for it to happen are so huge, and mutation does not have those same requirements.

Again, I have to ask, why limit cross-over to two parents? Why not 5, 6, or 30?

I'll take a stab in the dark and propose that the number of dimensions in your solution space (if one could even know this) is also the optimal number of 'parents' per 'child'.


Quote:
Original post by Kylotan
Mutation can be broadened so that it becomes little better than a random search, then it is guaranteed to work...eventually..., but it is then no better than picking random solutions from a hat.


I think you're downplaying the utility of mutation. Cross-over will not work without mutation -- as Timkin has said, cross-over enhances mutation.

Quote:
Original post by Kylotan
The number of separate maxima is irrelevant


I disagree, and here is why.

Separate global maxima are more likely to have solutions which are NOT related. Taking components from a solution at one maxima and mixing it with another is very likely to only give you a solution that is at NEITHER maxima.

The 'shape' of your solution space will not be conducive to the crossing of solutions if there is more than one maxima.

Here is a poor ASCII picture of a hypothetical solution space:


      A             B
     .'.           .'.
  ...'   '.......'   '.....



Any point between maxima A and maxima B is more inclined to be further away from either maxima.

If, on the other hand, a mutation occurs on a member already close to a maxima, it is more likely to still be close to that maxima.

The more maxima you have the worse this problem gets.

Maybe an easy way of thinking about it is to look at it like image-processing operators -- you can blur and you can sharpen. At some point blurring will be a bad thing... We want sharp.

Quote:
Original post by Kylotan
If there was only one maxima, and you knew this, then you'd just use hill-climbing or something simple like that.


That's what I said. :)

Quote:
Original post by RPGeezus
if a problem only had ONE best solution you probably wouldn't be using something like GA's to find it...



I brought this up because of the Poker example, to which you replied:

Quote:
Original post by Kylotan
They're a poor example of what you'd use a GA for, because of the heavily discontinuous search space


What makes the search space so discontinuous? Multiple 'best' answers (aka global maxima).


Will
Sorry for skimming over this thread :)

What about this instead of a genetic algorithm:

Start with a known, solved sudoku puzzle. You could have a list of these, but one is enough (even if it's a trivial one). Then iterate over a few hundred thousand modifications to this puzzle, the only constraint being that after each one the puzzle must still be considered solved. Things like switching around numbers and stuff. After this, choose some numbers to hide randomly and voila!

This is a good recipe for most puzzle games, I'm sure it will work just fine for Sudoku :)
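(A rough sketch of that recipe, assuming the solved grid is stored as a 9x9 list of rows holding digits 1-9, and using only transformations known to preserve validity: relabelling digits, swapping rows within a band, swapping columns within a stack, and transposing. Note that simply hiding cells at random does not guarantee the resulting puzzle has a unique solution.)

```python
import random

def shuffle_solved_grid(grid, iterations=100000):
    """Apply validity-preserving transformations to an already-solved 9x9 grid."""
    g = [row[:] for row in grid]
    for _ in range(iterations):
        op = random.randrange(4)
        if op == 0:                                   # relabel two digits everywhere
            a, b = random.sample(range(1, 10), 2)
            g = [[b if v == a else a if v == b else v for v in row] for row in g]
        elif op == 1:                                 # swap two rows in the same band of three
            band = 3 * random.randrange(3)
            r1, r2 = random.sample(range(band, band + 3), 2)
            g[r1], g[r2] = g[r2], g[r1]
        elif op == 2:                                 # swap two columns in the same stack of three
            stack = 3 * random.randrange(3)
            c1, c2 = random.sample(range(stack, stack + 3), 2)
            for row in g:
                row[c1], row[c2] = row[c2], row[c1]
        else:                                         # transpose the whole grid
            g = [list(col) for col in zip(*g)]
    return g

def make_puzzle(solved_grid, holes=45):
    """Blank out `holes` random cells (0 = empty) to turn a solution into a puzzle."""
    puzzle = [row[:] for row in solved_grid]
    cells = [(r, c) for r in range(9) for c in range(9)]
    for r, c in random.sample(cells, holes):
        puzzle[r][c] = 0
    return puzzle
```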
Quote:
Original post by Kylotan
Mutation produces a new solution fundamentally similar to the old solution. If it is near a local maxima, then it is likely to approach that local maxima and survive, or move away from it and die.

Crossover produces a new solution, which is - for some definition of distance appropriate to your representation - located somewhere between 2 existing solutions, which may be very distant in terms of search space. Both may be in local maxima, but if those maxima are distinct then the offspring may be in neither, meaning you've 'broken free'.

Now, you might say "but I can do that with large mutations too". The problem there is that large variations as a result of mutation make it hard to exploit a local maximum (which might actually be the global one). Imagine your whole population, or a large part of it, is already near the global maximum: wild mutation will send their offspring all over the place. Crossover won't do that.


I wouldn't propose making all mutations massive. Instead I would replace crossover with a mutation operator which had a random size, so some mutations would be slight, allowing exploitation, and some mutations would be large, allowing exploration.

The only unique benefits I can see with crossover are, first, the potential that crossing the best attributes of two parents will produce even fitter offspring. But I wonder how likely this is compared to the chance of macromutation producing fitter offspring. Second, crossover allows changes to spread faster (but only if crossover is producing anything useful).
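(A hedged sketch of the kind of operator described above, assuming a real-valued genome: the step size is itself random, so most mutations are small, exploiting the current region, and occasional ones are large, exploring further afield. The parameter values are arbitrary illustrations, not anything proposed in this thread.)

```python
import random

def variable_size_mutation(genome, small_sigma=0.01, large_sigma=1.0, p_large=0.1):
    """Gaussian mutation whose magnitude is usually small but occasionally large."""
    sigma = large_sigma if random.random() < p_large else small_sigma
    return [gene + random.gauss(0.0, sigma) for gene in genome]
```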


Quote:
Original post by RPGeezus
I've been saying all along that mutated clones are king. ;)


You're missing the point. Clones occur most frequently when the population is near convergence, irrespective of what point in the state space the population has converged upon.

Quote:

Quote:
Original post by Timkin
Crossover enhances mutation by providing a rapid means of distributing beneficial mutations.


Sure, I agree with you. It can, under the right circumstances.


You seem to be convinced that crossover works only under very limiting constraints and in very narrow contexts. This point of view simply isn't supported by the literature, nor the mathematics. Crossover is a more effective operator for searching the state space in a directed manner (where the direction is provided by the population, which represents the successful aspects of the search so far) than mutation is. Mutation is not directed at all and thus MUST rely on some other operator, such as selection, for it to work. Otherwise, mutation is just drawing rabbits out of the hat, hoping to get the right one.
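(To make the interplay of the operators concrete, a minimal sketch of one generation over bit strings -- tournament selection supplying the direction, single-point crossover recombining, a low mutation rate supplying variation. This is only an illustration of the standard loop, not code from anyone in this thread; `fitness` is assumed to map a bit string to a number.)

```python
import random

def tournament_select(population, fitness, k=3):
    """Selection provides the direction: the fittest of k random picks survives."""
    return max(random.sample(population, k), key=fitness)

def next_generation(population, fitness, mutation_rate=0.01):
    new_pop = []
    while len(new_pop) < len(population):
        a = tournament_select(population, fitness)
        b = tournament_select(population, fitness)
        point = random.randint(1, len(a) - 1)          # single-point crossover
        child = a[:point] + b[point:]
        child = [1 - bit if random.random() < mutation_rate else bit for bit in child]
        new_pop.append(child)
    return new_pop
```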

Quote:

Again, I have to ask, why limit cross-over to two parents? Why not 5, 6, or 30?


Because crossover between two parents preserves as many schemata presently in the population as possible, while possibly forming new schemata which can be evaluated during selection.

Quote:
I'll take a stab in the dark and propose that the number of dimensions in your solution space (if one could even know this) is also the optimal number of 'parents' per 'child'.

That doesn't make sense to me. The 'solution' is a point in the state space (or any point within a finite volume). This point (or volume) will have as many dimensions as the state space (unless it is a submanifold of the state space, in which case you encoded too much information into the problem and should have mapped the input space first to a lower dimension).

Quote:

I think you're downplaying the utility of mutation. Cross-over will not work without mutation -- as Timkin has said, cross-over enhances mutation.

You're putting words into my mouth that I never intended. You're suggesting that crossover's role is only to enhance mutation, which you suggest is doing the grunt work of the search. Quite the opposite is true. Crossover does the grunt work, mutation just adds the spice (and another stochastic element in the generation of candidate solutions).

Quote:

Quote:
Original post by Kylotan
The number of separate maxima is irrelevant


I disagree, and here is why.

Separate global maxima are more likely to have solutions which are NOT related. Taking components from a solution at one maxima and mixing it with another is very likely to only give you a solution that is at NEITHER maxima.

That totally depends on the volume of state space that the maxima encompasses.
Quote:

The 'shape' of your solution space will not be conducive to the crossing of solutions if there is more than one maxima.

Again not true. GAs work very well on multimodal objective functions. In fact, they work better than many other techniques. The rate of convergence on the true global maxima will be proportional to the gradient on the submanifold that connects each of the peaks of the local maxima.

Quote:
Here is a poor ASCII picture of a hypothetical solution space:


      A             B
     .'.           .'.
  ...'   '.......'   '.....



And if A were slightly higher than B, or vice versa, mutation would typically fail to solve this problem, since selection pressure would mitigate any beneficial mutations that enabled a single candidate solution to hop into the other maxima. The probability of reproducing that single chromosome would be insignificant unless the population was already well populated by candidates within the confines of the region housing the global optima. Again, in this situation, crossover will find the maxima more quickly than mutation will.

Quote:
Any point between maxima A and maxima B is more inclined to be further away from either maxima.

Any candidate solution generated that fell outside of A or B would be rejected very quickly by selection. The population consists of more than 1 candidate solution for just this reason... so that failed attempts don't significantly affect convergence speed.

Cheers,

Timkin
Quote:
Original post by RPGeezus
Separate global maxima are more likely to have solutions which are NOT related. Taking components from a solution at one maxima and mixing it with another is very likely to only give you a solution that is at NEITHER maxima.

...

Any point between maxima A and maxima B is more inclined to be further away from either maxima.


Yes, but for every pairing of solutions that are near the top of different maxima, there are likely to be several pairings that are on either side of the same maxima, and end up exploiting that one as a result. Just as with mutation, the selection operator is necessary here to augment the procedure.

Quote:
If, on the other hand, a mutation occurs on a member already close to a maxima, it is more likely to still be close to that maxima.


On the other other hand, if you're on a local maxima, and the global maxima is distant from the greater one in search space, a mutation is less likely to find that global maxima. Neither method is 'better'; neither works well without the other.

Quote:

Quote:
Original post by Kylotan
If there was only one maxima, and you knew this, then you'd just use hill-climbing or something simple like that.


That's what I said. :)


Maybe I wasn't clear. GAs are perfectly well suited to problems with one single maximum value, but often, if you know in advance that there is only one, it may be worth trying something else, especially if you also know that the fitness landscape is fairly smooth. Quite often it might be the case that there is only 1 global maximum but you don't know about that beforehand - you just want to find as good an answer as possible in a certain time.

Quote:
Quote:
Original post by Kylotan
They're a poor example of what you'd use a GA for, because of the heavily discontinuous search space


What makes the search space so discontinuous? Multiple 'best' answers (aka global maxima).


No... a problem can have an infinite number of best answers and yet still be totally continuous. The mathematical definition of continuous is important here. eg. What value of x gives you the highest result from sin(x)? In this example, a tiny change in x yields a tiny change in sin(x). Poker is totally different - the smallest possible change, such as changing one card up or down a suit or number, changes the value of the hand significantly in most cases.
Quote:
Original post by RunningInt
The only unique benefits I can see with crossover are, first, the potential that crossing the best attributes of two parents will produce even fitter offspring.


That's the way I see it too. There is some potential, if everything is in place.

Quote:
Original post by RunningInt
But I wonder how likely this is compared to the chance of macromutation producing fitter offspring.


Good question.



Quote:
Original post by Timkin
You seem to be convinced that crossover works only under very limiting constraints and in very narrow contexts.


For cross-over to work we must satisfy some not-so-trivial criteria. I thought we were all in agreement in this regard, at least semantically.


Quote:
Original post by Timkin
This point of view simply isn't supported by the literature, nor the mathematics.


It certainly is, as you have pointed out. Just because you would think of using GAs for a non-orthogonally encoded problem as a faux pas doesn't mean those problems cease to exist.

Maybe you disagree with me because you feel like I'm hitting a thumbtack with a sledgehammer (i.e. wrong tool for the job)? :)

Quote:
Original post by Timkin
Mutation is not directed at all and thus MUST rely on some other operator, such as selection, for it to work. Otherwise, mutation is just drawing rabbits out of the hat, hoping to get the right one.


We need selection anyway. How is this a bad thing?

Quote:
Original post by Timkin
Quote:

Again, I have to ask, why limit cross-over to two parents? Why not 5, 6, or 30?


Because crossover between two parents preserves as many schemata presently in the population as possible, while possibly forming new schemata which can be evaluated during selection.


That doesn't make any sense to me, Tim. Why would 3 parents be less effective than 2? If I cut a pie in 2 or in 10, it's still all part of the same pie. The number '2' would be suspicious to someone looking for evidence of anthropomorphic biases...

Quote:
Original post by Timkin
Quote:

I think you're downplaying the utility of mutation. Cross-over will not work without mutation -- as Timkin has said, cross-over enhances mutation.

Crossover does the grunt work, mutation just adds the spice (and another stochastic element in the generation of candidate solutions).


My apologies. Didn't mean to take your words out of context.

Will cross-over, without mutation, ever find a solution?

Will mutation, without cross-over, ever find a solution?


Quote:
Original post by Timkin
Again not true. GAs work very well on multimodal objective functions. In fact, they work better than many other techniques. The rate of convergence on the true global maxima [.....]


We're talking about a problem with more than one global maxima. In the hypothetical example there is more than one true global maxima.


Quote:
Original post by Timkin
And if A were slightly higher than B, or vice versa, mutation would typically fail to solve this problem,


There are ways to allow mutation alone to climb out of local maxima. I think that's what alleles, in biology, would facilitate.


Quote:
Original post by Kylotan
Yes, but for every pairing of solutions that are near the top of different maxima, there are likely to be several pairings that are on either side of the same maxima


I agree with you. I was narrowing my view to a specific case of two maxima to illustrate a point.

Would you agree that, as the number of global maxima increases, the odds of a correct pairing go down? I'm not sure where I'm going with this. lol. :) So being to the left or the right of a maxima could still put you to the left or the right of yet another maxima.... Does any of what I'm saying make any sense? :D


Quote:
Original post by Kylotan
On the other other hand, if you're on a local maxima, and the global maxima is distant from the greater one in search space, a mutation is less likely to find that global maxima. Neither method is 'better'; neither works well without the other.


Yes. I agree with you, except that you can use mutation without cross-over. Cross-over will spread the mutation across members, at the beginning, until the population starts to settle on one solution. Selection / cloning will spread the information too, just not between different members.

Will




Quote:
Original post by RPGeezus
For cross-over to work we must satisfy some not-so-trivial criteria. I thought we were all in agreement in this regard, at least semantically.

No, I don't see agreement on this point. From my experience, crossover will always be effective on problems that are solvable by a directed random walk, and crossover will perform better than said random walk. On problems that are not solvable by a directed random walk, a random walk is your only option and so the point is moot: nothing other than mutation, which is semantically equivalent to randomly generating a solution, will work. Such problems are typically identified by very narrow, sparsely distributed optima in the objective space, which have always been very difficult to solve using optimisation techniques. If you want a good solution for such problems, check out Terence Kwok's work on combining simulated annealing and self-organising maps. He uses edge-of-chaos behaviour to generate all solutions of problems such as N-queens very rapidly.


Quote:
Original post by RPGeezus
It certainly is, as you have pointed out. Just because you would think of using GAs for a non-orthogonally encoded problem as a faux pas doesn't mean those problems cease to exist.


I never said you couldn't or shouldn't use them on such problems. Only that such problems will be more conducive to being solved by GAs that use single point crossover.


Quote:
Original post by RPGeezus
Quote:
Original post by Timkin
Because crossover between two parents preserves as many schemata presently in the population as possible, while possibly forming new schemata which can be evaluated during selection.


That doesn't make any sense to me, Tim. Why would 3 parents be less effective than 2? If I cut a pie in 2 or in 10, it's still all part of the same pie. The number '2' would be suspicious to someone looking for evidence of anthropomorphic biases...

Okay, a brief introduction to Schema Theory...

Consider the two binary strings

101100110
110101100

If we consider the set of possible offspring generated by crossover, we get the following, where '#' means 0 or 1:

1##10#1#0

This is a schema (a template) for the result of crossover. If we add a third member to the population, say '010111000', then the set of schemata represented by the population, taken pairwise, is

1##10#1#0
###1####0
#101#1#00

These schemata represent the possible next generation of candidate solutions (ignoring for the moment mutation) that the GA can produce using pairwise crossover (they are a roadmap as to where one might find an optimum). From this perspective, we can see how crossover preserves associations of bits present in a schema. Holland showed that it is indeed schemata (and their relative fitness) that represent the quality of knowledge regarding the optimal solution. It should be fairly obvious that '#########' represents a purely random solution set spanning the entire state space, which holds no useful knowledge of where to look for the optimal solution, since the fitness of members of this set encompasses the entire objective function.

So, now take our original 3 candidate solutions as a triplet. The schema resulting from this tuple is '###1####0', which, as you can see, is the worst (most general) schema from the pairwise set. Candidate solutions generated by producing offspring from a 3 way crossover would be randomly selected from this set and thus, would randomly select one of the possible fitness values spanned by this set. The probability of producing good quality offspring would be less than if pairwise crossover were performed.

GAs preserve good quality schemata and indeed, they search through the schemata present in the initial population to find an optimal solution (so you can see the importance of the initial population).

Mutation disrupts a schema. However, if used at a low level, it does not generally remove a schema from the population. Furthermore, mutation can help to produce schema that might not have been present in the initial population, or that might have died out along the way due to unintended selection biases.

Understanding schemata is the key to understanding how the canonical operators affect the search of the state space... and thus, how changing those operators will affect the efficiency of the search.
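(A small sketch that reproduces the templates above -- positions where the strings agree are kept, disagreements become '#'. It only illustrates the schema-as-template idea, not the full schema theorem.)

```python
def schema(*strings):
    """Return the schema (template) shared by equal-length bit strings."""
    return ''.join(chars[0] if len(set(chars)) == 1 else '#' for chars in zip(*strings))

a, b, c = '101100110', '110101100', '010111000'
print(schema(a, b))      # 1##10#1#0
print(schema(a, c))      # ###1####0
print(schema(b, c))      # #101#1#00
print(schema(a, b, c))   # ###1####0  (the 3-way schema is the most general of the three)
```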

Quote:

Will cross-over, without mutation, ever find a solution?

It will find the best solution available given the initial information contained in the population. See above.
Quote:

Will mutation, without cross-over, ever find a solution?

Possibly... and that is the best that can be said. There are no guarantees that you will stumble across the solution. Indeed, if your random number generator is poor, or your state space large (or both), you might go round in circles and never find it.

Quote:

We're talking about a problem with more than one global maxima. In the hypothetical example there is more than one true global maxima.

If they are both (all) global maxima, then you cannot care which global maximum you attain, since from the point of view of optimisation, you have found the global maximum whenever you find any of the global maxima. If your task was to find ALL global maxima then this isn't, strictly speaking, an optimisation problem and very few optimisation algorithms will be useful, since such problems are typically NP-hard (and I think in that case the discussion is moot). See the aforementioned work of Terry Kwok for how to tackle such problems effectively.


Cheers,

Timkin
