genetic programming or evolutionary algo for game AI



#1 noppanit   Members   -  Reputation: 100

Posted 18 July 2010 - 08:47 PM

I'm just curious: has anybody used genetic programming for game AI before? I'm doing my project and I just need some advice from experience.

#2 Steadtler   Members   -  Reputation: 220

Posted 19 July 2010 - 02:35 AM

I will count to 3, and you will snap out of it and realize that a GA is just a sugar-coated gradient descent.

1
2
3...

#3 Álvaro   Crossbones+   -  Reputation: 12918

Posted 19 July 2010 - 02:45 AM

Quote:
Original post by Steadtler
I will count to 3, and you will snap out of it and realize that a GA is just a sugar-coated gradient descent.

1
2
3...


Hmmm... I don't think that's true. You can use genetic algorithms in situations where the fitness can only be evaluated with lots of noise, where computing the partial derivatives of the fitness function is impossible and where the only thing you can even try to measure is the relative fitness of a setting of the parameters with respect to another setting of the parameters. As an example, tuning the parameters of a poker bot has all of these difficulties, which make gradient descent basically useless in this case.

That having been said, I have never actually used GAs for anything more than toy problems, and I don't see much use for them.

Although its approach is not very scientific, this program did use GAs to learn an evaluation function for checkers. It's the closest thing I know to what the OP was asking.

#4 IADaveMark   Moderators   -  Reputation: 2406

Posted 19 July 2010 - 06:12 AM

Notice he said "genetic programming" rather than "genetic algorithm". One modifies data, the other modifies the code itself.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#5 Álvaro   Crossbones+   -  Reputation: 12918

Posted 19 July 2010 - 06:25 AM

Quote:
Original post by InnocuousFox
Notice he said "genetic programming" rather than "genetic algorithm". One modifies data, the other modifies the code itself.


Oh, I see. "Genetic programming" is GA where the parameter space consists of programs. I don't have any examples of this in Game AI, and I doubt there are any.



#6 Steadtler   Members   -  Reputation: 220

Posted 19 July 2010 - 08:15 AM

Quote:
Original post by InnocuousFox
Notice he said "genetic programming" rather than "genetic algorithm". One modifies data, the other modifies the code itself.


Yes, because doing a gradient descent over the space of all possible code makes much more sense than over the data space :) But yeah, I shouldn't have written GA, but my comment stands.

Quote:
Original post by Alvaro
where computing the partial derivatives of the fitness function is impossible and where the only thing you can even try to measure is the relative fitness of a setting of the parameters with respect to another setting of the parameters


Doesn't that sound like a finite difference? :)

#7 Álvaro   Crossbones+   -  Reputation: 12918

Posted 19 July 2010 - 09:25 AM

Quote:
Original post by Steadtler
Quote:
Original post by Alvaro
where computing the partial derivatives of the fitness function is impossible and where the only thing you can even try to measure is the relative fitness of a setting of the parameters with respect to another setting of the parameters


Doesn't that sound like a finite difference? :)


Actually, no. Imagine you are trying to optimize the strength of a poker bot. The only measure of strength you have is matching one bot against another for a number of hands and seeing who makes money off of whom. I don't see what a finite difference has to do with anything.
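
To make this concrete, here is a minimal sketch of that kind of relative, noisy fitness evaluation. It assumes a hypothetical, game-specific play_match(bot_a, bot_b, hands) function that returns bot_a's net winnings; there is no gradient anywhere, only head-to-head comparisons between parameter settings.

def rank_by_matches(population, play_match, hands=10000):
    # Round-robin every pair of bots and total each bot's winnings.
    # Fitness is only ever measured relative to the other bots, and it
    # is noisy: the same pairing can give different results each time.
    totals = [0.0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            winnings = play_match(population[i], population[j], hands)
            totals[i] += winnings
            totals[j] -= winnings
    order = sorted(range(len(population)), key=lambda i: totals[i],
                   reverse=True)
    return [population[i] for i in order]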

#8 Steadtler   Members   -  Reputation: 220

Posted 19 July 2010 - 10:58 AM

Quote:
Original post by alvaro
Quote:
Original post by Steadtler
Quote:
Original post by Alvaro
where computing the partial derivatives of the fitness function is impossible and where the only thing you can even try to measure is the relative fitness of a setting of the parameters with respect to another setting of the parameters


Doesn't that sound like a finite difference? :)


Actually, no. Imagine you are trying to optimize the strength of a poker bot. The only measure of strength you have is matching one bot against another for a number of hands and seeing who makes money off of whom. I don't see what a finite difference has to do with anything.


Because of the mutation step in GAs, the other bot will most likely be a neighbor in the search space. I say most likely because GAs are very loosely defined and thus can end up being just about anything.

In its most basic form, a GA takes a point, finds points in its neighborhood ("mutation"), then compares those points and keeps the best(s) according to some test ("fitness"/"selection"). Like I said, a sugar-coated gradient descent. The more they deviate from this formula, the more they look (and perform!) like a random search.
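
For illustration, here is a minimal sketch of exactly that loop - one point, mutate, keep the better one - which does indeed behave like stochastic hill climbing. The Gaussian step size and the fitness function are placeholder assumptions.

import random

def mutate(params, step=0.1):
    # "Mutation": perturb each parameter a little to get a neighbour.
    return [p + random.gauss(0.0, step) for p in params]

def basic_ga(fitness, initial, generations=1000):
    # Keep a single candidate; generate a neighbour by mutation and
    # keep whichever scores better ("fitness"/"selection").
    best, best_score = initial, fitness(initial)
    for _ in range(generations):
        candidate = mutate(best)
        score = fitness(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best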

#9 noppanit   Members   -  Reputation: 100

Posted 19 July 2010 - 11:54 PM

Thanks for all your replies. So, there is no evidence that GP has been used in game AI before?

Hmm, that's rough for me right now. Thanks, everyone.

#10 Kylotan   Moderators   -  Reputation: 3338

Posted 20 July 2010 - 12:07 AM

Quote:
Original post by Steadtler
In its most basic form, a GA takes a point, finds points in its neighborhood ("mutation"), then compares those points and keeps the best(s) according to some test ("fitness"/"selection"). Like I said, a sugar-coated gradient descent. The more they deviate from this formula, the more they look (and perform!) like a random search.

I'd say that's a miscategorisation.

Firstly, you missed crossover - without that, it's not a GA. Crossover attempts to exploit correlations between the various dimensions of the data. It requires some domain-specific knowledge to write a good crossover operator, and like any domain-specific knowledge, if it's accurate, it will improve the search over a naive method. (And vice versa: no free lunch, and all that.) It means that parallel searches over two different parts of the landscape can end up being combined usefully, even if the area between them is completely discontinuous.

Secondly, genetic algorithms exploit information shared across the population, not just in the crossover, but in the selection system. For this reason, running gradient descent 100 times is not likely to be as effective as running one (similarly specified) GA with a population of 100. In this sense a GA has far more of the characteristics of beam search. Gradient ascent is typically the optimisation of 1 value. Repeating that a few times with the best of several successor values is still just optimising 1 value. GAs attempt to optimise many in parallel. You wouldn't ever reduce your population to just 1 value until the final generation of the search.

Thirdly, what you are calling "random search" is what another person calls "avoidance of local minima/maxima". Unadorned gradient ascent is pretty useless on all but the most continuous of fitness landscapes. Adding a random sample of perturbations helps to get over that, and several useful algorithms do that, not just GAs.

So yeah, there's "sugar-coating", but it all serves a purpose. GAs are a pretty good match for a problem where you have a large number of dimensions to optimise, can see there are broad relations between them, and don't have a more specific algorithm that is provably better for the fitness landscape in question.
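
For contrast with a single-point search, here is a minimal sketch of the population-based process described above, with tournament selection, one-point crossover, and mutation. The encoding (a list of floats), the rates, and the fitness function are placeholder assumptions.

import random

def tournament(population, fitness, k=3):
    # Selection: the fittest of k randomly sampled individuals.
    return max(random.sample(population, k), key=fitness)

def crossover(a, b):
    # One-point crossover: take a prefix from one parent and the
    # suffix from the other, combining partial solutions.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.05, step=0.1):
    # Mutation: occasional small perturbations to keep exploring.
    return [g + random.gauss(0.0, step) if random.random() < rate else g
            for g in genome]

def evolve(fitness, population, generations=200):
    # The whole population is carried forward each generation; only at
    # the end is it reduced to a single best individual.
    for _ in range(generations):
        population = [mutate(crossover(tournament(population, fitness),
                                       tournament(population, fitness)))
                      for _ in range(len(population))]
    return max(population, key=fitness)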

#11 Steadtler   Members   -  Reputation: 220

Posted 20 July 2010 - 03:09 AM

Crossover just means you look for values in the neighborhood of two samples - and gradient descent/hill climbing was never limited to running only one at a time. I'm not saying hill climbing is good, I'm just saying GAs are not any better. They run pretty much the same and attempt to patch the same flaws with doubtful hacks.

noppanit, sorry about the crossfire here. I'm suggesting you study the basics - including operations research and numerical algorithms - until you see why it's a bad idea. Besides the argument above about whether it's anything better than hill climbing, there are other reasons not to use it in game AI - notably the difficulty (some would say impossibility) of debugging and tuning the behavior of the AI when it's all based on a regression.

#12 willh   Members   -  Reputation: 160

Posted 22 July 2010 - 02:48 AM

Quote:
Original post by Kylotan
Firstly, you missed crossover - without that, it's not a GA.


Crossover is just another form of mutation. While I agree that you need some domain specific knowledge to make crossover work effectively, the same can be said of mutation operators-- a bit of domain specific knowledge can make mutation operations much more likely to be effective.



#13 kirkd   Members   -  Reputation: 505

Posted 22 July 2010 - 09:59 AM

To say that crossover is only another form of mutation is close, but an over-simplification. And, yes, proper domain knowledge is essential to designing a proper crossover operator.

To say that crossover is just looking for values within the neighborhood is a gross misunderstanding of the crossover operator.



#14 EJH   Members   -  Reputation: 314

Posted 22 July 2010 - 10:11 AM

Quote:
Original post by noppanit
I'm just curious that has anybody used genetic programming for Game AI before? I mean I'm doing my project and I just need some experience.


Do you specifically mean "genetic programming" only? As in (quoted from genetic-programming.org, which is probably where you should start looking):

Quote:
"Genetic programming (GP) is an automated method for creating a working computer program from a high-level problem statement of a problem. Genetic programming starts from a high-level statement of “what needs to be done” and automatically creates a computer program to solve the problem."


But in the thread title you say "or evolutionary algo" which opens it up to neuroevolution, genetic algorithms, etc... for which there are probably hundreds of AI-related research papers out there and a few games/demos.

http://en.wikipedia.org/wiki/Evolutionary_algorithm

^ Lots of approaches fall under the general category of evolutionary algorithms.
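
To make "creating a computer program" slightly more tangible, here is a toy sketch of the usual GP representation: candidate programs as expression trees that can be generated and evaluated. The function and terminal sets here are chosen purely for illustration.

import random

FUNCTIONS = {'+': lambda a, b: a + b,
             '-': lambda a, b: a - b,
             '*': lambda a, b: a * b}
TERMINALS = ['x', 1.0, 2.0]

def random_program(depth=3):
    # A program is either a terminal (the input variable or a constant)
    # or a tuple (operator, left subtree, right subtree).
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(FUNCTIONS))
    return (op, random_program(depth - 1), random_program(depth - 1))

def run_program(tree, x):
    # Interpret the evolved expression for a given input x.
    if tree == 'x':
        return x
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    return FUNCTIONS[op](run_program(left, x), run_program(right, x))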







#15 willh   Members   -  Reputation: 160

Posted 22 July 2010 - 11:19 AM

A bit of a stretch, but there does exist an example of a GA (that's not what he called it) used in game AI.

http://www.aliciapatterson.org/APF0704/Johnson/Johnson.html

To quote from the article:
"Then, through a process not unlike genetic evolution, it modifies and combines them into more complex ideas. As structures develop, the most useful and interesting ones-judged according to standards encoded in the program-survive"

Interesting story either way. :)

#16 kirkd   Members   -  Reputation: 505

Posted 22 July 2010 - 11:41 AM

You could also take a look at the many different techniques developed by Ken Stanley - http://www.cs.ucf.edu/~kstanley/

Everybody around here should LOVE his work - evolution of neural networks. </sarcasm>

Regardless, he presents much of his work within a game domain and has shown some very interesting results.



#17 Kylotan   Moderators   -  Reputation: 3338

Posted 23 July 2010 - 05:08 AM

Quote:
Original post by willh
Quote:
Original post by Kylotan
Firstly, you missed crossover - without that, it's not a GA.


Crossover is just another form of mutation.

No, it's not.

Crossover and mutation both change the solutions to explore the search space but they are completely distinct in their purpose, which is why you can't leave one out and talk as if you're still using a GA, any more than you could leave out a heuristic and claim you're still using A*.

Mutation alters part of a candidate solution in arbitrary ways, without any reference to other members of the population. It is there for overcoming local extrema and to improve the current value in gradual or limited ways. It adds new information into the population and is necessary for exploration and exploitation of the fitness space.

Crossover generates no new information as such, but replaces part of the current solution with a corresponding part from another member of the algorithm's population. That member of the population is chosen using some routine that favours members with higher fitnesses. Therefore the parts of the solutions that are better persist unchanged into future generations. This is critical as it speeds up convergence, which is its primary purpose - it takes 2 solutions which have often optimised different dimensions of the problem and combines them into 1 solution. Mutation alone cannot do this.

Genetic algorithms require crossover, mutation, and selection. Without implementing crossover properly you've pretty much crippled selection too and all you're left with is a mostly random form of beam search, which of course is going to be worse than pretty much any other algorithm you might throw at it. If that is how people use GAs then it is not at all surprising that they get poor results.
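
A tiny, contrived illustration of the "combining different optimised dimensions" point: if one parent has the first half of a parameter vector right and another parent has the second half right, one-point crossover can produce a child better than either parent in a single step, which mutation alone is very unlikely to do. The ten-element target and the fitness function are made up for the example.

# Hypothetical fitness: count how many parameters match a target vector.
TARGET = [1] * 10

def fitness(genome):
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

parent_a = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # has optimised dimensions 0-4
parent_b = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]   # has optimised dimensions 5-9
child = parent_a[:5] + parent_b[5:]          # one-point crossover at index 5

print(fitness(parent_a), fitness(parent_b), fitness(child))  # prints: 5 5 10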

#18 kirkd   Members   -  Reputation: 505

Posted 23 July 2010 - 05:45 AM

Quote:
Original post by Kylotan

No, it's not.

Crossover and mutation both change the solutions to explore the search space but they are completely distinct in their purpose, which is why you can't leave one out and talk as if you're still using a GA, any more than you could leave out a heuristic and claim you're still using A*.

Mutation alters part of a candidate solution in arbitrary ways, without any reference to other members of the population. It is there for overcoming local extrema and to improve the current value in gradual or limited ways. It adds new information into the population and is necessary for exploration and exploitation of the fitness space.

Crossover generates no new information as such, but replaces part of the current solution with a corresponding part from another member of the algorithm's population. That member of the population is chosen using some routine that favours members with higher fitnesses. Therefore the parts of the solutions that are better persist unchanged into future generations. This is critical as it speeds up convergence, which is its primary purpose - it takes 2 solutions which have often optimised different dimensions of the problem and combines them into 1 solution. Mutation alone cannot do this.

Genetic algorithms require crossover, mutation, and selection. Without implementing crossover properly you've pretty much crippled selection too and all you're left with is a mostly random form of beam search, which of course is going to be worse than pretty much any other algorithm you might throw at it. If that is how people use GAs then it is not at all surprising that they get poor results.


^^^^^this X2

#19 Daerax   Members   -  Reputation: 1207

Posted 23 July 2010 - 06:38 AM

noppanit, do you mean genetic programming or a genetic algorithm? One is really cool and probably not very practical, yet. The other is a pretty straightforward technique with a cool name.

Quote:
Original post by Kylotan
Quote:
Original post by willh
Quote:
Original post by Kylotan
Firstly, you missed crossover - without that, it's not a GA.


Crossover is just another form of mutation.

No, it's not.

Crossover and mutation both change the solutions to explore the search space but they are completely distinct in their purpose, which is why you can't leave one out and talk as if you're still using a GA, any more than you could leave out a heuristic and claim you're still using A*.

Mutation alters part of a candidate solution in arbitrary ways, without any reference to other members of the population. It is there for overcoming local extrema and to improve the current value in gradual or limited ways. It adds new information into the population and is necessary for exploration and exploitation of the fitness space.

Crossover generates no new information as such, but replaces part of the current solution with a corresponding part from another member of the algorithm's population. That member of the population is chosen using some routine that favours members with higher fitnesses. Therefore the parts of the solutions that are better persist unchanged into future generations. This is critical as it speeds up convergence, which is its primary purpose - it takes 2 solutions which have often optimised different dimensions of the problem and combines them into 1 solution. Mutation alone cannot do this.

Genetic algorithms require crossover, mutation, and selection. Without implementing crossover properly you've pretty much crippled selection too and all you're left with is a mostly random form of beam search, which of course is going to be worse than pretty much any other algorithm you might throw at it. If that is how people use GAs then it is not at all surprising that they get poor results.


Excellent post! One qualm: a GA without crossover is not really crippled, and it's really not so different from simulated annealing - which I have used to tune an SVM with good results - just with a slightly less interesting way of providing a mutation probability. The superiority of maintaining more than one solution is not so clear cut and is arguably detrimental in certain cases. But I agree with you, and definitely the smart construction of a crossover operator is very key and non-trivial.

If your genetic algorithm implementation appears no different from gradient descent, then you have thrown out the baby with the bathwater, especially since basic gradient descent is so sickly.
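
For comparison, here is a minimal simulated annealing sketch of the kind alluded to above: a single solution, a neighbour function playing the role of mutation, and a cooling temperature standing in for the mutation-acceptance probability. The objective, the neighbour function, and the schedule are placeholder assumptions.

import math
import random

def simulated_annealing(fitness, neighbour, initial,
                        t_start=1.0, t_end=0.001, steps=10000):
    # Always accept an improvement; accept a worse neighbour with a
    # probability that shrinks as the temperature cools.
    current, current_fit = initial, fitness(initial)
    best, best_fit = current, current_fit
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)
        candidate = neighbour(current)
        cand_fit = fitness(candidate)
        if (cand_fit >= current_fit or
                random.random() < math.exp((cand_fit - current_fit) / t)):
            current, current_fit = candidate, cand_fit
            if current_fit > best_fit:
                best, best_fit = current, current_fit
    return best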

#20 Daerax   Members   -  Reputation: 1207

Posted 23 July 2010 - 07:09 AM

Quote:
Original post by alvaro
Actually, no. Imagine you are trying to optimize the strength of a poker bot. The only measure of strength you have is matching one bot against another for a number of hands and seeing who makes money off of whom. I don't see what a finite difference has to do with anything.

Off topic, but are you making a poker bot?



