

when are genetic algorithms useful?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

17 replies to this topic

#1 Storyyeller   Members   -  Reputation: 212


Posted 19 June 2011 - 10:26 AM

I've seen hype about genetic algorithms, but no actual examples of problems where they are useful. Based on reading the threads here, it looks like the requirements for a GA to be useful are:

* The problem is noisy and complex enough to make other algorithms fail.
* The problem is understood well enough to determine good mutation and crossover operators.

But it seems to me that these requirements are somewhat contradictory. If you know a lot about a problem, surely there's a better method than randomly moving a population of search points around and praying for emergent behavior? Are there any concrete examples where genetic algorithms are actually useful?
I trust exceptions about as far as I can throw them.


#2 SimonForsman   Crossbones+   -  Reputation: 6293


Posted 19 June 2011 - 11:08 AM

I've seen hype about genetic algorithms, but no actual examples of problems where they are useful. Based on reading the threads here, it looks like the requirements for a GA to be useful are:

* The problem is noisy and complex enough to make other algorithms fail.
* The problem is understood well enough to determine good mutation and crossover operators.

But it seems to me that these requirements are somewhat contradictory. If you know a lot about a problem, surely there's a better method than randomly moving a population of search points around and praying for emergent behavior? Are there any concrete examples where genetic algorithms are actually useful?


In general there are better solutions. GAs, however, are easy to implement and can be used to optimize almost any process or design (if you can assign a relative fitness to the result, a GA will work reasonably well). They basically trade the need for human expertise for a need for computing power.

Making machines do "creative" work is interesting, and even if they suck at it they will work 24/7 without complaining, they never ask for a raise (unless the cost of power goes up), and you can chain them up in the basement without having the authorities come after you. Does it really matter if it's practical?

That said, there are tons of papers around discussing practical applications of GAs that you could take a look at, and they have been used in the real world in fields like medical research, for example.
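For what it's worth, the "easy to implement" point really can be demonstrated in a few lines. Here is a minimal, purely illustrative sketch (a toy bit-string problem, not any of the applications mentioned above): the only domain knowledge the GA needs is a fitness function plus mutation and crossover operators.

```python
import random

random.seed(1)  # fixed seed so this toy run is reproducible

# Toy problem: evolve a bit string to match a target.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    # Number of bits matching the target (higher is better).
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Single-point crossover.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        # Elitist selection: keep the top half, breed the rest from it.
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Swapping in a different problem only means replacing `fitness`, `mutate`, and `crossover`, which is exactly the "most of the code is reusable" property discussed in this thread.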
I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

#3 Álvaro   Crossbones+   -  Reputation: 13880


Posted 19 June 2011 - 11:57 AM

I feel that GAs are a solution in search of a problem. Just like Artificial Neural Networks, they have a very sexy name and they promise a solution to almost anything, so every generation will find them fascinating for a little while. But then you find that for most problems there are better ways to do things.

In particular these two techniques have virtually no application to AI in games.

#4 CableGuy   Members   -  Reputation: 943


Posted 19 June 2011 - 12:09 PM

I agree with Álvaro. I've taken a class on GAs at uni and can say that they aren't very useful (IMHO).
All the toy problems we saw were solvable (sometimes barely) with a GA, but I don't see a reason to use one rather than a hand-crafted algorithm.
They are easy to implement and most of the code is reusable (you only need to change a few operators), so maybe for a simple problem they might be a good choice
if you've already got a framework running and don't have the time to invest in an ad hoc algorithm. On the other hand, fine-tuning all the parameters can take more time than actually writing the whole thing.
In general the idea is nice, but I personally haven't come across a problem where a GA was a good solution.

#5 TheUnbeliever   Members   -  Reputation: 961


Posted 19 June 2011 - 12:14 PM

In particular these two techniques have virtually no application to AI in games.


GAs have been used to find strong build orders for the opening stages of Starcraft 2. (I'm not saying this is evidence for anything more general, just seems relevant to the OP.)
[TheUnbeliever]

#6 Druzil   Members   -  Reputation: 627


Posted 19 June 2011 - 05:58 PM

I've seen an AI player for the board game Puerto Rico (written in Excel, no less) that uses a GA and is almost unbeatable. It can be found on the Puerto Rico page of BoardGameGeek.

#7 EJH   Members   -  Reputation: 314


Posted 20 June 2011 - 11:44 AM

There are many applications of GAs and ANNs to real-world physics, engineering, and computing problems.

http://en.wikipedia.org/wiki/List_of_genetic_algorithm_applications
http://en.wikipedia.org/wiki/Neural_network#Applications_of_natural_and_of_artificial_neural_networks

Most of them have nothing to do with game AI. There are a few games that use them though:

- Forza (ANN drivers)
- Colin McRae (ANN drivers)
- Black and White (some type of learning?)
- Creatures (not sure ...)

#8 Emergent   Members   -  Reputation: 971


Posted 20 June 2011 - 01:10 PM

The reason for using a GA to train an ANN is that multilayer ANNs are hideous. The mapping from weights to training error is an ugly nonconvex mess. Since there's not much else you can do, you throw a sample-based optimizer like a GA at it.

But remember that the reason you were stuck using a GA was your choice of ANNs in the first place. Had you just stuck with a linear architecture -- a sum of basis functions -- you'd have gotten a standard linear least squares problem that you could have solved quickly with QR factorization or the conjugate gradient method. So why didn't you just choose that from the outset?

Good question.
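To illustrate the linear-architecture alternative Emergent describes: with a fixed set of basis functions (here 1, x, x² as a hypothetical toy example), fitting the weights is an ordinary least-squares problem with a global optimum. This sketch solves the normal equations directly with Gaussian elimination; for larger or ill-conditioned problems, QR factorization (as mentioned above) would be the more robust choice.

```python
# Fit output = w0*1 + w1*x + w2*x^2 by ordinary least squares.

def solve_normal_equations(X, y):
    # Solve (X^T X) w = X^T y by Gaussian elimination with partial
    # pivoting (fine for tiny, well-conditioned systems like this one).
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for col in range(n):                       # forward elimination
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for j in range(col, n):
                A[row][j] -= f * A[col][j]
            b[row] -= f * b[col]
    w = [0.0] * n
    for row in range(n - 1, -1, -1):           # back substitution
        w[row] = (b[row] - sum(A[row][j] * w[j]
                               for j in range(row + 1, n))) / A[row][row]
    return w

# Noiseless samples of y = 2 + 3x + 0.5x^2, basis [1, x, x^2].
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
X = [[1.0, x, x * x] for x in xs]
y = [2 + 3 * x + 0.5 * x * x for x in xs]
weights = solve_normal_equations(X, y)
```

No population, no generations, no tuning: the weights fall straight out of the algebra, which is Emergent's point.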

#9 willh   Members   -  Reputation: 160


Posted 20 June 2011 - 03:58 PM

But it seems to me that these requirements are somewhat contradictory. If you know a lot about a problem, surely there's a better method than randomly moving a population of search points around and praying for emergent behavior? Are there any concrete examples where genetic algorithms are actually useful?


Álvaro kind of summed it up. I've used GAs before, with some success, but there have always been better alternatives and I was just 'playing'.

Their strength is that they are not problem-specific, and in theory they could be used to 'solve' just about any problem given enough computing time.

In reality this is mostly pointless. GAs and ANNs were dreamt up in a time when processing power (and raw data) was very limited. These days it's feasible to do a somewhat exhaustive search of a space using millions of data samples, so....

As far as games go, I can think of a few uses, but so far haven't seen anyone use them.

#10 IADaveMark   Moderators   -  Reputation: 2531


Posted 20 June 2011 - 04:22 PM

- Black and White (some type of learning?)
- Creatures (not sure ...)


Black & White simply used reinforcement learning. All that does is tweak weight coefficients.
Creatures used a GA for the actual genetics (so not really a GA per se... just a gene sequence that was combined and mutated from parents). They also used a form of NN for the learning, iirc. Not really typical gameplay, however. Again, both of these were, as Alvaro said, the algorithms looking for problems and finding them.



Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#11 Daerax   Members   -  Reputation: 1207


Posted 22 June 2011 - 09:03 AM

The reason for using a GA to train an ANN is that multilayer ANNs are hideous. The mapping from weights to training error is an ugly nonconvex mess. Since there's not much else you can do, you throw a sample-based optimizer like a GA at it.

But remember that the reason you were stuck using a GA was your choice of ANNs in the first place. Had you just stuck with a linear architecture -- a sum of basis functions -- you'd have gotten a standard linear least squares problem that you could have solved quickly with QR factorization or the conjugate gradient method. So why didn't you just choose that from the outset?

Good question.


Stochastic gradient descent is better. Convex loss is overrated, you know.
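For readers who haven't seen it: stochastic gradient descent updates the weights from one randomly chosen sample at a time instead of the full data set. A minimal toy sketch (fitting y = w·x on noiseless data; nothing here is from Daerax's own work):

```python
import random

# SGD on a one-parameter linear model: minimize (w*x - y)^2 per sample.
def sgd_fit(data, lr=0.05, epochs=200):
    w = 0.0
    for _ in range(epochs):
        x, y = random.choice(data)       # pick one sample at random
        grad = 2 * (w * x - y) * x       # d/dw of that sample's squared error
        w -= lr * grad                   # step against the gradient
    return w

random.seed(0)  # fixed seed so the toy run is reproducible
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = sgd_fit(data)
```

On this convex toy problem SGD converges cleanly; the debate in the thread is about what happens when the loss surface is not convex.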

#12 Daerax   Members   -  Reputation: 1207


Posted 22 June 2011 - 09:05 AM


- Black and White (some type of learning?)
- Creatures (not sure ...)


Black & White simply used reinforcement learning. All that does is tweak weight coefficients.
Creatures used a GA for the actual genetics (so not really a GA per se... just a gene sequence that was combined and mutated from parents). They also used a form of NN for the learning, iirc. Not really typical gameplay, however. Again, both of these were, as Alvaro said, the algorithms looking for problems and finding them.




Don't you think it's a bit disingenuous to describe reinforcement learning as just tweaking weight coefficients? Most of machine learning can be described that way.

#13 Emergent   Members   -  Reputation: 971


Posted 22 June 2011 - 12:05 PM

stochastic gradient descent is better. Convex loss is overrated you know.


Stochastic gradient descent is better than what, for what?

And what's wrong with convex loss functions? Are you coming from a robust-statistics point of view? I.e., you want to stop penalizing errors as they get very large, as a way to ignore outliers? That makes sense, and it can be applied to ANNs, but it is not part of what is typically done. ANNs typically use a loss function that's nonconvex in the weights but just quadratic in the data. In a way, it's the worst of both worlds: you don't get convexity in the weights, so you can't find globally optimal ones, and at the same time you don't get outlier rejection. That's why I ask, "what's the point?"

As for loss functions, some of the more robust ones are still convex, e.g. Huber's. Tukey's isn't, but at least it is quasiconvex (its sublevel sets are convex).

Back to the subject of stochastic optimizers, I kind of like the cross-entropy method nowadays.
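For anyone unfamiliar with the cross-entropy method mentioned above, the idea fits in a few lines: sample candidates from a distribution, keep the elite fraction, refit the distribution to the elites, and repeat. A toy one-dimensional sketch (all parameter values are illustrative):

```python
import random
import statistics

# Cross-entropy method for minimization with a 1-D Gaussian sampling
# distribution. Toy objective: f(x) = (x - 3)^2.
def cross_entropy_minimize(f, mu=0.0, sigma=5.0, n_samples=50,
                           elite_frac=0.2, iterations=40):
    n_elite = max(2, int(n_samples * elite_frac))
    for _ in range(iterations):
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        samples.sort(key=f)                  # lowest loss first
        elites = samples[:n_elite]
        mu = statistics.mean(elites)         # refit Gaussian to the elites
        sigma = statistics.pstdev(elites) + 1e-9  # avoid total collapse
    return mu

random.seed(0)  # fixed seed so the toy run is reproducible
x_star = cross_entropy_minimize(lambda x: (x - 3.0) ** 2)
```

Like a GA, it only needs function evaluations (no gradients), but the update rule is a distribution refit rather than mutation and crossover.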

#14 IADaveMark   Moderators   -  Reputation: 2531


Posted 22 June 2011 - 05:46 PM

Don't you think it's a bit disingenuous to describe reinforcement learning as just tweaking weight coefficients? Most of machine learning can be described that way.

I was a bit brief in the post... what I meant is that those are explicit knobs for each item and decision, not a bunch of anonymously vague perceptrons.




#15 braincell   Members   -  Reputation: 115


Posted 05 July 2011 - 11:12 AM

I see GAs as very useful when used appropriately in turn-based strategy games. They can be coded so as not to require much computing power at all. Depending on the goals, you can set several extreme cases and then fine-tune the outcomes between them by a form of subdivision, as I call it. This greatly reduces the amount of computation, depending on how many parameters are tied together and how they influence each other. I'm currently experimenting with this, and I think it will be implemented with success. It will also allow the AI of my game to play any mod thrown at it, which is a bonus.

#16 LorenzoGatti   Crossbones+   -  Reputation: 2761


Posted 08 July 2011 - 08:38 AM

I see GAs as very useful when used appropriately in turn-based strategy games. They can be coded so as not to require much computing power at all. Depending on the goals, you can set several extreme cases and then fine-tune the outcomes between them by a form of subdivision, as I call it. This greatly reduces the amount of computation, depending on how many parameters are tied together and how they influence each other. I'm currently experimenting with this, and I think it will be implemented with success. It will also allow the AI of my game to play any mod thrown at it, which is a bonus.

What do you tweak to make the GA evolve an interesting, fun opponent? How strong are your evolved opponents at this time? (I'm not sure evolving them sufficiently is going to be computationally cheap.)
Produci, consuma, crepa ("Produce, consume, die")

#17 IADaveMark   Moderators   -  Reputation: 2531


Posted 08 July 2011 - 11:27 AM

I see GAs as very useful when used appropriately in turn-based strategy games. They can be coded so as not to require much computing power at all. Depending on the goals, you can set several extreme cases and then fine-tune the outcomes between them by a form of subdivision, as I call it. This greatly reduces the amount of computation, depending on how many parameters are tied together and how they influence each other. I'm currently experimenting with this, and I think it will be implemented with success. It will also allow the AI of my game to play any mod thrown at it, which is a bonus.

I would strongly disagree on a number of points. TBS games are largely about choosing the best thing to do at a given time. If the game system is based on numbers (and it almost has to be), then writing functions to process those numbers is the way to go. Each decision needs to be broken down into component pieces and a formula constructed to analyze its worth relative to other potential actions. These functions will likely all have coefficients and variables. The variables are taken from the game state; the coefficients are authored based on the nature of the decision being made.

The kicker here is that, with a GA, you are still writing the functions but you are absolving yourself of the responsibility of determining the coefficients. Instead, you are letting some sort of "magic process" come up with them for you. This process can really only be done pre-ship. Once the formulas are set, the only things that change are the variables (as the game state changes). There is nothing to "evolve" any further at that point. Optimal is optimal. If you design your formulas correctly, the "best thing" to do at any one time will fall out of the calculations that you wrote.
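A minimal sketch of the authored-utility approach described above (the action names, state variables, and coefficient values are all hypothetical): each candidate action gets a hand-written scoring formula whose variables come from the game state and whose coefficients are authored by the designer, and the AI simply takes the highest-scoring action.

```python
# Authored utility scoring for a TBS AI: coefficients are designer-chosen,
# variables come from the current game state.

def score_build_farm(state, food_weight=2.0, gold_weight=0.5):
    return food_weight * state["food_need"] + gold_weight * state["gold_need"]

def score_build_market(state, food_weight=0.3, gold_weight=2.5):
    return food_weight * state["food_need"] + gold_weight * state["gold_need"]

ACTIONS = {
    "build_farm": score_build_farm,
    "build_market": score_build_market,
}

def best_action(state):
    # Evaluate every candidate action against the current state and
    # pick the one with the highest authored-utility score.
    return max(ACTIONS, key=lambda name: ACTIONS[name](state))

state = {"food_need": 0.9, "gold_need": 0.2}  # hungry empire, gold is fine
choice = best_action(state)
```

In Dave Mark's framing, a GA would only be tuning `food_weight` and `gold_weight` pre-ship; at runtime nothing is left to evolve, only the state variables change.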


Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#18 braincell   Members   -  Reputation: 115


Posted 13 July 2011 - 10:03 AM

What do you tweak to make the GA evolve an interesting, fun opponent? How strong are your evolved opponents at this time? (I'm not sure evolving them sufficiently is going to be computationally cheap.)


I have two fairly separate "games" within one game: economy and warfare. For the economy, we first determine which of several mid-term and long-term goals are most profitable for reaching an overall victory, or as close to it as possible. This can be, for example, maximizing research, culture, or income. We then simulate turns for the mid and long term as if we were focusing primarily on one of these goals. For research, say, we only build research improvements, fund new techs, etc., and forget about the economy. This results in an overreaction, and thus we see which city would suffer the most or profit the most by focusing on something else. The coefficients are thus how many cities can afford to focus on different aspects of the economy. We simulate turns in advance (the assumption being that warfare won't influence the outcome much, and if it will, it's compensated for) and then try out the "best fit" for the number of cities that would focus on each aspect (e.g. research vs. income) to get the best overall results. This relies on the fact that each of these items can be quantified, as each city can build something different, and they can trade at the empire level. Then, after finding the extreme and mid-level scenarios for research (for example), we fine-tune by experimenting with policy changes, diplomatic influences, etc., trying to predict opponents' intentions in some way. These factors compensate and create a list of probabilities and future reactions to possible scenarios. These are recalculated every 10-20 turns, and the mid-term and long-term policies adjusted accordingly.

In terms of warfare, we create virtual units, assuming the enemy has placed them, and then simulate possible outcomes depending on how the units are arranged (who is in front, who is in back, support units, ranged units, etc.). We simulate potential front lines developing and how to respond to various concentrations of forces in several scenarios, but not too many. The outcome is often a noisy affair, with a wide variety of probabilities, but then the risk-taking of the AI kicks in, just as a human player's would, and it tries its best without getting fixated on any particular goal.

@IADaveMark

Yes, we have the ability for each mod to be "learnt" by the AI. This is when the coefficients are found, and it's a somewhat computationally intensive process, but not as much as we expected. It still only gives guidelines for the overall game, such as build queues and the overall long-term cost of any type of victory (some mods can lose balance, and this is a good way to check whether the mod itself is balanced). During gameplay, we still let the AI simulate turns in advance for the economy, where the assumption is that no external influence will be too great, but all such influences are taken into account as probabilities (trade, diplomacy, war). Each simulated turn takes into account the pre-made coefficients and tries things out depending on the current state of the empire and available resources. It doesn't test too many scenarios, because it has the coefficients and "kind of" knows which way to go to achieve the specified mid-term and long-term goals. There is still some testing done, however, as to the order in which things should be built and so on, but these are all simple operations and the amount of data is kept to a minimum because of the aforementioned assumptions. It's very game-specific, and I'm not sure it has been done in exactly this way before, or that I'm explaining myself all that well for that matter; I'm kind of tired at the moment.



