Storyyeller

when are genetic algorithms useful?

17 posts in this topic

I've seen hype about genetic algorithms, but no actual examples of problems where they are useful. Based on reading the threads here, it looks like the requirements for a GA to be useful are:

* The problem is noisy and complex enough to make other algorithms fail
* The problem is understood well enough to determine good mutation and crossover operators

But it seems to me that these requirements are somewhat contradictory. If you know a lot about a problem, surely there's a better method than randomly moving a population of search points around and praying for emergent behavior? Are there any concrete examples where genetic algorithms are actually useful?
[quote name='Storyyeller' timestamp='1308500766' post='4825110']
I've seen hype about genetic algorithms, but no actual examples of problems where they are useful. Based on reading the threads here, it looks like the requirements for a GA to be useful are:

* The problem is noisy and complex enough to make other algorithms fail
* The problem is understood well enough to determine good mutation and crossover operators

But it seems to me that these requirements are somewhat contradictory. If you know a lot about a problem, surely there's a better method than randomly moving a population of search points around and praying for emergent behavior? Are there any concrete examples where genetic algorithms are actually useful?
[/quote]

In general there are better solutions; GAs, however, are easy to implement and can be used to optimize almost any process or design (if you can assign a relative fitness to the result, a GA will work reasonably well). They basically trade the need for human expertise for a need for computing power.
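For what it's worth, the whole recipe really does fit in a few dozen lines. Here's a minimal sketch in Python against the toy "OneMax" problem (evolve a bit string of all ones); the operators, population size, and mutation rate are illustrative choices, not tuned values:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

N_BITS = 20

def fitness(ind):
    # Toy "OneMax" fitness: count of 1-bits; higher is better
    return sum(ind)

def mutate(ind, rate=0.05):
    # Flip each bit independently with a small probability
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

def crossover(a, b):
    # Single-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == N_BITS:
        break  # found the all-ones string
    parents = pop[:10]  # truncation selection: keep the fittest third
    pop = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

best = max(pop, key=fitness)
```

All the problem-specific knowledge lives in `fitness`, `mutate`, and `crossover`; swap those three and the rest of the loop is reusable.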

Making machines do "creative" work is interesting, and even if they suck at it they will work 24/7 without complaining, they never ask for a raise (unless the cost of power goes up), and you can chain them up in the basement without having the authorities come after you. Does it really matter if it's practical?

That said, there are tons of papers around discussing practical applications of GAs that you could take a look at, and they have been used in the real world in fields like medical research.
I feel that GAs are a solution in search of a problem. Just like Artificial Neural Networks, they have a very sexy name and they promise a solution to almost anything, so every generation will find them fascinating for a little while. But then you find that for most problems there are better ways to do things.

In particular these two techniques have virtually no application to AI in games.
I agree with alvaro. I've taken a class on GAs at uni and can say that they aren't very useful (IMHO).
All the toy problems we saw were solvable (sometimes barely) with a GA, but I don't see a reason to use one rather than a hand-crafted algorithm.
They are easy to implement and most of the code is reusable (you only need to change a few operators), so for a simple problem they might be a good choice if you've already got a framework running and don't have the time to invest in an ad hoc algorithm. On the other hand, fine-tuning all the parameters can take more time than actually writing the whole thing.
In general the idea is nice, but I personally haven't come across a problem where a GA was a good solution.
I've seen an AI player for the board game Puerto Rico (written in Excel, no less) that uses a GA and is almost unbeatable. It can be found on the Puerto Rico page of BoardGameGeek.
There are many applications of GAs and ANNs to real-world physics, engineering, and computing problems.

http://en.wikipedia.org/wiki/List_of_genetic_algorithm_applications
http://en.wikipedia.org/wiki/Neural_network#Applications_of_natural_and_of_artificial_neural_networks

Most of them have nothing to do with game AI. There are a few games that use them though:

- Forza (ANN drivers)
- Colin McRae (ANN drivers)
- Black and White (some type of learning?)
- Creatures (not sure ...)
The reason for using a GA to train an ANN is that multilayer ANNs are hideous. The mapping from weights to training error is an ugly nonconvex mess. Since there's not much else you can do, you throw a sample-based optimizer like a GA at it.

But remember that the [i]reason[/i] you were stuck using a GA was your choice of ANNs in the first place. Had you just stuck with a linear architecture -- a sum of basis functions -- you'd have gotten a standard linear least squares problem that you could have solved quickly with QR factorization or the conjugate gradient method. So why didn't you just choose that from the outset?

Good question.
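To make the comparison concrete, here's roughly what the linear alternative looks like: a sum of polynomial basis functions fit by ordinary least squares. The target function and noise level below are made up for illustration:

```python
import numpy as np

# Recover y = 2 + 3x - x^2 from noisy samples with a model that is linear
# in its weights: a sum of polynomial basis functions. The squared loss is
# convex in the weights, so np.linalg.lstsq (QR/SVD under the hood) finds
# the globally optimal weights directly -- no population, no generations.
rng = np.random.default_rng(42)
x = rng.uniform(-2, 2, 200)
y = 2 + 3 * x - x**2 + rng.normal(0, 0.1, 200)

Phi = np.column_stack([np.ones_like(x), x, x**2])  # design matrix of basis functions
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # w ≈ [2, 3, -1]
```

The whole "training run" is one factorization, and it is guaranteed to find the best weights for this architecture.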
[quote name='Storyyeller' timestamp='1308500766' post='4825110']
But it seems to me that these requirements are somewhat contradictory. If you know a lot about a problem, surely there's a better method than randomly moving a population of search points around and praying for emergent behavior? Are there any concrete examples where genetic algorithms are actually useful?
[/quote]

Alvaro kind of summed it up. I've used GAs before, with some success, but there have always been better alternatives and I was just 'playing'.

Their strength is that they are not problem specific, and in theory could be used to 'solve' just about any problem given enough computing time.

In reality this is mostly pointless. GAs and ANNs were dreamt up in a time when processing power (and raw data) was very limited. These days it's feasible to do a somewhat exhaustive search of a space using millions of data samples, so...

As far as games go, I can think of a few uses, but so far haven't seen anyone use them.
[quote name='EJH' timestamp='1308591870' post='4825567']
- Black and White (some type of learning?)
- Creatures (not sure ...)
[/quote]

Black & White simply used reinforcement learning. All that does is tweak weight coefficients.
Creatures used a GA for the actual genetics (so not really a GA per se... just a gene sequence that was combined and mutated from parents). They also used a form of NN for the learning, iirc. Not really typical gameplay, however. Again, both of these were, as Alvaro said, the algorithms looking for problems and finding them.


[quote name='Emergent' timestamp='1308597009' post='4825609']
The reason for using a GA to train an ANN is that multilayer ANNs are hideous. The mapping from weights to training error is an ugly nonconvex mess. Since there's not much else you can do, you throw a sample-based optimizer like a GA at it.

But remember that the [i]reason[/i] you were stuck using a GA was your choice of ANNs in the first place. Had you just stuck with a linear architecture -- a sum of basis functions -- you'd have gotten a standard linear least squares problem that you could have solved quickly with QR factorization or the conjugate gradient method. So why didn't you just choose that from the outset?

Good question.
[/quote]

Stochastic gradient descent is better. Convex loss is overrated, you know.
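For context, here's a bare-bones version of what stochastic gradient descent means here: one noisy gradient step per sample. The problem, learning rate, and seed are invented for illustration (and note the objective happens to be convex anyway):

```python
import numpy as np

# SGD on a linear least-squares objective: for each sample (xi, yi),
# take one gradient step on that sample's squared error alone.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -0.5])
X = rng.normal(size=(1000, 2))
y = X @ true_w + rng.normal(0, 0.01, size=1000)

w = np.zeros(2)
lr = 0.05
for xi, yi in zip(X, y):
    grad = 2 * (xi @ w - yi) * xi  # gradient of (xi.w - yi)^2 w.r.t. w
    w -= lr * grad
```

The update rule itself never looks at the whole dataset and makes no convexity assumptions, which is why the same loop gets thrown at nonconvex ANN losses.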
[quote name='IADaveMark' timestamp='1308608541' post='4825692']
[quote name='EJH' timestamp='1308591870' post='4825567']
- Black and White (some type of learning?)
- Creatures (not sure ...)
[/quote]

Black & White simply used reinforcement learning. All that does is tweak weight coefficients.
Creatures used a GA for the actual genetics (so not really a GA per se... just a gene sequence that was combined and mutated from parents). They also used a form of NN for the learning, iirc. Not really typical gameplay, however. Again, both of these were, as Alvaro said, the algorithms looking for problems and finding them.



[/quote]

Don't you think it's a bit disingenuous to describe reinforcement learning as just tweaking weight coefficients? Most of machine learning can be described that way.
[quote name='Daerax' timestamp='1308754999' post='4826452']
Stochastic gradient descent is better. Convex loss is overrated, you know.
[/quote]

Stochastic gradient descent is better than what, for what?

And what's wrong with convex loss functions? Are you coming from a robust statistics point of view? I.e., you want to stop penalizing errors as they get very large as a way to ignore outliers? That makes sense, and it can be applied to ANNs, but it is not a part of what is typically done. ANNs typically use a loss function that's nonconvex in the weights but just quadratic in the data. In a way, it's the worst of both worlds. You don't get convexity in the weights so you can't find globally-optimal ones, and at the same time you don't get outlier rejection. That's why I ask, "what's the point?"

As for loss functions, some more robust ones are still convex, e.g. Huber's. Tukey's isn't, but at least it is quasiconvex (its sublevel sets are convex).
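For reference, Huber's loss is easy to write down; the delta threshold of 1.0 below is just a common default, not anything canonical:

```python
import numpy as np

# Huber's loss: quadratic for |r| <= delta, linear beyond, and still convex,
# so large residuals (outliers) are penalized less harshly than under
# squared error without giving up global optimality.
def huber(r, delta=1.0):
    r = np.asarray(r, dtype=float)
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))
```

The two pieces meet with matching value and slope at |r| = delta, which is what keeps the function smooth enough to optimize with gradient methods.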

Back to the subject of stochastic optimizers, I kind of like the cross-entropy method nowadays.
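For the curious, a quick sketch of the cross-entropy method on a toy problem (the objective, sample counts, and elite fraction are all illustrative): sample candidates from a Gaussian, keep the elite fraction, refit the Gaussian to the elites, repeat.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Toy 2-D quadratic with its maximum at x = (3, 3)
    return -np.sum((x - 3.0) ** 2, axis=1)

mu, sigma = np.zeros(2), np.full(2, 2.0)   # initial sampling distribution
for _ in range(50):
    samples = rng.normal(mu, sigma, size=(100, 2))
    elite = samples[np.argsort(objective(samples))[-10:]]  # keep top 10%
    mu = elite.mean(axis=0)                 # refit the Gaussian to the elites
    sigma = elite.std(axis=0) + 1e-6        # small floor to avoid collapse
```

The distribution contracts around the optimum as it iterates, which gives it a similar black-box flavor to a GA but with far fewer knobs.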
[quote name='Daerax' timestamp='1308755134' post='4826456']
Don't you think it's a bit disingenuous to describe reinforcement learning as just tweaking weight coefficients? Most of machine learning can be described that way.
[/quote]
I was a bit brief in the post... what I meant is that these are explicit knobs for each item and decision, not a bunch of anonymously vague perceptrons.


I see GAs as very useful when used appropriately in turn-based strategy games. They can be coded so as to not require much computing power at all. Depending on the goals, you can set several extreme cases and then fine-tune the outcomes between them by what I call a form of subdivision. This drastically reduces the amount of computation, depending on how many parameters are tied together and how they influence each other. I'm currently experimenting with this, and I think it will be implemented with success. It will also allow the AI of my game to play any mod thrown at it, which is a bonus.
[quote name='braincell' timestamp='1309885946' post='4831427']
I see GAs as very useful when used appropriately in turn-based strategy games. They can be coded so as to not require much computing power at all. Depending on the goals, you can set several extreme cases and then fine-tune the outcomes between them by what I call a form of subdivision. This drastically reduces the amount of computation, depending on how many parameters are tied together and how they influence each other. I'm currently experimenting with this, and I think it will be implemented with success. It will also allow the AI of my game to play any mod thrown at it, which is a bonus.
[/quote]
What do you tweak to make the GA evolve an interesting, fun opponent? How strong are your evolved opponents at this time? (I'm not sure evolving them sufficiently is going to be computationally cheap.)
[quote name='braincell' timestamp='1309885946' post='4831427']
I see GAs as very useful when used appropriately in turn-based strategy games. They can be coded so as to not require much computing power at all. Depending on the goals, you can set several extreme cases and then fine-tune the outcomes between them by what I call a form of subdivision. This drastically reduces the amount of computation, depending on how many parameters are tied together and how they influence each other. I'm currently experimenting with this, and I think it will be implemented with success. It will also allow the AI of my game to play any mod thrown at it, which is a bonus.
[/quote]
I would strongly disagree on a number of points. TBS games are largely about choosing the best thing to do at a given time. If the game system is based on numbers (and it almost has to be), then writing functions to process those numbers is the way to go. Each decision needs to be broken down into component pieces and a formula constructed to analyze its worth relative to other potential actions. These functions will likely all have coefficients and variables. The variables are taken from the game state. The coefficients are authored based on the nature of the decision being made.

The kicker here is that, with a GA, you are writing the functions but you are excusing yourself of the responsibility of determining coefficients. Instead, you are letting some sort of "magic process" come up with them for you. This process can really only be done pre-ship. Once the formulas are set, the only thing that is changing is the variables (as the game state changes). There is nothing to "evolve" any further at that point. Optimal is optimal. If you design your formulas correctly, the "best thing" to do at any one time will fall out of the calculations that you wrote.
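As a sketch of this style of decision scoring — hand-authored formulas over game-state variables with designer-chosen coefficients — here's a minimal example; all the action names, state variables, and coefficient values are hypothetical, not from any real game:

```python
# Each action's worth is a formula over game-state variables; the
# coefficients (1.2, 0.4, ...) are the authored part a GA would otherwise
# be asked to "discover" pre-ship.
def score_build_farm(state):
    return 1.2 * state["food_shortage"] - 0.4 * state["farms"]

def score_build_barracks(state):
    return 0.9 * state["threat"] - 0.6 * state["army_size"]

def best_action(state):
    # The "best thing to do" falls out of comparing the scores
    scores = {
        "build_farm": score_build_farm(state),
        "build_barracks": score_build_barracks(state),
    }
    return max(scores, key=scores.get)

# The variables come from the current game state; only they change at runtime
state = {"food_shortage": 3, "farms": 2, "threat": 5, "army_size": 1}
```

Once the coefficients are fixed, nothing here "evolves": the same formulas simply re-evaluate as the state variables change each turn.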

[quote name='LorenzoGatti' timestamp='1310135892' post='4832793']
What do you tweak to make the GA evolve an interesting, fun opponent? How strong are your evolved opponents at this time? (I'm not sure evolving them sufficiently is going to be computationally cheap.)
[/quote]

I have two fairly separate "games" within one game: economy and warfare. For the economy, for example, we first determine which of the several mid-term and long-term goals are most profitable for an overall victory, or as close to one as possible. This can be, for example, maximising research or culture or income, etc. We then simulate mid- and long-term turns as if we were primarily focusing on one of these goals. For research, for example, we only build research improvements, fund new techs, etc., and forget about the economy. This results in an overreaction, and thus we see which city would suffer the most or profit the most by focusing on something else. The coefficients are thus how many cities can afford to focus on different aspects of the economy.

We simulate turns in advance (the assumption is that warfare won't influence the outcome much, and if it will, it's compensated for) and then try out the "best fit" for the number of cities that would focus on each aspect (e.g. research vs. income) to get the best overall results. This relies on the fact that each of these items can be quantified, as each city can build something else, and they can trade at the empire level. Then, after finding the extreme and mid-level scenarios for research (for example), we fine-tune by experimenting with policy changes and diplomatic influences, trying to predict opponents' intentions in some way, and so on. These factors compensate for one another and create a list of probabilities and future reactions to possible scenarios. These are recalculated every 10-20 turns, and the mid-term and long-term policies adjusted accordingly.

In terms of warfare, we create virtual units, assuming the enemy has placed them, and then simulate possible outcomes depending on how the units are arranged (who is in front, who is in back, support units, ranged units, etc.). We simulate potential front lines developing and how to respond to various concentrations of forces in several scenarios, but not too many. The outcome is often a noisy affair with a wide variety of probabilities, but then the risk-taking of the AI kicks in, just like a human player's would, and it tries its best without getting fixated on any particular goal.

@IADaveMark

Yes, we have the ability for each mod to be "learnt" by the AI. This is when the coefficients are found, and it's a somewhat computationally intensive process, but not as much as we expected. It still only gives guidelines for the overall game, such as build queues and the overall long-term cost of any type of victory (some mods can lose balance, and this is a good way to check whether the mod itself is balanced), etc. During gameplay, we still let the AI simulate turns in advance for the economy, where the assumption is that no external influence will be too great - but all of them are taken into account as probabilities (trade, diplomacy, war). Each simulated turn takes into account the pre-made coefficients and tries things out depending on the current state of the empire and available resources. It doesn't test too many scenarios because it has the coefficients and "kind of" knows which way to go in order to achieve the specified mid-term and long-term goals. There is still some testing done, however, as to the order in which things should be built and so on, but these are all simple operations, and the amount of data is kept to a minimum because of the aforementioned assumptions. It's very game-specific and I'm not sure if it has been done in exactly this way before - or that I'm explaining myself all that well, for that matter; I'm kind of tired at the moment.