Genetic algorithm


I had a thought about the AI in my game. It occurred to me that I might be able to leverage the multiple (human) players to help the AI improve, and also to create opponents of varying styles and ability.

A couple of my routines take a number of parameters to find the "best" positions for attacking (hand-to-hand), for defense, and for missile fire. Right now these are set to somewhat arbitrary, hand-tuned values, and I have no idea whether the strategy could be improved. Those parameters could serve as the values in a gene, and a genetic algorithm could search for better ones through cross-over and mutation. There would be about 20 values, so the gene wouldn't be too complex.

I suppose I should seed the parameter values and the "fitness" in some way before using the actual game to create and modify them and weed out the less-fit genes, but I would like the game to learn from the many human players who would be playing. Does this sound practical, or even possible? Has anyone (here) attempted to use a GA to modify a live, running game? Do you have any advice for me?
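For what it's worth, the gene/cross-over/mutation machinery described above is quite small. Here is a minimal sketch in Python; the gene length, population size, value ranges, and mutation rate are all hypothetical placeholders, not values from the game:

```python
# Minimal GA sketch: a gene is a flat list of tuning parameters,
# evolved by single-point crossover and Gaussian mutation.
import random

GENE_LENGTH = 20     # one slot per tuning parameter (placeholder)
POP_SIZE = 16        # placeholder population size
MUTATION_RATE = 0.1  # per-value chance of being perturbed

def random_gene():
    """A fresh gene with every parameter drawn uniformly from [0, 1]."""
    return [random.uniform(0.0, 1.0) for _ in range(GENE_LENGTH)]

def crossover(a, b):
    """Single-point crossover: child takes a prefix from one parent,
    the remainder from the other."""
    point = random.randrange(1, GENE_LENGTH)
    return a[:point] + b[point:]

def mutate(gene):
    """Perturb each value with small probability."""
    return [g + random.gauss(0, 0.05) if random.random() < MUTATION_RATE else g
            for g in gene]

def evolve(population, fitness):
    """One generation: keep the fitter half, refill by breeding survivors."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    return survivors + children
```

Because the top half of each generation survives unchanged, the best fitness in the population never decreases from one generation to the next. The hard part, as the replies below note, is not this loop but the fitness function.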

The major problem I see with this is that a GA would require a significant amount of time spent in the game to evaluate any particular set of parameters. As a result, you'll need a lot of human players playing a lot of games to get to the point where a GA will produce a reasonable set of parameters.

-Kirk

This sounds like an interesting approach to auto-adjusting difficulty. As Kirk said, it would take a while for the AI to evolve to the point where it is challenging, but in theory it could eventually match the player in skill. It would still fall short in robust contextual decision-making, which is where game AI really falters.

One would expect that with about 20 dimensions the objective function would be highly multimodal (i.e., many different locally optimal points in the 'AI attribute space'). It's also likely that a lower dimensional subspace conveys all of the important performance information for the problem.

Ideally, you don't want your game AI to run optimally on a given problem. What you want is for it to be challenging to the player. That means that as the player builds skill so should the AI, but ultimately, the AI should lose more often than it wins, lest you frustrate the player too much.

Cheers,

Timkin

Quote:
Original post by ID Merlin
I had a thought about the AI in my game. It occurred to me that I might be able to leverage the multiple (human) players to help the AI improve (and also create opponents of varying styles and ability).
A cautionary tale:

Space Empires V has a mechanism that uses the player to classify their own starships for targeting by the AI. A text box for labeling a ship's Type was converted into a drop-down list, selected by the player and then used by the AI to identify ships.

Of course it isn't in the player's best interest to share this information with the enemy. So the result is that the player builds fleets of harmless interstellar mail couriers, but with big guns, fighter escorts, and battalions of ground-invasion security personnel.

Moral of the story: Enemies are enemies.

A similar thing happened with Master of Orion III. Ships were arbitrarily categorized as Core, Escort, or Picket, and the game enforced how many of each type went into a fleet. So the player just made three copies of the same ship under different categories.

[Edited by - AngleWyrm on May 27, 2008 1:56:04 AM]

Thanks for all the input. I don't think that relying on the live game will give me what I want, exactly; the GA would probably still be tuning itself in the year 2030 if I tried to use human players as the "fitness" function.

I may try just creating a small subset of strategies and logging the results for a while, perhaps using those results to hand-tune the parameters before I try to actively tune the strategy with a GA. I will probably also look at building a system where my local computer plays both opponents itself, which should generate data thousands of times faster than human games.
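That self-play idea can be sketched in a few lines. Here `play_game()` is a hypothetical stand-in for running one full AI-vs-AI match (the real version would launch the game with the two parameter sets); this stub just flips a biased coin so the sketch is runnable:

```python
# Estimating a parameter set's fitness by automated self-play,
# rather than waiting on human games.
import random

def play_game(params_a, params_b):
    """Stand-in for a full match: pretend the side with the larger
    parameter sum wins more often. Returns True if side A wins."""
    bias = 0.5 + 0.1 * (sum(params_a) - sum(params_b))
    return random.random() < min(max(bias, 0.0), 1.0)

def estimate_fitness(params, baseline, n_games=200):
    """Win rate of `params` against a fixed baseline over n_games matches."""
    wins = sum(play_game(params, baseline) for _ in range(n_games))
    return wins / n_games
```

With a loop like this, each candidate gene can be scored in seconds instead of weeks, and the human-game logs can still be used afterwards to sanity-check whatever parameters the self-play tuning converges on.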

As an aside, one advantage of keeping performance measures on each strategy is that I could serve less effective strategies to players who are having a hard time beating the computer. (Many players are already losing, even against an enemy at 75% of the player's strength.)

