Game of The Generals AI

Quote:Original post by swiftcoder
Quote:Original post by alvaro
How hard have you tried to find out on your own what it is?
It is one of those things that is surprisingly hard to google for - only one relevant entry on the first page [smile]

It gets far better if you search for "UCB1 monte carlo".

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

My AI just started making smart moves (e.g. protecting the flag).
Quote: Original post by mudslinger
My AI just started making smart moves (e.g. protecting the flag).

Did you do it similarly to what I suggested? Can you describe what you've done so far?
It's exactly the algorithm you suggested. The playout is purely random, and I make some game configurations more likely based on the history of the game (though not with Bayes' formula).

It's terribly inefficient. With 5,000 playouts, it takes a few minutes to run. I think I can speed it up.
Quote: Original post by mudslinger
It's exactly the algorithm you suggested. The playout is purely random, and I make some game configurations more likely based on the history of the game (though not with Bayes' formula).

It's terribly inefficient. With 5,000 playouts, it takes a few minutes to run. I think I can speed it up.

Yes, performance tends to be an issue with this approach. Time to optimize the code. There are also sometimes simple modifications to the playout policy that will make the playouts shorter or faster to evaluate, without biasing the results too badly.
Going back to Bayes, I would think that you could profile all the pieces based on a sliding belief of what you think they might be. Obviously, everything is equal at the beginning. Once you know what something is, it not only affects that piece but the others as well, since the type of piece that is now known is taken off the potential list for the others. Over time, you will narrow down what you think things might be. Additionally, you can nudge things based on the other player's behavior. If he starts moving a piece aggressively toward one of your pieces that HE knows, you can assume that it is a higher-ranking piece and slide your assumptions accordingly.
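A rough sketch of that bookkeeping might look like this (the names and structure are just illustrative; this version resets to plain pool proportions on every reveal, so behavioral nudges would have to be layered on top):

# Sliding belief over the unknown enemy pieces.
# pool maps piece type -> how many of that type are still unaccounted for;
# every unknown piece starts with the same distribution, proportional to the pool.

def pool_beliefs(unknown_piece_ids, pool):
    total = sum(pool.values())
    return {pid: {t: n / total for t, n in pool.items() if n > 0}
            for pid in unknown_piece_ids}

def reveal(pool, unknown_piece_ids, piece_id, revealed_type):
    """A piece becomes known: it leaves the unknown set, its type leaves the pool,
    and the odds shift slightly for every piece that is still unknown."""
    new_pool = dict(pool)
    new_pool[revealed_type] -= 1
    remaining = [pid for pid in unknown_piece_ids if pid != piece_id]
    return new_pool, remaining, pool_beliefs(remaining, new_pool)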

For strategy, you can then work out a shallow-depth min-max that is optimized using utility theory. That is, what are you going to gain/lose over the next few moves. This can support a higher-level strategy such as attrition, a directed attack at a place where you believe the enemy's flag to be, or defending a high-value area. There are other playbook entries that can be devised such as making a hole for a spy using other higher-ranking pieces, etc.
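As a toy illustration of the utility side (again, only a sketch; beats, value and the belief table are assumed inputs, not anything defined in this thread), the one-ply expected gain of challenging an unknown piece could be computed like this:

# One-ply expected material swing of challenging one unknown enemy piece. Assumed inputs:
#   beliefs - dict mapping enemy piece type -> probability for the target piece
#   beats   - predicate beats(attacker_type, defender_type) for this game's rules
#   value   - dict mapping piece type -> material worth chosen by the designer

def expected_gain(my_type, beliefs, beats, value):
    gain = 0.0
    for enemy_type, p in beliefs.items():
        if enemy_type == my_type:
            continue                                  # even trade: both pieces removed
        if beats(my_type, enemy_type):
            gain += p * value[enemy_type]             # we capture it
        else:
            gain -= p * value[my_type]                # we lose our piece
    return gain

A shallow min-max can then use sums of these expected gains as its leaf utility, which is what makes the "what do I gain/lose over the next few moves" question concrete.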

That said, I don't think Monte Carlo is even necessary.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Quote:Original post by InnocuousFox
That said, I don't think Monte Carlo is even necessary.


This is the algorithm I might try next:

For each move M {
    Do 1,000 (or more) times {
        Generate a plausible configuration for the pieces of the opponent using Bayes.
        Run minimax on M
        Count points for possible gains and losses for move M
    }
    Sum points for move M
}
Pick the move with most points
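In rough Python that loop might look like this (sample_configuration and minimax_value are placeholders for the parts that would still have to be written):

# Sketch of the sampling loop above. sample_configuration should draw a plausible
# hidden setup for the opponent from the current beliefs; minimax_value should run
# a shallow minimax on the now fully-visible position and score the candidate move.

def choose_move(legal_moves, sample_configuration, minimax_value, samples=1000):
    best_move, best_score = None, float("-inf")
    for move in legal_moves:
        total = 0.0
        for _ in range(samples):
            hidden_setup = sample_configuration()        # determinize the opponent
            total += minimax_value(move, hidden_setup)   # gains/losses for this move
        if total > best_score:
            best_move, best_score = move, total
    return best_move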


Thanks to everyone who's been replying.
Hello again,

I'm kinda lost with probabilities. I don't know how to adjust the chances of a piece being a certain piece using Bayes'. Here are the simplified rules of the game:
Each player has:

amount  type
1       flag
6       privates
1       rank 1
1       rank 2
1       rank 3
1       rank 4
1       rank 5
1       rank 6
1       rank 7
1       rank 8
1       rank 9
1       rank 10
1       rank 11
1       rank 12
2       spies
---
21 total

(1) Higher rank takes out lower rank, privates and flags
(2) Spies take out all ranks and flags, except privates
(3) Privates take out spies and flag



This means every piece has a 1/21 chance of being a flag, 6/21 of being a private, 1/21 of being rank 1, ..., 2/21 chance of being a spy.
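For concreteness, the pool and the three rules above could be written down like this (a sketch; the names and the rank encoding are mine, with 1 the strongest rank to match the N < M convention in the questions below):

# Starting pool: 21 pieces per player, as listed above.
STARTING_POOL = {"flag": 1, "private": 6, "spy": 2}
STARTING_POOL.update({f"rank{r}": 1 for r in range(1, 13)})

def beats(attacker, defender):
    """True if attacker removes defender; equal pieces return False (both are taken out)."""
    if attacker == defender:
        return False
    if attacker == "private":
        return defender in ("spy", "flag")            # rule (3)
    if attacker == "spy":
        return defender != "private"                  # rule (2)
    if attacker == "flag":
        return False                                  # assumption: the flag captures nothing here
    if defender == "spy":
        return False                                  # only privates take out spies
    if defender in ("private", "flag"):
        return True                                   # rule (1)
    return int(attacker[4:]) < int(defender[4:])      # lower number = higher rank

# Prior for any single unknown enemy piece, exactly the 1/21, 6/21, ... above:
TOTAL = sum(STARTING_POOL.values())
PRIOR = {t: n / TOTAL for t, n in STARTING_POOL.items()}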

(Note: I'm speaking from the perspective of the AI.)
My questions are:

(1) How do I adjust piece probabilities if my piece Rank N takes out an enemy piece with Rank M? (N < M, N is known)
(2) How do I adjust piece probabilities if an enemy piece Rank N takes out my piece with Rank M? (N < M, M is known)
(3) How do I adjust piece probabilities if my piece Rank N takes out an enemy piece with Rank N? (equal pieces, both are taken out)

It is obvious that if a piece takes out a spy, that piece must be a private. Also, there must always be a Flag.


What I've done so far is to set ranges on enemy pieces. That is, if it takes out my piece of Rank N, that piece must be of a higher rank than N. Spies are counted too, so if my private takes out a spy, the number of spies my enemy can still have decreases.

Thank you.
Remember that the Bayes information is only for the unknown pieces. When you discover a piece, then you decrease the "unknown" pool by 1 and the "piece X" pool by 1.

In your example, if you discover the single rank 6 piece, you now have 20 pieces unknown rather than 21. Therefore, the percentage of anything being a rank 6 is now 0%, but the odds of a given piece being something else increase slightly because the denominator is 20 rather than 21 (e.g. 4.7% to 5.0%).

Now, if you are holding a piece of Rank n, and you want to know the odds of you winning, you simply total up the percentage chances of all the things that it would beat (i.e. rank > n). Obviously, include the flag in this list. The same can be said for calculating the other direction. If you put a piece at risk, what are the odds that the unknown piece next to it could capture it?
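In code that tally is just a sum over the belief distribution (a sketch; beats is whatever combat predicate encodes the rules):

# Chance my piece of a known type wins a challenge against one unknown enemy piece.
def win_probability(my_type, enemy_beliefs, beats):
    return sum(p for enemy_type, p in enemy_beliefs.items()
               if beats(my_type, enemy_type))

# The risk in the other direction is the same sum with the roles swapped.
def capture_risk(my_type, enemy_beliefs, beats):
    return sum(p for enemy_type, p in enemy_beliefs.items()
               if beats(enemy_type, my_type))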

The fun comes when you start to bias pieces for other reasons. For example, if the other player knows you have a Rank 3 piece that you are advancing and a particular piece of his is running away from it, it is more likely that this piece is Rank 4+. To get really fancy, it is actually more likely that this piece is rank 5 or 6 than it is 7+. After all, he would be more concerned with saving a 5 or 6 than he would a lesser rank. However, that gets all sorts of subjective at that point.

The point is, you can tweak the percentage chance of something manually and have it cascade to other pieces. The trick is to use a fully normalized range that encompasses every piece. As you increase the likelihood of Piece A being Rank N, it necessarily reduces the odds of the other pieces being Rank N and, therefore, increases the odds that the other pieces are NOT Rank N (i.e. one of all the other types). It's a more subtle version of going from whatever % all the way to 1.0 (i.e. I know what that piece is).
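One concrete way to keep the whole table consistent after a manual tweak (my sketch, not something established in this thread) is iterative proportional fitting: nudge the one entry, then alternately rescale the columns to match the remaining piece counts and the rows to sum to 1:

# belief[piece][type] is the probability table over the unknown enemy pieces.
# Each row must sum to 1; each column's total mass must equal how many of that
# type are still unaccounted for (the pool). A few alternating rescaling passes
# restore both constraints approximately after a manual nudge.

def rebalance(belief, pool, rounds=10):
    pieces, types = list(belief), list(pool)
    for _ in range(rounds):
        for t in types:                                   # columns -> pool counts
            col = sum(belief[p][t] for p in pieces)
            if col > 0:
                for p in pieces:
                    belief[p][t] *= pool[t] / col
        for p in pieces:                                  # rows -> 1
            row = sum(belief[p].values())
            if row > 0:
                for t in types:
                    belief[p][t] /= row
    return belief

def nudge(belief, pool, piece, piece_type, factor):
    """e.g. "that runner is probably a 5 or 6": scale one entry up and let the
    correction cascade through every other piece via rebalance()."""
    belief[piece][piece_type] *= factor
    return rebalance(belief, pool)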

So, once you have all these percentages, you can seed all sorts of things... MinMax, Markov models, Monte Carlo, etc. to determine what the "best move" should be at any given point.

(I miss anything Alvaro?)

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Quote:Original post by InnocuousFox
(I miss anything Alvaro?)


There are two things that you might have missed. One is that in this game it seems to be the case that the rank of the enemy piece is not revealed after a challenge, so you generally don't learn that a piece is definitely of rank 6, only who won.

The bigger thing that you (and all of us in this thread so far) are missing is a concrete algorithm that can generate an unbiased random arrangement of the pieces that satisfies certain constraints, or that respects the probabilities that we have discovered for each piece.

In theory, you could simply generate completely unbiased random arrangements for the initial configuration of the pieces and give each one a weight proportional to the product of the conditional probabilities of every event that has happened since the beginning of the game. In particular, arrangements that are inconsistent with the results of the challenges so far would get a weight of 0, so you can just discard them. If you have a fancier Bayesian model that tells you things like "the flag moving forward unprotected is a very low-probability event", the product of conditional probabilities will take care of it.
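As a bare-bones illustration of that scheme (all of the scaffolding here is hypothetical), the 0/1-weight version is just rejection sampling:

# Draw unbiased random initial arrangements, replay the observed challenge results,
# and keep only the arrangements that are consistent with them (weight 1 vs weight 0).
# 'observations' is a list of (enemy_square, predicate) pairs, where the predicate
# takes the hidden type on that square and returns True if it matches what happened
# (e.g. "whatever sat on that square must be something that loses to my rank 3").
import random

def sample_consistent_setups(piece_types, enemy_squares, observations, tries=100000):
    keep = []
    for _ in range(tries):
        shuffled = piece_types[:]
        random.shuffle(shuffled)
        setup = dict(zip(enemy_squares, shuffled))     # square -> hidden type
        if all(pred(setup[square]) for square, pred in observations):
            keep.append(setup)
    return keep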

The problem with this simple and theoretically sound method is that after a few challenges, the probability of sampling something that gets a weight larger than 0 will be vanishingly small. I believe importance sampling ("Application to simulation" in particular) might be the kind of tool needed here, but I haven't actually used it for anything other than a couple of toy problems.

I'll think about the problem some more, but it's probably tricky.
