Statistical significance and sample size to improve AI


I am making an RTS game and trying to improve the AI. The AI's behavior depends on many variables. I am running simulations with different values of these variables to see which ones yield 'smarter' AIs, so I can increase the AI difficulty. What I am wondering is: how many trials of each variable should I run, and how do I calculate whether a result is significant, meaning the AI is actually smarter? I tried doing searches related to statistics but could only find examples relating to populations and survey sizes. For example, one of the variables is the number of offensive reserve units the AI creates before sending groups to battle. The default value is 4 units. I then ran tests with the following values and results:
% won   value                         battles/samples
43%     ai.iNumReserveUnits = 0;      43
62%     ai.iNumReserveUnits = 6;      30
62%     ai.iNumReserveUnits = 8;      43
55%     ai.iNumReserveUnits = 10;     30
So player 1 used the test values 0, 6, 8, and 10, and player 2 used the default value of 4. The first column is the percentage of matches player 1 won. Looking at this data, increasing iNumReserveUnits to 6 or 8 should increase the AI's chance of winning by 12%. Are there enough samples for these cases? Is this conclusion valid?

All you want to know is whether changing the parameter caused the new AI to win a statistically significant number of times more than half the time. This can be done using a one-sided binomial test. You should decide on a sample size and a desired confidence level in advance, and from there the cumulative binomial distribution can be used to compute the number of wins needed for the improvement to be statistically significant. For 30 trials and 95% confidence, the AI would need to have at least 20 wins. Raising the desired confidence would increase the number of wins needed. Raising the number of trials would make the test more sensitive (more able to detect small changes). For 43 trials and 95% confidence, you'd need at least 28 wins.
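As a sketch, the win cutoff above can be computed directly from the upper tail of the binomial distribution with the Python standard library (the function name `min_wins` is my own):

```python
from math import comb

def min_wins(n, alpha=0.05, p=0.5):
    """Smallest number of wins k such that P(X >= k) <= alpha
    when X ~ Binomial(n, p), i.e. the one-sided rejection cutoff."""
    tail = 0.0
    # Accumulate the upper tail from k = n downward; the cutoff is the
    # last k for which the tail probability is still within alpha.
    for k in range(n, -1, -1):
        tail += comb(n, k) * p**k * (1 - p)**(n - k)
        if tail > alpha:
            return k + 1
    return 0

print(min_wins(30))  # 20 wins needed out of 30 at 95% confidence
print(min_wins(43))  # 28 wins needed out of 43
```

This reproduces the 20-of-30 and 28-of-43 cutoffs quoted above; raising `alpha` to a looser level or `n` to more trials changes the cutoff accordingly.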

If you would like to test hypotheses on two samples, say A and B, each with a sample size of at least 25 and approximately normally distributed, for a significant difference of sample means, you could use the two-sample z-test:

First you have to set up a null and an alternative hypothesis. For example:
H0: The sample means are the same (nothing has changed/is different)
H1: The sample means are not the same (something has changed/is different)

Then calculate the sample means, say MoA and MoB, and the sample standard deviations, say SdA and SdB, for samples A and B.
After that, calculate the estimated standard error (ESE):

ESE = sqrt(((SdA^2)/NA)+((SdB^2)/NB))

Where NA is the sample size of sample A and NB is the sample size of sample B.
Now you can calculate the test statistic z by:

z = (MoA-MoB)/ESE

If z <= -1.96 or z >= 1.96, then H0 is rejected at the 5% significance level in favour of H1 (that means: something has changed/is different).
Otherwise H0 can't be rejected at the 5% significance level (that means: there is no evidence that anything has changed/is different).

Quick, easy, and at the 95% confidence level (5% significance level), because 95% of values in a normal distribution lie within 1.96 standard deviations of the mean. For 90% confidence use 1.64, and for 99% confidence use 2.58 instead.
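The steps above can be sketched in Python with just the standard library (the helper name `two_sample_z` is my own; I use Bessel-corrected sample variances for SdA^2 and SdB^2):

```python
import math

def two_sample_z(a, b):
    """Two-sample z-test statistic for a difference of sample means
    (appropriate when both samples have roughly 25+ observations)."""
    na, nb = len(a), len(b)
    moa = sum(a) / na            # sample mean MoA
    mob = sum(b) / nb            # sample mean MoB
    # Sample variances, i.e. SdA^2 and SdB^2 (Bessel-corrected).
    va = sum((x - moa) ** 2 for x in a) / (na - 1)
    vb = sum((x - mob) ** 2 for x in b) / (nb - 1)
    ese = math.sqrt(va / na + vb / nb)   # estimated standard error
    return (moa - mob) / ese

# Example: two samples of 30 whose means differ by 10.
z = two_sample_z(list(range(30)), [x + 10 for x in range(30)])
print(abs(z) >= 1.96)  # True: reject H0 at the 5% significance level
```

Comparing |z| against 1.96, 1.64, or 2.58 then gives the 5%, 10%, or 1% significance decision described above.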

I hope that was helpful.

