How can I weight algorithms efficiently?

8 comments, last by Spline Driver 13 years, 6 months ago
So I have a great idea for an AI, but I am not sure how to make it work: the AI randomly chooses an algorithm to use, puts a weight on it, and the better that algorithm gets the AI to the result, the more it uses that algorithm.

It's difficult to explain, but I want the AI to start with reflexes, then learn from using those reflexes, combine what it has to make new information, store it in a database along with how well it worked, and be able to make a better guess about what to do the next time it tries the task.

Any ideas?



Depending on what your task is, you could use how long it takes to accomplish said task; the faster algorithm, of course, would be the best.

For RTS games, sending a group of units at the player and judging how long that group lasts and how much damage it causes would be a good indication that the AI picked a good combo of units.
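If raw speed really is the measure, here is a minimal sketch of that idea, assuming the candidate algorithms are interchangeable functions that all solve the same task (the function names are made up for the example):

```python
import time

def pick_fastest(candidates, task_input, trials=10):
    """Time each candidate algorithm on the same input and return the fastest.
    `candidates` is a list of callables that all solve the same problem."""
    best_fn, best_time = None, float("inf")
    for fn in candidates:
        start = time.perf_counter()
        for _ in range(trials):
            fn(task_input)
        elapsed = (time.perf_counter() - start) / trials
        if elapsed < best_time:
            best_fn, best_time = fn, elapsed
    return best_fn, best_time
```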
[ dev journal ]
[ current projects' videos ]
[ Zolo Project ]
I'm not mean, I just like to get to the point.
How would I put weights on each algorithm, adjust the weights in favor of the better ones, and save and recall them later?
It would help to know what genre of game you are developing and what types of decisions the AI is making.

For instance, for a fighting game where you are trying to decide what type of attack to use (a high-kick, or a punch, or defend instead...), you can think of the problem as a multi-armed bandit and use an algorithm like UCB1 or an epsilon-greedy strategy to pick the next attack.
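As a rough illustration of the epsilon-greedy side of that (the action names and the reward signal here are invented for the example, not taken from any particular game):

```python
import random

class EpsilonGreedy:
    """Epsilon-greedy bandit: mostly pick the action with the best average
    reward so far, but explore a random action epsilon of the time."""
    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in actions}    # times each action was tried
        self.values = {a: 0.0 for a in actions}  # running average reward

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, action, reward):
        self.counts[action] += 1
        n = self.counts[action]
        # incremental average: new_avg = old_avg + (reward - old_avg) / n
        self.values[action] += (reward - self.values[action]) / n
```

Each round you would call choose(), resolve the attack in the game, and feed the observed outcome (damage dealt, win/loss, etc.) back through update().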

However, this type of approach may not be applicable to your situation at all.

It's for an RPG. The characters need to learn to interact better over time, the enemies need to learn to attack better, and there are also buddies that help you out; they start out "dumb" and get smarter over time, maybe.
I can't help you: Your problem is not sufficiently well defined.

I think what you want is a genetic algorithm. This is basically what you do for that:

1. Create N algorithm instances with randomly assigned weights.
2. Loop through all the algorithms, testing each one out and assigning it a "fitness" score based on how well it does.
3. Randomly pick two (or more) instances, making those with higher fitness scores more likely to be picked.
4. Use some process to combine the selected instances' weights (average them, randomly pick some from each, etc.)
5. Add some random "mutations" to the weights of the new algorithm instance.
6. Repeat from step 3, until you have N new instances.
7. Repeat from step 2, using the new instances instead of creating random ones.

So that's basically how a genetic algorithm works. I guess that wasn't a very good explanation... You can probably find something on Google.
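For what it's worth, here is a minimal sketch of that loop, assuming each "algorithm instance" is just a vector of floats and that the caller supplies a fitness function returning non-negative scores (names and parameters are illustrative only):

```python
import random

def evolve(fitness, n_weights, pop_size=20, generations=50,
           mutation_rate=0.1, mutation_size=0.2):
    """Minimal genetic algorithm over weight vectors.
    `fitness` maps a weight vector to a non-negative score (higher is better)."""
    # 1. create a random initial population
    population = [[random.uniform(-1, 1) for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # 2. score every instance
        scores = [fitness(w) for w in population]
        new_population = []
        while len(new_population) < pop_size:
            # 3. fitness-proportional selection of two parents
            a, b = random.choices(population, weights=scores, k=2)
            # 4. crossover: take each weight from one parent or the other
            child = [random.choice(pair) for pair in zip(a, b)]
            # 5. mutation: randomly nudge some weights
            child = [w + random.gauss(0, mutation_size)
                     if random.random() < mutation_rate else w
                     for w in child]
            new_population.append(child)
        # 6-7. replace the population and repeat
        population = new_population
    return max(population, key=fitness)
```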
GA? Hmmm... not hardly. I believe what he is looking for is simply reinforcement learning. A simple version is to have an array of values for each decision. As something good happens, increase the value for that decision. If something bad happens, decrease it. Then, you can use something like weighted randoms to select from the different choices. As the value of a decision goes up, it gets picked more often than the others.

Done.
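A bare-bones sketch of that weight-table idea (the class name and decision names are made up for illustration):

```python
import random

class WeightedChooser:
    """One value per decision; reward good outcomes, punish bad ones,
    and pick decisions with probability proportional to their value."""
    def __init__(self, decisions, initial=1.0, floor=0.1):
        self.floor = floor  # keep every decision selectable
        self.weights = {d: initial for d in decisions}

    def choose(self):
        decisions = list(self.weights)
        return random.choices(decisions,
                              weights=[self.weights[d] for d in decisions],
                              k=1)[0]

    def reinforce(self, decision, amount):
        # positive amount for a good outcome, negative for a bad one
        self.weights[decision] = max(self.floor,
                                     self.weights[decision] + amount)
```

For example, chooser = WeightedChooser(["melee", "fireball", "heal"]), then d = chooser.choose(), and after seeing the result, chooser.reinforce(d, +0.5) or chooser.reinforce(d, -0.5).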

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Quote:Original post by InnocuousFox
GA? Hmmm... not hardly. I believe what he is looking for is simply reinforcement learning. A simple version is to have an array of values for each decision. As something good happens, increase the value for that decision. If something bad happens, decrease it. Then, you can use something like weighted randoms to select from the different choices. As the value of a decision goes up, it gets picked more often than the others.

Done.


That sounds like what I am looking for. So I believe I would make a 3D array: one dimension for how many choices there are, another for the value or weight of that algorithm, and the last for the algorithm itself.
So, say:
[1][0-9][algorithm 1]
[2][0-9][algorithm 2 or decision 2]
[3][0-9][algorithm 3 or decision 3]
...
[n-1][0-9][algorithm n-1 or decision n-1]

Would that be an accurate depiction of what was stated?
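For the "save and recall them later" part, one possible layout (the file name and keys below are only examples) is a flat table from decision name to weight, which is easy to persist and reload:

```python
import json

# one weight per decision/algorithm; the decision name keys the table
weights = {"algorithm_1": 1.0, "algorithm_2": 1.0, "algorithm_3": 1.0}

def save_weights(weights, path="weights.json"):
    with open(path, "w") as f:
        json.dump(weights, f)

def load_weights(path="weights.json"):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # first run: fall back to default weights
```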
I am not sure if this is what you are looking for, but Q-learning can be a great solution to many reinforcement learning problems. This is where I learned it:

http://people.revoledu.com/kardi/tutorial/ReinforcementLearning/Q-Learning.htm
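For reference, the heart of tabular Q-learning is a single update rule; here is a sketch, where the env interface (reset(), step(), actions) is an assumption for the example rather than a standard API:

```python
import random
from collections import defaultdict

def q_learning_episode(env, Q, alpha=0.1, gamma=0.9, epsilon=0.1):
    """One episode of tabular Q-learning.
    `Q` maps (state, action) pairs to values, e.g. a defaultdict(float);
    `env` is assumed to expose reset() -> state, a list `actions`, and
    step(state, action) -> (next_state, reward, done)."""
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(env.actions)
        else:
            action = max(env.actions, key=lambda a: Q[(state, a)])
        next_state, reward, done = env.step(state, action)
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(next_state, a)] for a in env.actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = next_state
    return Q
```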

