Plasmadog

Flight Sim dogfighting AI

Hi all. Does anyone know of any resources that explain common approaches to writing AI for combat flight sims? I'm sure I've seen the topic come up here and in other places before, but now that I need it I cannot for the life of me find anything at all on the subject.

I share your interest in both flight sims and AI, and believe me, I've done quite a bit of searching for anything at all related to this subject. The closest thing I ever found was SOAR:

http://sitemaker.umich.edu/soar

http://www.soartech.com/home.php

but that's not really what I was looking for, and probably not what you're looking for either.

Sorry I can't be of more help but if you do come across something, please do post links.

OK, I still haven't found anything on the subject, so maybe a better approach is to describe the methods I'm thinking of using and invite comments.

I've read Craig Reynolds' very informative paper on steering behaviors, and I think those techniques could form the basis of a dogfighting AI. A single such behavior won't work very well for dogfighting, though, or at least it won't look much like the behavior of a real pilot. So I'm thinking of developing several different steering behaviors. Each would have a goal such as evading an enemy, getting into an attack position, or flying in formation with an ally, but each would use a different technique to achieve it. Most of these behaviors would end up approximating the common dogfighting maneuvers such as the split-S, the Immelmann turn, and the half loop; others would be more mundane. Once a library of behaviors is available, dogfighting becomes a matter of assessing the current tactical situation and selecting the steering behavior most appropriate to it.
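
To make the steering-behavior idea concrete, here's a minimal sketch of one entry in such a library, in the style of Reynolds' paper: a "pursuit" behavior that steers toward the enemy's predicted position rather than its current one. The 2-D vectors and the crude intercept-time estimate are simplifications for illustration; a real flight sim would work in 3-D and respect the flight model.

```python
import math

def pursue(pos, vel, max_speed, target_pos, target_vel):
    """Reynolds-style pursuit: steer toward where the target will be,
    not where it is.  2-D for brevity; a real sim would use 3-D."""
    # Estimate time-to-intercept from current separation and own top speed.
    dx, dy = target_pos[0] - pos[0], target_pos[1] - pos[1]
    t = math.hypot(dx, dy) / max_speed
    # Predict the target's future position and seek that point.
    fx = target_pos[0] + target_vel[0] * t - pos[0]
    fy = target_pos[1] + target_vel[1] * t - pos[1]
    norm = math.hypot(fx, fy) or 1.0
    desired = (fx / norm * max_speed, fy / norm * max_speed)
    # Steering force = desired velocity - current velocity (Reynolds).
    return (desired[0] - vel[0], desired[1] - vel[1])
```

An evasion or formation-flying behavior would have the same shape (situation in, steering force out), which is what makes them easy to swap from a lookup table.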

So, how is that assessment done? The approach I'm thinking about involves evaluating several key factors of the current situation. Things like the distance and direction of the enemy, speed and altitude of both aircraft (both absolute and relative), max speeds and turning ability of both aircraft, etc. All of these variables will be classified into a small number of discrete groups. For example, distance between the aircraft will be one of [near, medium-near, medium, medium-far, far], and a certain amount of fuzziness will be used in the definition of those distances. Some variables will be boolean, others will have a different number of possible values. I imagine the average number of possible values for a variable will be about 3.
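
As a sketch of that classification step for one variable (the thresholds here are invented placeholders; a real version would tune them, and could blend adjacent bins to get the fuzziness mentioned above):

```python
def classify_distance(d):
    """Map a continuous separation (metres) to one of five discrete
    labels.  Thresholds are made-up placeholders for illustration."""
    bins = [(500, "near"), (1500, "medium-near"),
            (3000, "medium"), (6000, "medium-far")]
    for limit, label in bins:
        if d < limit:
            return label
    return "far"
```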

In order to keep the number of these variables down, some of them may combine several real properties. For example, combat pilots often talk in terms of the "energy" of a plane, which is a combination of things like speed, mass, and altitude. That single metric is probably more useful than the separate factors it is built from.
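
For what it's worth, the "energy" fighter pilots usually quote is specific energy (energy height): potential plus kinetic energy per unit weight, in which mass conveniently cancels out. A minimal sketch:

```python
G = 9.81  # gravitational acceleration, m/s^2

def specific_energy(altitude_m, speed_ms):
    """Specific energy ("energy height"): E_s = h + v^2 / (2g).
    Mass cancels, which is why one number summarises an aircraft's
    manoeuvring 'capital' regardless of its weight."""
    return altitude_m + speed_ms ** 2 / (2 * G)
```

A fast, low aircraft and a slow, high one can have the same energy height, which is exactly the kind of equivalence a single combined variable should capture.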

The end result is that any situation can be categorized into one of a finite (albeit very large) number of discrete descriptions. For example, with 10 variables to consider, each with an average of 3 possible values, there would be about 59,049 (that's 3^10) situation descriptors. I plan to create an n-dimensional lookup table that defines which steering behavior to use in each of those situations.
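
One simple way to realise such an n-dimensional table is a flat array with mixed-radix indexing, so variables of different cardinality coexist; a sketch (names are illustrative):

```python
def flat_index(values, cardinalities):
    """Convert a tuple of discrete variable values (each an int in
    [0, cardinality)) into a single index into a flat lookup table,
    via mixed-radix positional encoding."""
    idx = 0
    for v, n in zip(values, cardinalities):
        idx = idx * n + v
    return idx

# Ten variables with three values each -> 3**10 = 59,049 cells.
cards = [3] * 10
table_size = 1
for n in cards:
    table_size *= n
```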

Madness! How could I possibly build such a huge and complex table? How could I possibly know which behavior is best for a given situation? Well, I don't intend to do it myself. If I can develop the simulation to the point where the lookup table is the only part left to build, I should be able to use a genetic algorithm to populate it for me. By running the simulation many thousands of times, starting from a randomly populated table and selecting for pilot success and longevity, a pretty good table should evolve. I think.
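
A rough sketch of that evolutionary loop, with the dogfight simulation replaced by a placeholder fitness function. Everything here (population size, one-point crossover, mutation rate) is an assumption for illustration, not a recommendation:

```python
import random

def random_table(n_cells, n_behaviors):
    return [random.randrange(n_behaviors) for _ in range(n_cells)]

def fitness(table):
    """Placeholder: a real version would fly the pilot through many
    engagements and score kills and longevity.  Here we simply reward
    picking behavior 0, just to make the demo measurable."""
    return sum(1 for b in table if b == 0)

def evolve(n_cells, n_behaviors, pop_size=20, generations=15,
           mutation_rate=0.02):
    """Plain GA over whole tables: rank by fitness, keep the top half,
    breed children by one-point crossover plus point mutation."""
    population = [random_table(n_cells, n_behaviors)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(n_cells)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_cells):                  # point mutation
                if random.random() < mutation_rate:
                    child[i] = random.randrange(n_behaviors)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```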

Maybe that's wishful thinking.

Anyway, I'd really like to hear from anyone who has any opinion on whether or not this seems like a workable system. I'm particularly interested in the opinions of those who have actually used genetic algorithms to produce something like my table. Is it a realistic plan?

Hrm. Actually, filling out the n-dimensional matrix you'd create shouldn't be THAT difficult. Your evolution suggestion could work, although then you'd be hoping that one pilot emerges as the best.

Personally, I would tackle this a bit 'easier' and let all the pilots work together on one single lookup table. Pit, for example, 10 pilots against each other, and make the game run 'turn based'. In any given situation, let a pilot make X random choices of behaviour from the lookup table, weighted by the potential of each maneuver (all start out equally useful).

Once you have, say, 25 options, calculate forward for each one, assuming everyone else picks their single most likely maneuver. See which of the 25 does best, weight that one more heavily, and weight horrible solutions lower. From your start situation, advance the 'turn' to the next pilot until everyone who needs to choose has done so. The fitness function could involve how close you start to 'shooting range' and how many enemies can shoot at you at the start, then, for your new position, how close you are to shooting someone and how many can shoot you there. Oh, and crashing is BAD.
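
A small sketch of the weighted sampling and weight update described above (the update rule here is one plausible choice, not something the scheme prescribes):

```python
import random

def weighted_choice(weights):
    """Pick an index with probability proportional to its weight --
    how a pilot could sample candidate maneuvers from the table,
    favouring those that have done well so far."""
    total = sum(weights)
    r = random.uniform(0, total)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def update_weight(weights, chosen, score, lr=0.1):
    """Nudge the chosen maneuver's weight toward its observed score,
    so good options are picked more often next time (assumed scheme)."""
    weights[chosen] += lr * (score - weights[chosen])
```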

Sure, what eventually happens often isn't what was predicted during your 25 test runs, since your 'opponents' in the test each pick only one option. But hey, we aren't trying to play chess here. ;) We're just after an easy algorithm to start filling out your matrix, no?

This would no doubt be boring to look at, but it should run quite rapidly, all the pilots work together (so you analyse any given random situation from N angles at the same time), and you can just leave it running for a few days to see whether you make progress.

That's an interesting idea. It probably would be easier, but if I understand you correctly, the fitness of a particular choice would have to be evaluated very shortly after it is made. Unless there were a mechanism for looking several moves ahead, or for revising fitness retrospectively, it might unfairly disadvantage choices that look bad in the short term but ultimately lead to victory (like sacrificing a queen in chess).
Also, because some steering behaviors will take longer than others to play out, I'm not sure how well a turn-based approach would work.

Perhaps both problems could be addressed by running the simulation for maybe one minute for each choice, and evaluating the fitness several times over that period, giving more weight to the earlier evaluations. Then the simulation could be rewound to the point at which the choice was made and another one tested. One minute should be enough for all choices to lead to some significant change in the tactical situation, and any victory that comes after that probably shouldn't be attributed to any single choice anyway.
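
The "evaluate several times over the minute, weighting the earlier samples more" idea could look something like this (exponential decay is just one possible weighting):

```python
def weighted_fitness(samples, decay=0.7):
    """Combine several fitness evaluations taken over a rollout,
    weighting earlier samples more heavily (decay < 1).
    `samples` is ordered earliest-first."""
    total, norm, w = 0.0, 0.0, 1.0
    for s in samples:
        total += w * s
        norm += w
        w *= decay
    return total / norm if norm else 0.0
```

With this scheme, a choice that pays off early scores higher than one that only pays off at the end of the window, which matches the intent of not crediting any single choice with a much later victory.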

Any thoughts on that?

For those who are interested, I managed to find a relevant paper. I haven't read it all yet, but it looks very interesting.
http://www.kbs.twi.tudelft.nl/docs/MSc/2005/Solinger_David/thesis.pdf

Also, there will apparently be an article on flight sim AI in the next AI Wisdom book (assuming it gets accepted for publication, and I have no doubt that it will).

Wow... I'm rather stunned. I've been thinking about how one could evolve AI automatically; I've read articles about it and generally researched it, and everywhere it seemed like a really hard thing to do.
But what you're saying about filling out an n-dimensional array is something I've never thought about before. This has got me excited!
I'm pretty sure this is an old idea, but it's new to me.

The AI has a set number of things it can detect and base its actions on. It also has a set number of different actions it can perform.
You put every different state the AI could detect into an n-dimensional array. The contents of each cell is the action to perform in that situation.
First you fill the array completely at random.
Then you create a way to measure how successful the AI is.
After that you test the AI setup and compare its success against several other different randomizations.
Then you take the most successful one, randomize it slightly again, and so on and so on.
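
The procedure you describe, randomize, compare, keep the best, perturb again, can be sketched like this (the parameter values are arbitrary). It's essentially a stochastic hill climb, or in evolutionary-computation terms a (1+lambda) strategy:

```python
import random

def hill_climb(n_cells, n_actions, score, rounds=50, candidates=5,
               mutations=3):
    """Start from a random table, make a few slightly randomized
    copies, keep whichever scores best, repeat."""
    best = [random.randrange(n_actions) for _ in range(n_cells)]
    best_score = score(best)
    for _ in range(rounds):
        for _ in range(candidates):
            trial = best[:]
            for _ in range(mutations):          # small random tweak
                trial[random.randrange(n_cells)] = \
                    random.randrange(n_actions)
            s = score(trial)
            if s > best_score:                   # keep only improvements
                best, best_score = trial, s
    return best, best_score
```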


I assume this AI technique already has a name? What is it called?

NQ, you should read about genetic algorithms... it's not exactly what you describe, but it's somewhat similar.

Eric
