

This topic is now archived and is closed to further replies.


What kind of AI do jet fighter games use?


Recommended Posts

Yeah, is it really hard to make AI for the enemy aircraft? Like the movements and shooting. What kinds of algorithms are most commonly used in flying games?

Guest Anonymous Poster
I'd imagine most of the AI for a jet fighter game would be really easy, especially pathfinding. If you're doing line of sight to a specific stationary ground target, you can simply use Pythagoras to find the straight-line distance to the target, then use simple trig to find the angle you need to turn through to be on target; if you can do vector maths, you should have no problem. If you are pathfinding to a moving target, it gets a bit more tricky, but not much, and it adds a bit more 'excitement'. One way of doing it would be to recalculate the vector to the target on every update cycle, but that would make for a very inefficient and boring-looking AI. A more interesting, simple method would be to take the target's current velocity and direction, use your current distance to the target to find the time between you and the target, then calculate a predicted position of the target for the time you will get there.


// [x y]   = position of the target, travelling at speed v
// [x1 y1] = position of the AI agent, travelling at speed v1

//distance between you and the target:

d = |[x1 y1] - [x y]|

//speed = distance / time

//therefore, the time (at that instant) for you to close the gap is

time = d / (v1 - v)

//therefore, if time is negative, you are going too slow,
//or the target is behind you!

//so, plot a predicted position for the target:

[x* y*] = [x y] + (target velocity * time)

now, you just head for [x* y*].

Also, check every so often for major alterations in the target's course, and update your trajectory accordingly. Otherwise, when you reach the waypoint, pathfind to the target directly, or implement some kind of attack script.
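The prediction scheme above can be sketched in Python. Treating the speeds as scalars and the closing rate as their difference is the same rough approximation the post uses; all the function and parameter names here are my own, not from any particular game:

```python
import math

def predict_intercept(pursuer_pos, pursuer_speed, target_pos, target_vel):
    """Estimate where to head so a pursuer moving at pursuer_speed
    meets a target flying with constant velocity target_vel.
    Approximation: time-to-intercept = distance / (pursuer speed - target speed).
    Returns None when the closing speed is zero or negative
    (you are too slow, or the target is pulling away)."""
    dx = target_pos[0] - pursuer_pos[0]
    dy = target_pos[1] - pursuer_pos[1]
    distance = math.hypot(dx, dy)

    target_speed = math.hypot(*target_vel)
    closing_speed = pursuer_speed - target_speed
    if closing_speed <= 0:
        return None  # "time is negative": going too slow

    t = distance / closing_speed
    # predicted target position after t seconds: [x* y*] = [x y] + v*t
    return (target_pos[0] + target_vel[0] * t,
            target_pos[1] + target_vel[1] * t)
```

For example, a pursuer at the origin doing 10 units/s chasing a target at (100, 0) that is flying away at 5 units/s closes at 5 units/s, so it aims at the point the target will reach 20 seconds from now.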

In terms of engaging targets, you could implement some scripts such as chase, evade, etc.

It depends on how realistic the jet fighter simulation is. I defer to the above if it's not very realistic. That's not an insult; it's just that AI is always defined by the problem. If the problem is simple, the AI is simple.

If it fits into "ultra-realistic", there are two types of AI:
1) best-results AI
2) realistic simulated-behavior AI

I know the air force simulates both, mostly the latter, but the former has interest.

If you're talking about complicated combat environments, the planes can't be everywhere you want them to be without some prediction, and the more complicated the environment, the more accurate and further ahead the prediction has to get. There is a real branching factor at work, and a stiff one: 15 planes all in a 30-mile circle is a mess of positioning, all with real-life telemetry that involves real-life physical stresses on the pilot as a limiting factor. Best results isn't a trivial question the more realistic the simulation gets. And that's just best results.

The military also likes to have simulated behavior in there. No soldier responds to orders perfectly once the shooting starts; a battle simulation can't come close to accurate if the AI doesn't simulate a person. There are a few public military projects related to this.

Ahhh, but this is a game board, someone might mention. Jane's doesn't win customers by making stuff up =) It's a good question:
is it hard to make a good AI for an enemy fighter pilot in a realism simulator like Jane's? What kinds of algorithms? Pathing seems simple, but there is definitely the concept of momentum at work, even in games. You can say, "Yeah, but the momentum can be brute-forced with numbers." Yeah, but the branching that results from me veering left or veering right changes that. It's a simple 3D space, but it's conceptually not 3D... it seems like a vector field of sorts. Suddenly it's a search for the most effective use of momentum relative to all my options. And that's just for the best AI.

Where does simulating a person fit into that? We can have weights on behavior based on aggressiveness, safety, or "a friend just got blown to pieces because you screwed up."

Should artificial intelligence claim "simulated people" as a topic, or is that a separate field?

Assume a reasonably accurate flight model and a 1-on-1 dogfight, with guns and/or short-range heat-seekers. The computer plane should try to get above and behind the player's plane and avoid the reverse. Real fighter pilots, I believe, think a lot in terms of the "energy" advantage or disadvantage they have compared to the bogey. There are four terms that enter:

1) Altitude. It's good to be above the enemy: you can easily disengage if you wish, or trade altitude for speed by diving, and the enemy will lose speed if he starts to climb after you.
2) Speed. Can be exchanged for altitude as described. But it's easy to lose if you're not careful. Tight turns in the horizontal plane usually bleed speed like crazy, and you can't get it back like you can with maneuvers in the vertical plane, like if you climb, then dive. The optimal speed for turning ("corner speed") and the angular velocity when turning vary a lot between different planes.
3) Position. Well yeah, if the bogey starts on your tail, you'd best try shaking him off, which, as mentioned in 2), can cost you energy.
4) Engine power. Can make up for a starting disadvantage if you can draw out the fight. Case in point: the US Navy Wildcats of WW2, which were badly mauled by the nimble Japanese Zeros and Hayabusas in the opening battles, because the US pilots didn't avoid the sort of close, tight-turning dogfights where the Zeros had the advantage. At the Battle of Midway, six months later, the Wildcat pilots had developed tactics to take advantage of their more powerful engines, and in several cases trounced attacking Japanese fighter patrols.

Well, I don't know how commercial games do their computer adversaries, but this is how I imagine I could (try to) make an AI in a flightsim take account of these factors in their behavior:

1) Formulate a number (>20) of standard combat maneuvers that a plane can do. These would be standard combat tactics, like the "high yo-yo". IIRC this is when you're chasing a plane turning away from you: climb while turning lightly towards him; then, when you're high and slow and thus more maneuverable, turn tightly towards him and dive. Others would be as simple as: fly away fast!
2) For each of these maneuvers formulate a heuristic function that calculates how viable this tactic is, given the height, speed, relative position and type of combatants.
3) Then for each maneuver I would write a mini-AI hardcoded for this specific maneuver.

In-game, the heuristic functions would be evaluated when first contact between the two planes is made. The mini-AI which gave the highest number would be activated. The heuristics would be re-evaluated regularly, and if the active mini-AI's fitness falls below a certain threshold compared to one of the others, it'll switch.
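A minimal Python sketch of this select-and-switch scheme. The maneuver names, heuristic weights, and switching threshold are invented placeholders, not values from any real flight sim:

```python
# Each maneuver gets a heuristic scoring how viable it is in the
# current situation; the highest-scoring mini-AI is activated, with a
# threshold so the plane doesn't flip-flop on tiny score differences.

def high_yoyo_fitness(state):
    # favored when we're behind the bogey with a speed advantage
    return state["behind_bogey"] * 0.6 + max(0.0, state["speed_margin"]) * 0.4

def extend_fitness(state):
    # "fly away fast!": favored when the bogey has the positional advantage
    return (1.0 - state["behind_bogey"]) * 0.8

MANEUVERS = {
    "high_yoyo": high_yoyo_fitness,
    "extend": extend_fitness,
}

SWITCH_THRESHOLD = 0.15  # hysteresis margin for switching mini-AIs

def pick_maneuver(state, current=None):
    """Return the maneuver whose heuristic scores highest, keeping the
    active one unless a rival beats it by at least SWITCH_THRESHOLD."""
    scores = {name: fn(state) for name, fn in MANEUVERS.items()}
    best = max(scores, key=scores.get)
    if current is not None and scores[best] - scores[current] < SWITCH_THRESHOLD:
        return current  # stick with the active mini-AI
    return best
```

Re-evaluating `pick_maneuver` every few seconds with a fresh `state` gives exactly the periodic re-check described above.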

"It's always useful when you face an enemy prepared to die for his country. That means both of you have exactly the same aim in mind." -Terry Pratchett

[edited by - deformed rabbit on May 6, 2002 11:26:34 AM]

I would tend to agree with deformed rabbit on the need to design set manoeuvres. I might go about implementing them in an AI slightly differently, though.

The point of having set manoeuvres is that they can be scripted into a plan; either by the developer writing a scripted method for evading or engaging opponents, or by the AI in the game in real time, in response to the opponent's moves.

One might say that, for the developer or the game AI, the aim of executing a manoeuvre is to increase the likelihood of killing the opponent while decreasing the energy consumed, or perhaps increasing the chance of escaping from a kill situation while decreasing the energy consumed. Here one assumes that increasing the likelihood of killing decreases the likelihood of being killed, but this is not always the case: you might need to consider the joint likelihood of killing and being killed. In planning terms, energy becomes the resource and being in a kill position the goal. In mathematical terms, choose the manoeuvre(s) that maximise dP/dE, where P is the probability of a kill and E is the energy of the aircraft.

How might this be done?

You could create an AI planner that considers sequences of moves (of length 1 or longer) and chooses the lowest-cost move(s) that maximise a payoff function. You could come up with a basic probability of a kill based on a function of relative velocities (speeds and trajectories might have separate influences on probability), difference in heights, type of weapon, range to target, time on target (the amount of time the weapon can be trained on the target), etc. You can do the same for the probability of being killed.

You can then use the expected payoff to compare which manoeuvre should be executed (the one with the highest expected payoff). The expected payoff might be something like:
P(kill)*Utility(kill) + P(being killed)*Utility(being killed) - f(cost of manoeuvre)

where the utility of being killed might be a really big negative number and the utility of killing the opponent would be positive (and might vary depending on the stature of the opponent). Finally, f(cost) is an arbitrary function that takes into account the cost of acquiring the payoff. You could equally multiply or divide by this function, since it's arbitrary. The result, though, should be that two manoeuvres with equal probabilities of killing and being killed should have different payoffs: higher for a lower cost.
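As a toy illustration of this expected-payoff comparison (all the probabilities, utilities, and the linear cost function here are made-up placeholders):

```python
# Expected payoff per the formula above:
#   P(kill)*U(kill) + P(killed)*U(killed) - f(cost)

UTILITY_KILL = 100.0
UTILITY_KILLED = -500.0  # being killed is far worse than a kill is good

def expected_payoff(p_kill, p_killed, energy_cost):
    # f(cost) is arbitrary; a simple linear penalty is used here
    return p_kill * UTILITY_KILL + p_killed * UTILITY_KILLED - 0.5 * energy_cost

def best_maneuver(candidates):
    """candidates: list of (name, p_kill, p_killed, energy_cost) tuples.
    Returns the name of the maneuver with the highest expected payoff."""
    return max(candidates, key=lambda c: expected_payoff(c[1], c[2], c[3]))[0]
```

With two manoeuvres having equal kill and being-killed probabilities but different energy costs, the cheaper one wins, which is exactly the property argued for above.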

There's plenty here to digest and plenty more to discuss... on this idea or on others... so I won't hold the floor any longer.



You may find my open-source game Asteroids of some interest. Although it's in space (ie. no gravity, so altitude becomes irrelevant), I have implemented Martian fighters (up to 8 at a time in any level - see marsship.cpp/h in the source code) which have 2 primary goals:

1) Get behind your ship (whichever way it's facing)
2) Aim for where your ship is going, not where it is currently (ie. target leading, so if you travel with a constant velocity, you will be hit by their lasers)

Also, at any time, this secondary rule is used:
3) If the targeted ship is within a degree or two of the Martian fighter's sights (ie. the fighter has got a good shot), then fire.
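Rule 3 can be sketched like this in 2D Python; the cone width and function names are my own illustration, not code from marsship.cpp:

```python
import math

FIRE_CONE = math.radians(2.0)  # "a degree or two" either side of the sights

def should_fire(fighter_pos, fighter_heading, target_pos):
    """Return True when the target lies within FIRE_CONE of the
    fighter's heading (angles in radians, measured from the +x axis)."""
    bearing = math.atan2(target_pos[1] - fighter_pos[1],
                         target_pos[0] - fighter_pos[0])
    # smallest signed angle between heading and bearing, wrapped to [-pi, pi)
    off = (bearing - fighter_heading + math.pi) % (2 * math.pi) - math.pi
    return abs(off) <= FIRE_CONE
```

In practice you would run this test against the *predicted* (lead) position from rule 2 rather than the target's current position.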

It's definitely not perfect. While this may be quite "realistic" behaviour in terms of dogfighting strategy in space (which we have all had years of experience in, right? :D), it does get very annoying to actually play against, since you spend a lot of your time just spinning around trying to see the damn things, let alone get a good shot at them. Very dizzy.

Still, worth consideration. They do work to some extent, and all I do to make them "easier" or "harder" is distort their target leading ability and cap their engine speed on different difficulty levels. Have a look for yourself and see what you think.
