'How' can AI lose????

Hi, I'm new and this is my first post to gamedev.net, so feel free to yell at me if this topic has been covered in a back post; I'm working on reading a lot of the older stuff and getting up to speed. My question is: HOW does an AI lose? Take a situation like StarCraft: Brood War, the first game I ever made custom maps/scenarios/AIs for. Not counting the give_money command, which basically lets the AI cheat, I am still confused how it can lose.

The computer can get off the starting line faster: after, say, 2 minutes the computer can have more resources gathered, more buildings in construction, more units building, and more upgrades upgrading. Suppose the computer uses a counterattack scenario, i.e. builds defenses and waits for an attack. (If you counterattack you have more of an advantage, since in defending you can use buildings, not just units, to defend.) The computer will always have MORE/BETTER units than a human player, and if you trade suicide for suicide, meaning 100% of the player's forces attack, the computer defends and then attacks with whatever it has left. HOW can the computer lose?

The issue I have had with StarCraft is that you cannot force the AI to build, say, a wall of defensive buildings; it has to rely more on offense than defense. But even then, if you hack the map files to insert your own AI code, you can play a "fair" AI for a good 3 hours on one map.
Because a real-time strategy game is about strategy. You could have the best thrower and the best runner, yet still lose at football because neither understands avoidance.

The AI in RTS games is a particular point of interest for game AI. Currently RTS AI only poses a challenge to humans because of its superior micromanagement skills, which you covered. However, an AI lacks the ability to create large-scale, abstract battle plans. In StarCraft, for example, it basically builds units, then throws them directly at you. Some programmers will hardcode the AI with special tactics, but these are easily learned and countered by humans.

So while the AI might be able to create a better economy, and thus build more units, its ability to use units is pathetic. When was the last time an AI executed a reaver drop (using 1 or 2 shuttles to drop reavers directly into the enemy's worker line, then loading them back up after killing off most of the workers), an intentional two-pronged attack, or used long-range artillery to take out your bunkers/static defenses before sending in ground troops (instead the AI will attack with the artillery, but at the same time send in ground troops on a needless suicide mission)?

Humans' abstract thinking allows us to execute a defeat in detail - placing the largest amount of force on the weakest point, such as specifically using air units to attack melee ground troops, executing hit-and-run attacks against workers, or otherwise exploiting rock/paper/scissors unit relations to their full potential. We use our brains to do the most damage with the least amount of loss.
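For illustration, the kind of counter table a player is reasoning over when exploiting those relations might look like this (the unit categories and multipliers here are made up, not taken from any real game):

    # Hypothetical damage multipliers: attacker category -> defender category -> multiplier.
    # The categories and numbers are invented; real games tune these per unit.
    COUNTER = {
        "air":    {"melee": 2.0, "ranged": 1.0, "anti_air": 0.25},
        "melee":  {"melee": 1.0, "ranged": 1.5, "anti_air": 1.50},
        "ranged": {"melee": 1.5, "ranged": 1.0, "anti_air": 1.00},
    }

    def best_attacker(defender):
        # Pick the attacker category that does the most damage to this defender.
        return max(COUNTER, key=lambda atk: COUNTER[atk][defender])

    print(best_attacker("melee"))  # "air" - send air units against melee ground troops

A human does this lookup instinctively across every unit on the map; the hardcoded AI mostly doesn't do it at all.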

Human vs. AI is like putting a weaker fighter up against an obviously (physically) stronger fighter. The catch is that the strong fighter is slow and stupid - the human fighter doesn't win by executing a dumb frontal assault like the AI would; he moves behind and kicks the moronic AI in the back.
I'm wondering... I don't think I would be able to create such an AI, but what IF an AI were programmed with many of these tactics - I mean, many - and were programmed to exploit weaknesses and rotate all of its knowledge based on the player's reactions? Wouldn't it then be a lot harder, if not impossible, to beat?
Then what about games like Pong? The computer should never lose...
Quote:Original post by Axiverse
Then what about games like Pong? The computer should never lose...


You're right, but there has to be some leeway on that, because people would not play if they could never score. A Pong game I wrote years ago was unbeatable on hard mode; I have never once scored against it.

BTW: I noticed you're from Cincinnati; I am about 30 minutes northeast of Columbus, in Ashland. Just interesting :-)
It depends on how flexible the game rules are - how many options the player has to play with. If you took, say, C&C Red Alert (the unit movement and shooting/damage mechanics, along with over-armored resource collectors), made it so that you could only build one type of tank, and removed all map features, the AI would win every time, because the only options are to build tanks and send them at the enemy (at which point it's a numbers game, and the AI wins).

While you can hardcode an AI with tactics (and even rotate them), over time humans learn them. Because the AI isn't smart enough to realize when a tactic obviously isn't going to work (e.g. a human will call off an air attack if he suddenly sees the enemy deploy a SAM site), it can be beaten through its predictability.

While not an RTS, the first Advance Wars for GBA had one AI tactic in particular that could be turned against it. The AI would rank APCs and Landers (water APCs) as very high-value targets, since they were weak yet could be very dangerous (the enemy uses an APC to deliver a soldier right to your HQ and capture it). Before the player figures this out, it's a good tactic. But once the player figures out that the AI will drop everything to attack an APC, he can exploit it: the player uses an empty APC to draw the enemy into the open, then uses his own prepositioned units to ambush them.

And this could be done over and over again, because the AI can't make abstract decisions and can't make abstract judgements of success vs. failure. It can have preprogrammed ways of judging whether a tactic worked well or not (say, by counting the number of units lost vs. killed), but once a player learns the parameters, they can just use that against the AI - make it think a particular tactic is the one and only good tactic, then, when the enemy is thoroughly entrenched in using that tactic, start using the obvious counter.
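A crude sketch of that kind of preprogrammed judgement (the threshold here is invented for illustration):

    def tactic_succeeded(units_killed, units_lost):
        # Naive preprogrammed judgement: the tactic "worked" if we traded favourably.
        # A player who learns this rule can feed the AI cheap kills to skew its judgement.
        return units_killed > 1.5 * units_lost  # arbitrary threshold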

As an example, the player could focus entirely on ground forces with no anti-air visible, making the AI constantly up the value of air tactics. Then suddenly pull out the anti-air units that have been hidden away, or the super anti-air tech the player has been researching, and clean the AI's clock. While a human could be lured into going almost all air in the same way, once they see their air units being destroyed by pure anti-air, they would immediately switch to land units, because they can make the abstract observation that the enemy has switched tactics. An AI can't make that observation. It can only increment or decrement the supposed value of a tactic. And if it has just been built up to believe that tactic is great (airtactics = 0 -> airtactics = 100), it will take twice as long for it to learn that the tactic is now horrible (airtactics = 100 -> airtactics = -100).
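In code, that kind of incremental tactic learning might look something like this (the step size and clamping range are assumptions for illustration, not how any actual RTS does it):

    class TacticValue:
        # Tracks how good the AI currently believes a tactic is, clamped to [-100, 100].
        def __init__(self, value=0, step=10, lo=-100, hi=100):
            self.value, self.step, self.lo, self.hi = value, step, lo, hi

        def update(self, succeeded):
            # Nudge the value up or down by a fixed step after each engagement.
            delta = self.step if succeeded else -self.step
            self.value = max(self.lo, min(self.hi, self.value + delta))

    air = TacticValue()
    for _ in range(10):      # player lets air attacks keep "working"
        air.update(True)     # value climbs from 0 up to 100
    for _ in range(10):      # player unveils the hidden anti-air
        air.update(False)    # ten failures later the value is only back to 0;
    print(air.value)         # it needs ten more failures to hit -100, twice the climb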

the basic flaw is your assumption that ingame "AI" is intelligent.

it is not. no matter how "smart" you program an AI to be, the player is many, many steps ahead. our most important advantage is our ability to learn: no computer ( for a while at least ) that is available to play games on will be able to take the gamestate, make abstractions and abstractions of abstractions, compare them to previous experience, make conjectures about likely future events, and still render the scene and manage the game.

humans are born with these abilities, and although a computer will almost always beat us at a game we've just acquired, as we get a feel for its tactics we'll turn them against the AI ( because it's preprogrammed ), while the AI cannot reciprocate.

in some games we may not even realise what we are doing. it can sometimes be hard to watch a new player make an "obvious" mistake because you can *feel* what the AI's response would be.
I guess that's where some sort of randomness should be carefully inserted into the tactics. But then again, it can lead to totally stupid decisions, while at the same time making the AI harder to predict.
Quote:Original post by persil
I guess that's where some sort of randomness should be carefully inserted into the tactics. But then again, it can lead to totally stupid decisions, while at the same time making the AI harder to predict.


the chances that a random action will actually advance the AI's position are very small. you could build a new AI to predict which random things would be beneficial - but it would still suffer the same weaknesses.

if you were playing an RTS ( for example ) where every so often the AI did something completely random ( send some troops away from where they're needed, defend some worthless land ), you'd probably wonder what the AI programmers were ingesting.

another point, retaining the RTS example: any "random" action, if it's going to have a significant effect on the AI player's position, should involve a large body of troops. otherwise it's not going to affect the game at all.

but doing something random with large bodies of troops is very risky: the player may figure out that there are, say, 4 different random things -- attack a strange location, charge a certain area, move to a point on the map, and evacuate an area.

in realising this, the player can predict the outcome of the "random" action taken by the AI as soon as it is taken - an ability the AI lacks.
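to make that concrete, the "random" AI is usually just picking from a short hard-coded menu like this (action names invented for illustration):

    import random

    RANDOM_ACTIONS = [
        "attack a strange location",
        "charge a certain area",
        "move to a point on the map",
        "evacuate an area",
    ]

    def pick_random_action():
        # looks unpredictable to the AI programmer; to a player who has watched a few
        # games it's a four-entry list, and the outcome of each entry is predictable.
        return random.choice(RANDOM_ACTIONS)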
Quote:Original post by Axiverse
Then what about games like Pong? The computer should never lose.


and it won't, if that's what you program it to do; there are plenty of games where it's possible to code a "perfect" AI. We don't design game AI to win, though - we design it to provide a fun challenge for the player. In the example of Pong, we may give the AI a simulated "reaction time", or have it move the paddle around semi-randomly sometimes, etc.

An AI the player is unable to defeat isn't fun.
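A minimal sketch of what simulated reaction time plus semi-random paddle movement might look like (all the numbers here are made-up tuning values):

    import random

    class PongAI:
        def __init__(self, reaction_frames=8, jitter=12.0, speed=5.0):
            self.reaction_frames = reaction_frames  # frames of delay before reacting
            self.jitter = jitter                    # random aiming error, in pixels
            self.speed = speed                      # max paddle movement per frame
            self.ball_history = []                  # queue of recent ball positions

        def update(self, paddle_y, ball_y):
            # React to where the ball was reaction_frames ago, not where it is now,
            # and aim at a slightly wrong spot so the AI can actually miss.
            self.ball_history.append(ball_y)
            if len(self.ball_history) <= self.reaction_frames:
                return paddle_y
            target = self.ball_history.pop(0) + random.uniform(-self.jitter, self.jitter)
            if target > paddle_y:
                return paddle_y + min(self.speed, target - paddle_y)
            return paddle_y - min(self.speed, paddle_y - target)

Tuning reaction_frames and jitter up or down is what turns the same paddle-tracking logic into an easy or hard opponent.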

- Jason Astle-Adams
