Idea Kickin'


Hey. As some of you may have seen from my posts, I've been working on a game called Shadow Armada for a while, but I'm really stuck on where to take the AI (specifically the piloting of the ship; making it fire is easy enough).

I've tried neural nets (interface = turn left/right, speed up/slow down, etc.), having them play many thousands of generations; I did a 10,000-generation trial too, with little success. After fiddling with that for a while, I moved to a more "open" evolving structure and gave each AI a tree of decisions: each node took an input value, compared it to a reference, and if the input was greater it went down one branch and performed a function, and if it was less it went down the other. It could also compare inputs against each other. This did evolve to some effect, but even after many, many trials it was still much too weak to actually use.

Next I tried my current scheme: an array of tweakable values (20 values right now, evolved both via GAs and hand-tweaked to match player input), which are used to rate 100 random paths the computer could take. For instance, hitting an asteroid deducts X points from a path's rating, ramming an opponent who will take damage adds Y points, being in a missile's path takes off Z points, and so on. It works alright... until you get a grip on the game and realize it has no strategic skill whatsoever. It's essentially a blind-survival algorithm, and I couldn't quite figure out how to make it perform better in that department.

So I'd like to know what you guys think I should go for with the AI. If you want a description of the game, the quick version is a turn-based squad space shooter, sort of. Anyway, the link is here and I would really appreciate any feedback on the game/AI you can give; I'm pretty stumped as to a better approach.

By the way, the current AI is all-seeing: it can tell which path you've decided to take, which is another thing I'm definitely ditching in the next version. I don't want any cheats. If you don't understand how to play, the readme describes it alright, and if not that, the online tutorial does. I'm putting in an in-game tutorial sometime soon. Thanks!

Walt
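
A rough sketch of the path-rating scheme described above (all names, weights, and event types here are hypothetical placeholders, not the actual Shadow Armada values):

```python
# Hypothetical weight table standing in for the 20 tweakable values.
WEIGHTS = {
    "asteroid_hit": -50.0,   # X: penalty for striking an asteroid
    "ram_damage":    30.0,   # Y: bonus for ramming an opponent who takes damage
    "missile_path": -40.0,   # Z: penalty for sitting in a missile's path
}

def score_path(events, weights):
    """Sum the weighted value of every event a candidate path triggers."""
    return sum(weights[e] for e in events)

def best_path(candidate_paths, weights):
    """Pick the highest-rated of the randomly generated candidate paths."""
    return max(candidate_paths, key=lambda p: score_path(p["events"], weights))

# Toy usage: three fake candidate paths.
paths = [
    {"id": 0, "events": ["asteroid_hit"]},
    {"id": 1, "events": ["ram_damage"]},
    {"id": 2, "events": ["ram_damage", "missile_path"]},
]
chosen = best_path(paths, WEIGHTS)  # path 1 scores highest here
```

This is exactly the "blind survival" behaviour the post complains about: each path is scored only on its immediate consequences, with no longer-term plan.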

What do your current versions do wrong? In what ways don't they meet your requirements?

Two major things: they don't alter strategy at all over the course of a game (early-game strategies should tend to differ from end-game strategies), and, mainly, they just aren't challenging enough.

Another question is what sort of prediction I should use for the AI. I haven't done much work with predictive algorithms at all. Is there some algorithm/heuristic that would work well? Or is prediction a bad idea, and should I just stick with generalized areas?

I'm looking for different ideas to kick around, really. I don't know what else to try.

You could do it from a higher level, i.e. instead of giving them the option to "turn left/right, speed up/slow down, etc.", let them choose from very basic tactics (evade fire, close in on target, attempt to get behind the target, fire). Then you can hard-code the tactics (give them some error to adjust difficulty/skill level) and let 'em rip.

You could even let them choose multiple weighted tactics. Closing in for a shot while evading fire is a much different strategy than closing in while firing (i.e. bum-rushing).

This would still give you the opportunity to evolve behaviors from them, but you might get somewhere faster, at the expense of starting with larger building blocks. It'd be even niftier if you gave different enemy ships different capabilities; then each would develop its own set of tactics to deal with the situation as best it could.
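
The weighted-tactics idea could be sketched roughly like this (the tactic outputs and weights are invented for illustration):

```python
# Hypothetical tactic functions: each returns a (turn, throttle) suggestion.
def evade_fire(state):      return (1.0, 0.5)   # hard turn, half throttle
def close_in(state):        return (0.0, 1.0)   # straight at target, full speed
def fire_when_close(state): return (0.0, 0.0)   # hold course while shooting

def blend_tactics(state, weighted_tactics):
    """Combine several hard-coded tactics into one control output,
    weighted by how much the AI currently favours each tactic."""
    turn = throttle = total = 0.0
    for tactic, weight in weighted_tactics:
        t, th = tactic(state)
        turn += weight * t
        throttle += weight * th
        total += weight
    return (turn / total, throttle / total)

# "Bum-rushing": mostly close in, fire as you go, barely evade.
controls = blend_tactics({}, [(close_in, 0.7),
                              (fire_when_close, 0.2),
                              (evade_fire, 0.1)])
```

A GA could then evolve just the weight vectors rather than low-level control, which is a much smaller search space.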

This is so much easier when someone else is going to have to code it :)

You said it, krez :-)

Perhaps you could use an rt (real-time, spiking-style) network?

Basically, you shove in your inputs.
The inputs change the internal state of each neuron until it gets above the activation potential; then the neuron fires and loses all its "charge".

You also have a decay weight, which is the percentage of its charge a neuron loses each step.

The outputs are for each of the tactics.

Perhaps train one net to do the tactic observations, and another as the pilot? (You'd feed the outputs of one into the inputs of the other, after training them both separately.)
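
A minimal sketch of the kind of neuron described above (a leaky integrate-and-fire unit; the threshold and decay values are arbitrary):

```python
class LIFNeuron:
    """Leaky integrate-and-fire neuron: inputs raise an internal charge;
    when the charge crosses the activation threshold the neuron fires and
    loses all its charge; otherwise it leaks a fixed percentage each step
    (the decay weight)."""
    def __init__(self, threshold=1.0, decay=0.1):
        self.threshold = threshold
        self.decay = decay        # fraction of charge lost per step
        self.charge = 0.0

    def step(self, input_current):
        self.charge += input_current
        if self.charge >= self.threshold:
            self.charge = 0.0     # fire and lose all charge
            return True
        self.charge *= (1.0 - self.decay)  # leak
        return False

# Feed a constant input: the neuron charges up, fires, and starts over.
n = LIFNeuron(threshold=1.0, decay=0.1)
fired = [n.step(0.4) for _ in range(5)]
```

Because charge persists between steps, the neuron's output depends on recent history, which is what makes this style of net interesting for timing-sensitive tactics.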

From,
Nice coder

Ok, sounds good enough, I just need to figure out my input set I s'pose...

So then the two neural nets would be something like...

Neural Net Immediate:
Takes input (asteroid threat of the path, missile threat, etc.) and produces a single rating float based on the immediate consequences of moving to a point.

This net would take known input (i.e., I KNOW I will strike this asteroid or this missile), so I'm not too worried about it...

The problem is..

Neural Net Strategy:
What would I input for this? I mean, I could take things like the probability of being behind/beside a ship, and then distance to teammates? What sort of inputs would I use for a neural net that decides general strategy? (An rt net does sound good for a type of "overmind" net, btw. Thanks, coder.)
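
The two-net split could be wired together something like this. The "nets" here are hand-written stand-ins rather than trained networks, and every input name is a guess; the point is only to show the data flow from strategy rating to final point rating:

```python
def immediate_net(asteroid_threat, missile_threat):
    """Stand-in for the trained immediate net: lower threat, higher rating."""
    return 1.0 - 0.5 * asteroid_threat - 0.5 * missile_threat

def strategy_net(prob_behind_enemy, dist_to_teammates):
    """Stand-in for the strategy net: favour flanking and staying near allies."""
    return 0.7 * prob_behind_enemy + 0.3 * (1.0 - dist_to_teammates)

def rate_point(point, strategy_weight=0.4):
    """Blend the immediate (survival) rating with the strategic rating."""
    imm = immediate_net(point["asteroid_threat"], point["missile_threat"])
    strat = strategy_net(point["prob_behind"], point["dist_teammates"])
    return (1 - strategy_weight) * imm + strategy_weight * strat

# A safe, well-placed point rates near 1.0; a dangerous, aimless one near 0.0.
good = rate_point({"asteroid_threat": 0.0, "missile_threat": 0.0,
                   "prob_behind": 1.0, "dist_teammates": 0.0})
bad = rate_point({"asteroid_threat": 1.0, "missile_threat": 1.0,
                  "prob_behind": 0.0, "dist_teammates": 1.0})
```

The `strategy_weight` knob is one natural place to make the AI shift emphasis between early-game and end-game play.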

Then one more thing comes to mind...

How does this sound for firing:
Store the last Y scenarios where a missile of type X hit the target. When a shot is being considered, take your position versus the target's position and look for an instance within a certain range (a "buoyancy", i.e. tolerance) of a past success scenario... something along those lines. So, a "memory" of the last times these weapons worked. Then every game or so, small random values would be added (mutation) to ensure that the AI constantly probes new missile techniques. Would this be an effective way of teaching the computer to use missiles well? Or how else should I go about this?
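
That missile-memory idea might look something like this (the capacity, tolerance, and mutation amounts are arbitrary, and relative position is used as the whole "scenario" for simplicity):

```python
import math
import random

class MissileMemory:
    """Remember the relative positions (dx, dy) at which past shots of one
    missile type hit, and fire again when the current geometry is close to
    a remembered success."""
    def __init__(self, capacity=20, tolerance=50.0):
        self.successes = []         # last Y scenarios where this missile hit
        self.capacity = capacity
        self.tolerance = tolerance  # how close counts as "close enough"

    def record_hit(self, dx, dy):
        self.successes.append((dx, dy))
        if len(self.successes) > self.capacity:
            self.successes.pop(0)   # forget the oldest scenario

    def looks_promising(self, dx, dy):
        return any(math.hypot(dx - sx, dy - sy) <= self.tolerance
                   for sx, sy in self.successes)

    def mutate(self, amount=5.0):
        """Jitter the memories slightly each game so the AI keeps probing
        new firing situations (the mutation step described above)."""
        self.successes = [(sx + random.uniform(-amount, amount),
                           sy + random.uniform(-amount, amount))
                          for sx, sy in self.successes]

mem = MissileMemory()
mem.record_hit(100.0, 0.0)
near = mem.looks_promising(120.0, 10.0)    # close to the past hit
far = mem.looks_promising(-300.0, 400.0)   # nowhere near any past hit
```

One caveat with this scheme: it only remembers where shots worked, not why, so against a player who changes habits the memory can go stale until mutation rediscovers something.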

Deterministic finite automaton. You can learn about them from my school's web pages at http://www.csl.mtu.edu/cs2311/www/lecture/finiteAutomaton.pdf. It's some pretty heavy stuff, and that's probably not the best resource in the world. I'd decide if it's for you, and if so, try finding a book on it. You may have to read some of the other notes from the course to understand the DFA notes. You can find those here.
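
For a concrete feel, here is a toy DFA driving high-level ship behaviour; the states, events, and transitions are all invented for illustration:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("patrol", "enemy_spotted"): "attack",
    ("attack", "low_health"):    "flee",
    ("attack", "enemy_lost"):    "patrol",
    ("flee",   "healed"):        "patrol",
}

def run_dfa(start, events):
    """Feed events through the transition table; an unknown
    (state, event) pair leaves the state unchanged."""
    state = start
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

final = run_dfa("patrol", ["enemy_spotted", "low_health", "healed"])
```

A DFA like this is easy to debug and tune, though on its own it does not learn; the earlier ideas in the thread (evolved weights, reaction memories) would sit inside the individual states.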

In a little game I'm working on now, I implemented a little thing for firing missiles. Each shot, it tracks the player's velocity and reaction to incoming fire (stopping, changing direction, or speeding up) and stores that for reference the next time it fires at the player, to try to simulate learning how the player reacts to incoming fire; it then calculates its next aim from that probability. Also, if the enemy is damaged, it does a fight-or-flight check based on its current damage, where "fight" means maneuvering into the best possible firing position with the least chance of being damaged again.

I have it shooting one missile at a time at the moment while I work out the pathfinding for maneuvering to safe firing positions while keeping track of the player's position and updating its tactic.

The graphics suck, but I'm hoping it will be the smartest weak-looking game that ever kicked your butt :)
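
The reaction-tracking part of that could be sketched as a simple frequency model (a stand-in for whatever the actual game records):

```python
from collections import Counter

class ReactionModel:
    """Count how the player has responded to incoming fire and aim at the
    most likely reaction next time."""
    def __init__(self):
        self.reactions = Counter()

    def observe(self, reaction):
        """Record what the player did the last time we fired at them."""
        self.reactions[reaction] += 1

    def predict(self):
        """Return the reaction seen most often so far, or None if we have
        never fired at this player."""
        if not self.reactions:
            return None
        return self.reactions.most_common(1)[0][0]

model = ReactionModel()
for r in ["turn", "stop", "turn", "speed_up", "turn"]:
    model.observe(r)
likely = model.predict()  # "turn" is the most frequent reaction recorded
```

A real version would probably condition on context (distance, player speed) rather than keeping one global count, but the frequency idea is the same.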
