qwerzor

combined AI


Recommended Posts

I was wondering if there are any samples online of AI libraries that combine multiple AI theories in one entity? At the moment I'm working on my own game engine, and I had this idea about being able to inject multiple AI theories into one monster, like having a monster contain both A* and flocking abilities. However, I'm uncertain how these two different theories are going to function properly together. Should I just pick the theory that has the biggest score increase, or would I have to combine multiple theories at each step?

---
Those two techniques are relatively easy to combine. Implement flocking behaviors first, and then add a behavior that attracts the agent to the next node in the A* path.
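To make that concrete, here's a minimal sketch (all names and weights are illustrative, not taken from any particular library) of treating both the flock rules and the A* path as steering forces that simply get summed:

```python
# Sketch: combining flocking with A* path-following as weighted steering
# forces. Only a cohesion rule is shown; separation and alignment would
# be added the same way.

def add(a, b):                     # 2-D vector helpers
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def scale(v, s):
    return (v[0] * s, v[1] * s)

def seek(position, target):
    """Steering force pulling the agent toward a target point."""
    return sub(target, position)

def cohesion(position, neighbors):
    """Classic flocking rule: steer toward the neighbors' center of mass."""
    if not neighbors:
        return (0.0, 0.0)
    cx = sum(n[0] for n in neighbors) / len(neighbors)
    cy = sum(n[1] for n in neighbors) / len(neighbors)
    return seek(position, (cx, cy))

def steering(position, neighbors, next_path_node, w_flock=1.0, w_path=2.0):
    """Blend flocking with path-following by summing weighted forces.
    The A* planner only supplies next_path_node; it never moves the agent."""
    force = (0.0, 0.0)
    force = add(force, scale(cohesion(position, neighbors), w_flock))
    force = add(force, scale(seek(position, next_path_node), w_path))
    return force
```

The weights decide how strongly the agent sticks to the path versus the flock, which is why the two techniques coexist without either needing to know about the other.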

---
I know that, but I'm looking for a generic way to combine different AI theories, not just flocking and A*; that was just an example. Additionally, I want to be able to change the intelligence of entities at runtime.

At the moment I've thought up some software architecture, but I'm still not sure how I can combine these different theories other than only selecting the one with the best score.

[attached architecture diagram: brainjg9.jpg]

---
I'm not convinced that there's much worth in trying to think about a variety of quite different techniques in the same way - they do different things, require different interfaces, and produce different outputs.

On the other hand, there are some general purpose methods used for combining multiple systems into one, such as blackboard systems or subsumption architectures.
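As a rough illustration of the blackboard idea (all names and numbers here are made up for the example): independent subsystems post proposals, each with a confidence, to a shared board, and an arbiter picks among them:

```python
# Minimal blackboard sketch: knowledge sources write proposals to a
# shared board; a trivial arbiter picks the highest-confidence one.

class Blackboard:
    def __init__(self):
        self.entries = {}  # proposal name -> (detail, confidence)

    def post(self, name, detail, confidence):
        """A knowledge source writes its proposal and how sure it is."""
        self.entries[name] = (detail, confidence)

    def best(self):
        """Trivial arbiter: pick the highest-confidence proposal."""
        return max(self.entries, key=lambda k: self.entries[k][1])

board = Blackboard()
board.post("flee", "run toward the exit", confidence=0.3)  # e.g. from a threat map
board.post("attack", "charge the player", confidence=0.8)  # e.g. from a planner
# board.best() returns "attack"
```

Real blackboard systems are considerably richer (multiple entry types, scheduling of knowledge sources), but the core shape is this: subsystems never talk to each other directly, only through the board.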

Oh, and they're not really 'theories'. They don't predict anything and they aren't testable.

---
Best score based on the algorithm's own prediction is useless. I don't know how to exemplify it without a short parable, so bear with me.

An old king had two advisers, Alphabeta and Geeay. One day, seeing that his neighbor was training a small but well-equipped army, he summoned both to his chamber to discuss the best strategy to follow. Geeay quickly intuited that, with a small army that could quickly be surrounded and beaten into submission, the neighboring kingdom would certainly surrender. Meanwhile, the prudent Alphabeta went ahead and simulated a battle between the two kingdoms, and realized that the enemy would certainly draw the king's army into a ravine where their numbers would be for naught, and so proposed an unsatisfying but safe truce. The old king, emboldened by dreams of conquest, foolishly led the attack, and perished with his men in the ravine Alphabeta predicted would be his doom.

And now you know why I don't post to Writing in Games. :)

---
Quote:
Original post by ruby-lang
Best score based on the algorithm's own prediction is useless. I don't know how to exemplify it without a short parable, so bear with me.

An old king had two advisers, Alphabeta and Geeay. One day, seeing that his neighbor was training a small but well-equipped army, he summoned both to his chamber to discuss the best strategy to follow. Geeay quickly intuited that, with a small army that could quickly be surrounded and beaten into submission, the neighboring kingdom would certainly surrender. Meanwhile, the prudent Alphabeta went ahead and simulated a battle between the two kingdoms, and realized that the enemy would certainly draw the king's army into a ravine where their numbers would be for naught, and so proposed an unsatisfying but safe truce. The old king, emboldened by dreams of conquest, foolishly led the attack, and perished with his men in the ravine Alphabeta predicted would be his doom.

And now you know why I don't post to Writing in Games. :)


This.

Seriously though, you need to be looking at a hybrid architecture here - each method you talk about needs to be "modulated" by the other methods you're using.

Sometimes (as in the excellent parable) the two methods will provide contradictory evidence, and this is where an agent's "personality" will come into play: parameters for aggression, pensiveness, and whatever else you want to quantify.

Alternatively you could assign a utility value to all of your outcomes, and tack a Bayesian decision network on top of the whole thing, thus generating an optimum decision-to-utility balance... hey, it's only CPU cycles, who's counting! :)
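One possible sketch of that kind of utility arbitration, with personality doing the modulating (the traits, numbers, and function names are all invented for illustration):

```python
# Sketch: each technique proposes an action with a raw utility, and
# personality parameters modulate those utilities before choosing.

def choose_action(proposals, personality):
    """proposals: list of (action, raw_utility, trait) tuples.
    personality: dict mapping trait -> multiplier, e.g. {"aggression": 1.5}."""
    def weighted(p):
        action, utility, trait = p
        return utility * personality.get(trait, 1.0)
    return max(proposals, key=weighted)[0]

proposals = [
    ("attack",  0.6, "aggression"),   # e.g. from a game-tree search
    ("retreat", 0.7, "pensiveness"),  # e.g. from a threat-map heuristic
]

coward = {"aggression": 0.5, "pensiveness": 1.2}
# choose_action(proposals, coward) picks "retreat" (0.3 vs. 0.84)
```

The point is that the contributing techniques stay untouched; the personality only reweights their outputs, so the same agents can be made brave or timid by changing a few numbers.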

---
When using hybrid techniques you need to think seriously about whether they will achieve anything that a single technique can't, and whether it's worth it.

I've had experience using PSOs to train neural networks, and using ESs to generate game trees, and often the added complexity just isn't worth it.

Occam's razor applies here: the simplest solution is often the best one.

---
Quote:
Original post by Coldon
I've had experience using PSOs to train neural networks, or even using ESs to generate game trees and often the complexity just isn't worth it.

"Particle Swarm Optimization" wasn't too hard to figure out with a bit of googling, but I have no idea what "ES" is. People from different backgrounds and skill levels read these forums and it's probably best to not use many acronyms.

[EDIT: Is it "Expectation-Solution"?]

---
The "theories" in and of themselves don't do anything. They're not magical black box algorithms that grant intelligence. They no more grant "intelligence" for an entity than does algebra. They are algorithms that you have to apply to certain situations to get certain results.

A* is just a way to find a path through a network of nodes. It doesn't choose a destination, and it doesn't move an entity from point A to point B; it simply finds a path between two nodes, end of story.

Similarly, flocking doesn't get you anything volitional for an entity. It's just a ruleset that you must apply to other systems (locomotion, physics, etc) to actually get something "intelligent".

State machines, same thing. It's just an algorithm that simplifies the construction of entity behavior.

Most AI already combines a number of techniques, each applied specifically to the domain of "intelligence" for which it is useful. For instance, the AI in the FPS game I'm working on has: A* pathfinding, state-machine-driven animation, heuristically driven behaviors, and event-driven "senses". All provide inputs or outputs of the total AI system. Point being, these "theories" you talk about are not complete AI solutions; they are simply building blocks that can be combined to create AI.

If you are interested in trying out various techniques, simply create a basic entity system with a static, well-defined interface. Then you can try out various combinations of algorithmic techniques to form different types of "brains".
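A minimal sketch of such an entity system (all class names are hypothetical): every "brain" exposes the same interface, so different combinations of techniques can be swapped in and out, which also covers the earlier requirement of changing an entity's intelligence at runtime:

```python
# Sketch: entities hold a swappable "brain" object behind one fixed
# interface; each brain can internally combine whatever techniques it likes.

class WandererBrain:
    def think(self, entity):
        return "wander"            # e.g. flocking / random-walk internals

class HunterBrain:
    def think(self, entity):
        return "chase_player"      # e.g. A* + heuristics internals

class Entity:
    def __init__(self, brain):
        self.brain = brain

    def set_brain(self, brain):
        """Change the entity's intelligence at runtime."""
        self.brain = brain

    def update(self):
        return self.brain.think(self)

monster = Entity(WandererBrain())
monster.update()                   # "wander"
monster.set_brain(HunterBrain())   # hot-swap the AI mid-game
monster.update()                   # "chase_player"
```

Because the engine only ever calls the fixed interface, the individual techniques inside each brain remain free to use whatever inputs and outputs they need.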

-me

---
Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.
