Algorithmic Ecology

AEcology: Upcoming AI generation library


After a long period of development since my 2014 announcement post, I'm happy to be back here to present the culmination of our efforts over the past couple of years. I've received a lot of good feedback from developers since then, and I'd love to hear more now that we have a more mature product ready for release.

 

Long story short: we are releasing a machine learning library capable of producing efficient and unique AI behaviors through a simple interface, and I'm here to gather initial feedback and hopefully find developers interested in becoming early adopters. I have a specific pitch below for anyone who is interested, but if you'd rather skip to the meat and see what it's all about, please check out the website at https://www.aecology.com. Thanks again for all you do, guys.

 

AEcology is an AI generation library that has been in development since mid-2013. Its mission is to let developers generate machine-learning AI models, an approach that seems inevitable for the field but currently has few viable implementations. AEcology focuses on delivering the key promises of this paradigm. Specifically:

  • Augmenting or replacing traditional AI with behaviors or decisions that appear diverse and natural.

  • Producing control solutions which are prohibitively difficult or tedious to design manually.

  • Enabling AI to scale with environmental factors such as player skill level.

  • Optionally allowing developers to fit parameters to the situation that calls for them (for example, the color, size, strength, or FOV of an agent, or even an artifact of a level/environment).

  • Effortless synthesis of "Artificial Life"-type systems.

The engine itself is loosely based on neural networks and genetic programming. However, instead of focusing on the novelty of machine learning itself, AEcology applies practical concepts from the field that lend themselves to producing a unique user experience with minimal development effort. In past discussions with developers, the primary concerns about this approach were runtime speed and the impracticality of training the models. For that reason, our initial demos of AEcology in practice (available on the website) focus on these two concerns.

To address speed, AEcology uses a mathematical abstraction of the neural network feedforward algorithm that greatly improves runtime performance. To show this capability, we have built a demo with 500-700 on-screen agents running at 60 FPS.
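The specific abstraction isn't described here, but the usual way to make feedforward cheap for hundreds of agents is to batch them all into a single matrix multiply per layer rather than looping per agent. A minimal NumPy sketch (the layer shapes and tanh activation are my assumptions for illustration, not AEcology's actual API):

```python
import numpy as np

def feedforward_batch(inputs, weights, biases):
    """Evaluate one network for many agents at once.

    inputs:  (n_agents, n_inputs) array, one row per agent
    weights: list of (n_in, n_out) matrices, one per layer
    biases:  list of (n_out,) vectors, one per layer
    """
    a = inputs
    for W, b in zip(weights, biases):
        a = np.tanh(a @ W + b)  # one matrix multiply covers every agent
    return a

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 2))]
biases = [np.zeros(8), np.zeros(2)]
outputs = feedforward_batch(rng.standard_normal((600, 4)), weights, biases)
print(outputs.shape)  # (600, 2): two control outputs per agent
```

With ~600 agents and small networks, this reduces to a handful of matrix multiplies per frame, which is well within budget at 60 FPS.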

To address impracticality, AEcology uses simple, flexible agent-management structures that greatly simplify the unsupervised learning process. To showcase this, one of our demos shows AEcology learning the basics of a simple soccer game in under a minute of training time.
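For readers unfamiliar with how an unsupervised, evolution-style setup learns a behavior at all, here is a toy mutate-and-select loop over flat weight vectors. The population size, mutation scale, and "drive every weight toward 0.5" objective are purely illustrative assumptions, not AEcology's training code:

```python
import random

random.seed(0)  # reproducible toy run

def evolve(fitness, genome_len=10, pop_size=20, generations=50):
    """Tiny genetic loop: keep the fittest half, refill with mutants."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # best genomes first
        survivors = pop[: pop_size // 2]      # keep the top half
        pop = survivors + [
            [w + random.gauss(0, 0.1) for w in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

# toy objective: higher fitness as weights approach 0.5
best = evolve(lambda g: -sum((w - 0.5) ** 2 for w in g))
print(sum((w - 0.5) ** 2 for w in best))  # small residual error
```

The point of a fast-convergence demo like the soccer one is that a loop of this shape reaches a usable prototype quickly; the hard part in practice is designing the fitness function so "usable" matches what the designer wants.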

AEcology is currently preparing for its first beta release. The first release, termed version Zero, supports single-modality objective functions for both continuous function approximators and classifiers. In plainer terms, the generated AI can currently produce either single-objective control outputs (moving, turning, navigating, etc.) or binary outputs (decisions, classifications, etc.). We have developed a model for higher-level thought (essentially deciding on the appropriate action and then carrying it out), but the first step is to gather as much user feedback as possible on the current version before adding the next layer of complexity. That said, AEcology fully supports synthesizing multi-objective functions by cascading existing structures; on our website we provide a video demonstrating an agent that must both evade and attack an opponent using this technique.
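The cascading idea (composing separately trained single-objective pieces into multi-objective behavior) can be sketched generically. The `decide`, `evade`, and `attack` names and the distance rule below are hypothetical stand-ins for trained models, not AEcology's interface:

```python
def cascade(decide, evade, attack):
    """Chain a binary decision model with two single-objective controllers.

    decide maps state -> bool; evade/attack each map state -> action.
    Each piece can be trained on its own objective, then composed at runtime.
    """
    def policy(state):
        return evade(state) if decide(state) else attack(state)
    return policy

# stand-ins for trained models: flee when the opponent is close
policy = cascade(
    decide=lambda s: s["distance"] < 5.0,
    evade=lambda s: "move_away",
    attack=lambda s: "move_toward",
)
print(policy({"distance": 3.0}))   # move_away
print(policy({"distance": 12.0}))  # move_toward
```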

We're currently looking for game or simulation developers to implement AEcology for their project and provide feedback to help us improve the software. The software license grants free use for personal, research and most commercial purposes. We look forward to the chance to provide close collaboration and support to early adopters to help their project succeed (which, in turn, helps AEcology grow).

For more information and to see video demos of AEcology in action, check out http://www.aecology.com. Please excuse the sloppy layout.

 

 


Do you have any data indicating whether this approach requires more or less effort/time/money/etc. than more traditional and well-understood AI modeling techniques?

 

Personally, I tend to be deeply skeptical of machine learning in games, because unless machine learning is a principal mechanic in the game itself, it tends to fight very viciously with authorial control over the design of the game and the game characters' behaviors. In my experience it's extremely hard to customize and fine-tune without doing a ton of retraining and other annoying "pipeline" tasks.

 

Which is why I'd be concerned about whether or not it really saves anything over tools I already know how to use.

 

Diverse and natural behaviors? Piece of cake. I'm sure Dave will be along shortly with a handy link to our joint experiences in that realm.

 

Difficult or tedious to build the desired control systems? I don't see this crop up too often except with unproven tools that haven't been tested in fire yet (nothing personal).

 

AI that scales with various factors? Again, I do this stuff in my sleep.

 

Parameterized AI? Yeah, got that nailed down too :-)

 

 

So it can learn soccer. Soccer is a bloody simple game, and you're not even saying it produces superior AI play. One minute is a nice advertising blip but how long does it take to learn soccer well enough to compete as a game AI? Can it learn poker? Can it learn to play poker like a natural? Can it learn to raid in WoW?

 

 

Have you sat down an actual game designer to use this thing? Can they make it do what they want easily? Can they make it sing?

 

 

I guess the bottom line of my cynical line of inquiry is this: how does your product differentiate itself from the numerous (often ANN-based) AI packages that have failed to deliver in the past? What are you doing differently enough to give you a prospect of success?

 

 

I don't mean to be overly harsh or negative about your efforts. Please don't take it as such. Rather, think of it like a free temperature reading of the game AI community ;-)


No worries about the criticisms, ApochPiQ. The effort cannot succeed without knowing what the hurdles are.
 

I'd be happy to answer your points if I could trouble you to elaborate on them. In the field of controls, for example, machine learning is used regularly to augment or supplant classical methods, especially when systems have high dimensionality in gain search space. Coming from a controls engineering background myself, I was particularly excited about this aspect. I could provide you with some examples if you like.

 

It's possible the wording around the soccer demo wasn't clear about its purpose. The video exists to show how quickly prototype convergence occurs, since slow training is a long-standing concern with ANN systems. Implementing such a system to learn poker, raid, or compete is entirely possible. We can certainly look into those; however, competitive soccer AI and poker have already been done in academia, and it may be difficult to justify the time investment that setting up raid training would require. I'd imagine we would need a rather large budget to move in that direction at this stage.

 

As for demoing with game designers, that was the original intent of posting this on the GameDev forums. I may not have made that as clear as I could have.

 

Would you mind digging up a few references to the past ANN-based agent libraries you're referring to? I'd be happy to explain the distinction, but I'm not clear on which ones you mean.

 

I think I made a mistake in burying this in a wall of text, but please keep in mind that this is the alpha prototype of the first iteration of the software. Any suggestions you have will absolutely help.

 

 

captain_crunch:

Thanks for taking a look! One of the highlight features of the library is an ability to learn multiple different objectives in parallel, so the library could absolutely learn positional play (one of our other videos on competitive learning was made to show this type of functionality). Do you think that is something you would be interested in seeing? 

Edited by Algorithmic Ecology


 

 

In the field of controls, for example, machine learning is used regularly to augment or supplant classical methods, especially when systems have high dimensionality in gain search space. Coming from a controls engineering background myself, I was particularly excited about this aspect. I could provide you with some examples if you like.

 

You should elaborate just a smidge better on what you're calling "controls" as by reflex I'm substituting SIS for every reference of the word, in which case you lost my attention quite quickly.

If it doesn't have a proven industrial use then it doesn't have a game development use either, in my opinion. All surviving "fancy" AI schemes mimic industry. Utility is basically LOPA, which is proven.

I'm terribly sorry, but given that you're suggesting machine learning, I really doubt you can enumerate the many superior alternatives that have been in use by the chemical industry for years. Safety is serious, and AI is a part of that.

Edited by JSandusky


 

You should elaborate just a smidge better on what you're calling "controls" as by reflex I'm substituting SIS for every reference of the word, in which case you lost my attention quite quickly.

If it doesn't have a proven industrial use then it doesn't have a game development use either, in my opinion. All surviving "fancy" AI schemes mimic industry. Utility is basically LOPA, which is proven.

I'm terribly sorry, but given that you're suggesting machine learning, I really doubt you can enumerate the many superior alternatives that have been in use by the chemical industry for years. Safety is serious, and AI is a part of that.

 

 

No apologies necessary; I'm familiar with that realm as well. If you search for "control theory" on Wikipedia, the result you get is the context I'm speaking to. I'm not sure there's another word for it, as that is the most common usage of the term.

In industrial robotics, and particularly in chemical and power generation plants, the feedback control used is normally something called Model Predictive Control, which involves a learning-type optimization to anticipate changes in plant processes. It is the de facto standard for those systems and has had great success; it has been the industry standard in the chemical industry since the 1980s. I believe the systems you're citing are generally used as redundancy layers. The two are different, but not incompatible.
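For readers outside controls, the MPC loop being described can be sketched in a few lines: simulate each candidate control forward through a plant model over a short horizon, and pick the one with the lowest predicted cost. The toy temperature plant and candidate set below are illustrative only:

```python
def mpc_step(state, model, cost, candidates, horizon=5):
    """Pick the control whose predicted trajectory has the lowest cost."""
    best_u, best_cost = None, float("inf")
    for u in candidates:
        s, total = state, 0.0
        for _ in range(horizon):
            s = model(s, u)       # predicted next state under control u
            total += cost(s)      # accumulate cost along the horizon
        if total < best_cost:
            best_u, best_cost = u, total
    return best_u

# toy plant: temperature drifts up by 1 per step; control adds to it;
# the target temperature is 70
model = lambda s, u: s + 1.0 + u
cost = lambda s: (s - 70.0) ** 2
print(mpc_step(80.0, model, cost, candidates=[-3.0, -1.0, 0.0, 1.0]))  # -3.0
```

Real MPC solves a constrained optimization over the control sequence rather than enumerating a fixed candidate set, but the predict-then-choose structure is the same.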

 

 

 

 

Edit: Shifting gears here: I can tell there is a lot of skepticism, so maybe it would be more productive to ask what you would need to see the library do before you would consider it useful. Can you think of some practical demos you would need to see before considering the system viable?

Edited by Algorithmic Ecology

I'm not clear on how we got to control theory from game development.

For machine learning there are unquestionably domains where the techniques are applicable and the successes speak for themselves.

Game development is not (yet) one of those domains.


I wish I could remember specific toolkits but (A) my memory is shit and (B) they come and go on practically an annual basis. I'm fairly sure some web diving would turn up examples, although the failures tend to vanish into the night more often than not. The important thing is that this is not untested territory. There is a serious base of existing skepticism towards ANN-based game AI and for good reasons.


I think it's interesting that setting up more complex AI scenarios is considered prohibitively time-intensive/expensive for you. Doesn't that imply that building a real game is going to be comparatively expensive? If you can't prove that this scales to real games, even simple games, you're never going to get any traction with experienced AI developers who know how to build equivalent AI in much less time and with infinitely more capacity to fine-tune the behavior to their tastes.


To answer your most recent question, I want a complete and exhaustively itemized breakdown of how you would build, say, a simple AI for playing (as an AI player) a campaign mission in StarCraft. Better yet, show that I could implement the campaign AI for 2 or 3 missions without rebuilding the entire NN data set for each mission. If you can make a game demo that proves that it is (A) less work and (B) more controllable to use your product versus existing known techniques for scripting campaign AIs, you have my interest.

FWIW StarCraft is just an example I pulled out of my ass; you could play any game really, as long as there's a strong degree of authorial tuning that goes into making that game experience great.


The sentiment that I am gathering is that in order for the tool to be useful in any capacity, it must be shown that it can play a game like Starcraft or WoW by itself without the use of any other AI techniques. Is that an accurate assessment?

 

The library was originally built to be a lightweight supplement to designed systems. However, if it must be all-encompassing in order to work, maybe a major development pivot is in order.

Edited by Algorithmic Ecology


The sentiment that I am gathering is that in order for the tool to be useful in any capacity, it must be shown that it can play a game like Starcraft or WoW by itself without the use of any other AI techniques. Is that an accurate assessment?


No, that's not how I read ApochPiQ's post at all. What he is saying is that there are specific types of problems AI programmers face that are difficult and time-consuming, for example making the AI for a campaign mission in StarCraft. You need to prove that your middleware can be a useful tool in solving this problem or other similar problems. Desirable features include:
* fast deployment,
* easy to adjust,
* robust when facing unusual circumstances.


I'm finally going to jump in here with my standard question about any ML-based approach. This is the question I have been using for 10 years online, at GDC AI roundtables, and any other conversation where this comes up.

 

Let's say you have trained out your NN to a place where it is looking pretty good doing [whatever] in your AAA game. You call your lead designer over to see and he says, "OK, yeah... that looks kinda cool. But in this one situation here, can you make it so that he does XYZ a little more often?"

 

What do you do? Try to tweak your training data and hope nothing else goes awry? Retrain your data set from scratch and hope you get remotely close to what it was you were trying to accomplish? You are working with a black box of number soup (to mix my metaphors) and you don't know what any one of those funky numbers does or how it relates to any of the other numbers.

 

In many AI architectures, this is a 10 minute fix. Turn a knob (often one that isn't even in code), restart the game, and check to see if the behavior is what you want. If it isn't, lather, rinse, and repeat until it is. You know exactly what every bit of code or data is doing, so you know how to adjust what it is you are trying to improve. Done.
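For contrast with the retraining problem, the "turn a knob" workflow being described might look like this in a utility-style system, where each behavior's weight is a designer-facing parameter. The behavior names and scores are made up for illustration:

```python
# behaviors scored by hand-authored utility functions,
# with one designer-tunable weight per behavior
weights = {"attack": 1.0, "retreat": 1.0, "patrol": 0.5}

def choose(scores, weights):
    """Pick the behavior with the highest weighted utility score."""
    return max(scores, key=lambda b: scores[b] * weights[b])

scores = {"attack": 0.6, "retreat": 0.5, "patrol": 0.9}
print(choose(scores, weights))   # attack (0.6 beats 0.5 and 0.45)

weights["retreat"] = 1.5         # the designer turns one knob...
print(choose(scores, weights))   # retreat (0.75 now wins)
```

Every number in that table has an obvious meaning, which is exactly the transparency the "black box of number soup" objection says a trained network lacks.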

 

Remember that in most games we are not trying to generate "A behavior"... we are trying to generate "THE behavior" -- specifically the one that the designer feels serves the game's purpose best.
