AEcology: Upcoming AI generation library

This topic is 717 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

After a long period of development since my post to announce the project in 2014, I'm happy to be back here to present the culmination of our efforts over the past couple of years. I've received a lot of good feedback from developers since then, and I would love to hear some more now that we have a more mature product ready to be released.

 

Long story short, we are releasing a machine learning library capable of producing efficient and unique AI behaviors through a simple interface, and I am here to gather initial reactions and hopefully find some developers interested in becoming early adopters for the project. I have a very specific pitch for anyone who is interested below, but if you want to just skip to the meat and see what it's all about, please check out the website at https://www.aecology.com. Thanks again for all you do, guys.

 

AEcology is an AI generation library which has been in development since mid-2013. The mission of the software is to enable developers to generate machine-learning AI models, an approach that seems inevitable for the field but currently has few viable implementations. AEcology focuses on delivering the various promises of this paradigm. Specifically:

  • Augmenting or replacing traditional AI with behaviors or decisions that appear diverse and natural.

  • Producing control solutions which are prohibitively difficult or tedious to design manually.

  • Enabling AI to scale with environmental factors such as player skill level.

  • Optionally allowing developers to fit parameters to the situation which calls for them (for example, the color, size, strength or FOV of an agent, or even an artifact of a level/environment).

  • Effortless synthesis of "Artificial Life"-type systems.

The engine itself is loosely based on Neural Networks and genetic programming. However, instead of focusing on the novelty of machine learning itself, AEcology uses practical concepts from the field which can lend themselves to producing a unique user experience with minimal development effort. In our discussions with developers in the past, the primary concerns of this approach involve runtime speed issues and impracticality in training the models. For that reason, our initial demos of AEcology in practice (available on the website) focus on these concerns.

For speed concerns, AEcology uses a mathematical abstraction of the Neural Network Feedforward algorithm which greatly improves runtime performance. To show this capability, we have constructed a demo showcasing 500-700 on-screen agents running at 60FPS.
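AEcology's actual abstraction isn't published, but the general idea of making many agents' network evaluations cheap is to batch them into a few matrix operations instead of looping per agent. The sketch below is purely illustrative (the layer sizes, weights, and function names are assumptions, not AEcology's API):

```python
import numpy as np

def feedforward_batch(states, w1, b1, w2, b2):
    """Evaluate a one-hidden-layer network for many agents at once.

    states: (n_agents, n_inputs) array, one row per agent.
    Batching turns n_agents separate network evaluations into two
    matrix multiplies per frame, which is where the speedup comes from.
    """
    hidden = np.tanh(states @ w1 + b1)   # (n_agents, n_hidden)
    return np.tanh(hidden @ w2 + b2)     # (n_agents, n_outputs)

rng = np.random.default_rng(0)
n_agents, n_inputs, n_hidden, n_outputs = 600, 8, 16, 2
states = rng.standard_normal((n_agents, n_inputs))
w1 = rng.standard_normal((n_inputs, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.standard_normal((n_hidden, n_outputs))
b2 = np.zeros(n_outputs)

outputs = feedforward_batch(states, w1, b1, w2, b2)
print(outputs.shape)  # (600, 2) -- control outputs for all agents in one pass
```

With tanh activations the outputs are already bounded in [-1, 1], convenient for steering/throttle-style controls.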

For concerns over impracticality, AEcology uses very simple and flexible agent management structures which greatly simplify the unsupervised learning process. To showcase this ability, one of our demos involves showing how AEcology can learn the basics of a simple soccer game in less than 1 minute of training time.

AEcology is currently undergoing preparations for its first beta release. The first release, termed version Zero, provides support for single modality objective functions for both continuous function approximators and classifiers. In clearer terms, this means the generated AI is currently capable of either producing single-objective control outputs (moving, turning, navigating, etc) or binary outputs (decisions, classifiers, etc). We have developed a model for higher-level thought (essentially deciding on the appropriate action and then carrying out that action), but the first step is to gather as much user feedback as possible to improve the current version before adding the next layer of complexity. However, AEcology completely supports synthesis of multi-objective functions through cascading existing structures. On our website we have provided a video example demonstrating an agent which must both evade and attack an opponent using this technique.
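The cascading idea described above (a binary decision model selecting which single-objective controller gets to act) might be sketched as follows. Everything here is a stand-in: the random linear "networks" and the function names are illustrative assumptions, not AEcology's actual structures.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_net(n_in, n_out):
    """A stand-in single-objective model: one random linear layer + tanh."""
    w = rng.standard_normal((n_in, n_out))
    return lambda x: np.tanh(x @ w)

n_in = 6
evade  = make_net(n_in, 2)   # continuous controller: steering to evade
attack = make_net(n_in, 2)   # continuous controller: steering to attack
decide = make_net(n_in, 1)   # binary classifier: which mode to be in

def cascaded_policy(state):
    """Decision layer picks a mode, then that mode's controller produces
    the actual control output -- multi-objective behavior from
    single-objective pieces."""
    if decide(state)[0] > 0.0:
        return "attack", attack(state)
    return "evade", evade(state)

mode, control = cascaded_policy(rng.standard_normal(n_in))
print(mode, control.shape)
```

Each piece can be trained on its own simple objective, which is what keeps the individual training problems tractable.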

We're currently looking for game or simulation developers to implement AEcology for their project and provide feedback to help us improve the software. The software license grants free use for personal, research and most commercial purposes. We look forward to the chance to provide close collaboration and support to early adopters to help their project succeed (which, in turn, helps AEcology grow).

For more information and to see video demos of AEcology in action, check out http://www.aecology.com. Please excuse the sloppy layout.

 

 


Do you have any data indicating whether this approach requires more or less effort/time/money/etc. than more traditional and well-understood AI modeling techniques?

 

Personally, I tend to be deeply skeptical of machine learning in games, because unless machine learning *is* a principal mechanic in the game itself, it tends to fight very viciously with authorial control over the design of the game and the game characters' behaviors. In my experience it's extremely hard to customize and fine-tune without doing a ton of retraining and other annoying "pipeline" tasks.

 

Which is why I'd be concerned about whether or not it really saves anything over tools I already know how to use.

 

Diverse and natural behaviors? Piece of cake. I'm sure Dave will be along shortly with a handy link to our joint experiences in that realm.

 

Difficult or tedious to build the desired control systems? I don't see this crop up too often except with unproven tools that haven't been tested in fire yet (nothing personal).

 

AI that scales with various factors? Again, I do this stuff in my sleep.

 

Parameterized AI? Yeah, got that nailed down too :-)

 

 

So it can learn soccer. Soccer is a bloody simple game, and you're not even saying it produces superior AI play. One minute is a nice advertising blip, but how long does it take to learn soccer well enough to compete as a game AI? Can it learn poker? Can it learn to play poker *like a natural*? Can it learn to raid in WoW?

 

 

Have you sat down an actual game designer to use this thing? Can they make it do what they want easily? Can they make it sing?

 

 

I guess the bottom line of my cynical line of inquiry is this: how does your product differentiate itself from the numerous (often ANN-based) AI packages that have failed to deliver in the past? What are you doing differently enough to give you a prospect of success?

 

 

I don't mean to be overly harsh or negative about your efforts. Please don't take it as such. Rather, think of it like a free temperature reading of the game AI community ;-)


No worries about the criticisms, ApochPiQ. The effort cannot succeed without knowing what the hurdles are.
 

I'd be happy to answer your points if I could trouble you to elaborate on them. In the field of controls, for example, machine learning is used regularly to augment or supplant classical methods, especially when systems have high dimensionality in gain search space. Coming from a controls engineering background myself, I was particularly excited about this aspect. I could provide you with some examples if you like.

 

It's possible that the wording behind the soccer demo wasn't clear about the purpose of the video. Its purpose was to show how quickly prototype convergence occurs, since slow training is a long-standing concern with ANN systems. Implementing such a system to learn poker, raid, or compete is entirely possible. We can definitely look into implementing those; however, the concern is that it has already been done in academia with competing soccer AI and poker, and it may be difficult to justify the time investment that setting up raid training would take. I'd imagine we would need a rather large budget to move in that direction at this stage.

 

As far as demoing with game designers, that was the original intent of posting this on the GameDev forums. I may not have made that as clear as I could have.

 

Would you mind digging me up a few references to the past ANN-based agent libraries you're referring to? I'd be happy to define the distinction but I'm not clear on what you're referring to.

 

I think I made a mistake in burying this in a wall of text but please keep in mind that this is the alpha prototype of the first iteration of the software. Any suggestions that you may have will absolutely help.

 

 

captain_crunch:

Thanks for taking a look! One of the highlight features of the library is an ability to learn multiple different objectives in parallel, so the library could absolutely learn positional play (one of our other videos on competitive learning was made to show this type of functionality). Do you think that is something you would be interested in seeing? 

Edited by Algorithmic Ecology


 

 

In the field of controls, for example, machine learning is used regularly to augment or supplant classical methods, especially when systems have high dimensionality in gain search space. Coming from a controls engineering background myself, I was particularly excited about this aspect. I could provide you with some examples if you like.

 

You should elaborate just a smidge better on what you're calling "controls", as by reflex I'm substituting SIS for every reference of the word, in which case you lost my attention quite quickly.

If it doesn't have a proven industrial use, then it doesn't have a game development use either, in my opinion. All surviving "fancy" AI schemes mimic industry. Utility is basically LOPA, which is proven.

I'm terribly sorry, but for someone suggesting machine learning theory, I really doubt you can enumerate the many superior alternatives that have been in use by the chemical industry for years. Safety is serious, and AI is a part of that.

Edited by JSandusky


 

You should elaborate just a smidge better on what you're calling "controls", as by reflex I'm substituting SIS for every reference of the word, in which case you lost my attention quite quickly.

If it doesn't have a proven industrial use, then it doesn't have a game development use either, in my opinion. All surviving "fancy" AI schemes mimic industry. Utility is basically LOPA, which is proven.

I'm terribly sorry, but for someone suggesting machine learning theory, I really doubt you can enumerate the many superior alternatives that have been in use by the chemical industry for years. Safety is serious, and AI is a part of that.

 

 

No apologies necessary. I am familiar with that realm as well. If you search for "Control theory" on wikipedia, the result you get is the context I am speaking to. I'm not sure there's another word for it as that is the most common usage of the term.

In industrial robotics, and in chemical and power generation plants in general, the feedback control used is normally something called Model Predictive Control, which involves a learning-type optimization to anticipate changes in plant processes. In most cases it is the de facto standard for those systems and has had great success; in fact, a quick search turns up that it has been the industry standard in the chemical industry since the 1980s. I believe the systems you're citing are generally used as redundancy layers. The two are different, but not incompatible.
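To make the MPC concept concrete, here is a deliberately tiny sketch: a one-dimensional plant, a brute-force search over short control sequences, and the receding-horizon rule of applying only the first move before re-planning. Real MPC uses a proper process model and an optimizer rather than grid search; all names and numbers here are illustrative.

```python
import itertools

def mpc_step(x, target, horizon=3, candidates=(-1.0, 0.0, 1.0)):
    """Toy model predictive control for the plant x' = x + u.

    Simulates every candidate control sequence over a short horizon
    against the plant model, scores the predicted trajectory, and
    applies only the FIRST move of the best sequence (receding horizon);
    the controller then re-plans on the next step.
    """
    def cost(seq):
        xi, total = x, 0.0
        for u in seq:
            xi = xi + u                      # assumed-perfect plant model
            total += (xi - target) ** 2      # penalize distance from target
        return total
    best = min(itertools.product(candidates, repeat=horizon), key=cost)
    return best[0]

x = 5.0
for _ in range(6):
    x += mpc_step(x, target=0.0)
print(round(x, 6))  # 0.0 -- the state is driven to the target
```

The "learning-type optimization" in industrial MPC comes from the model being identified/updated from plant data, which this sketch omits.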

 

 

 

 

Edit: Shifting gears here, I can tell there is a lot of skepticism, so maybe it would be more productive to ask what you guys would need to see the library do before you would consider its usefulness. Can you think of some practical demos that you would need to see before considering the system to be viable?

Edited by Algorithmic Ecology

I'm not clear on how we got to control theory from game development.

For machine learning there are unquestionably domains where the techniques are applicable and the successes speak for themselves.

Game development is not (yet) one of those domains.


I wish I could remember specific toolkits but (A) my memory is shit and (B) they come and go on practically an annual basis. I'm fairly sure some web diving would turn up examples, although the failures tend to vanish into the night more often than not. The important thing is that this is not untested territory. There is a serious base of existing skepticism towards ANN-based game AI and for good reasons.


I think it's interesting that setting up more complex AI scenarios is considered prohibitively time-intensive/expensive for you. Doesn't that imply that building a real game is going to be comparatively expensive? If you can't prove that this scales to real games, even simple games, you're never going to get any traction with experienced AI developers who know how to build equivalent AI in much less time and with infinitely more capacity to fine-tune the behavior to their tastes.


To answer your most recent question, I want a complete and exhaustively itemized breakdown of how you would build, say, a simple AI for playing (as an AI player) a campaign mission in StarCraft. Better yet, show that I could implement the campaign AI for 2 or 3 missions without rebuilding the entire NN data set for each mission. If you can make a game demo that proves that it is (A) less work and (B) more controllable to use your product versus existing known techniques for scripting campaign AIs, you have my interest.

FWIW StarCraft is just an example I pulled out of my ass; you could play any game really, as long as there's a strong degree of authorial tuning that goes into making that game experience great.


The sentiment that I am gathering is that in order for the tool to be useful in any capacity, it must be shown that it can play a game like Starcraft or WoW by itself without the use of any other AI techniques. Is that an accurate assessment?

 

The library was originally built to be a lightweight supplement to designed systems. However, if it must be all-encompassing in order to work, maybe a major development pivot is in order.

Edited by Algorithmic Ecology


The sentiment that I am gathering is that in order for the tool to be useful in any capacity, it must be shown that it can play a game like Starcraft or WoW by itself without the use of any other AI techniques. Is that an accurate assessment?


No, that's not how I read ApochPiQ's post at all. What he is saying is that there are specific types of problems that AI programmers face and which are difficult and time consuming, for example making the AI for a campaign mission in StarCraft. You need to prove that your middleware can be a useful tool in solving this problem or other similar problems. Desirable features include:
* fast deployment,
* easy to adjust,
* robust when facing unusual circumstances.


I'm finally going to jump in here with my standard question about any ML-based approach. This is the question I have been using for 10 years online, at GDC AI roundtables, and any other conversation where this comes up.

 

Let's say you have trained out your NN to a place where it is looking pretty good doing [whatever] in your AAA game. You call your lead designer over to see and he says, "OK, yeah... that looks kinda cool. But in this one situation here, can you make it so that he does XYZ a little more often?"

 

What do you do? Try to tweak your training data and hope nothing else goes awry? Retrain your data set from scratch and hope you get remotely close to what it was you were trying to accomplish? You are working with a black box of number soup (to mix my metaphors) and you don't know what any one of those funky numbers does or how it relates to any of the other numbers.

 

In many AI architectures, this is a 10 minute fix. Turn a knob (often one that isn't even in code), restart the game, and check to see if the behavior is what you want. If it isn't, lather, rinse, and repeat until it is. You know exactly what every bit of code or data is doing, so you know how to adjust what it is you are trying to improve. Done.

 

Remember, that in most games, we are not trying to generate "A behavior"... we are trying to generate "THE behavior" -- specifically the one that the designer feels serves the game purpose best.

I can try to answer Dave's question. You can have the output of the NN be a SoftMax layer that is used to compute probabilities for each action. So the probability of picking action a is

P(a) = exp(score(a)) / Sum_over_a'(exp(score(a')))

You could have some controls that add a constant to the score of a particular action that you want to favor, or even have hand-written code that computes numbers that are to be added to the score for each action. That way you can have a baseline behavior that is learned using the NN, and as fine control as you want to modify it. Would that work?
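That suggestion can be sketched in a few lines: a softmax over the NN's raw action scores, plus optional hand-authored offsets that a designer can turn like a knob without retraining. The function and dictionary names are illustrative.

```python
import math

def action_probabilities(scores, designer_bias=None):
    """Softmax over NN action scores, with optional authored offsets.

    scores: dict mapping action -> raw NN score.
    designer_bias: dict mapping action -> constant added to that score,
    letting a designer make an action more or less likely without
    touching the trained network.
    """
    bias = designer_bias or {}
    exps = {a: math.exp(s + bias.get(a, 0.0)) for a, s in scores.items()}
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}

scores = {"attack": 1.0, "evade": 1.0, "idle": 0.0}
baseline = action_probabilities(scores)
tweaked  = action_probabilities(scores, {"evade": 0.7})  # "do XYZ more often"
print(tweaked["evade"] > baseline["evade"])  # True
```

This is exactly the ten-minute knob-turn Dave describes, layered on top of the learned baseline.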

Another similar approach is to integrate the NN in a utility-maximizing architecture, where the NN is used to get an estimate of expected utility of each action, and then you can use that as an ingredient in your final utility function. You can start with a utility function that is just "what the NN says", and change it as much as you want to satisfy your game designer, or even to give the agent some personality.

The conventional wisdom in game AI is that if you do what Alvaro suggests, you wind up throwing out the ML components to the extent that they conflict with the authorial control components.

This is not to assert that the conventional wisdom is unassailably correct, nor am I trying to say that the idea itself has no merit. Certainly I could see potential for machine learning systems to act as elements of a larger game implementation. In fact as I said before I think there are definitely times when ML can be a mechanic within the game that works nicely with other gameplay systems (c.f. Creatures, Black & White, etc.). It's hard to argue about from a position of factual strength though because there's just not that much data.

Risk aversion is a serious thing in game development, and I suspect that a lot of the resistance to ML solutions is borne out of precisely that aversion. It doesn't help that a lot of prominent game AI people have been burned by ML and rather emphatically warn against the allure of the shiny.

Which is why I say that in order for products like OP's to gain traction, you need to provide strong evidence that the tools are going to mitigate technical risk instead of exploding it. Frankly it's not even a particularly harsh demand, nor is it reserved for ML in particular; just about any game middleware will undergo the same scrutiny.


What do you do? Try to tweak your training data and hope nothing else goes awry? Retrain your data set from scratch and hope you get remotely close to what it was you were trying to accomplish? You are working with a black box of number soup (to mix my metaphors) and you don't know what any one of those funky numbers does or how it relates to any of the other numbers.

 

I'm not sure that's an accurate characterization. Each ANN output is just an algebraic function of the inputs. In fact, reverse engineering it to answer that question takes only milliseconds and some basic algebra. This is basically what Google DeepMind does to visualize exactly what is being passed through their deep networks, and this is on networks of 10-30 layers containing many millions of nodes.
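One simple way to interrogate a small network along these lines (not DeepMind's actual visualization method, just an illustration of "the output is an algebraic function of the inputs") is to measure how much each input drives the output, e.g. with finite differences:

```python
import math

def tiny_net(x):
    """A fixed two-input, one-output network: fully inspectable algebra."""
    h1 = math.tanh(0.8 * x[0] - 0.5 * x[1])
    h2 = math.tanh(-0.3 * x[0] + 0.9 * x[1])
    return 1.2 * h1 + 0.4 * h2

def sensitivities(f, x, eps=1e-6):
    """Finite-difference gradient: how strongly each input moves the output."""
    base = f(x)
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        grads.append((f(xp) - base) / eps)
    return grads

print([round(g, 3) for g in sensitivities(tiny_net, [0.0, 0.0])])
# [0.84, -0.24] -- input 0 dominates, input 1 pushes the other way
```

At the origin this matches the exact chain-rule values (1.2·0.8 + 0.4·(-0.3) = 0.84 and 1.2·(-0.5) + 0.4·0.9 = -0.24), so the "number soup" is at least locally legible. Whether that scales to a designer's workflow is the real question in this thread.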

 

 

 

 

What he is saying is that there are specific types of problems that AI programmers face and which are difficult and time consuming, for example making the AI for a campaign mission in StarCraft. You need to prove that your middleware can be a useful tool in solving this problem or other similar problems. Desirable features include:
* fast deployment,
* easy to adjust,
* robust when facing unusual circumstances.

 

This sounds fair. I'm getting the impression that people want to see solutions for planning and time-series types of things, which isn't the goal of the AEcology library, but is interesting anyway.

To that end, I don't think the issue is the difficulty so much as the time resources it would take to generalize such a tool. There are definitely some techniques that have surfaced recently which are well-suited to this, but it would be tough to pursue an Open Source approach for it. In your opinion do you think devs would consider paying for something like this, if it existed?

 

 

Certainly I could see potential for machine learning systems to act as elements of a larger game implementation. In fact as I said before I think there are definitely times when ML can be a mechanic within the game that works nicely with other gameplay systems (c.f. Creatures, Black & White, etc.).

This is the idea, more or less. I wouldn't use ML for anything in which the analytical answer is already known unless apparent unpredictability is the goal.

Edited by Algorithmic Ecology


This sounds fair. I'm getting the impression that people want to see solutions for planning and time-series types of things, which isn't the goal of the AEcology library, but is interesting anyway.


*sigh* Not really.

I want to see that I can build a game AI with this tool. That's it. Pick a game, build an AI. The richer the game rules the better. (I'm saturated in RPG rulesets at the moment but it doesn't have to be that.) I fear that this is a hurdle for you because you don't actually know what most game implementers go through to build game AIs, which is going to be a crucial downfall of your middleware if true. I'd love to be shown wrong, though.


In your opinion do you think devs would consider paying for something like this, if it existed?



"Devs" do not exist as a homogeneous blob of consumers. Some companies, especially larger studios, might spring for it if it is well proven technology with demonstrable applications to games. On the other side of the scale, most indie type endeavors don't buy middleware unless they absolutely have to.

Middleware, especially game AI middleware, is very hard to sell. I'd advise against it. Open Source with support contracts is a viable and much safer route IMO.


 

I want to see that I can build a game AI with this tool. That's it. Pick a game, build an AI. The richer the game rules the better. (I'm saturated in RPG rulesets at the moment but it doesn't have to be that.) I fear that this is a hurdle for you because you don't actually know what most game implementers go through to build game AIs, which is going to be a crucial downfall of your middleware if true. I'd love to be shown wrong, though.

 

 

I think the Starcraft idea is a reasonable benchmark, actually. This gives us something to work with. 

Game development is partly like movie making. You have a director (designer), you have scenes and a script (desired behavior), and you have actors (AI entities), and most of the time the actors need to perform within the boundaries of the given script and what the director wants to see. The actor (AI implementation) will eventually decide the quality of the performance, but in the end it is all fake.

You commonly can't take non-actors, put them into a scene, give them a rough briefing, and tell your director that they will learn to handle it eventually. This will not work for 99% of all movies.

My excursion with ML in my game was very frustrating, though it sounded like a valid candidate for ML. Eventually Mark's words hit the nail on the head:
1. Seeing unexpected behavior resulted in hours and days of analyzing data (oh, it is not a bug, it was a logical consequence).
2. Players didn't see any intelligent behavior at all, just stupid decisions perceived as bugs.
3. Modifying this behavior to meet your 'script' (expected behavior) resulted in special cases, additional parameters, more tweaking.

Eventually it was a maintenance hell with behavior that was perceived as buggy and bad.


 

No apologies necessary. I am familiar with that realm as well. If you search for "Control theory" on wikipedia, the result you get is the context I am speaking to. I'm not sure there's another word for it as that is the most common usage of the term.

In industrial robotics and particularly in chemical and power generation plants in general, the feedback control used is normally something called Model Predictive Control, which involves a learning-type optimization to anticipate changes in plant processes. In most cases it is the de facto standard for those systems and has had great success (in fact, if you google the term it is defined by the fact that it has been the industry standard in the chemical industry since the 1980s). I believe the systems you're citing are generally used as redundancy layers. The two are different, but not incompatible.

 

Great, I was worried my wording would come off as too adversarial. You're on the opposite side. I've never been given access to engineers on the front side of systems and have always had to rely on liaisons/experts for what the analysis should expect. It's refreshing to find a similarly slightly grumpy tone on the other side of the fence.

 

 

 

I'm not clear on how we got to control theory from game development.

 

As far as I'm concerned, game AI is lagging behind industrial methods. As I mentioned before, LOPA is utility-based AI (it is very literally Dave's IAUS), and it's quite old. I doubt there's an AI patent that can't be invalidated by a chemical industry precedent. From Markov models to scalar math, everything that AI is based on first has an industrial use that long predates it. Industry moves forward constantly, and industry will always be ahead of game AI.

 

There isn't even an "unrealistic accessibility" factor to it; I have to make the evaluations function on garbage hardware.

To my knowledge, influence mapping first appears in 1987 (I would have been 2) in early models of the transfer of heat through nodes in a pressurized pipe system for "fast" evaluation (I haven't searched the subject further back; I was only concerned with modernizing the program, as that was my job). Back then they made some very disgusting estimations of this transfer, but those are no different from modeling influence in a navigation mesh. I know you've written an article on influence mapping, so I assume you're familiar with the many publications of Dr. Baybutt?

 

If not then you should read them, you'll shit bricks and learn a lot. Actually you'll probably just become an alcoholic in despair.

 

---

 

I think the ultimate summary is that "tweakability" must be extremely high. In a utility-based AI system I've been writing for Unity, I made it a point to create a document that outlined the "less"/"more" guideword factors of the different curves, to demonstrate how the curves change based on their inputs (MKCB curves).

Control and flexibility are what we all want. Machine learning naturally works against this a bit, though we can work around it with weights and biases.

Non-sequitur. Just because some industrial techniques have application in game AI doesn't mean all of them must.

I still maintain that game development is a field in which machine learning has a very limited place.

