Algorithmic Ecology

  1. Algorithmic Ecology

    AEcology: Upcoming AI generation library

      I think the StarCraft idea is a reasonable benchmark, actually. This gives us something to work with.
  2. Algorithmic Ecology

    AEcology: Upcoming AI generation library

    I'm not sure that's an accurate characterization. Each ANN output is just an algebraic function of the inputs; in fact, reverse engineering it to answer that question takes only milliseconds and some basic algebra. This is essentially what Google DeepMind does to visualize exactly what is being passed through their deep networks, and they do it on networks of 10-30 layers containing many millions of nodes.

    That sounds fair. I'm getting the impression that people want to see solutions for planning and time-series problems, which isn't the goal of the AEcology library, but it is interesting anyway. To that end, I don't think the issue is difficulty so much as the time it would take to generalize such a tool. Some techniques that have surfaced recently are well suited to this, but it would be tough to pursue an open-source approach for it. In your opinion, would devs consider paying for something like this if it existed?

    This is the idea, more or less. I wouldn't use ML for anything in which the analytical answer is already known, unless apparent unpredictability is the goal.
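To illustrate the first point (a generic sketch, not DeepMind's actual tooling): once training is done, a feedforward net's weights are fixed constants, so each output collapses to a closed-form expression of the inputs that can be written out and inspected directly. The tiny topology and weights below are made-up placeholders.

```cpp
// A trained 2-input, 1-hidden-unit, 1-output net with fixed weights is just
// y = tanh(w2 * tanh(w11*x1 + w12*x2)) -- an ordinary algebraic expression.
#include <cmath>
#include <cstdio>

int main() {
    const double w11 = 0.7, w12 = -1.3, w2 = 2.1;  // placeholder weights

    // Evaluate the closed form at a sample input point.
    const double x1 = 0.5, x2 = -0.2;
    const double y = std::tanh(w2 * std::tanh(w11 * x1 + w12 * x2));

    std::printf("y = tanh(%.1f * tanh(%.1f*x1 + %.1f*x2)) = %.4f\n",
                w2, w11, w12, y);
}
```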
  3. Algorithmic Ecology

    AEcology: Upcoming AI generation library

    The sentiment I am gathering is that, for the tool to be useful in any capacity, it must be shown that it can play a game like StarCraft or WoW by itself, without the use of any other AI techniques. Is that an accurate assessment?

    The library was originally built to be a lightweight supplement to designed systems. However, if it must be all-encompassing in order to work, maybe a major development pivot is in order.
  4. Algorithmic Ecology

    AEcology: Upcoming AI generation library

    No apologies necessary. I am familiar with that realm as well. If you search for "Control theory" on Wikipedia, the result you get is the context I am speaking to; I'm not sure there's another word for it, as that is the most common usage of the term. In industrial robotics, and particularly in chemical and power-generation plants, the feedback control used is normally Model Predictive Control, which involves an optimization over a plant model to anticipate changes in plant processes. In most cases it is the de facto standard for those systems and has had great success (in fact, if you google the term, most definitions note that it has been the industry standard in the chemical industry since the 1980s). I believe the systems you're citing are generally used as redundancy layers. The two are different, but not incompatible.

    Edit: Shifting gears here, I can tell there is a lot of skepticism, so maybe it would be more productive to ask what you would need to see the library do before you would consider it useful. Can you think of some practical demos you would need to see before considering the system viable?
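For readers unfamiliar with the receding-horizon idea behind MPC, here is a minimal sketch assuming a first-order plant model x[k+1] = a*x[k] + b*u[k]; all constants and names are illustrative, not from any real plant:

```cpp
// Receding-horizon control: at each step, search candidate control moves,
// pick the one whose predicted trajectory best tracks the setpoint, apply
// only the first move, then re-plan from the new measured state.
#include <cstdio>

int main() {
    const double a = 0.95, b = 0.1;   // assumed plant model
    const double target = 1.0;        // desired setpoint
    const int horizon = 5;            // prediction horizon
    double x = 0.0;                   // current plant state

    for (int step = 0; step < 50; ++step) {
        double bestU = 0.0, bestCost = 1e18;
        for (int ui = -20; ui <= 20; ++ui) {      // coarse grid of moves
            const double u = ui * 0.05;
            double xp = x, cost = 0.0;
            for (int k = 0; k < horizon; ++k) {
                xp = a * xp + b * u;              // predict with the model
                cost += (xp - target) * (xp - target);
            }
            if (cost < bestCost) { bestCost = cost; bestU = u; }
        }
        x = a * x + b * bestU;        // apply the first move only
        std::printf("step %2d  u=%+.2f  x=%.3f\n", step, bestU, x);
    }
}
```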
  5. Algorithmic Ecology

    AEcology: Upcoming AI generation library

    No worries about the criticisms, ApochPiQ. The effort cannot succeed without knowing what the hurdles are.

    I'd be happy to answer your points if I could trouble you to elaborate on them. In the field of controls, for example, machine learning is used regularly to augment or supplant classical methods, especially when systems have high dimensionality in the gain search space. Coming from a controls engineering background myself, I was particularly excited about this aspect. I could provide you with some examples if you like.

    It's possible that the wording around the soccer demo wasn't clear about the purpose of the video. Its purpose was to show how fast prototype convergence occurs, as this is a long-standing concern with ANN systems. Implementing such a system to learn to play poker, raid, or compete is entirely possible. We can definitely look into implementing those; however, the concern is that it has already been done in academia with competitive soccer AI and poker, and it may be difficult to justify the time investment that setting up raid training would take. I'd imagine we would need a rather large budget to move in that direction at this stage.

    As far as demoing with game designers, that was the original intent behind posting this on the GameDev forums. I may not have made that as clear as I could have.

    Would you mind digging up a few references to the past ANN-based agent libraries you're referring to? I'd be happy to define the distinction, but I'm not clear on what you mean.

    I think I made a mistake in burying this in a wall of text, but please keep in mind that this is the alpha prototype of the first iteration of the software. Any suggestions you have will absolutely help.

    captain_crunch: Thanks for taking a look! One of the highlight features of the library is the ability to learn multiple different objectives in parallel, so the library could absolutely learn positional play (one of our other videos, on competitive learning, was made to show this type of functionality). Is that something you would be interested in seeing?
  6. After a long period of development since my 2014 post announcing the project, I'm happy to be back to present the culmination of our efforts over the past couple of years. I've received a lot of good feedback from developers since then, and I would love to hear more now that we have a more mature product ready for release.

    Long story short: we are releasing a machine learning library capable of producing efficient and unique AI behaviors through a simple interface, and I am here to gather initial reactions and hopefully find some developers interested in becoming early adopters. I have a specific pitch below for anyone who is interested, but if you want to skip to the meat and see what it's all about, please check out the website at https://www.aecology.com. Thanks again for all you do, guys.

    AEcology is an AI generation library that has been in development since mid-2013. The mission of the software is to enable developers to generate machine learning AI models, a likely inevitability for the field that currently has few viable implementations. AEcology focuses on delivering the various promises of this paradigm. Specifically:

      • Augmenting or replacing traditional AI with behaviors or decisions that appear diverse and natural.
      • Producing control solutions that are prohibitively difficult or tedious to design manually.
      • Enabling AI to scale with environmental factors such as player skill level.
      • Optionally allowing developers to fit parameters to the situation that calls for them (for example, the color, size, strength, or FOV of an agent, or even an artifact of a level/environment).
      • Effortless synthesis of "Artificial Life"-type systems.

    The engine itself is loosely based on neural networks and genetic programming. However, instead of focusing on the novelty of machine learning itself, AEcology uses practical concepts from the field that lend themselves to producing a unique user experience with minimal development effort.

    In our past discussions with developers, the primary concerns with this approach were runtime speed and the impracticality of training the models, so our initial demos of AEcology in practice (available on the website) focus on these concerns. For speed, AEcology uses a mathematical abstraction of the neural network feedforward algorithm which greatly improves runtime performance; to show this capability, we have constructed a demo showcasing 500-700 on-screen agents running at 60 FPS. For concerns over impracticality, AEcology uses very simple and flexible agent management structures which greatly simplify the unsupervised learning process; to showcase this, one of our demos shows AEcology learning the basics of a simple soccer game in less than one minute of training time.

    AEcology is currently undergoing preparations for its first beta release. The first release, termed version Zero, provides support for single-modality objective functions for both continuous function approximators and classifiers. In clearer terms, this means the generated AI is currently capable of producing either single-objective control outputs (moving, turning, navigating, etc.) or binary outputs (decisions, classifiers, etc.).
    We have developed a model for higher-level thought (essentially deciding on the appropriate action and then carrying it out), but the first step is to gather as much user feedback as possible to improve the current version before adding the next layer of complexity. However, AEcology fully supports synthesis of multi-objective functions by cascading existing structures; on our website we have provided a video demonstrating an agent which must both evade and attack an opponent using this technique.

    We're currently looking for game or simulation developers to implement AEcology in their projects and provide feedback to help us improve the software. The software license grants free use for personal, research, and most commercial purposes. We look forward to the chance to provide close collaboration and support to early adopters to help their projects succeed (which, in turn, helps AEcology grow).

    For more information and to see video demos of AEcology in action, check out http://www.aecology.com. Please excuse the sloppy layout.
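As a rough illustration of why hundreds of small per-agent nets at 60 FPS is plausible (a generic sketch, not AEcology's actual implementation; the topology, sizes, and names are assumptions): evaluating a fixed-topology net is just a few multiply-adds per weight.

```cpp
// Per-frame evaluation of many small fixed-topology nets. Each agent's
// feedforward is a pair of dense matrix-vector products plus a squash.
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr int kInputs = 8, kHidden = 16, kOutputs = 2;

struct AgentNet {
    std::array<float, kHidden * kInputs>  w1{};  // hidden-layer weights
    std::array<float, kOutputs * kHidden> w2{};  // output-layer weights

    std::array<float, kOutputs> feedforward(const std::array<float, kInputs>& x) const {
        std::array<float, kHidden> h{};
        for (int j = 0; j < kHidden; ++j) {
            float sum = 0.f;
            for (int i = 0; i < kInputs; ++i) sum += w1[j * kInputs + i] * x[i];
            h[j] = std::tanh(sum);               // hidden squash
        }
        std::array<float, kOutputs> y{};
        for (int o = 0; o < kOutputs; ++o) {
            float sum = 0.f;
            for (int j = 0; j < kHidden; ++j) sum += w2[o * kHidden + j] * h[j];
            y[o] = std::tanh(sum);               // output squash
        }
        return y;
    }
};

int main() {
    std::vector<AgentNet> agents(600);           // ~600 on-screen agents
    std::array<float, kInputs> sensors{};        // per-agent sensor readings
    for (const auto& net : agents) net.feedforward(sensors);
    std::printf("600 agents x %d weights evaluated per frame\n",
                kHidden * kInputs + kOutputs * kHidden);
}
```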
  7. Algorithmic Ecology

    Online AI game

    I love the idea. I messed around with it for a while and have a little bit of input. I liked the idea behind the Drone Wars game, and I know the Tron concept is supposed to be a challenging AI problem, but I felt the poker chip game might have been a little too chaotic, and I didn't like some of the agent actions being automatic. I can see how it would still be a difficult problem to solve, though.

    I'd really like to see challenges that involve more complexity (like the Ants AI Challenge) or RTS-type games where unit allocations and individual actions need to be planned out carefully. This is also just opinion, but I'd rather see larger and more complex games than better graphics. Maybe more game ideas can be explored and implemented with lower-quality visuals, and the more popular ones developed further?

    On the technical end, I attempted to whip up a constraint satisfaction solution for the Drone Wars game, but while writing it I got a time-out message, and then I couldn't use the compiler without copy-pasting the code and refreshing the browser. Maybe there's a way to save work without creating an account, but it wasn't obvious how to do so, and I didn't have time to dig around for instructions, so I gave up. I think the user options could be a little more intuitive and visible for that reason. If I really wanted to make it work I could have just used a text editor, but I didn't have a lot of time for it.

    Otherwise, I can't wait to see how it progresses. It could be genuinely addictive.
  8. Algorithmic Ecology

    Motor control for physics-based character movement

    This is a different definition than what you had before, and is probably closer to what it should actually look like. I'm still having trouble understanding why the iteration would be beneficial, however, since the only difference is that you're doubling the execution time of the algorithm.

    Never mind the tanh thing; I just looked back through some of my old code and realized it works either way. I think I had better results using sigmoid in hidden layers.

    Assuming the ANN is set up for hidden-layer feedforward and you're not running internal information through again on the next timestep, the problem might just be in your GA or fitness function declarations, since that is a whole other world of pain in itself. Have you been able to evolve any simple goals, like moving a single joint to a desired state?

    Also, I meant that you can use information from inverse kinematics to define your goal states or to set benchmarks the optimization should converge toward, because you seem to have specific desires for how the solution looks, and the GA will normally just converge onto anything that satisfies the requirements. (edit) Inverse kinematics is used in robotics with good success; given that it works for physical systems, I have a hard time believing it wouldn't work in a virtual one (especially since it is used all the time in game design with physics engines).

    Also, definitely look into recurrence if you're trying to get sequences of actions going.

    Another thing to think about: I work with a guy who used oscillatory neuron principles developed by Randall Beer to produce locomotion in animal-like robots. If you're feeling particularly daring, it is an option: http://scholar.google.com/citations?user=F_J8QyAAAAAJ&hl=en&oi=ao
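A minimal sketch of the kind of "single joint to a desired state" GA sanity check suggested above, evolving one gain for a damped 1-DOF joint; the plant, encoding, and constants are assumptions for illustration, not the poster's actual setup:

```cpp
// Evolve a drive gain so a simulated joint settles at a target angle.
// Fitness = accumulated squared tracking error (lower is better).
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

double evaluate(double gain) {
    double angle = 0.0, vel = 0.0, err = 0.0;
    const double target = 1.0, dt = 0.02;
    for (int t = 0; t < 200; ++t) {
        const double torque = gain * (target - angle) - 0.5 * vel;  // damped drive
        vel += torque * dt;                 // unit-inertia joint, Euler step
        angle += vel * dt;
        err += (target - angle) * (target - angle);
    }
    return err;
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> mut(0.0, 0.2);
    std::vector<double> pop(20);
    for (auto& g : pop) g = mut(rng);        // random initial genomes
    for (int gen = 0; gen < 50; ++gen) {
        std::sort(pop.begin(), pop.end(), [](double a, double b) {
            return evaluate(a) < evaluate(b);          // best genomes first
        });
        for (size_t i = pop.size() / 2; i < pop.size(); ++i)
            pop[i] = pop[i - pop.size() / 2] + mut(rng);  // mutate the elites
    }
    std::printf("best gain %.3f, error %.4f\n", pop[0], evaluate(pop[0]));
}
```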
  9. Algorithmic Ecology

    Motor control for physics-based character movement

    It was mathematically proven in 1989 (Cybenko's universal approximation theorem).

    I think you should go back and double-check this. I strongly suspect that this is not the case, for many reasons. I've never seen anyone use adjacency matrices for practical ANNs, so I'm hesitant to put this out there, but I would expect a correct ANN feedforward with weight matrices to be something more of the form:

    output vector (O×1) = sigmoid( layer2Weights (O×H) * ( layer1Weights (H×I) * inputs (I×1) ) )

    where O is the number of outputs, I the number of inputs, H the number of hidden units, and the members of layer2Weights and layer1Weights are selected by the GA. The only values that should change within a simulation are the inputs and outputs; if anything else is changing before evaluating a new GA individual, or you're running your intermediate values through tanh, you're preventing the ANN from doing its job.

    (edit) I'm going to say this again, though: if you already know the positions and velocities you are looking for, you may be better off just using inverse kinematics to develop the movement before trying to work it out through optimization algorithms.
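To make the inverse-kinematics suggestion concrete, here is a minimal sketch of the classic closed-form IK for a planar two-link arm, the kind of analytic solution worth trying before reaching for a GA; the link lengths and target point are illustrative assumptions:

```cpp
// Closed-form IK for a planar 2-link arm reaching a target (x, y).
#include <cmath>
#include <cstdio>

int main() {
    const double l1 = 1.0, l2 = 0.8;      // link lengths
    const double x = 1.2, y = 0.6;        // desired end-effector position
    const double d2 = x * x + y * y;      // squared distance to target

    // Elbow angle from the law of cosines; clamp for numeric safety.
    double c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
    c2 = std::fmax(-1.0, std::fmin(1.0, c2));
    const double theta2 = std::acos(c2);  // "elbow-down" solution

    // Shoulder angle: bearing to target minus the elbow's offset angle.
    const double theta1 =
        std::atan2(y, x) - std::atan2(l2 * std::sin(theta2),
                                      l1 + l2 * std::cos(theta2));

    std::printf("theta1=%.3f rad, theta2=%.3f rad\n", theta1, theta2);
}
```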
  10. Algorithmic Ecology

    Motor control for physics-based character movement

    I'm sorry, I'm having trouble understanding the way you're defining it without seeing a discrete description of the structure. My best guess is that you're representing the structure as an adjacency matrix and iteratively modifying it, which is a possible solution, but I have no way of knowing exactly what your algorithm accomplishes without a mathematical description, and I'm having trouble finding any reason one would need to iterate the algorithm more than once per frame.

    (edits) From what I think I understand of your implementation, I strongly suspect there is no benefit to redundantly passing information through the structure while the input vector stays the same in each timestep (which is what I'm assuming is happening), because analytically you're passing new feature information through a set of weights that are meant to process other features. This can cause a lot of information to be lost, and is why recurrent information is normally just fed in as an auxiliary input on the next timestep (aka recurrence). You may have better results by using another static coefficient matrix in lieu of "memories" and just feeding the recurrent information you deem relevant in as separate inputs when your input vector is updated.

    If this is true and you don't have a way to bound the data, then you're limited by the finite size of your program memory, and no algorithm is suitable anyway.

    Fair enough; let me be more specific: assuming you are using the appropriate architecture for your objective, the viability of your model's hypothesis is limited only by your ability to select the appropriate fitness function.

    I think you may be misunderstanding the concept. The output set is a mapped function of the inputs at any given time, regardless of how many layers there are; there is normally no temporal aspect involved. Mathematically, any function can be represented with only two layers, so it seems that all you would be doing by repeatedly running information through tanh and multiplying it by the same matrix is corrupting your data in proportion to the number of iterations.

    Anything that needs to occur at a given moment can be represented by a function. The hypothesis function (from whatever optimization/learning/controller algorithm you use) just has to change if you move to another function afterward. If actions have to occur in sequence to reach the desired goal, then you can add recurrence.

    This is called speciation, and it is already well developed if you're interested in expanding on it. What you can't do is run a GA on two disparate goals without partitioning the population, though; it will fail every time.

    I didn't mean that you combine the two. I meant that they are both methods to minimize error when comparing a system's state to its desired state. (edit) Actually, intentional emulation is possible, because you can feed information from one iteration as input to the next if you choose to do so. This is actually pretty common in physical systems; see "recurrence" above.
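A minimal sketch of the recurrence pattern described above: last frame's output is appended to the next frame's input vector instead of re-running the same weights on intermediate values within a frame. The sizes and weights are illustrative, not from the thread's actual code.

```cpp
// One recurrent slot carries the previous output into the next timestep.
#include <array>
#include <cmath>
#include <cstdio>

constexpr int kSensors = 4;
constexpr int kInputs = kSensors + 1;  // sensors + one recurrent input

float step(const std::array<float, kInputs>& in,
           const std::array<float, kInputs>& weights) {
    float sum = 0.f;
    for (int i = 0; i < kInputs; ++i) sum += weights[i] * in[i];
    return std::tanh(sum);             // single-output toy "net"
}

int main() {
    const std::array<float, kInputs> weights{0.3f, -0.2f, 0.5f, 0.1f, 0.8f};
    float recurrent = 0.f;             // memory carried between frames
    for (int frame = 0; frame < 5; ++frame) {
        const std::array<float, kInputs> in{0.1f, 0.4f, -0.3f, 0.2f, recurrent};
        recurrent = step(in, weights); // output becomes next frame's input
        std::printf("frame %d -> %.3f\n", frame, recurrent);
    }
}
```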
  11. Algorithmic Ecology

    Motor control for physics-based character movement

    A few comments. I don't know the details of your implementation, so these may not apply, but I hope they are useful:

    If I am reading your implementation correctly, it looks like you're multiplying your ANN inputs across the coefficients to get outputs and running them through your squash function. The reason I don't think this would produce results is that the model seems to be missing hidden-layer summing junctions, which are where the real computation in an ANN is accomplished. It would be very difficult to produce ANN-like behavior without a feedforward, two-layer graph structure, because without it your control outputs are basically just scaled products of your inputs.

    It is completely feasible to allocate your input array at the largest size it could possibly encounter and feed unused inputs in as 0.f, as long as you always map the same inputs to the same locations. When the GA converges, it will automatically omit any unnecessary inputs.

    Make sure to normalize your inputs to the interval [-1, 1] before pushing them through the ANN. I usually just divide by the feasible domain.

    ANN feedforward (if you choose to implement it) need only be run once per frame, as long as the delta-t values in the model accurately reflect the operating frequency.

    An ANN's success is limited only by the soundness of the fitness function you give the GA. Use easier-to-accomplish fitness functions at first and work your way up as it trains. Also, an ANN will converge very well on ONE behavior, so don't expect multiple control functions out of the same ANN; for multiple functions, construct another ANN in parallel designed to converge to a different fitness function.

    A trained feedforward ANN can produce exactly the same results as a PID controller, even though the architecture is different. If you have already worked out the transfer functions for every joint in the system, though, then PD control might be even easier than what you're doing now. You may be able to develop inverse kinematics for the desired behavior first, and then just find controller parameters that minimize the disparity between actual and desired behaviors.
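For reference, a minimal sketch of the PD alternative mentioned in the last paragraph: once a joint's desired angle is known, a PD loop is often simpler than evolving a net. The gains and the unit-inertia toy plant are illustrative assumptions.

```cpp
// PD control of a single joint toward a target angle.
#include <cstdio>

int main() {
    const double kp = 20.0, kd = 4.0;   // proportional and derivative gains
    const double target = 1.0, dt = 0.01;
    double angle = 0.0, vel = 0.0;
    for (int t = 0; t < 300; ++t) {
        const double torque = kp * (target - angle) - kd * vel;  // PD law
        vel += torque * dt;             // unit-inertia joint, Euler step
        angle += vel * dt;
        if (t % 50 == 0) std::printf("t=%3d angle=%.3f\n", t, angle);
    }
}
```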
  12. I'm going to play devil's advocate here and argue that everything you're looking to do already exists, and the definitions you're looking for are clearly defined with the distinctions drawn. It turns out there is already a calculus dedicated to knowledge representation and logical actions, along with a programming language (Prolog) designed for the purpose you seem to be describing.

    I believe the information you're looking for begins in chapter 7 of Artificial Intelligence: A Modern Approach by Russell and Norvig. I'm not going to say you should download the PDF from a Google search (because you really should just buy the book if you're interested), but it isn't hard to find.

    If you're looking to expand more on ideas of generalization, you want Bayesian inference and fuzzy logic, which you will encounter if you read to the end of the book (though they won't make sense until you have a good understanding of logical operations and knowledge representation). For the current academic state of knowledge on the subject, try some searches on IEEE Xplore (google it).

    Hope this helps.
  13. Algorithmic Ecology

    Algorithmic Ecology: Machine Learning AI Engine

    I haven't had this experience yet, but I'll keep it in mind.

    WireZapp: Yeah, that's the idea. Machine learning algorithms can reliably produce functions that would be prohibitively difficult for a human to write, but they wouldn't normally replace something like tree- or graph-based heuristics in terms of provability and the like.
  14. Algorithmic Ecology

    Algorithmic Ecology: Machine Learning AI Engine

    Many reasons, but mostly because it doesn't seem to be very common, and there are some newer methods in machine learning research that I think have real potential. There's also a personal-interest factor, and the practice is useful in my field.
  15. Algorithmic Ecology

    Algorithmic Ecology: Machine Learning AI Engine

    Thanks jefferytitan, I appreciate the feedback!

    On predictability: I think this only applies when a developer means to train a game in real time. For example, I've been drafting a couple of ideas for simple games with AI that automatically scales with the player. Offline, though, machine learning agents generally become only just strong enough to overcome their training scenarios, because no reward is defined for becoming any better than that; it's actually harder to develop "too strong" AI than it may sound.

    Training AI in real time as someone plays may be a challenge, but I think there is a common misconception that neural nets always operate slowly. For some perspective, my demo video includes a segment with 160 agents rendered on screen. Each agent contained a net with 374 neurons, meaning around 6x10^4 neurons (160 x 374 = 59,840) were operating in the same thread without any stutter at 60 fps, so at this point I'm confident the limitation will be in rendering rather than computation. At the assembly level, the instructions for neural nets actually appear to be simpler, with fewer wasted cycles, than branchy "if-then" logic (trained NNs are basically just math equations with the inputs as variables), so for the moment it isn't an issue (knock on wood).

    Your third point is one that will take some research, though. Agents like this are capable of finding and exploiting bugs in the environment and arriving at absurd solutions to problems, which is both a blessing and a curse. There will probably be some pain in figuring this one out. I do have several ideas for solving the problem of triggering the appropriate animation for a behavior, and it might be a good idea to demonstrate that in my next release.

    Thanks for the feedback! This will definitely help me.