About Daerax

  1. You might enjoy reading the following explorations of the concept: Accelerando (Charles Stross), The Lifecycle of Software Objects (Ted Chiang), and Permutation City (Greg Egan).
  2. Daerax

    Bayes Networks in games?

    I am short on time or I would volunteer. Someday, maybe. But he is right: decision trees are not paid enough attention. Random forests in my use case outperformed neural nets and SVMs, and they are speedy to train. For games, more than neural nets, Bayes nets or genetic search, decision trees are the one piece of machine learning I would argue is most applicable in a splash-and-dash manner. They represent a probability distribution over the data, they are not very far from the FSMs many are used to, and with a weighted randomized voting method they are close to behaviour trees (although built in an inverse manner: with decision trees you start from a list of scenarios and desired outputs and the algorithm returns a tree; with behaviour trees you start with actions mapped to input states and build the tree yourself - at least as far as I understand behaviour trees; the game literature terminology is not one I am fluent in). Here is a fairly clear but basic Python example of a decision tree from Machine Learning: An Algorithmic Perspective (I highly recommend the book). http://www-ist.masse...Code/6/
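Since the linked code is truncated, here is a minimal ID3-style decision tree sketch of my own (pure stdlib, hypothetical NPC training scenarios; this is not the book's code):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_feature(rows, labels, features):
    # pick the feature whose split yields the largest information gain
    base = entropy(labels)
    def gain(f):
        total = 0.0
        for v in set(r[f] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[f] == v]
            total += len(sub) / len(labels) * entropy(sub)
        return base - total
    return max(features, key=gain)

def build_tree(rows, labels, features):
    if len(set(labels)) == 1:          # pure node: return the label
        return labels[0]
    if not features:                   # no features left: majority vote
        return Counter(labels).most_common(1)[0][0]
    f = best_feature(rows, labels, features)
    rest = [g for g in features if g != f]
    branches = {}
    for v in set(r[f] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[f] == v]
        branches[v] = build_tree([rows[i] for i in idx],
                                 [labels[i] for i in idx], rest)
    return (f, branches)

def classify(node, row):
    # walk the tree (assumes the row's feature values were seen in training)
    while isinstance(node, tuple):
        f, branches = node
        node = branches[row[f]]
    return node

# made-up NPC scenarios -> desired actions
rows = [
    {"health": "low",  "enemy": "near"},
    {"health": "low",  "enemy": "far"},
    {"health": "high", "enemy": "near"},
    {"health": "high", "enemy": "far"},
]
labels = ["flee", "heal", "attack", "patrol"]
tree = build_tree(rows, labels, ["health", "enemy"])
```

This is exactly the "list of scenarios and desired outputs in, tree out" workflow described above.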
  3. Daerax

    Bayes Networks in games?

    You are definitely right about games having a lot to teach other fields. Robotics usually focuses on reinforcement-learning-type stuff. Yeah, contractors and data scientists are the prime users of Bayes nets. They have the resources, expertise and time to build, train and sample from networks (a process sometimes measured in days). Not being a black box also means the parameters themselves may carry actionable information. HMMs mostly; I have never heard of Bayes nets there, though conditional random fields are gaining in use. Siri is stupid (not an insult) - I read from someone who worked on it that it is mostly keyword matching.
  4. Daerax

    Bayes Networks in games?

    Bayes networks would be wasted on games. They are slow: inference on them is NP-hard, and learning the network structure from data is also NP-hard. So for a decently sized network you are going to be doing something like Gibbs sampling to run inference on a structure that is almost guaranteed to be wrong. For a game, the AI will take a long time to get at the distribution, and it is not worth it. Most of machine learning is either too slow, too data-intensive or too stationary for game use. If there were one machine learning technique I would actively look into, it would be decision trees. Decision trees are brittle, so they can add variance, and they do well with little data. You could then augment them as random forests or boosted trees. I think that is where I would start. A close cousin of Bayes nets that might be useful for some types of games (arcade shooters, anything requiring movement tracking and prediction) is the particle filter. Finally, a pared-down reinforcement learning algorithm may be good for long-term play in a strategy game or RPG. All of these would be very hard to get right and would take a lot of time, when a simple Markov-chain-based model or even a finite state machine would have done just as well or better for much less work.
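To make the particle filter suggestion concrete, here is a bare-bones 1D sketch (noise parameters, particle count and the moving target are all my own invented setup) tracking a position that advances one unit per tick:

```python
import math
import random

def pf_step(particles, weights, control, measurement, meas_noise=1.0):
    # predict: move every particle by the control input plus process noise
    particles = [p + control + random.gauss(0.0, 0.5) for p in particles]
    # update: reweight each particle by the Gaussian likelihood of the measurement
    weights = [w * math.exp(-(p - measurement) ** 2 / (2 * meas_noise ** 2))
               for p, w in zip(particles, weights)]
    z = sum(weights)
    weights = [w / z for w in weights]
    # resample: draw a fresh particle set in proportion to the weights
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

random.seed(0)
n = 500
particles = [random.uniform(0.0, 20.0) for _ in range(n)]   # uniform prior
weights = [1.0 / n] * n
true_pos = 5.0
for _ in range(10):
    true_pos += 1.0                        # target moves 1 unit per tick
    particles, weights = pf_step(particles, weights, 1.0, true_pos)
estimate = sum(particles) / len(particles)  # posterior mean position
```

After ten ticks the particle cloud has collapsed around the true position near 15.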
  5. Daerax

    when are genetic algorithms useful?

    "Black & White simply used reinforcement learning. All that does is tweak weight coefficients. Creatures used a GA for the actual genetics (so not really a GA per se... just a gene sequence that was combined and mutated from parents). They also used a form of NN for the learning, iirc. Not really typical gameplay, however. Again, both of these were, as Alvaro said, algorithms looking for problems and finding them." Don't you think it's a bit disingenuous to describe reinforcement learning as just tweaking weight coefficients? Most of machine learning can be described as such.
  6. Daerax

    when are genetic algorithms useful?

    Stochastic gradient descent is better. Convex loss is overrated, you know.
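For the record, "tweaking weight coefficients" by SGD looks like this in miniature (plain Python, synthetic data; the learning rate and epoch count are arbitrary choices of mine):

```python
import random

random.seed(1)
# synthetic data drawn from y = 3x + 2 with a little Gaussian noise
data = [(x / 10.0, 3.0 * (x / 10.0) + 2.0 + random.gauss(0.0, 0.1))
        for x in range(-50, 50)]

w, b = 0.0, 0.0
lr = 0.01
for _ in range(200):                  # epochs
    random.shuffle(data)              # "stochastic": one sample at a time
    for x, y in data:
        err = (w * x + b) - y         # derivative of the per-sample squared loss
        w -= lr * err * x
        b -= lr * err
```

The fit recovers roughly w ≈ 3 and b ≈ 2; the same update loop works unchanged on non-convex losses, which is the point being made above.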
  7. Daerax

    Bayesian Belief Networks

    For Bayes nets: the domain of discourse is much larger, since you can quantify over much more complex entities; your measure of confidence is much more flexible and not merely bivalent; propositional logic falls out of probability at the limits; you can update your knowledge in the optimal way; and you can get some idea of causal relationships. The flaw of a Bayesian system versus a predicate logic system: it is not very good at deep chains of reasoning or at capturing structural relationships.
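A tiny illustration of the "update your knowledge in the optimal way" point: Bayes' rule applied to two made-up hypotheses about a coin, one flip at a time:

```python
# two hypotheses about a coin, updated by Bayes' rule after each flip
PRIOR = {"fair": 0.5, "biased": 0.5}
P_HEADS = {"fair": 0.5, "biased": 0.8}   # P(heads | hypothesis)

def update(beliefs, flip):
    # posterior is proportional to likelihood times prior, then normalized
    post = {h: (P_HEADS[h] if flip == "H" else 1.0 - P_HEADS[h]) * p
            for h, p in beliefs.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

beliefs = PRIOR
for flip in "HHHHHT":                    # five heads, one tail
    beliefs = update(beliefs, flip)
```

Note the confidence is graded rather than bivalent: after the sequence above the "biased" hypothesis holds roughly 80% of the belief, not a hard true/false verdict.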
  8. Daerax

    Neural Networks book for newbies.
  9. Daerax

    Machine Learning with Multiplayer games

    Also, you say that machine learning is only good at learning local optima. This is not well defined: local optima of what? Classification is done by searching for a good function; treating it as finding an optimum for the data is not really true unless you consider each classifier as optimizing over a space of functions - which is a reasonable view (Gaussian processes are an interesting take on this). Some algorithms are specially crafted to optimize a convex loss with only a global optimum; in fact, each algorithm transforms the same data into a different space and learns a different function. Some will learn local optima with respect to the given data in this transformed space while others will learn global ones, but their notions of global and local are not strictly meaningful with respect to the concepts the data represents. They are just trying to minimize some loss function. But back to optimizing over a function space for the data: the space of inputs and actions in consideration is often so large, unstable and complex that looking for a global optimum is not even meaningful, and it is impossible if you are thinking of one function for any dataset (game + differing players = different spaces). Even humans can't do this. The No Free Lunch theorem ensures that you can't just throw some algorithm at arbitrary data and expect it to do well on any given dataset. For some it will do worse than random. As for an AI that can beat the player using the same rules (same rates, no farsight, no buffs): this is not at all easy. You are right that for a game like rock, paper, scissors a dead simple algorithm like weighted majority will destroy a human, but doing well in a more nuanced game is very, very hard. Doing it in a way that is creative and bordering on weird is not hard: a mixed strategy that balances exploration and exploitation, penalizes over-exploitation and decays memory should get you that far.
But doing it in a way that is not completely idiotic and cannot be trounced by the intuition and higher-level nuanced reasoning of an expert human is very hard. I am working on this as a hobby, and I can tell you that making a bot play as well as a good human - despite its perfect memory, excellent micromanagement, even temperament and superhuman calculating ability - is very hard. I have had to put many hours of thought into coming up with something half decent and profitable against mid-to-weak-level players. Unless you think no-limit hold'em is harder than an RTS such as StarCraft 2.
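To make the weighted-majority remark concrete, here is a sketch against a maximally predictable opponent (the expert set, one expert per opponent move, and the penalty factor are my own choices):

```python
# one "expert" per opponent move; each expert always predicts that move
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

class WeightedMajority:
    def __init__(self, penalty=0.5):
        self.weights = {m: 1.0 for m in MOVES}
        self.penalty = penalty

    def play(self):
        # trust the highest-weighted prediction of the opponent's move, then counter it
        predicted = max(self.weights, key=self.weights.get)
        return BEATS[predicted]

    def observe(self, opponent_move):
        # multiplicatively shrink every expert that predicted wrongly
        for m in MOVES:
            if m != opponent_move:
                self.weights[m] *= self.penalty

wm = WeightedMajority()
wins = 0
for _ in range(100):
    move = wm.play()
    opponent = "rock"                 # a maximally predictable human
    if move == BEATS[opponent]:
        wins += 1
    wm.observe(opponent)
```

Against any fixed pattern the wrong experts are halved away within a few rounds, which is why a human with habits gets destroyed.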
  10. Daerax

    Data-Structure for interactive objects

    "Could you elaborate a bit more on that, as I'm not sure how they are bad." "This problem is known as nearest neighbor search. For small numbers (somewhere up to 100), a list/array is just fine due to the way hardware works. Other suggestions for spatial partitions are listed in the article. For point queries, a quadtree is probably the simplest. Hashing can also be a viable alternative." "I'll have a read on the solutions you mentioned, thanks." I am also curious as to how AVL trees are *always* evil. Does the same reasoning apply to red-black trees? sjaakiejj, consider also kd-trees.
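For small point sets, the brute-force nearest neighbor search mentioned above really is just a minimum over distances; a sketch (a kd-tree, or something like scipy's cKDTree, would take over at larger sizes):

```python
import math

def nearest(points, query):
    # brute-force nearest neighbor: a linear scan is cache-friendly
    # and perfectly fine for up to ~100 points
    return min(points, key=lambda p: math.dist(p, query))

points = [(0, 0), (3, 4), (10, 10), (-2, 1)]
```

Usage: `nearest(points, (2.5, 3.5))` returns `(3, 4)`, the closest stored point.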
  11. Daerax

    Machine Learning with Multiplayer games

    Because an AI won't be able to micro perfectly, where "micro perfectly" means consistently making the best possible decisions in hindsight at the micro level, given the current situation, for all possible situations and player skill levels, such that it also propagates to perfect macro play. If it is not doing that, then it is imperfect. It must also be unexploitable and must provably settle at a Nash equilibrium against other perfect players.
  12. Daerax

    Machine Learning with Multiplayer games

    The scenario where the human is perfect. This is every bit as realistic as the perfectly operated AI concept.
  13. Daerax

    Neural Networks experiments.

    "You're bang on, Daerax. I've done a fair amount of work with imaging (feature tracking, object detection, etc.), a bit of time-series processing, and an embarrassing amount of text corpus work. Machine learning is one of those areas that I never seem to get bored of. There are a lot of really smart guys out there working at it too, which gives someone like me plenty of new things to learn. And you? Based on your posts so far it sounds like you have a pretty solid academic background in the field. I'm looking forward to playing around with online learning. It will be a totally new area for me." Haha, no academic background on the subject. I am also a self-learner on this topic. I've always been into AI from afar and have also enjoyed probability theory. From the moment I implemented my first simple naive Bayes classifier I was hooked. It's amazing, the feeling you get when it classifies something: like you are part of something big, a glimpse of the future of our robotic overlords. My perspective on human learning and intelligence has changed greatly since then as well.
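A minimal multinomial naive Bayes classifier of the sort described, on made-up toy documents, with add-one (Laplace) smoothing:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    # docs: list of (word_list, label) pairs
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def classify_nb(model, words):
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label, count in class_counts.items():
        # log prior + log likelihoods with add-one smoothing
        score = math.log(count / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    ("free money now".split(), "spam"),
    ("win cash prize".split(), "spam"),
    ("meeting at noon".split(), "ham"),
    ("lunch with team".split(), "ham"),
]
model = train_nb(docs)
```

Twenty-odd lines, and it generalizes to words it has only seen in one class; that first-classification thrill is cheap to reproduce.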
  14. Daerax

    how much training data to give my ANN

    Pareto coevolution creates very strong game players, and it is a provably free lunch. For Connect 4, reinforcement learning will also do very well (you can still use your MLP with it).
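As a sketch of the reinforcement learning side (not Connect 4 itself), here is tabular Q-learning on a hypothetical five-state corridor with a reward at the far end; the optimistic initial values and all constants are my own choices:

```python
import random

random.seed(0)
N = 5                                   # states 0..4; reward on reaching state 4
ACTIONS = (-1, 1)                       # step left or right
# optimistic initial values drive systematic exploration of untried actions
Q = {(s, a): 1.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    for _ in range(100):                # cap episode length
        if random.random() < eps:
            a = random.choice(ACTIONS)  # occasional random exploration
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        if s2 == N - 1:
            target = 1.0                # terminal reward, no bootstrapping
        else:
            target = gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if s == N - 1:
            break

# greedy policy after training: should point right everywhere
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
```

The learned values decay geometrically from the goal (roughly 1, 0.9, 0.81, 0.73 going left), and the greedy policy walks straight to the reward; Connect 4 just swaps the table for your MLP as the value approximator.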
  15. Daerax

    Neural Networks experiments.

    Yeah, there's no better way to learn than experience, so I agree. I can tell you, though, that making your weak learner too strong leads to overfitting as well as under-learning, countering the usefulness of AdaBoost. Based on what I can pick up from your posts: are you doing some kind of image or audio analysis/clustering? Online algorithms are cool. They are an extremely effective way to get into game theory. The algorithms are all fairly simple, but their analysis is profound. For example, online algorithms based on regret minimization can, in a zero-sum game, find a very good approximation of a minimax strategy. They do even better: they play minimax against an optimal opponent yet can switch to a mixed strategy to take advantage of a non-optimal player rather than continuing with a fixed strategy. They are also much quicker than the standard tree- or dynamic-programming-based methods. For positive-sum games there are results leveraging correlated equilibrium, which is a much more meaningful/practical notion than Nash equilibrium. So they are especially good for adversarial games and quite decent for cooperative ones: e.g. poker, portfolio optimization, game playing generally, trading. And classification and regression in a non-parametric setting with no assumptions about the distributions at play.
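One concrete regret-minimization sketch: regret matching in self-play on rock-paper-scissors, where the average strategy approaches the mixed minimax equilibrium (uniform 1/3 each); the iteration count and seed are arbitrary:

```python
import random

random.seed(0)
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # 0=rock, 1=paper, 2=scissors

def strategy(regrets):
    # play each action in proportion to its positive cumulative regret
    pos = [max(r, 0.0) for r in regrets]
    z = sum(pos)
    return [p / z for p in pos] if z > 0 else [1 / 3] * 3

T = 20000
regrets = [[0.0] * 3, [0.0] * 3]
strat_sum = [[0.0] * 3, [0.0] * 3]
for _ in range(T):
    strats = [strategy(regrets[i]) for i in (0, 1)]
    moves = [random.choices(range(3), weights=strats[i])[0] for i in (0, 1)]
    for i in (0, 1):
        opp = moves[1 - i]
        got = PAYOFF[moves[i]][opp]
        for a in range(3):
            # regret: how much better action a would have done than the played move
            regrets[i][a] += PAYOFF[a][opp] - got
        for a in range(3):
            strat_sum[i][a] += strats[i][a]      # accumulate the average strategy

avg = [s / T for s in strat_sum[0]]              # converges toward [1/3, 1/3, 1/3]
```

The instantaneous strategies oscillate wildly; it is the time-averaged strategy that approximates the minimax mix, which is exactly the "approximation of a minimax strategy" claim above.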