Community Reputation

1207 Excellent

About Daerax

  1. You might enjoy reading the following explorations of the concept:
     Accelerando - Charles Stross: http://www.worldswithoutend.com/novel.asp?ID=808
     The Lifecycle of Software Objects - Ted Chiang: [url="http://subterraneanpress.com/index.php/magazine/fall-2010/fiction-the-lifecycle-of-software-objects-by-ted-chiang/"]http://subterraneanpress.com/index.php/magazine/fall-2010/fiction-the-lifecycle-of-software-objects-by-ted-chiang/[/url]
     Permutation City - Greg Egan
  2. Bayes Networks in games?

    [quote name='ApochPiQ' timestamp='1320082315' post='4878944'] If you honestly feel they are that valuable, you should be willing to make the resources available to teach people about them [i]without needing encouragement[/i]. Most people won't see the need or interest until [i]after[/i] a resource is available which clearly, concisely, and comprehensively illustrates the value of that technique. Frankly, you shouldn't need our approval or rabid anticipation to do something that you think is worthwhile. [/quote] I am short on time or I would volunteer. Someday, maybe. But he is right: decision trees are something that are not paid enough attention. Random forests in my use case outperformed neural nets and SVMs, and they are speedy to train.

    For games, more than neural nets, Bayes nets, or genetic search, decision trees are the one thing out of machine learning I would argue is most applicable in a splash-and-dash manner. They represent a probability distribution over the data, are not very far from the FSMs many are used to, and with a weighted randomized voting method are close to behaviour trees (although built in an inverse manner: with decision trees you start from a list of scenarios and desired outputs and the algorithm returns a tree; with behaviour trees you start with actions to input states and build the tree yourself - at least as far as I understand behaviour trees, as the game-literature terminology is not one I am fluent in).

    Here is a fairly clear but basic Python example of a decision tree from Machine Learning: An Algorithmic Perspective (I highly recommend the book). [url="http://www-ist.massey.ac.nz/smarsland/Code/6/dtree.py"]http://www-ist.masse...Code/6/dtree.py[/url]
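The "start from a list of scenarios and desired outputs and it returns a tree" idea can be sketched with a minimal entropy-based ID3-style learner. This is not the book's code; the scenario table and attribute names are made up for illustration:

```python
# A minimal ID3-style decision tree learner, as a sketch.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attrs):
    # Pick the attribute whose split maximizes information gain.
    base = entropy(labels)
    def gain(a):
        g = base
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            g -= len(sub) / len(labels) * entropy(sub)
        return g
    return max(attrs, key=gain)

def build_tree(rows, labels, attrs):
    if len(set(labels)) == 1:
        return labels[0]                                 # pure leaf
    if not attrs:
        return Counter(labels).most_common(1)[0][0]      # majority vote
    a = best_attribute(rows, labels, attrs)
    rest = [x for x in attrs if x != a]
    branches = {}
    for v in set(r[a] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[a] == v]
        branches[v] = build_tree([rows[i] for i in idx],
                                 [labels[i] for i in idx], rest)
    return (a, branches)

def classify(node, row):
    while isinstance(node, tuple):
        a, branches = node
        node = branches[row[a]]
    return node

# Toy scenario table: attribute values -> desired action.
rows = [
    {"health": "low",  "enemy_near": "yes"},
    {"health": "low",  "enemy_near": "no"},
    {"health": "high", "enemy_near": "yes"},
    {"health": "high", "enemy_near": "no"},
]
labels = ["flee", "heal", "attack", "patrol"]
tree = build_tree(rows, labels, ["health", "enemy_near"])
print(classify(tree, {"health": "low", "enemy_near": "yes"}))  # flee
```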
  3. Bayes Networks in games?

    [quote name='Emergent' timestamp='1319324953' post='4875467'] While it's true that intellectual products like new algorithms don't take huge numbers of people to produce -- so you'll be facing some competition to get these jobs -- I wouldn't totally write off these technologies as useless. - Android and iOS devices both include speech recognition engines. Witness the much-hyped [i]Siri[/i]. As far as I know, Hidden Markov Models are at the heart of most speech recognition algorithms; it's possible that Bayes nets are in use here too. And the speech recognition engines, though impressive, still leave much room for improvement. That means room for competition. - The government funds a great many contractors that you've never heard of, who are trying to develop both old-fashioned database systems, and fancier statistical analysis tools including semantic networking tools, for understanding intelligence data. I imagine Bayes nets either get used here, or could be used. - Robotics is very slowly taking off, not just in defense and in a few "silly" consumer applications like the Roomba (though it has some sophisticated competitors that even do SLAM!), but also in warehousing and factory automation, and we're just [i]beginning [/i]to see the very leading edge of agricultural robotics. The world won't need a billion roboticists, but it [i]is[/i] one more area where these sophistical tools can actually be useful. I also think that games have a lot to teach these other fields. Sure, they don't need to deal with uncertainty to the same extent, but one thing they do a great job of is producing usable interfaces for interacting with the real world (or simulations thereof). People are beginning to acknowledge, for instance, that Starcraft is a pretty good model for what a good "net-centric warfare" interface should look like. 
Indeed, it was by explicitly following a strategy of copying Starcraft's UI that Ed Olson and his students won the recent MAGIC robotics competition in Australia. The difference, of course, is that instead of loading a map file you're doing SLAM, and the "fog of war" is real! My point in bringing this up is to say that, although some of these algorithms don't get used in games themselves, they get used in other fields that involve many of the same things as game development. [/quote] You are definitely right about games having a lot to teach other fields. Robotics usually focuses on reinforcement-learning-type stuff. Yes, contractors and data scientists are the prime users of Bayes nets; they have the resources, expertise, and time to build, train, and sample from networks (sometimes measured in days). Bayes nets not being black boxes also means the parameters themselves may carry actionable information. For speech recognition it is mostly HMMs - I have never heard of Bayes nets being used there, though conditional random fields are gaining in use. Siri is stupid (not an insult) - I read from someone who worked on it that it's mostly keyword matching.
  4. Bayes Networks in games?

    [quote name='calculemus1988' timestamp='1319156107' post='4874867'] I am learning about Bayes Networks at Stanford's online AI class. I was wondering if there are books that talk about and implement code related to Bayes networks in context of games? Thanks [/quote] Bayes networks would be wasted on games. They are slow: inference on them is NP-hard, and building the network structure from data is also NP-hard. So for a decently sized network you are going to be doing something like Gibbs sampling to do inference on a network structure that is almost guaranteed to be wrong. For a game, the AI will take a long time to get at the distribution, and it is not worth it. Most of machine learning is either too slow, too data-intensive, or too stationary for game use.

    If there were one machine learning technique I would actively look into, it would be decision trees. Decision trees are brittle, so they can add variance, and they do well with little data. You could then augment them as random forests or boosted trees. I think that is where I would start. A close cousin of Bayes nets that might be useful for some types of games (arcade shooters, anything requiring movement tracking and prediction) would be a particle filter. Finally, a pared-down reinforcement learning algorithm might be good for long-term play in a strategy game or RPG. All of these would be very hard to get right and would take a lot of time, when a simple Markov-chain-based model or even a finite state machine would have done just as well or better for much less work.
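The simple Markov-chain-based model suggested at the end could look something like this sketch: a first-order chain over observed player actions, predicting the most likely next action. The action names are invented for illustration:

```python
# Sketch: first-order Markov chain over player actions.
from collections import defaultdict, Counter

class MarkovPredictor:
    def __init__(self):
        # counts[prev][next] = how often `next` followed `prev`
        self.counts = defaultdict(Counter)

    def observe(self, prev_action, next_action):
        self.counts[prev_action][next_action] += 1

    def predict(self, current_action):
        nxt = self.counts[current_action]
        return nxt.most_common(1)[0][0] if nxt else None

m = MarkovPredictor()
history = ["scout", "build", "attack", "build", "attack", "build", "retreat"]
for a, b in zip(history, history[1:]):
    m.observe(a, b)
print(m.predict("build"))  # attack (seen twice, vs. retreat once)
```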
  5. when are genetic algorithms useful?

    [quote name='IADaveMark' timestamp='1308608541' post='4825692'] [quote name='EJH' timestamp='1308591870' post='4825567'] - Black and White (some type of learning?) - Creatures (not sure ...) [/quote] Black & White simply used reinforcement learning. All that does is tweak weight coefficients. Creatures used a GA for the actual genetics (so not really a GA per se... just a gene sequence that was combined and mutated from parents). They also used a form of NN for the learning, iirc. Not really typical gameplay, however. Again, both of these were, as Alvaro said, the algorithms looking for problems and finding them. [/quote] Don't you think it's a bit disingenuous to describe reinforcement learning as just tweaking weight coefficients? Most of machine learning can be described that way.
  6. when are genetic algorithms useful?

    [quote name='Emergent' timestamp='1308597009' post='4825609'] The reason for using a GA to train an ANN is that multilayer ANNs are hideous. The mapping from weights to training error is an ugly nonconvex mess. Since there's not much you can do, you throw a sample-based optimizer like a GA at it. But remember that the [i]reason[/i] you were stuck using a GA was your choice of ANNs in the first place. Had you just stuck with a linear architecture -- a sum of basis functions -- you'd have gotten a standard linear least squares problem that you could have solved quickly with QR factorization or the conjugate gradient method. So why didn't you just choose that from the outset? Good question. [/quote] Stochastic gradient descent is better. Convex loss is overrated, you know.
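The linear alternative being contrasted here can be sketched as stochastic gradient descent on an ordinary linear least-squares problem. The learning rate, data, and iteration count are arbitrary choices for illustration:

```python
# Sketch: SGD on linear least squares. Data is generated from
# y = 2*x + 1 so the target weights are known in advance.
import random

random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

w, b, lr = 0.0, 0.0, 0.05
for step in range(2000):
    x, y = random.choice(data)       # one sample per step: "stochastic"
    err = (w * x + b) - y            # gradient of 0.5*err^2 w.r.t. prediction
    w -= lr * err * x
    b -= lr * err
print(round(w, 2), round(b, 2))      # converges toward 2.0 and 1.0
```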
  7. Bayesian Belief Networks

    [quote name='kokopellaras' timestamp='1308345160' post='4824611'] Hi everyone, does anybody know what are the advantages and disadvantages of Bayesian Belief network over propositional logic based system? Thanks [/quote] For Bayes nets: the domain of discourse is much larger, as you can quantify over much more complex entities; your measure of confidence is much more flexible and not merely bivalent; propositional logic falls out of probability at the limits; you can update your knowledge in the optimal way; and you can get some idea of causal relationships. Flaws of a Bayesian versus a predicate-logic system: not very good at deep chains of reasoning or at capturing structural relationships.
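The "update your knowledge in the optimal way" point is just Bayes' rule at the single-variable level; a toy sketch with made-up numbers:

```python
# Sketch: one Bayes-rule update. All probabilities are invented.
def bayes_update(prior, likelihood, evidence_prob):
    """P(H|E) = P(E|H) * P(H) / P(E)"""
    return likelihood * prior / evidence_prob

# P(ambush) = 0.1, P(heard_noise | ambush) = 0.8, and by total probability
# P(heard_noise) = 0.8*0.1 + 0.2*0.9 = 0.26.
posterior = bayes_update(0.1, 0.8, 0.8 * 0.1 + 0.2 * 0.9)
print(round(posterior, 3))  # 0.308
```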
  8. Neural Networks book for newbies.

    [quote name='MrProper' timestamp='1301758791' post='4793510'] Hi, can anyone recommend me good Neural Networks book? I need book, that explains all the things in simple, not very theoreticaly complicated, way. Also i would be very happy if book gives some advices and ideas on how to implement learning algorythms, perceptron, multi-layeres perceptron and so on. It doesnt have to be book, some internet articles could be ok as well. Thanks a lot for any replies. [/quote] http://www.dkriesel.com/_media/science/neuronalenetze-en-epsilon2-dkrieselcom.pdf [url="http://www.heatonresearch.com/book/programming-neural-networks-encog-cs.html"]http://www.heatonresearch.com/book/programming-neural-networks-encog-cs.html[/url]
  9. Machine Learning with Multiplayer games

    Also, you say that machine learning is only good at learning local optima. This is not well defined - local optima of what? Classification is done by searching for a good function; treating it as finding an optimum for the data is not really true unless you consider each classifier as optimizing over a space of functions, which is a reasonable view (Gaussian processes are an interesting take on this). Some algorithms are specially crafted to optimize a convex function with only a global optimum; in fact each algorithm will transform the same data into a different space and learn a different function. Some will learn local optima with respect to the given data in this transformed space while others will learn global ones, but their notion of global and local is not strictly meaningful with respect to the concepts the data represents - they are just trying to minimize some loss function.

    But back to optimizing over a function space for the data: the space of inputs and actions in consideration is often so large, unstable, and complex that looking for a global optimum is not even meaningful, and impossible if you are thinking of one function for any dataset (game + differing players = different spaces). Even humans can't do this. The No Free Lunch theorem ensures that you can't just throw some algorithm at arbitrary data and expect it to do well for any given dataset; for some it will do worse than random.

    As for an AI that can beat the player using the same rules (same rates, no farsight, no buffs): this is not at all easy. You are right that for a game like rock-paper-scissors a dead simple algorithm like weighted majority will destroy a human, but doing well in a more nuanced game is very, very hard. Doing it in a way that is creative and bordering on weird is not hard: a mixed strategy that balances exploration and exploitation, penalizes over-exploitation, and decays memory should get you that far. But doing it in a way that is not completely idiotic and cannot be trounced by the intuition and higher-level nuanced reasoning of an expert human is very hard. I am working on this as a hobby, and I can tell you that making a bot play as well as a good human, despite having perfect memory, excellent micromanagement, even temperament, and superhuman calculating ability, is very hard. I've had to put many hours of thought in to come up with something half decent and profitable against mid-to-weak-level players. Unless you think no-limit hold'em is harder than an RTS such as Starcraft 2.
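The weighted-majority idea for rock-paper-scissors can be sketched as follows: keep one "expert" per possible opponent move, multiplicatively down-weight the experts that guessed wrong, and play the counter of the weighted prediction. The decay factor is an arbitrary choice:

```python
# Sketch: weighted majority against a predictable RPS opponent.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class WeightedMajorityRPS:
    def __init__(self, beta=0.5):
        self.beta = beta
        # Expert m predicts "the opponent will play m".
        self.weights = {m: 1.0 for m in BEATS}

    def play(self):
        predicted = max(self.weights, key=self.weights.get)
        return BEATS[predicted]          # counter the predicted move

    def update(self, opponent_move):
        for m in self.weights:
            if m != opponent_move:       # this expert guessed wrong
                self.weights[m] *= self.beta

bot = WeightedMajorityRPS()
for move in ["rock"] * 5:                # a human stuck on rock
    bot.update(move)
print(bot.play())  # paper
```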
  10. Data-Structure for interactive objects

    [quote name='sjaakiejj' timestamp='1300408765' post='4787254'] Hi Antheus, thanks for your reply [quote name='Antheus' timestamp='1300407678' post='4787240'] AVL trees are evil. AVL trees are always evil. [/quote] Could you elaborate a bit more on that, as I'm not sure how they are bad. [quote] This problem is known as [url="http://en.wikipedia.org/wiki/Nearest_neighbor_search"]nearest neighbor search[/url]. For small numbers (somewhere up to 100), a list/array is just fine due to the way hardware works. Other suggestions for spatial partitions are listed in the article. For point queries, quad tree is probably the simplest. Hashing can also be a viable alternative. [/quote] I'll have a read on the solutions you mentioned, thanks [/quote] I am also curious as to how AVL trees are *always* evil. Does the same reasoning apply to red-black trees? sjaakiejj, consider also k-d trees.
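For small object counts, the linear scan the quote recommends is about as simple as it gets; a sketch with made-up 2D positions:

```python
# Sketch: brute-force nearest-neighbour search over a plain list,
# fine for small object counts as the quote notes.
import math

def nearest(query, points):
    return min(points, key=lambda p: math.dist(query, p))

objects = [(0.0, 0.0), (5.0, 5.0), (2.0, 1.0), (9.0, 3.0)]
print(nearest((1.5, 1.5), objects))  # (2.0, 1.0)
```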
  11. Machine Learning with Multiplayer games

    [quote name='ApochPiQ' timestamp='1300474163' post='4787622'] What about an AI that can micro perfectly seems so implausible to you? [/quote] Because an AI won't be able to micro perfectly, where "micro perfectly" means consistently making the best possible decisions in hindsight at the micro level, given the current situation, for all possible situations and player skill levels, such that it also propagates to perfect macro play. If it is not doing that, then it is imperfect. It must also be unexploitable and must provably settle at a Nash equilibrium against other perfect players.
  12. Machine Learning with Multiplayer games

    [quote name='ApochPiQ' timestamp='1300327896' post='4786857'] I'm prepared to be wrong, but I'm not yet convinced. Give me a scenario where a human decision can outplay a perfectly operated AI. [/quote] The scenario where the human is perfect. This is every bit as realistic as the perfectly operated AI concept.
  13. Neural Networks experiments.

    [quote name='willh' timestamp='1299045615' post='4780857'] [quote name='Daerax' timestamp='1298829840' post='4779734'] Yeah there's no better way to learn than experience, so I agree. I can tell you though that making your weak learner too strong leads to over fitting as well as under learning, countering the usefulness of Adaboost. Based on what I can pick up from your posts - are you doing some kind of image or audio analysis/clustering? [/quote] You're bang on Daerax. I've done a fair amount of work with imaging; feature tracking, object detection, etc.. a bit of time series processing, and an embarassing amount of text corpus work. Machine learning is one of those areas that I never seem to get bored of. There are a lot of really smart guys out there working at it too, which gives someone like me plenty of new things to learn. And you? Based on your posts so far it sound like you have a pretty solid academic background in the field. I'm looking forward to playing around with online learning. It will be a totally new area for me. [/quote] Haha, no academic background on the subject - I am also a self-learner on this topic. I've always been into AI from afar and have also enjoyed probability theory. From the moment I implemented my first simple naive Bayes classifier I was hooked. It's amazing, the feeling you get when it classifies something - like you are part of something big, a glimpse of the future of our robotic overlords. My perspective on human learning and intelligence has changed greatly since then as well.
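A simple naive Bayes classifier of the kind described fits in a few lines; a sketch with a tiny invented "spam" dataset and Laplace smoothing to avoid zero counts:

```python
# Sketch: multinomial naive Bayes with Laplace smoothing.
from collections import Counter, defaultdict
import math

def train(docs):
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def classify(words, class_counts, word_counts, vocab):
    total = sum(class_counts.values())
    best, best_score = None, -math.inf
    for label, n in class_counts.items():
        score = math.log(n / total)              # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:                          # log likelihoods, smoothed
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    (["win", "money", "now"], "spam"),
    (["cheap", "money", "win"], "spam"),
    (["meeting", "tomorrow"], "ham"),
    (["project", "meeting", "notes"], "ham"),
]
model = train(docs)
print(classify(["win", "money"], *model))  # spam
```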
  14. how much training data to give my ANN

    Pareto coevolution creates very strong game players, and it is a provably free lunch. For Connect 4, reinforcement learning will also do very well (you can still use your MLP with it).
  15. Neural Networks experiments.

    Yeah, there's no better way to learn than experience, so I agree. I can tell you though that making your weak learner too strong leads to overfitting as well as under-learning, countering the usefulness of AdaBoost. Based on what I can pick up from your posts, are you doing some kind of image or audio analysis/clustering?

    Online algorithms are cool. They are an extremely effective way to get into game theory: the algorithms are all fairly simple, but their analysis is profound. For example, online algorithms based on regret minimization can, in a zero-sum game, find a very good approximation of a minimax strategy. They do even better: they play minimax against an optimal opponent but can switch to a mixed strategy to take advantage of a non-optimal player, rather than continuing on a fixed strategy. They are also much quicker than the standard tree-search or dynamic-programming methods. For positive-sum games there are results leveraging correlated equilibrium, which is a [i]much[/i] more meaningful/practical result than Nash equilibrium. So they are especially good for adversarial games and quite decent for cooperative ones. E.g. poker, portfolio optimization, game playing generally, trading - and classification and regression in a non-parametric setting with no assumptions about the distributions at play.
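The regret-minimization claim (approximating a minimax strategy in a zero-sum game) can be sketched with regret matching in self-play on rock-paper-scissors, whose minimax strategy is uniform. The iteration count and seed are arbitrary choices:

```python
# Sketch: regret matching in self-play on RPS. The time-averaged
# strategy approaches the minimax strategy (1/3, 1/3, 1/3).
import random

random.seed(1)
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # payoff to a player for (mine, theirs)

def strategy(regrets):
    # Play in proportion to positive cumulative regret.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [[0.0] * ACTIONS for _ in range(2)]
strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
for _ in range(50000):
    strats = [strategy(r) for r in regrets]
    moves = [random.choices(range(ACTIONS), weights=s)[0] for s in strats]
    for p in range(2):
        me, opp = moves[p], moves[1 - p]
        for a in range(ACTIONS):
            # Regret: how much better action a would have done than what we played.
            regrets[p][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
        for a in range(ACTIONS):
            strategy_sum[p][a] += strats[p][a]

total = sum(strategy_sum[0])
avg = [s / total for s in strategy_sum[0]]
print([round(p, 2) for p in avg])  # each entry close to 1/3
```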