

willh

Member Since 03 Mar 2009
Offline Last Active Sep 19 2012 07:00 AM

#4981661 Chess AI with Neural Networks

Posted by willh on 19 September 2012 - 07:00 AM

You could use the NN to build an evaluation function, as Alvaro suggested.

Basically, feed it things like 'passed pawns', 'player captures', 'opponent captures', etc. All of the things you would calculate as part of a regular evaluation function become inputs to the network. The network would output a score that determines how good (or bad) a particular position is.

In theory it should work just fine. In practice though it is going to be difficult to provide training data.
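For illustration, here's a minimal sketch of the idea -- hand-crafted features in, one scalar score out. The feature names, the `position` attributes, and the network size are assumptions, not anything from a real engine, and the weights would of course have to be trained rather than left random:

```python
import numpy as np

# Hypothetical hand-crafted features pulled from a position -- exactly the
# terms a conventional evaluation function would compute.
def extract_features(position):
    return np.array([
        position.passed_pawns,       # assumed attribute names
        position.player_captures,
        position.opponent_captures,
        position.mobility,
    ], dtype=float)

# A tiny one-hidden-layer network: 4 features in, a single score out.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def evaluate(position):
    x = extract_features(position)
    h = np.tanh(W1 @ x + b1)            # hidden layer
    return float((W2 @ h + b2)[0])      # score: how good (or bad) the position is
```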


#4949815 State of the art game AI in 2012?

Posted by willh on 16 June 2012 - 09:49 AM

The state of the art is IBM's Watson. Nothing else comes close to its achievements.

Second to that is Microsoft Kinect. It detects people and estimates poses in real time, even against cluttered backgrounds. Very artificially intelligent.

Most of the best AI is in things you don't notice... Path estimation to reduce the appearance of lag, algorithms to ensure realistic poses and animations, navigation algorithms, etc...

Most video game NPCs use scripted behavior or simple hard coded behaviors. You will find the most interesting AI where a machine needs to interface with reality, but at that point it tends to become very transparent.

Some UAVs have pretty slick AI for target tracking and line-of-sight planning. Google's self-driving car is another example.








#4942040 Turn Based Strategy AI

Posted by willh on 21 May 2012 - 06:06 PM

I'm really enjoying following your progress. It's satisfying technically, and I can't help but feel warm and fuzzy knowing how your son is involved. :) It's really cool how enthusiastic he gets about the different tricks you add, and he asks some really good questions.

I'm particularly interested in seeing you apply the GA optimization. Have you thought about keeping the top N strategies, so that maybe you could have different play styles that are of the same difficulty?

Keep up the great work!


#4941176 Finding RtNEAT tutorial/example code

Posted by willh on 18 May 2012 - 07:40 AM

"that's like needing to cite a paper to verify that your cr will not run when it's out of gas". LOL. Mind if I borrow that?

Dave, he wants an online learning algorithm. NNs are well suited to the task. It takes a trivial amount of time to train/retrain an ANN and the whole thing can be automated. You're always complaining that NNs have no use in games, and now this gentleman has a plan to use one with legitimate reasons.

Blessman: you can cite this website and the post. It is peer reviewed, in the sense that everything gets reviewed and commented on by people who have experience working in the field. There is a video on YouTube of reinforcement learning being used to teach an opponent to play the game "golden spaceships". If you look you can find it -- it's not an ANN, but the idea is the same.

Dave makes a good point about the use of ML techniques in general. For video games they are almost never the right choice. Hand coded rules are easier to adjust to produce the desired behavior and most games require simple agents.






#4937148 Neural Networks and Genetic Algorithms

Posted by willh on 03 May 2012 - 11:20 AM

It doesn't help that most people's way of thinking of neural networks is they're just magical artificial brains.

In fact Neural Networks are mostly bullshit, created to solve problems that you don't want to or are not intelligent enough to code up a proper solution to. In these cases you just accept approximate solutions and state that you want any inputs to be mapped to outputs that are known to arise from similar inputs.

TL;DR: NNs are way over-hyped.


Over-hyped, yes. Bullpoo, no. Misunderstood, definitely.

An ANN is an approximation of a function based on observations. So is a linear regression. So is a running average. So is almost all of statistics. Hardly bull poo.

I consistently see problems where an ANN outperforms a decision tree.


#4909008 Artificial Emotion

Posted by willh on 02 February 2012 - 10:57 PM

Regarding laughter: the best (and most recent) theory I have heard is that laughter is a way of signalling to others that there is no danger despite a potentially dangerous situation.

For example: a joke is only a good joke when the punch line is unexpected. The joke teller was deceptive, normally a bad thing, but you laugh because it's not actually threatening. You laugh when you see someone get hurt (football to the nuts, etc.) but only so long as the injury isn't serious.



#4899981 Trying to get a better understanding of trig.

Posted by willh on 05 January 2012 - 10:19 AM

Have you tried graphing the functions out? Sometimes seeing something graphically makes it easier to understand.

Sine is a bizarre thing to grasp because it's used in many different ways. I couldn't learn it in school and struggled to understand it, but I seemed to have no problem teaching it to myself when I wanted to know how to triangulate radio sources. You just need to find the right way of looking at it.

If you're trying to calculate distances or specific angles, then Sine is the ratio of the lengths of two specific sides of a right-angled triangle: Sine(angle) = opposite length / hypotenuse length. In other words, it's how angle, height, and overall distance are related.

If you're trying to make a smooth wave, then sine is a smooth periodic amplitude function. Try graphing y = sine(x) where x is a value between 0 and 6.28.

Of course they are the same thing, but maybe looking at it one way or the other will help.
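Here's a quick sketch of both views in plain NumPy/matplotlib (nothing game-specific, just an illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# View 1: sine as a ratio of sides in a right-angled triangle.
angle = np.radians(30)                   # 30-degree angle
opposite, hypotenuse = 1.0, 2.0          # classic 30-60-90 proportions
print(np.sin(angle), opposite / hypotenuse)   # both print ~0.5

# View 2: sine as a smooth periodic wave.
x = np.linspace(0, 6.28, 200)
plt.plot(x, np.sin(x))
plt.title("y = sin(x) for x in [0, 6.28]")
plt.show()
```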

Good luck!


#4878884 Bayes Networks in games?

Posted by willh on 31 October 2011 - 07:49 AM



If there were one machine learning thing that I would actively look into, it would be decision trees. Decision trees are brittle, so they can add variance, and they do well with little data. You could then augment them as a random forest or boosted trees. I think that is where I would start.

A close cousin to Bayes nets that might be useful in some types of games (arcade shooters, anything requiring movement tracking and prediction) would be a particle filter. Finally, a pared-down reinforcement learning algorithm may be good for long-term play in a strategy game or RPG. All of these would be very hard to get right and take a lot of time, when a simple Markov-chain-based model or even a finite state machine would have done just as well or better for much less work.


Trees are that little tool that nobody ever talks about. You can derive probability distributions from them, and use them for clustering, regression, and classification. They can be built in near real time, and a human can understand their output (it's a decision tree!!!).
I've posted a few screenshots of regression and boosted regression trees. I've even offered to write a tutorial. They lack a cool name, maybe... Or the math isn't complex enough to invoke the voodoo factor? Not even one thumbs up though... <sigh>
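To show how little code a usable tree takes, here is a rough scikit-learn sketch (not the implementation behind those screenshots; the toy data and depth are made up):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Toy data: a noisy sine wave, the same flavour of problem as in the screenshots.
rng = np.random.default_rng(1)
X = np.linspace(0, 6.28, 200).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

# The selling point: a human can actually read what the model learned.
print(export_text(tree, feature_names=["x"]))
```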


#4874767 train neural network with genetic algorithms

Posted by willh on 20 October 2011 - 11:49 AM

I still don't know what sort of "neural network" is being used as a race car controller. What are its inputs (range, speed, acceleration sensors) and outputs (steering, throttle, brakes)? Does it hold some internal state? Is it really a neural network, or would thinking of it as a generic, concise representation of a lookup table be more appropriate? What constraints (e.g. symmetry between left and right) can be imposed on its structure?

Without assurance that good car controllers are possible, I'm strongly inclined to attribute any problems with cars that keep crashing around to fundamentally inadequate architecture (mainly ignoring important inputs), not to insufficient or unsophisticated training.



He specified the inputs: they are range to wall, and he has four of them. Range to wall is measured as a distance, I'm assuming, and not as 'time to collision'.

Good car controllers are definitely possible. Below are some decent ones.

a. This one made by evolving a neural network (video)
b. A prettier one using Neuroevolution
c. Even prettier one

It's not really that complicated a problem when there are no other cars on the track. With enough time the neural network would just memorize the optimal path.

I can't speak for the OP, but one reason to use a neural network (MLP) is that they can approximate any function 'as is' without special coding considerations. Changing the inputs/outputs doesn't mean re-writing the neural network code.


He is definitely missing some inputs if he only has 4 direction sensors. Speed of the car would be an important one. :)
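A bare-bones sketch of the kind of controller being discussed -- four range sensors plus a speed input in, steering and throttle out. The layer sizes and input/output choices here are assumptions for illustration only:

```python
import numpy as np

class MLPController:
    """Tiny fixed-topology MLP: sensor readings in, control signals out."""

    def __init__(self, n_inputs=5, n_hidden=8, n_outputs=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_inputs))
        self.W2 = rng.normal(scale=0.5, size=(n_outputs, n_hidden))

    def act(self, ranges, speed):
        # Inputs: four range-to-wall sensors plus the car's speed.
        x = np.append(ranges, speed)
        h = np.tanh(self.W1 @ x)
        steering, throttle = np.tanh(self.W2 @ h)   # both squashed to [-1, 1]
        return steering, throttle

# Training (backprop, neuroevolution, a GA, ...) only adjusts W1 and W2;
# adding or removing a sensor just changes n_inputs, not the controller code.
```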


#4873699 Learning system suitable for finding combinations of features?

Posted by willh on 17 October 2011 - 06:45 PM

I first posted this from my iPad and it didn't include any line spaces. Sorry. :) Edited to fix, and added a few more comments


A support vector machine will not work, as he wants a continuous output. He could try support vector regression. Basically, what you want to do is regression. NOTE: Alvaro, if you meant SVR, my apologies.

Google "regression analysis" and you will find enough to occupy you for a lifetime. The best choice depends on your data.

Your problem is non-linear, I am assuming; otherwise simple weighted linear multivariate regression would work.

- There is nothing wrong with neural networks provided you convert your inputs in to a suitable range. Input normalization is where most people fail with neural networks.

- Random forests will do what you want, but depending on the data, a neural network could be better.

- Boosted regression trees will also work, but you will want to be familiar with regression trees and boosting before you try them.

- Gaussian mixture regression reportedly works well on noisy data (data that obviously is missing at least one dimension of information), but the implementation isn't exactly straightforward.

- MARSplines are meant specifically for regression problems. I've never used them, but from what I've read they seem like a good solution.

- Support vector regression is another excellent tool for regression problems. It's a lot like neural networks in terms of having many training parameters to tweak.

- You could also try evolutionary programming. Slow to train, but flexible.



All that said, and without seeing your data, I would recommend:

- Back propagation neural network
or
- a regression tree


I've attached some screenshots showing the differences between ANNs, CART, and boosted regression on the same 'noisy' sine wave. The mean squared error is shown at the top of each graph.

[Screenshot: ANN fit to the noisy sine wave]

The ANN produces a smooth, continuous function but doesn't handle outliers very well.






[Screenshot: regression tree fit to the noisy sine wave]

Regression trees are able to handle data that isn't smooth and continuous, but they produce 'plateaus' where you see identical outputs regardless of the input.



[Screenshot: boosted regression trees fit to the noisy sine wave]

Boosted regression gives you the best of both worlds-- smooth outputs, while handling outliers well. The above model is actually quite weak because I only boosted a few trees.
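If you want to reproduce that kind of comparison yourself, a scikit-learn sketch along these lines should get you close. The hyperparameters here are guesses, not the ones behind the screenshots:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

# Noisy sine wave, as in the screenshots above.
rng = np.random.default_rng(0)
X = np.linspace(0, 6.28, 300).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=300)

models = {
    "ANN":             MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0),
    "Regression tree": DecisionTreeRegressor(max_depth=4),
    "Boosted trees":   GradientBoostingRegressor(n_estimators=50, max_depth=2),
}
for name, model in models.items():
    pred = model.fit(X, y).predict(X)
    print(f"{name}: MSE = {mean_squared_error(y, pred):.4f}")
```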


#4825687 ANN in a fighting game?

Posted by willh on 20 June 2011 - 04:09 PM

I've been interested in the potential of neural nets in video games for a while now. But I have been thinking: how good are they at action-oriented games, and more specifically, fighting games? I have two worries: a) they suck at real-time gameplay, or b) they're ridiculously good at it and become inhumanly difficult. I guess I'm just wondering if anybody here has any idea of whether a NN is a good idea for AI in a fighting game.



Just want to echo what's already been said..

ANNs are useful because they are not problem specific. You can, in theory, throw any problem at an ANN, combined with a training algorithm of your choice, and get some sort of answer. It's not just pattern classification -- any kind of function approximation.

ANNs are usually a poor choice for solving a problem because they are not problem specific. :) The word 'poor' here is subjective. You can solve almost anything using backpropagation if you fiddle around enough, and the answer will be almost as good (or as good) as the best tool for the job. This should surprise nobody since z = f(x*y) is kind of flexible. :D


#4825678 when are genetic algorithms useful?

Posted by willh on 20 June 2011 - 03:58 PM

But it seems to me like the requirements are somewhat contradictory. If you know a lot about a problem, surely there's a better method than just randomly moving population of search points around and praying for emergent behavior? Are there any concrete examples where genetic algorithms are actually useful?


Alvaro kind of summed it up. I've used GAs before, with some success, but there have always been better alternatives and I was just 'playing'.

Their strength is that they are not problem specific, and in theory could be used to 'solve' just about any problem given enough computing time.

In reality this is mostly pointless. GAs and ANNs were dreamt up in a time when processing power (and raw data) was very limited. These days it's feasible to do a somewhat exhaustive search of a space using millions of data samples, so....

As far as games go, I can think of a few uses, but so far haven't seen anyone use them.
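For reference, the whole recipe is tiny, which is part of the appeal. A bare-bones sketch with a bit-string genome and a placeholder fitness function (both made up purely for illustration):

```python
import random

def genetic_search(fitness, genome_length=20, pop_size=50, generations=100):
    # Random initial population of bit-string genomes.
    pop = [[random.randint(0, 1) for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                  # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_length)
            child = a[:cut] + b[cut:]                    # single-point crossover
            child[random.randrange(genome_length)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy problem: "solve" turning every bit on. Swap in any fitness you like.
best = genetic_search(fitness=sum)
print(best, sum(best))
```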


#4786035 Machine Learning with Multiplayer games

Posted by willh on 15 March 2011 - 08:31 AM

Algorithmically, solving chess is fairly simple. It's just a matter of throwing enough horsepower at it. Deep Blue was not impressive in its chess prowess nearly as much as it was impressive in its computing power. The same can be said of Watson on Jeopardy.


I do not agree. Watson (Deep QA) is a very different beast and is not just a game tree. Watson is more 'AI' than Deep Blue.


There are, as far as I know, no equivalents to Watson available on the desktop running at a much slower speed.


#4767727 Stuck on a question....

Posted by willh on 31 January 2011 - 06:17 PM

42


#4765733 Neural Networks experiments.

Posted by willh on 27 January 2011 - 11:56 AM

Welcome to the board JRowe.

There is a good reason why nobody is talking about ANNs these days. It's 2011; Support Vector Machines are what's in. They do a better job at generalization, work for both classification and regression, and are way easier to tune than an ANN. That's not to say ANNs are useless, it's just that 99% of the time they are the wrong tool for the job -- ESPECIALLY when you are hard-coding the topology.

My advice to you is to hand code a multilayer perceptron using a LINEAR activation function (i.e. the weighted sum passed straight through) and try to make it solve XOR. Then do the same thing using a non-linear activation function.
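The exercise is worth doing by hand, but as a quick sanity check of the claim, here is a scikit-learn sketch contrasting a linear (identity) activation with a non-linear one on XOR. The layer size and solver are arbitrary choices:

```python
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]   # XOR

for activation in ("identity", "tanh"):
    net = MLPClassifier(hidden_layer_sizes=(4,), activation=activation,
                        solver="lbfgs", max_iter=5000, random_state=0)
    print(activation, net.fit(X, y).predict(X))

# With 'identity' the layers collapse into a single linear map, so the net
# typically cannot separate XOR; with 'tanh' it should learn [0 1 1 0].
```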



