Neural Network - Discussion

Quote:Original post by Timkin
Oh, just a quick response to kirkd... if it weren't for Kolmogorov, who built on Markov's work, we wouldn't have our modern information age (including computers, electronic and photonic communications systems, the internet, etc)! So, it's probably a little unreasonable to suggest that Markov's work sat idle for 100 years until applications of Markov chains arose in speech recognition! ;)

Cheers,

Timkin


It would be nice to be able to read what kirkd said. I do not agree with that. Perhaps there would have been stalls, and things might have ended up with slightly different notation (as in, say, complexity theory), or perhaps a different way of doing probabilities, but I am certain that no one man has had so much impact since Aristotle. There were simply too many people approaching the notion of computing from many angles, including, off the top of my head: Russell, Church, Haskell Curry, McCarthy, Turing, von Neumann, and John Backus...

Interestingly, what does owe a lot to AI is programming languages; many features considered powerful today were already being used back in the day to make AI programs easier to tackle: functional programming, relational/logic programming and constraint solving, object-oriented programming, the module concept, and dynamic dispatch, to name a few.
Daerax,

I apologize; I deleted my original response rather than leave what was considered to be misinformation in place. What I had originally said was that Markov developed the basis of Hidden Markov Models in the 1880s, but they didn't find much practical application until the 1990s, with speech recognition and bioinformatics. It was not my intent to suggest they found no usage, but rather that a technology that found limited practical application for 100 years could be applied to a modern problem.

-Kirk

Quote:Original post by InnocuousFox
NNs can pretty much handle only a static time slice and don't handle looking ahead (or behind) very well. That makes any sort of planning algorithm a little muddy.


I am not sure I understand your reasoning for this statement.

NNs are nothing more than function approximators, and as such have no extra limits attached to them. The point of NNs is to machine-learn the function in the first place, when you don't have the information necessary to simply map input(s) to output(s) using more traditional methodologies.

If you want to evolve a NN to look ahead or behind, give it some feedback nodes (outputs that are used specifically as inputs on the next query). This does put a constraint on training methodology, as far as I am aware (you can't backprop-train a node without an error metric for it). A GA training approach is well suited to this kind of setup.
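For illustration only, here is a minimal sketch of such feedback nodes; all names and layer sizes are invented, not from this thread:

```python
import numpy as np

# A net whose last few outputs carry no game meaning and are simply
# fed back in as extra inputs on the next query.
N_IN, N_OUT, N_FB, N_HID = 4, 2, 3, 8

rng = np.random.default_rng(0)
W1 = rng.normal(size=(N_HID, N_IN + N_FB))    # inputs + feedback -> hidden
W2 = rng.normal(size=(N_OUT + N_FB, N_HID))   # hidden -> outputs + feedback

def query(x, fb):
    """One evaluation; returns action outputs and the feedback to reuse."""
    h = np.tanh(W1 @ np.concatenate([x, fb]))
    y = np.tanh(W2 @ h)
    return y[:N_OUT], y[N_OUT:]

fb = np.zeros(N_FB)                           # no history on the first query
for x in rng.normal(size=(10, N_IN)):
    action, fb = query(x, fb)
```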

Once a NN (or any function approximation methodology) is suitable, it can usually be easily converted to lookup tables + interpolation: a simple and efficient function approximator that is also easily tweaked by hand.
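A sketch of that baking step for a one-input approximator (`trained_net` is just a placeholder standing in for the real net):

```python
import numpy as np

trained_net = lambda x: np.sin(3.0 * x)   # placeholder for the trained net

xs = np.linspace(0.0, 1.0, 64)            # sample grid over the input range
table = trained_net(xs)                   # baked values, hand-tweakable

def approx(x):
    return np.interp(x, xs, table)        # linear interpolation between entries
```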
First, in order to read non-current data (e.g. for detecting trends), you would have to assign an input to "what came before". In a sequential action, this is fine because you can look at whatever data structure you are using to keep track of "what came before" (e.g. a list) and say (current - n) or whatever. However, if you are looking at time-sensitive things, e.g. "things within the past 5 minutes", you have to get a little more clever. For example, in doing stock market analysis (a common example) you have to have inputs for "yesterday's closing price", "last week's closing price", "last month's closing price", or whatever time scales you find necessary. The more of those you throw in there, the more inputs you need to account for, and each of those is subject to glitches and spiky local minima/maxima.
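To make the lagged-input idea concrete, a small sketch; the specific lags (1, 5, 21 trading days) are just illustrative:

```python
# Build the NN input vector for day t from a daily closing-price series.
def make_inputs(closes, t):
    """Today's close plus yesterday's, last week's, and last month's."""
    return [closes[t], closes[t - 1], closes[t - 5], closes[t - 21]]

closes = list(range(100, 160))   # dummy price history
x = make_inputs(closes, t=30)    # t must be >= 21 so every lag exists
```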

More to the point, however, NNs are good for creating an output based on the current state of the inputs. You can't as easily start playing with things such as a Minimax-style problem or something as complex as a plan of actions in a sequence.

More later... I'm at a client site.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Some responses to recent comments...

Quote:Original post by Daerax
I do not agree with that. Perhaps there would have been stalls and things might have ended up with slightly different notations like in say complexity theory, perhaps a different way of doing probabilities but I am certain that no one man has had so much impact since Aristotle. There were simply too many people approaching the notion of computing from many angles including off the top of my head: Russell, Church, Haskell Curry, McCarthy, Turing, von neuman, John backus...


Most of what arose in western engineering (particularly telecommunications and control), and subsequently in computing, from the 40s onward was based directly on the understanding of stochastic processes developed by the Russian-Germanic alliance of the late 19th and early 20th century. Generally speaking, western scientists and mathematicians were simply nowhere near the level needed to create this understanding. There are countless examples of advances in western engineering and computing being based directly on Russian publications, or on western scientists having spent time with their foreign counterparts and bringing the knowledge back with them.

During the latter half of the 19th century and into the 20th, there was a single strong thread of Russian mathematicians, predominantly coming from the same school at Moscow State University. The mathematics group there was pivotal to the developments of the time; everything that came later in this area can be shown to have grown from the knowledge developed by this one group. Kolmogorov was one of those who stood out from the crowd, hence my selection of him.

I could provide examples of the direct links and the basis of my opinion if anyone is particularly interested, but I'd end up waffling on for ages, hence the omission from this post! ;)

On the issue of handling time in ANNs...

Feed-forward networks are very poor at handling time, even when you provide inputs covering information at previous times, which is essentially an attempt to model the autocorrelation of the process. However, there ARE network architectures that handle time very well... they're just harder to train, because you now have the problem of ensuring that you're seeing all processes passing through a given point at a given time.

Recurrent networks can be designed to model the underlying time-space differential of the process. You can even ensure properties such as stable (non-divergent) learning. I've made some particular contributions in this area in the application of recurrent architectures to learning control problems (where you know nothing of the system you are trying to control, only the performance requirements). Having said that, I certainly wouldn't advise anyone to apply these architectures to control problems in games.
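For readers who haven't met one, here is one common recurrent architecture (an Elman-style network), shown only to make "recurrent" concrete; it is not necessarily the architecture Timkin has in mind:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 1
W_ih = rng.normal(size=(n_hid, n_in))    # input  -> hidden
W_hh = rng.normal(size=(n_hid, n_hid))   # hidden -> hidden (the recurrence)
W_ho = rng.normal(size=(n_out, n_hid))   # hidden -> output

def step(x, h):
    h = np.tanh(W_ih @ x + W_hh @ h)     # hidden state carries time forward
    return W_ho @ h, h

h = np.zeros(n_hid)
for x in rng.normal(size=(20, n_in)):
    y, h = step(x, h)
```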

Cheers,

Timkin
Quote:Original post by InnocuousFox
First, in order to read non-current data (e.g. for detecting trends), you would have to assign an input to "what came before".


I think you are missing the point of machine learning.

You really shouldn't assign any sort of historic input. Instead, you can let the machine figure out what's important to "remember", when to "forget", and so forth. You simply continue to give it current state information, plus access to the feedback nodes. The more feedback nodes you give it, the more historic state it has.

The historic state, the feedback, has no meaning assigned by you. That's for the GA to optimize.
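A toy sketch of that GA side: evolve flat weight vectors for such a net and let fitness alone decide what the feedback nodes come to "mean". `evaluate` is a stand-in; in a game it would run the agent and score it:

```python
import numpy as np

rng = np.random.default_rng(0)
POP, N_WEIGHTS, GENERATIONS = 20, 50, 100

def evaluate(w):
    return -np.sum(w ** 2)        # placeholder fitness; higher is better

pop = rng.normal(size=(POP, N_WEIGHTS))
for _ in range(GENERATIONS):
    scores = np.array([evaluate(w) for w in pop])
    elite = pop[np.argsort(scores)[-POP // 2:]]   # best half survives
    pop = np.vstack([elite, elite + rng.normal(scale=0.1, size=elite.shape)])

best = max(pop, key=evaluate)
```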

Quote:Original post by InnocuousFox
More to the point, however, NNs are good for creating an output based on the current state of the inputs.


True.

Quote:Original post by InnocuousFox
You can't as easily start playing with things such as a Minimax-style problem or something as complex as a plan of actions in a sequence.


NNs are just numeric functions. They do what functions do. Of course they aren't suitable for searching a game tree; that's what search algorithms are for.
... which is why NN's are more "silver spoon" than "silver bullet". It's a big ol' mathematical function - not a problem-solving algorithm.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

When I hear "AI" I immediately think "expected utility maximization", not "artificial neural networks".

Not all of AI is expected utility maximization, but a huge chunk of it is. It takes on many different shapes (e.g., game tree search) and it takes a lot of effort to get this general idea to solve real problems, so in that sense it's not a silver bullet either. But it is probably the first thing you should think of when approaching any decision-making problem.
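In its barest form, expected utility maximization is just this (all numbers made up):

```python
# Pick the action whose probability-weighted utility is highest.
actions = {
    "attack": [(0.6, +10.0), (0.4, -20.0)],   # (probability, utility) pairs
    "defend": [(0.9, +2.0), (0.1, -5.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))   # -> "defend"
```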

Unfortunately, ANNs have a much catchier name and most people think of them first.

Quote:Original post by InnocuousFox
... which is why NN's are more "silver spoon" than "silver bullet". It's a big ol' mathematical function - not a problem-solving algorithm.


Problem-solving is another name for the perhaps overly general term "search algorithms" (be it AB, MTD(f), A*, etc.).

Chess is a fine example where, while dominated by the traditional searches, Machine Learning has also played an important role.

What is the value of a Knight or Bishop sitting on d6?

AB-driven engines have an eval() that needs to know the values as they relate to the engine's own run-time search capabilities. Many engines use piece-square tables: a set of many hundreds of coefficients which are tweaked not by humans, but by machine learning algorithms. Not only are these coefficients too big a problem for a human to manually define, due to the sheer number of them, but the human also isn't well suited for the task, because he or she is not truly capable of grokking the intricacies of how these values relate to an N-ply AB search, nor the horizon effects associated with it.
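A sketch of what such a table looks like; the value below is invented, and a real engine tunes hundreds of these coefficients with ML rather than by hand:

```python
# Piece-square table: eval() adds a tuned bonus for each piece on each square.
KNIGHT_PST = [0] * 64          # index = 8 * rank + file, a1 = 0
KNIGHT_PST[8 * 5 + 3] = 25     # a knight on d6 might be worth +25 centipawns

def knight_term(knight_squares):
    return sum(KNIGHT_PST[sq] for sq in knight_squares)
```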

ML algorithms are very powerful tools which are typically not very useful as part of the end product, but can be very useful on your way to creating that end product.
Can't be done with current technology, but good luck trying. =/
This is your life, and it's ending one minute at a time. - Fight club
