
Neural Network - Discussion


Hnefi    386
Quote:
Original post by sion5
Thanks Hnefi, that was a very informative post. I don't understand the latter part of this comment:

Using GA as a learning algorithm avoids the local minima problem almost completely, but you pay for it by throwing away all structural knowledge you have of the problem which manifests itself as extremely poor learning rates.

I understand avoiding the local minima problem, but what do you mean by saying you do so at the cost of throwing away all structural knowledge of the problem, which manifests itself as extremely poor learning rates?

Structural knowledge is the knowledge you have about how to solve the problem. You can use it to make the search for a solution more efficient. A* in pathfinding is an example of this: you use the knowledge that straight-line distance is an optimistic estimate of the remaining path cost, combined with the knowledge that an optimistic estimate is permissible for that particular algorithm, to reach a solution efficiently and correctly.

Neural nets already throw away a lot of structural information, but not all. In the case of backpropagation, a typical example is to assign different learning rates to different parts of the network; often, you let each neuron keep track of a measure of "inertia" (usually called momentum) that makes the search for the optimum much more efficient. This "inertia" is obtained by measuring how much the neuron needed to adjust its weights in the past and letting that affect the learning rate in the present. By doing so, you are using the knowledge that this particular neuron is close to or far from a stable optimum, so it makes sense to keep it more or less stable.
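To make that concrete, here's a minimal sketch of a per-weight update with such a momentum term; the structure and the example rates are illustrative, not from any particular implementation:

```cpp
#include <cstddef>
#include <vector>

// One neuron's weights plus the last update applied to each weight.
struct Neuron {
    std::vector<double> weights;
    std::vector<double> prevDelta; // "inertia" carried over from the last step
};

// gradient[i] is dE/dw_i from backpropagation for this neuron.
// learningRate and momentum (e.g. 0.1 and 0.9) are illustrative values.
void updateWeights(Neuron& n, const std::vector<double>& gradient,
                   double learningRate, double momentum)
{
    for (std::size_t i = 0; i < n.weights.size(); ++i) {
        // Plain gradient step plus a fraction of the previous update:
        // weights that have been moving steadily keep moving, while
        // oscillating weights are damped.
        double delta = -learningRate * gradient[i] + momentum * n.prevDelta[i];
        n.weights[i] += delta;
        n.prevDelta[i] = delta; // remember for the next step
    }
}
```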

GA cannot do this. You cannot get a meaningful idea of what parts of the genomes should be kept stable, because recombining them may alter this completely, even to the point of inverting the fitness of the entire genome. You can make no guarantees that it is better or worse to swap one part of a genome with another. It's all random.

This is a problem with all statistical methods, but GA is one of the worst offenders. That's why it's so general and also so inefficient.

sion5    100
Please don't slate me on this, as I have not done enough research into the implementation, but...

For my NN, I was going to introduce a GA to establish the synaptic weights. The fitness of each chromosome (a candidate set of weights) would be determined by how close the NN's output is to the training data's output. The chromosomes with the highest fitness will, in theory, be the weights that give the output we require.

If I haven't explained this well, please let me know and I will try to elaborate further.
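Roughly, the fitness measure I have in mind would look like this (a sketch assuming a plain feed-forward net; runNetwork and every other name here is hypothetical):

```cpp
#include <cstddef>
#include <vector>

// A chromosome is simply the flattened weight vector of one candidate net.
using Chromosome = std::vector<double>;

// Hypothetical helper: loads 'weights' into the net and returns its
// outputs for the given inputs.
std::vector<double> runNetwork(const Chromosome& weights,
                               const std::vector<double>& inputs);

// Fitness = closeness of the net's outputs to the training targets:
// lower summed squared error over the training set gives higher fitness.
double fitness(const Chromosome& weights,
               const std::vector<std::vector<double>>& trainInputs,
               const std::vector<std::vector<double>>& trainTargets)
{
    double sumSquaredError = 0.0;
    for (std::size_t i = 0; i < trainInputs.size(); ++i) {
        std::vector<double> out = runNetwork(weights, trainInputs[i]);
        for (std::size_t j = 0; j < out.size(); ++j) {
            double e = out[j] - trainTargets[i][j];
            sumSquaredError += e * e;
        }
    }
    return 1.0 / (1.0 + sumSquaredError); // maps error to (0, 1]
}
```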

Hnefi    386
Quote:
Original post by sion5
Please don't slate me on this, as I have not done enough research into the implementation, but...

For my NN, I was going to introduce a GA to establish the synaptic weights. The fitness of each chromosome (a candidate set of weights) would be determined by how close the NN's output is to the training data's output. The chromosomes with the highest fitness will, in theory, be the weights that give the output we require.

If I haven't explained this well, please let me know and I will try to elaborate further.

That is the standard way of doing it, and it works (but inefficiently). What this solution does is make a number of copies of the neural net, evaluate all of them, and then randomly (with a preference for more fit nets) choose nets/chromosomes to "mate". Which genes get switched between two chromosomes is random; you don't know how different parts of the chromosome affect other parts (and, by extension, the fitness of the entire chromosome), so you can't make a reliable statement about which parts of the chromosome should remain stable.

It works, just don't expect it to learn anything significant in real time.
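For illustration, the evaluate/select/mate loop boils down to something like the sketch below (reusing the flattened-weights chromosome idea from above; the random cut point is exactly the blind recombination I'm criticizing):

```cpp
#include <cstddef>
#include <cstdlib>
#include <utility>
#include <vector>

using Chromosome = std::vector<double>; // flattened net weights, as above

// Roulette-wheel selection: the chance of being picked is proportional to
// fitness, so fitter nets mate more often, but nothing is guaranteed.
std::size_t rouletteSelect(const std::vector<double>& fitnesses)
{
    double total = 0.0;
    for (double f : fitnesses) total += f;
    double r = (std::rand() / (double)RAND_MAX) * total;
    for (std::size_t i = 0; i < fitnesses.size(); ++i) {
        r -= fitnesses[i];
        if (r <= 0.0) return i;
    }
    return fitnesses.size() - 1;
}

// Single-point crossover: the cut is placed at random precisely because
// we have no structural knowledge of which weights belong together.
std::pair<Chromosome, Chromosome>
crossover(const Chromosome& a, const Chromosome& b)
{
    std::size_t cut = std::rand() % a.size();
    Chromosome childA(a.begin(), a.begin() + cut);
    childA.insert(childA.end(), b.begin() + cut, b.end());
    Chromosome childB(b.begin(), b.begin() + cut);
    childB.insert(childB.end(), a.begin() + cut, a.end());
    return std::make_pair(childA, childB);
}
```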

sion5    100
I don't understand why you would want it to learn anything at runtime. You would have trained the NN offline; all you are doing at runtime is using it.

Ohforf sake    2052
I'm not that much into NNs, but my advice would be to do some testing. There are libraries like FANN that can be used to set up and train NNs without much knowledge, and to play with them.
Just set up a small framework and try to solve some problems. When I did this, I was amazed at how badly it actually works. It's by far not as easy as plugging some test data in and having something useful come out after four hours of learning.
So, as I said, do some testing and you will see what they are good at and when they are simply a pain in the ass.
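For reference, setting up and training a small net with FANN really is only a few calls. This follows FANN's standard C API; the layer sizes, file names and stopping parameters are just illustrative:

```cpp
#include <fann.h>

int main()
{
    // Fully connected net with 3 layers: 2 inputs, 3 hidden neurons, 1 output.
    struct fann *ann = fann_create_standard(3, 2, 3, 1);

    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

    // Train on a data file in FANN's plain-text format until the mean
    // squared error drops below 0.001 or 500000 epochs have passed,
    // reporting progress every 1000 epochs.
    fann_train_on_file(ann, "train.data", 500000, 1000, 0.001f);

    fann_save(ann, "trained.net"); // reload at runtime with fann_create_from_file
    fann_destroy(ann);
    return 0;
}
```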

As for NNs being able to respond to untrained, new scenarios: I highly doubt this works, or at least that it works better than the "old-school" algorithms.
As was already stated, NNs are good at pattern recognition, where "patterns" would be specific situations in your case. The NN will be fine at recognizing a pattern even if there are small changes (an enemy more or less, a bit more or less health, ...), but as soon as something completely weird happens, e.g. a pattern the developer didn't think of and therefore didn't train, it won't know what to do.
Example: if anyone here played Crysis, go ahead, run into an enemy camp, climb a tower and wait there; the AI won't know what to do, because the devs didn't do much scripting for that scenario. Now, if an NN were used, it would be pretty much the same thing: it wouldn't know what to do with this pattern, as the devs didn't train for it. I can't imagine the NN taking its chewing gum and a pair of boots and building a rocket launcher from them to blast you down from that tower.

Now, if you still want to stick with NNs, I would try a modular approach. Hnefi already said that preprocessing the data helps a lot. I would try a network of NNs, with an NN for each specific task. For example, a bot would have an NN for target prioritizing, an NN for abstract decision making, an NN for targeting, an NN for movement, ...
That way you can train each module individually and put all the parts together in the end, wiring outputs of higher-level modules to inputs of lower-level ones. Debugging also gets easier, because if something goes wrong, you can check the outputs of each module to see who fucked up.
However, what you get in the end (if it works) has the same functionality as decision trees, so there is no real benefit.
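A sketch of that wiring, with each module as a separately trained net; every type and name here is made up just to show the shape of the pipeline:

```cpp
#include <vector>

// Stand-in for one separately trained net; a real module would load its
// own weights and evaluate its own topology in run().
struct NNModule {
    std::vector<double> run(const std::vector<double>& inputs) {
        return std::vector<double>(inputs.size(), 0.0); // placeholder output
    }
};

// Higher-level outputs feed lower-level inputs; each stage can be trained
// and debugged on its own before the whole pipeline is assembled.
std::vector<double> decideBotAction(NNModule& targetPriority,
                                    NNModule& decisionMaking,
                                    NNModule& movement,
                                    const std::vector<double>& worldState)
{
    std::vector<double> targets  = targetPriority.run(worldState);
    std::vector<double> decision = decisionMaking.run(targets);
    return movement.run(decision); // final motor outputs
}
```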

The only thing that could help NNs is to put some work into learning. If you find a good and fast way of learning from past mistakes, you could try to implement an AI for an RTS game.
Maybe again with a modular approach (e.g. one NN for enemy prediction, one NN for building, one NN for general strategy, one NN for micro-movement, ...). Or some sort of hybrid approach, with an old-school AI managing things but an NN that can steer some weights inside the hardcoded AI and serve as "intuition".
However, in both cases, with the ability to learn from the match by "looking at the replay" or something and figuring out a way to perform better in the future.
It would be really cool having an AI that doesn't fall for the same trick over and over again.
I'm pretty sure this can be done much more easily without NNs, but you never know...


I guess this wasn't much help, but you really picked a tough topic and you should keep your expectations low.
On the other hand, I quite like these discussion threads, as they are usually an interesting read...

sion5    100
Hnefi, thanks again for your input in the discussion. It's a research topic that I'm investigating, so I'm going to stick with NNs :-)

Ohforf sake, thanks as well for your input; you also raised some interesting points.

So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry.

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood through experience!

IADaveMark    3731
Quote:
Original post by sion5
So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry.

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood through experience!

And now we have come full circle to my original comment regarding how the industry greets someone from academia - especially a student - with skepticism. Visit my link on that exact comment. It is to a blog post by Damian Isla (Bungie... i.e. Halo 2 & 3) where he laments that new students come to him trumpeting their prowess by having knowledge of A* and NNs. The former is the most-written-about subject in game AI except for maybe FSMs and the latter is not useful and therefore irrelevant. But the schools keep injecting the students with it anyway, telling them that it is a useful skill, and sending them off into interviews in the biz.

shurcool    439
Quote:
Original post by shurcool
Quote:
Original post by Kylotan
or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

And what happens in a scenario that the developer did not originally envisage?

After thinking a little more about this, I just wanted to bring more attention to this point.

Are there any valid responses to that question?

Omid Ghavami    1007
Quote:
Original post by shurcool
Quote:
Original post by Kylotan
or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

And what happens in a scenario that the developer did not originally envisage?

After thinking a little more about this, I just wanted to bring more attention to this point.

Are there any valid responses to that question?


Patch

Hnefi    386
Quote:
Original post by InnocuousFox
Quote:
Original post by sion5
So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry.

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood through experience!

And now we have come full circle to my original comment regarding how the industry greets someone from academia - especially a student - with skepticism. Visit my link on that exact comment. It is to a blog post by Damian Isla (Bungie... i.e. Halo 2 & 3) where he laments that new students come to him trumpeting their prowess by having knowledge of A* and NNs. The former is the most-written-about subject in game AI except for maybe FSMs and the latter is not useful and therefore irrelevant. But the schools keep injecting the students with it anyway, telling them that it is a useful skill, and sending them off into interviews in the biz.

Actually, I must disagree here - but maybe I'm the exception that proves the rule. When I took the "Neural networks and learning systems" course at my university, we were taught that ANN's, while interesting, are not useful in practice. We were taught how they work as a theoretical foundation for and comparison to other techniques. In the AI courses I've taken, ANN's have been consistently downplayed as irrelevant; the view is that even if they did do something useful, it wouldn't matter because it doesn't help us actually solve any problems. It'd be a black box, useful for engineers wanting to build something that works but worthless for researchers who want to understand how things work. But again, maybe my university is the exception that proves the rule.

Quote:
Original post by shurcool
Quote:
Original post by Kylotan
or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

And what happens in a scenario that the developer did not originally envisage?

After thinking a little more about this, I just wanted to bring more attention to this point.

Are there any valid responses to that question?

Neural networks do reasonably well at generalizing; that's part of their appeal. If an unexpected situation were to occur, it is not impossible that a neural network would be able to deal with it efficiently. How well it does depends on many things: the domain, the pre- and post-processing mechanisms, how well the net was trained, how the net is organized, what role it actually plays in the decision-making mechanism, etc.

IADaveMark    3731
Quote:
Original post by Hnefi
Neural networks do reasonably well at generalizing; that's part of their appeal. If an unexpected situation were to occur, it is not impossible that a neural network would be able to deal with it efficiently. How well it does depends on many things: the domain, the pre- and post-processing mechanisms, how well the net was trained, how the net is organized, what role it actually plays in the decision-making mechanism, etc.

Regardless of the tool, any decision system is at the mercy of how many inputs are hooked up to it. If you fail to include an input as a possible criterion and yet that piece of information becomes the difference between two otherwise similar scenarios, your agent will not know what to do. Again, this is regardless of the tool used - NNs, BTs, HFSMs, whatever. It's a knowledge representation issue first.

Hnefi    386
Quote:
Original post by InnocuousFox
Regardless of the tool, any decision system is at the mercy of how many inputs are hooked up to it. If you fail to include an input as a possible criterion and yet that piece of information becomes the difference between two otherwise similar scenarios, your agent will not know what to do. Again, this is regardless of the tool used - NNs, BTs, HFSMs, whatever. It's a knowledge representation issue first.

I'm not sure I understand what you mean. Neural networks are strictly signal processors; their input domain is perfectly defined. If you attach a neural net to a camera, then any possible image sequence from that camera will be valid and defined input for the network. Attaching additional sensors is not possible without remodeling and retraining the net, but that is a weakness the same way it's a weakness of algebra that "1+cat" is undefined; it's a non-issue. It may not be able to deal with all situations intelligently, depending on previously mentioned factors, but it will always be able to make a decision.

I don't see how it can be a knowledge representation issue, because NN's do not model knowledge explicitly. NN's deal strictly with signals, not abstract representations.

sion5    100
Quote:
Original post by InnocuousFox
Quote:
Original post by sion5
So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry.

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood through experience!

And now we have come full circle to my original comment regarding how the industry greets someone from academia - especially a student - with skepticism. Visit my link on that exact comment. It is to a blog post by Damian Isla (Bungie... i.e. Halo 2 & 3) where he laments that new students come to him trumpeting their prowess by having knowledge of A* and NNs. The former is the most-written-about subject in game AI except for maybe FSMs and the latter is not useful and therefore irrelevant. But the schools keep injecting the students with it anyway, telling them that it is a useful skill, and sending them off into interviews in the biz.


Truth is, academia is there to encourage innovation. I'm sorry, but anyone can work in a factory pushing out the same product one after another; it takes academics to say "Hey, wait, surely this can be done better?". Graphics and audio have come on leaps and bounds in the past few years, but where is AI? I absolutely love playing games as much as I like trying to create them, but it really irritates me that games should be at the forefront of AI development, yet fields like statistics and robotics are way in front! If everyone's attitude is that we have found the best solution, then there will never be an advancement in this domain.

Back to the subject: are there any readers who are working on, or have worked on, high-profile games that have tried using NN technology?

Kylotan    9859
Quote:
Original post by shurcool
He said he's working on a final year research project. Key word is research.


You have misunderstood me. I fully support the idea of doing research into these things. I was just pointing out that the academic aim of wanting to get a certain tool to be able to perform a certain task is different from the industrial/pragmatic aim of choosing the tool for the task that involves the least risk.

Quote:
Quote:
You could potentially spend forever adjusting inputs, outputs, and hidden layers to try and get your neural network doing something useful, with no guarantee of getting anything that is good enough to be playable, or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

And what happens in a scenario that the developer did not originally envisage?

IMO, this is where the AI can come closest to 'making or breaking' the game - those unexpected but possible scenarios.


Yet they could still occur with neural networks, if you didn't think to include such things in your training data. Perhaps you get lucky and the net generalises to cover that case; perhaps it doesn't. You're not guaranteed to be any better off. So when you have a goal to meet and other tools available, tools tried and tested in many other applications, you use them.

Quote:
I now realize I wouldn't want to go into game development if it ends up involving nothing but spitting out cookie-cutter, run-of-the-mill games.


No need for the hyperbole, really. If you have a job to do, then you have a job to do. You can't just expect to be allowed to spend months on R&D to try and come up with a new way of approaching something that isn't guaranteed to work, when there are existing tools that are guaranteed to work. Do that at university, or when self-employed, or when you've worked your way up to a position where you can get funding for such approaches. You'll be lucky to find many jobs in any industry that allow you to spend your wages playing with academic toys that are unproven in the area you're working in.

Kylotan    9859
Quote:
Original post by sion5
Truth is, academia is there to encourage innovation.

But it is also there to prepare people for industry and employment. A balance has to be struck.

Quote:
I absolutely love playing games as much as I like trying to create them, but it really irritates me that games should be at the forefront of AI development, yet fields like statistics and robotics are way in front!

These areas are in front because improved AI is essential for their functioning, so there's a lot of money pushed in that direction. Games arguably don't require great AI, and certainly not pioneering AI. It would be great if they had it... but there are no market forces really calling for it. So change is more likely to come from below, from an experimental prototype showing great potential.

You'll encounter less resistance if you try and go with that flow!

sion5    100
OK, so since starting this discussion I have read many more journals, articles and discussions on NNs. My conclusion is that NNs are not an absolute solution, BUT they can help form a solution quite nicely.

Another thing I found with NNs is that they're problem-specific. I think a lot of people got narky because I mentioned using an NN for automating an agent; OK, so it's not the best way to solve that PARTICULAR example. Now consider you had a game (NERO is an example) where you're given a team of soldiers and you have to train them by putting them through military training exercises of your choice. Once you have trained your soldiers to a level you're happy with, or a set period of time has passed, you are placed on a battlefield against another human player's soldiers and see who wins. Correct me if I'm wrong, but the only way to create a game like this would be using neural networks? Not only that: as Risto Miikkulainen said in his paper "Creating Intelligent Agents in Games", you are now starting to create new genres of games (which, in my opinion, the industry needs).

bitshit    163
Regarding previous research on NNs applied to games: I once came across a project which experimented with NNs applied to a racing game, with very good results:

http://togelius.blogspot.com/2006/04/evolutionary-car-racing-videos.html
(Also check out his newer blog entries)

Also I found this talk very interesting:

http://www.youtube.com/watch?v=AyzOUbkUf3M

Hnefi    386
Quote:
Original post by sion5
Now consider you had a game (NERO is an example) where you're given a team of soldiers and you have to train them by putting them through military training exercises of your choice. Once you have trained your soldiers to a level you're happy with, or a set period of time has passed, you are placed on a battlefield against another human player's soldiers and see who wins. Correct me if I'm wrong, but the only way to create a game like this would be using neural networks?

No, there are other ways of creating general, adaptive behaviour. Remember, neural networks can't do anything that can't be done using other methods. For training combat agents in an RTS-like game, I'd probably prefer Bayesian networks. Support vector machines using the kernel trick are another alternative that is often preferred over NNs. I'm sure there are several more alternatives available.

shurcool    439
Quote:
Original post by bitshit
Regarding previous research on NNs applied to games: I once came across a project which experimented with NNs applied to a racing game, with very good results:

http://togelius.blogspot.com/2006/04/evolutionary-car-racing-videos.html
(Also check out his newer blog entries)

Also I found this talk very interesting:

http://www.youtube.com/watch?v=AyzOUbkUf3M

I think you may be the first person to provide something the OP originally asked for (I missed some posts, so I could be wrong).

I read that blog and it was very interesting and thought-provoking, thanks.

IADaveMark    3731
Quote:
Original post by sion5
Truth is, academia is there to encourage innovation. I'm sorry, but anyone can work in a factory pushing out the same product one after another; it takes academics to say "Hey, wait, surely this can be done better?".

This is such a load of arrogant crap that it borders on invalidating the usefulness of this entire thread and disqualifying you from further consideration on any relevant subject matter. 99% of the innovation in the modern world has come from outside academia. Much the same can be said of the games industry.

Quote:
Graphics and audio have come on leaps and bounds in the past few years, but where is AI?

Because they are vastly different problems. The innovation and improvement curves are not going to be similar.

Quote:
If everyone's attitude is that we have found the best solution, then there will never be an advancement in this domain.

Who is this "everyone" of which you speak? Every single time I sit down to work on my stuff, I'm trying to do something better. Every time I crack open the new AI Wisdom book, I see something from a front line dude that makes me say "damn... someone found a better way" - and usually it was because they were trying to solve a problem in their own projects. (You have read all the AI Wisdom books, right?)

Quote:
Back to the subject: are there any readers who are working on, or have worked on, high-profile games that have tried using NN technology?

There used to be material on that in the early AI Wisdom books... but not in the most recent one. It just hasn't caught on because people haven't found a use for it. Coincidentally, those articles often come from academic individuals and teams and NOT from the industry peeps.

Seriously... my suggestion to you is to do a little more reading. Do a little more browsing the web. The information is not going to come to you here. You are going to need to go to the information. If you want to give the forums at AIGameDev a try, have at it - there's a more active community there with a lot of pros... but I'm quite sure you will get a similar message.

An interesting exercise for you would be this... make a list of all the possible things you think an AI agent needs to do in a game. Pathfinding, steering, animation control, cover processing, decision making, planning, state management, cooperative interaction, etc... Then cull it down to the stuff that could be dealt with by a NN. Then list all the other ways that the same type of thing could be dealt with. List the pros and cons of each method. (if you don't know, you are already working from behind.) Then cull that down by what sorts of requirements the production game space needs. (e.g. computation speed, production speed, predictability, control, stability, etc.) Rank accordingly. This may be rather illuminating.

We've been trying to tell you... but after all, we aren't academics like you - therefore we don't know shit. It's up to you and your fellows to change the world. We just turn wrenches.

IADaveMark    3731
Quote:
Original post by Hnefi
Quote:
Original post by InnocuousFox
Regardless of the tool, any decision system is at the mercy of how many inputs are hooked up to it. If you fail to include an input as a possible criterion and yet that piece of information becomes the difference between two otherwise similar scenarios, your agent will not know what to do. Again, this is regardless of the tool used - NNs, BTs, HFSMs, whatever. It's a knowledge representation issue first.

I'm not sure I understand what you mean. Neural networks are strictly signal processors; their input domain is perfectly defined.
[snip]
I don't see how it can be a knowledge representation issue, because NN's do not model knowledge explicitly. NN's deal strictly with signals, not abstract representations.

That's the point. Make a list of all of the inputs that you would require to make a decision about a certain problem. Those are your inputs, correct? What if you were to take one away? The decision model wouldn't be able to take that into account - and the resultant decision could suffer accordingly.

So, back with your original list, what if you haven't thought of everything? There may be a contingency where that one extra input that you forgot to include is the tipping point between stupid decision A and good decision B. However, your agent is stuck with stupid decision A because it doesn't know any better. The only knowledge that your agent had of the world was what you chose to provide it. Even if it is processing that information perfectly, it could still look stupid because of something you forgot.

(Note: this is for any decision tool... not just NN's)

IADaveMark    3731
Quote:
Original post by sion5
OK, so since starting this discussion I have read many more journals, articles and discussions on NNs. My conclusion is that NNs are not an absolute solution, BUT they can help form a solution quite nicely.

YES! Holy crap... about freaking time. If you wanted to use an NN for a small, contained situation, sure. Of course, as mentioned previously, it's likely that similar sorts of decisions could be made by rule-based systems, expert systems, Bayesian networks (especially dynamic ones), etc. But at least you are out of the "Holy Grail" mentality.

Hnefi    386
Quote:
Original post by InnocuousFox
Quote:
Original post by Hnefi
Quote:
Original post by InnocuousFox
Regardless of the tool, any decision system is at the mercy of how many inputs are hooked up to it. If you fail to include an input as a possible criterion and yet that piece of information becomes the difference between two otherwise similar scenarios, your agent will not know what to do. Again, this is regardless of the tool used - NNs, BTs, HFSMs, whatever. It's a knowledge representation issue first.

I'm not sure I understand what you mean. Neural networks are strictly signal processors; their input domain is perfectly defined.
[snip]
I don't see how it can be a knowledge representation issue, because NN's do not model knowledge explicitly. NN's deal strictly with signals, not abstract representations.

That's the point. Make a list of all of the inputs that you would require to make a decision about a certain problem. Those are your inputs, correct? What if you were to take one away? The decision model wouldn't be able to take that into account - and the resultant decision could suffer accordingly.

So, back with your original list, what if you haven't thought of everything? There may be a contingency where that one extra input that you forgot to include is the tipping point between stupid decision A and good decision B. However, your agent is stuck with stupid decision A because it doesn't know any better. The only knowledge that your agent had of the world was what you chose to provide it. Even if it is processing that information perfectly, it could still look stupid because of something you forgot.

(Note: this is for any decision tool... not just NN's)

I think I understand what you're trying to say now. Basically, if the domain is not structured as well as it could be, the decision process will suffer. I agree, but besides being trivially obvious, it doesn't really have anything to do with dealing with unforeseen situations, which is what I was responding to shurcool about.

But sure, as previously stated, preprocessing/input configuration has a great effect on the performance of neural nets.

