Neural Network - Discussion

Started by
102 comments, last by Kylotan 15 years, 8 months ago
I'm not that much into NNs, but my advice would be to do some testing. There are libraries like FANN that can be used to set up and train NNs without much prior knowledge, so you can play with them.
Just set up a small framework and try to solve some problems. When I did this, I was amazed at how badly it actually works. It's by far not as easy as plugging some test data in and having something useful come out after four hours of learning.
So, as I said, do some testing and you will see what they are good at and when they are simply a pain in the ass.
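To make the "just try it" advice concrete, here is a minimal sketch of the kind of toy experiment meant above: a tiny two-layer net trained on XOR with plain NumPy (no FANN required). The layer sizes, learning rate, and epoch count are arbitrary choices for illustration, not anything prescribed in this thread:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: the classic toy problem for checking that a net setup works at all.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (2, 4))   # input -> 4 hidden units
W2 = rng.normal(0, 1, (4, 1))   # hidden -> 1 output

for _ in range(20000):
    h = sigmoid(X @ W1)               # forward pass
    out = sigmoid(h @ W2)
    # backprop of squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out.ravel(), 2))
```

Even on a problem this small, convergence depends on the random initialization and learning rate, which is exactly the "it's not as easy as plugging data in" experience described above.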

As for NNs being able to respond to untrained, new scenarios: I highly doubt this works, or at least that it works better than the "old-school" algorithms.
As was already stated, NNs are good at pattern recognition, where "patterns" would be specific situations in your case. The NN will be fine at recognizing a pattern even if there are small changes (an enemy more or less, a bit more or less health, ...), but as soon as something completely weird happens, e.g. a pattern the developer didn't think of and therefore didn't train, it won't know what to do.
Example: if anyone here has played Crysis, go ahead, run into an enemy camp, climb a tower and wait there; the AI won't know what to do, because the devs didn't do much scripting for that scenario. Now, if an NN were used, it would be pretty much the same thing: it wouldn't know what to do with this pattern, as the devs didn't train it. I can't imagine the NN would take its chewing gum and a pair of boots, build a rocket launcher out of them, and blast you down from that tower.

Now, if you still want to stick with NNs, I would try a modular approach. Hnefi already said that preprocessing the data helps a lot. I would try a network of NNs, with one NN for each specific task. For example, a bot would have an NN for target prioritizing, an NN for abstract decision making, an NN for targeting, an NN for movement, ...
That way you can train each module individually and put all the parts together in the end, wiring outputs of higher-level modules to inputs of lower-level ones. Debugging also gets easier, because if something goes wrong, you can check the outputs of each module to see which one screwed up.
However, what you get in the end (if it works) has the same functionality as decision trees, so there is no real benefit.
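As a rough illustration of the modular wiring described above (all module names, input/output sizes, and the stand-in random linear maps are hypothetical; real modules would each be a separately trained net):

```python
import numpy as np

rng = np.random.default_rng(1)

class Module:
    """One independently trained net; here a random linear map stands in for it."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(0, 0.5, (n_in, n_out))
    def __call__(self, x):
        return np.tanh(x @ self.W)

# Hypothetical wiring: abstract decision making feeds targeting and movement.
decision  = Module(n_in=6, n_out=3)      # world state -> abstract intent
targeting = Module(n_in=3 + 2, n_out=1)  # intent + enemy position -> aim
movement  = Module(n_in=3 + 2, n_out=2)  # intent + own position -> velocity

world_state = rng.normal(size=6)
enemy_pos = np.array([0.3, -0.7])
own_pos = np.array([0.0, 0.0])

intent = decision(world_state)
aim = targeting(np.concatenate([intent, enemy_pos]))
vel = movement(np.concatenate([intent, own_pos]))

# Each module's output can be inspected in isolation when debugging.
print(intent.shape, aim.shape, vel.shape)
```

The point of the structure is exactly what the post says: each stage has a named, inspectable output, so a bad behavior can be traced to one module.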

The only thing that could help NNs is to put some work into learning. If you find a good and fast way of learning from past mistakes, you could try to implement an AI for an RTS game.
Maybe again with a modular approach (e.g. one NN for enemy prediction, one NN for building, one NN for general strategy, one NN for micro-movement, ...). Or some sort of hybrid approach with an old-school AI managing things, but an NN that can steer some weights inside the hardcoded AI and serve as "intuition".
In both cases, though, you would want the ability to learn from the match by "looking at the replay" or something similar, and to figure out a way to perform better in the future.
It would be really cool to have an AI that doesn't fall for the same trick over and over again.
I'm pretty sure this can be done much more easily without NNs, but you never know...
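The "intuition weights" hybrid could look something like the following sketch: a hand-coded utility AI keeps all the decision logic, while a learnable weight vector stands in for the NN part and is nudged after each lost match. All names, numbers, and the crude update rule are invented for illustration:

```python
import numpy as np

# Hand-coded utility scores for each action (the "old-school" AI part).
def base_utilities(state):
    attack = state["enemy_weakness"]
    expand = 1.0 - state["map_control"]
    defend = state["threat_level"]
    return np.array([attack, expand, defend])

ACTIONS = ["attack", "expand", "defend"]
weights = np.ones(3)  # the learnable "intuition" layer

def choose(state):
    return ACTIONS[int(np.argmax(weights * base_utilities(state)))]

# After a lost match ("looking at the replay"), dampen the weight of the
# choice that led to the loss - a crude stand-in for real learning.
def learn_from_loss(chosen_idx, lr=0.2):
    weights[chosen_idx] *= (1.0 - lr)

state = {"enemy_weakness": 0.9, "map_control": 0.5, "threat_level": 0.4}
first = choose(state)
for _ in range(10):                 # keep losing with the same choice...
    learn_from_loss(ACTIONS.index(first))
second = choose(state)              # ...until the AI stops falling for the same trick
print(first, second)
```

This also shows why the closing remark above is fair: the same "don't fall for the same trick twice" behavior drops out of a trivial weight update, no NN required.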


I guess this wasn't much help, but you really picked a tough topic and you should keep your expectations low.
On the other hand, I quite like these discussion threads, as they are usually an interesting read...
Hnefi, thanks again for your input in the discussion. It's a research topic that I'm investigating, so I'm going to stick with NNs :-)

Ohforf sake, thanks as well for your input; you also raised some interesting points.

So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood through experience!
Feeling #0000FF
Quote:Original post by sion5
So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood through experience!

And now we have come full circle to my original comment regarding how the industry greets someone from academia - especially a student - with skepticism. Visit my link on that exact comment. It is to a blog post by Damian Isla (Bungie, i.e. Halo 2 & 3) in which he laments that new students come to him trumpeting their prowess by having knowledge of A* and NNs. The former is the most-written-about subject in game AI except for maybe FSMs, and the latter is not useful and therefore irrelevant. But the schools keep injecting students with it anyway, telling them that it is a useful skill and sending them off into interviews in the biz.

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

Quote:Original post by kirkd
Just my $0.02 worth - I'll try to keep it to $0.02. 8^)
...
OK. $0.03.

And with the plummeting US dollar, it's more like $0.04. I move that the "2 cents" cliché be revised to account for inflation.


Quote:
Quote:Original post by Kylotan
or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

Original post by shurcool
And what happens in a scenario that the developer did not originally envisage?

After thinking a little more about this, I just wanted to bring more attention to this point.

Are there any valid responses to that question?
Quote:Original post by shurcool
Quote:
Quote:Original post by Kylotan
or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

Original post by shurcool
And what happens in a scenario that the developer did not originally envisage?

After thinking a little more about this, I just wanted to bring more attention to this point.

Are there any valid responses to that question?


Patch
Best regards, Omid
Quote:Original post by InnocuousFox
Quote:Original post by sion5
So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood through experience!

And now we have come full circle to my original comment regarding how the industry greets someone from academia - especially a student - with skepticism. Visit my link on that exact comment. It is to a blog post by Damian Isla (Bungie, i.e. Halo 2 & 3) in which he laments that new students come to him trumpeting their prowess by having knowledge of A* and NNs. The former is the most-written-about subject in game AI except for maybe FSMs, and the latter is not useful and therefore irrelevant. But the schools keep injecting students with it anyway, telling them that it is a useful skill and sending them off into interviews in the biz.

Actually, I must disagree here - but maybe I'm the exception that proves the rule. When I took the "Neural networks and learning systems" course at my university, we were taught that ANNs, while interesting, are not useful in practice. We were taught how they work as a theoretical foundation for, and comparison to, other techniques. In the AI courses I've taken, ANNs have been consistently downplayed as irrelevant; the view is that even if they did do something useful, it wouldn't matter, because it doesn't help us actually solve any problems. They'd be a black box: useful for engineers wanting to build something that works, but worthless for researchers who want to understand how things work. But again, maybe my university is the exception that proves the rule.

Quote:Original post by shurcool
Quote:
Quote:Original post by Kylotan
or you can pick a method that explicitly accounts for all the scenarios a developer can envisage and get it working more reliably.

Original post by shurcool
And what happens in a scenario that the developer did not originally envisage?

After thinking a little more about this, I just wanted to bring more attention to this point.

Are there any valid responses to that question?

Neural networks do reasonably well at generalizing; that's part of their appeal. If an unexpected situation were to occur, it is not impossible that a neural network would be able to deal with it efficiently. How well it does depends on many things: the domain, the pre- and post-processing mechanisms, how well the net was trained, how the net is organized, what role it actually plays in the decision-making mechanism, etc.
-------------Please rate this post if it was useful.
Quote:Original post by Hnefi
Neural networks do reasonably well at generalizing; that's part of their appeal. If an unexpected situation were to occur, it is not impossible that a neural network would be able to deal with it efficiently. How well it does depends on many things: the domain, the pre- and post-processing mechanisms, how well the net was trained, how the net is organized, what role it actually plays in the decision-making mechanism, etc.

Regardless of the tool, any decision system is at the mercy of the inputs hooked up to it. If you fail to include an input as a possible criterion, and that piece of information becomes the difference between two otherwise similar scenarios, your agent will not know what to do. Again, this is regardless of the tool used - NNs, BTs, HFSMs, whatever. It's a knowledge representation issue first.
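A tiny illustration of this point: if a distinguishing feature is never hooked up as an input, two genuinely different scenarios are literally the same state to the agent, whatever decision machinery sits behind it. The feature names below are invented for the example:

```python
# Two scenarios that differ only in a feature the designer forgot to expose.
scenario_a = {"enemy_count": 2, "own_health": 0.8, "enemy_has_sniper": False}
scenario_b = {"enemy_count": 2, "own_health": 0.8, "enemy_has_sniper": True}

FEATURES = ["enemy_count", "own_health"]  # the sniper flag never got wired in

def observe(scenario):
    """What the agent actually sees: only the hooked-up features."""
    return tuple(scenario[f] for f in FEATURES)

# Any decision system fed only these inputs sees identical states, so it must
# act identically - charging in might be fine in A and fatal in B.
print(observe(scenario_a) == observe(scenario_b))
```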


Quote:Original post by InnocuousFox
Regardless of the tool, any decision system is at the mercy of the inputs hooked up to it. If you fail to include an input as a possible criterion, and that piece of information becomes the difference between two otherwise similar scenarios, your agent will not know what to do. Again, this is regardless of the tool used - NNs, BTs, HFSMs, whatever. It's a knowledge representation issue first.

I'm not sure I understand what you mean. Neural networks are strictly signal processors; their input domain is perfectly defined. If you attach a neural net to a camera, then any possible image sequence from that camera is valid, defined input for the network. Attaching additional sensors is not possible without remodeling and retraining the net, but that is a weakness only in the same way it's a weakness of algebra that "1+cat" is undefined; it's a non-issue. The net may not be able to deal with all situations intelligently, depending on the previously mentioned factors, but it will always be able to make a decision.

I don't see how it can be a knowledge representation issue, because NNs do not model knowledge explicitly. NNs deal strictly with signals, not abstract representations.
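A small sketch of the contrast being drawn here: a net's input domain is fixed, so it yields some decision for any in-domain signal, whereas a hand-written rule set can simply have no entry for a case nobody scripted. The toy one-layer "net" and the rules are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(0, 1, (4, 3))  # a fixed 4-input, 3-action "net" (one linear layer)

def net_decide(x):
    # Defined for every 4-vector; there is no "unhandled case".
    return int(np.argmax(x @ W))

def rules_decide(enemy_visible, on_tower):
    if enemy_visible and not on_tower:
        return "attack"
    if not enemy_visible:
        return "patrol"
    return None  # enemy visible AND on a tower: the case nobody scripted

weird_input = rng.normal(size=4)   # a situation never seen in training
print(net_decide(weird_input))     # still yields some action index
print(rules_decide(True, True))    # the rule set just has no answer
```

Whether the net's always-available answer is a *good* one in the weird situation is, of course, exactly the generalization question debated above.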
Quote:Original post by InnocuousFox
Quote:Original post by sion5
So, the conclusion is that almost everyone in the industry hates NNs but academics love them :-) at least until they enter the industry

As I have said previously, I haven't worked in the industry, and I'm guessing that whilst universities are teaching the technology, it's a subject that can only truly be understood through experience!

And now we have come full circle to my original comment regarding how the industry greets someone from academia - especially a student - with skepticism. Visit my link on that exact comment. It is to a blog post by Damian Isla (Bungie, i.e. Halo 2 & 3) in which he laments that new students come to him trumpeting their prowess by having knowledge of A* and NNs. The former is the most-written-about subject in game AI except for maybe FSMs, and the latter is not useful and therefore irrelevant. But the schools keep injecting students with it anyway, telling them that it is a useful skill and sending them off into interviews in the biz.


The truth is, academia is there to encourage innovation. I'm sorry, but anyone can work in a factory pushing out the same product one after another; it takes academics to say "Hey, wait, surely this can be done better?". Graphics and audio have come on in leaps and bounds in the past few years, but where is AI? I absolutely love playing games as much as I like trying to create them, but it really irritates me that games should be at the forefront of AI development, yet fields like statistics and robotics are way out in front. If everyone's attitude is that we have already found the best solution, there will never be an advancement in this domain.

Back to the subject: are there any readers who are working on, or have worked on, high-profile games that have tried using NN technology?
Feeling #0000FF
