neural net test problems

Started by Alrecenk, 4 comments, last by Alrecenk 18 years, 5 months ago
I've been working on a neural net system and I think I finally have it working. I'd like to try it out on some common neural net problems, but the only one I know of is the pole balancing problem, and that is much too simple. Is there some accepted way to benchmark a neural net's capacity? Also, my neural net has analog output, but all of the problems I've read about use boolean output. What's the deal with that? If you are interested in seeing the pole balancing problem mastered in a matter of minutes, you can check out an applet at: http://68.117.148.181/Java/show/ai/link/ [Edited by - Alrecenk on November 26, 2005 11:24:58 PM]
Very cool. :) I love it when people post applets.

When I cranked the speed up it found a solution in no time.

It would jiggle back and forth really quickly keeping the pole balanced.

You could try balancing a pole resting on the tip of another pole. (i.e. a pole with a joint in it)

Again, very neat.

Will
------------------
http://www.nentari.com
The Mountain Car problem is a pretty common one for reinforcement learning, and it can be easily visualized too.

You could also try this one, which would probably be similar to, but slightly more complicated than, the one you have now.

Your applet's fun to watch too :).
I tried a variation of the mountain car problem with pretty good results. It isn't solved as fast as the pole balancing, but I still found a solution in about 20 minutes. Is using GAs considered reinforcement learning? I haven't read much about RL... Anyways, since I wouldn't want to disappoint RPGeezus, here's an applet of the mountain car problem, or, uh, more like the ball loop problem in my case.

The new look of gamedev and the inability to change it are gettin' on my nerves!

[Edited by - Alrecenk on November 26, 2005 11:08:24 PM]
I can't get your mountain car link to work :(.

Quote: Original post by Alrecenk
Is using GAs considered reinforcement learning?

I don't think GAs by themselves are considered reinforcement learning; they're really a type of search... but then again, so is all AI. RL usually represents two things: states and actions. For the mountain car example, a state would be position and velocity, and the actions at any given state would be accelerate forwards, accelerate backwards, or coast. Each state could then be represented in a table (called a Q-table), with each action coming off of it leading to another state. Anyway, each state-action pair starts off with a certain value (reward) and is adjusted as the agent visits each state and "sees how well it works".
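
To make that concrete, here's a minimal sketch of the standard tabular Q-learning update for a mountain-car style setup. This is just an illustration, not code from the thread; the bucket counts, learning rate ALPHA, and discount GAMMA are placeholder values.

```java
// A tabular Q-learning sketch: states are discretized into position/velocity
// buckets, and each (state, action) pair stores one value in the table.
class QTableSketch {
    static final int NUM_POS = 20, NUM_VEL = 20;   // state buckets (assumed)
    static final int NUM_ACTIONS = 3;              // back, coast, forward
    static final double ALPHA = 0.1, GAMMA = 0.99; // learning rate, discount

    static double[][][] q = new double[NUM_POS][NUM_VEL][NUM_ACTIONS];

    // After taking 'action' in state (p, v), landing in (p2, v2) and receiving
    // 'reward', nudge the stored value toward the observed return.
    static void update(int p, int v, int action, double reward, int p2, int v2) {
        double best = q[p2][v2][0];
        for (int a = 1; a < NUM_ACTIONS; a++) best = Math.max(best, q[p2][v2][a]);
        q[p][v][action] += ALPHA * (reward + GAMMA * best - q[p][v][action]);
    }
}
```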

This table can be replaced with a neural net that learns the 'Q-function', to keep the table from blowing up as the problem becomes more complex. You could also generate these Q-tables randomly (random reward values for each state-action) and then use a GA to find a good solution. So, in short, you can use a GA to accomplish the same thing, but the methods are different enough that I don't think it would still be considered reinforcement learning. Someone can correct me if I'm wrong, though.
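
Roughly, the table-to-network swap looks like this. It's only a sketch under my own assumptions (the layer size, tanh activation, and the qValue name are made up); the weights would then be trained toward the same update target, or evolved with a GA, rather than hand-set.

```java
// Replacing the Q-table lookup with a small neural net: instead of indexing
// q[p][v][a], feed (position, velocity, action) through the network and read
// off an estimated Q-value, so memory no longer grows with the state count.
class QNetSketch {
    static final int HIDDEN = 8;                  // hidden layer size (assumed)
    static double[][] w1 = new double[HIDDEN][3]; // input (pos, vel, action) -> hidden
    static double[] w2 = new double[HIDDEN];      // hidden -> single Q output

    static double qValue(double pos, double vel, double action) {
        double out = 0.0;
        for (int h = 0; h < HIDDEN; h++) {
            double sum = w1[h][0] * pos + w1[h][1] * vel + w1[h][2] * action;
            out += w2[h] * Math.tanh(sum);
        }
        return out;
    }
}
```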

Sorry about that, my IP changed. I edited the links, so it should work now. The way my neural net works does seem at least a little like reinforcement learning. I create functions for how good each action is in terms of position and velocity, then I compare 3 nodes and move depending on which has the highest value. Though I don't check how each individual action affects the score; I actually run 4000 cycles and evaluate each network as a whole, then toss out the worst ones and replace them with combinations of the better ones. I guess the major difference is that I have lots of tables instead of trying to perfect just one.
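
In case it helps anyone, here's a rough sketch of the kind of loop I mean: score every network over a full run, keep the better half, and rebuild the rest from combinations of survivors. The population size, the gene-swapping crossover, and the mutation size here are simplified placeholders, not my exact code.

```java
import java.util.Random;

// One generation of a simple genetic algorithm over network weight vectors.
class EvolveSketch {
    static final int POP = 50;       // population size (assumed)
    static final int CYCLES = 4000;  // simulation steps per evaluation
    static final Random rng = new Random();

    // Placeholder fitness: run one network on the task for CYCLES steps and
    // return its total score (task-specific, stubbed out here).
    static double evaluate(double[] weights) { return 0.0; }

    static void generation(double[][] pop) {
        // Score each network once over the whole run.
        double[] fit = new double[POP];
        for (int i = 0; i < POP; i++) fit[i] = evaluate(pop[i]);

        // Sort best-first (simple selection sort, swapping scores and
        // networks together).
        for (int i = 0; i < POP; i++)
            for (int j = i + 1; j < POP; j++)
                if (fit[j] > fit[i]) {
                    double tf = fit[i]; fit[i] = fit[j]; fit[j] = tf;
                    double[] tp = pop[i]; pop[i] = pop[j]; pop[j] = tp;
                }

        // Replace the worst half with per-weight mixes of two surviving
        // parents, plus a small Gaussian mutation.
        for (int i = POP / 2; i < POP; i++) {
            double[] a = pop[rng.nextInt(POP / 2)], b = pop[rng.nextInt(POP / 2)];
            for (int w = 0; w < pop[i].length; w++)
                pop[i][w] = (rng.nextBoolean() ? a[w] : b[w]) + rng.nextGaussian() * 0.01;
        }
    }
}
```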

This topic is closed to new replies.
