Yay, my first neuron!


Well, I finally understand how artificial neurons work! I created a neuron, gave it two inputs (although it supports an unlimited amount of inputs), then output the results of random values, and it worked! But now I need to learn how to do the 'learning' part where the neurons re-map themselves. Does anyone have any tips? The purpose of this is to have AI in my game that uses adaptive learning (of an intelligence level specified by the mapper). Thanks!

Oh, and yes, the output is a float; the thing supports float, I simply gave it bool logic and inputs for ease of testing. And yes, it worked when I tried it with 6 inputs.

-- Edit: The first shot I posted didn't work right; for some reason I had the bias set wrong. Fixed it :D

[edited by - dreq on May 27, 2004 9:02:01 PM]
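Roughly the kind of thing I'm talking about, for anyone curious. This is a minimal sketch, not my exact code; the class name and the hard-threshold activation are just for illustration:

#include <cstdlib>
#include <vector>

// A single artificial neuron: weighted sum of the inputs plus a bias,
// passed through an activation function.
class Neuron
{
public:
    explicit Neuron(std::size_t numInputs)
        : weights(numInputs), bias(0.0f)
    {
        // Start with random weights in [-1, 1].
        for (std::size_t i = 0; i < numInputs; ++i)
            weights[i] = (std::rand() / (float)RAND_MAX) * 2.0f - 1.0f;
    }

    float Evaluate(const std::vector<float>& inputs) const
    {
        float sum = bias;
        for (std::size_t i = 0; i < weights.size(); ++i)
            sum += weights[i] * inputs[i];
        // Hard threshold, matching the bool-style testing described above;
        // swap in a sigmoid when you want smooth, trainable output.
        return (sum > 0.0f) ? 1.0f : 0.0f;
    }

    std::vector<float> weights;
    float bias;
};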

Dreq: Where did you learn about neurons / neural networks? I've been looking at them for a while but haven't found any really good tutorials that I've been able to understand. Any good links you know of?

Congratulations on your first neuron, btw

-Mezz

I did some neural net work about a year ago, and let me tell you, I probably went about it very poorly. Although I had some idea of backpropagation, it was more than I had time to figure out and implement. That would probably be one of the first things you should look at, though.

I myself implemented a genetic method. I made a bunch of random neural nets, "competed" them against one another, and picked the ones that did the best. I encoded their neural nets, genetically combined them to get a new generation of neural nets, and repeated.

It turned out that although this process did successfully learn, its ability to learn kept getting smaller every generation. After about 20 generations or so, it was hardly improving at all. Overall, my neural nets went from a 50% chance of winning against a random competitor to a 52% chance, and I couldn't get much higher than that. Pathetic. It probably didn't help that the game was chess, and I'm sure neural nets aren't very well suited for chess.

All in all, it was fun, though, and the genetic algorithm idea is pretty simple to set up.
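The basic loop is something like this. A minimal sketch, not my original code; it assumes the network weights are flattened into a single vector of floats, and Fitness() here is just a dummy placeholder where the real thing would play games:

#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

typedef std::vector<float> Genome;   // flattened network weights

// Placeholder fitness: the real thing would play games against random
// opponents and return the win rate.
float Fitness(const Genome& g)
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < g.size(); ++i) sum += g[i];
    return sum;
}

// Uniform crossover: each weight comes from one parent or the other.
Genome Crossover(const Genome& a, const Genome& b)
{
    Genome child(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        child[i] = (std::rand() % 2) ? a[i] : b[i];
    return child;
}

// Small random perturbations of the weights.
void Mutate(Genome& g, float rate, float amount)
{
    for (std::size_t i = 0; i < g.size(); ++i)
        if ((std::rand() / (float)RAND_MAX) < rate)
            g[i] += ((std::rand() / (float)RAND_MAX) * 2.0f - 1.0f) * amount;
}

void Evolve(std::vector<Genome>& population, int generations)
{
    for (int gen = 0; gen < generations; ++gen)
    {
        // Score every genome; negate the score so an ascending sort puts the best first.
        std::vector< std::pair<float, Genome> > scored;
        for (std::size_t i = 0; i < population.size(); ++i)
            scored.push_back(std::make_pair(-Fitness(population[i]), population[i]));
        std::sort(scored.begin(), scored.end());

        // Keep the top half, refill the rest by breeding the survivors.
        std::size_t survivors = population.size() / 2;
        for (std::size_t i = 0; i < survivors; ++i)
            population[i] = scored[i].second;
        for (std::size_t i = survivors; i < population.size(); ++i)
        {
            population[i] = Crossover(population[std::rand() % survivors],
                                      population[std::rand() % survivors]);
            Mutate(population[i], 0.05f, 0.5f);
        }
    }
}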

A very good "primer" book to get is The Mind Within The Net by Manfred Spitzer. It doesn't cover using neural nets for game AI specifically, but rather the big picture of NNs in general, and then goes into detail on how they work in a computer and how they work biologically. It covers neuroplasticity, 2-layer networks, 3-layer networks and abstraction, 3-layer + context layer for temporal learning, and neural mapping, among other topics. A very good read for someone just starting out. It'll give you lots of good ideas to start with.

-duck

Agony: it sounds like you didn't have a scheme for new _major_ mutations to occur?

Neural nets can get into a rut of sorts. An analogy is to compare neural net learning to falling water (rain). The basic idea is that in a neural net you want to get the best answer possible, which is comparable to water getting as close to the center of the earth as possible.

When the water hits the ground, the ground usually has a slope. This coaxes the water to follow a path towards the bottom of, say, a lake or pool. Settling there, however, often prevents the water from getting much closer to the center of the earth, which it could do by heading towards an ocean.

When only small mutations occur, the effect is to find the best local solution. Larger mutations are more like randomly transporting the water, which lets it in effect "try again" to reach an ocean, or "better solution".

The more stagnant a breed is, the closer it probably is to a localized best solution, and the more likely you might want to create some more heavily mutated breeds which could reach farther, less related solutions. A sketch of that idea is below.
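Concretely, it could look something like this (just a sketch, reusing the flattened-weights Genome idea from Agony's setup; the rates and step sizes are made up): most mutations are small nudges, but occasionally a weight is re-randomized outright so a stagnant population can jump out of its local pool.

#include <cstdlib>
#include <vector>

typedef std::vector<float> Genome;   // flattened network weights

// Mostly small nudges, plus an occasional large jump that "transports the
// water" so the population can escape a local optimum.
void MutateWithJumps(Genome& g, float smallRate, float bigRate)
{
    for (std::size_t i = 0; i < g.size(); ++i)
    {
        float roll = std::rand() / (float)RAND_MAX;
        if (roll < bigRate)
            g[i] = (std::rand() / (float)RAND_MAX) * 2.0f - 1.0f;           // re-randomize entirely
        else if (roll < bigRate + smallRate)
            g[i] += ((std::rand() / (float)RAND_MAX) * 2.0f - 1.0f) * 0.1f; // small local step
    }
}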

As for the re-mapping, Dreq, I have no idea. I assume you mean where neurons change their connections?

I know that in our brains it has something to do with the amount each connection is used. The more often a connection is used, the stronger the link becomes. This is how we remember things: we repeat them in our minds, and that establishes the links more permanently.
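In rule form that is basically Hebbian learning: strengthen a weight in proportion to how often its input and the neuron's output are active together. A sketch, assuming the neuron exposes its weight vector and using an arbitrary small learning rate:

#include <cstdlib>
#include <vector>

// Hebbian update: a connection that keeps getting used gets stronger.
// 'weights' belong to one neuron; 'inputs' and 'output' come from one firing.
void HebbianUpdate(std::vector<float>& weights,
                   const std::vector<float>& inputs,
                   float output,
                   float learningRate) // e.g. 0.01f
{
    for (std::size_t i = 0; i < weights.size(); ++i)
        weights[i] += learningRate * inputs[i] * output;
}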

quote:
Original post by Mezz
Dreq: Where did you learn about neurons / neural networks? I've been looking at them for a while but haven't found any really good tutorials that I've been able to understand. Any good links you know of?

Have you been to Generation5?
