laurimann

SoM, growing axons and Hebbian learning


Recommended Posts

Hello, I've made a demo that has nine (3*3) input neurons, 16*16 hidden neurons, and zero connections to start with. The network grows axons from each neuron, and each axon is subject to a simple Hebbian learning rule. So far in my tests I've seen the network adapt to some very simple input patterns, but further tests await. I've got a whole thread about this going on here, including a screenshot of the demo: http://www.ai-forum.org/topic.asp?forum_id=1&topic_id=27092

I wish you would send me your thoughts and ideas about self-organizing neural networks. My goal in doing this is to create a neural network with growth and learning rule systems (genetics) such that the NN could be taught to play Go fluently on a 9*9 or 13*13 board against any opponent.

Yours - Lauri
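For concreteness, here is my own minimal sketch of the setup described above; this is not the poster's actual code, and the array names, the random stand-in hidden activations, and the 0.01 learning rate are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 3 * 3, 16 * 16       # 9 input, 256 hidden neurons
weights = np.zeros((n_input, n_hidden))  # 0.0 means "no connection yet"

def hebbian_step(weights, pre, post, lr=0.01):
    """Classic Hebb rule: delta_w[i, j] = lr * pre[i] * post[j],
    i.e. strengthen a link when both ends are active together."""
    return weights + lr * np.outer(pre, post)

pre = rng.integers(0, 2, n_input).astype(float)    # binary input pattern
post = rng.integers(0, 2, n_hidden).astype(float)  # stand-in hidden activity
weights = hebbian_step(weights, pre, post)
```

The growth-cone mechanism would decide *which* entries of `weights` are allowed to become nonzero; the rule above only handles the strengthening.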

Could you explain what you are doing? How does this network work, and what is it learning?

http://www.ai-forum.org/data/33-nimet%F6n3.gif

First of all, here's a screenshot of the neural network.
What you'll see there is 9 input neurons on the left and 16*16 hidden neurons on the right. The blue (strong) and red (weak) lines are already-formed connections between neurons. The gray lines are axon growth cones, or whatever you call them: basically growing axons from neurons, one per neuron.

What I'm building is a network that requires as few (pre-)fixed attributes as possible, or none at all: all connections are formed, and their strengths modified, dynamically.

The network works, if I understood you correctly, by weakening those connections whose neurons fire at different times. As a result, neurons that fire simultaneously end up with stronger connections, while connections that have become too weak are pruned away, which in turn causes the hidden neurons to learn over time what patterns they "see" in the input neurons.

The network learns, at the moment, via time-dependent Hebbian learning. I don't understand how the brain learns to be its own teacher, but I see understanding that as critical for creating an AI that could beat the world's greatest Go professionals.
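For what it's worth, the rule described above (strengthen on simultaneous firing, weaken on mismatched firing, prune below a threshold) could be sketched like this; the learning rate and pruning threshold are made-up numbers, and this is only a guess at the actual update:

```python
import numpy as np

def timed_hebbian(weights, pre, post, lr=0.05, prune_below=0.01):
    """Hebbian update with pruning, per the description above.
    pre and post are 0/1 spike vectors for one time step."""
    fired_together = np.outer(pre, post)
    fired_apart = np.outer(pre, 1 - post) + np.outer(1 - pre, post)
    # strengthen co-active links, weaken mismatched *existing* links
    w = weights + lr * fired_together - lr * fired_apart * (weights > 0)
    w[w < prune_below] = 0.0   # prune links that became too weak
    return w

w = np.full((2, 2), 0.5)
w = timed_hebbian(w, np.array([1.0, 0.0]), np.array([1.0, 0.0]))
```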

That all sounds good, but I'll be much more impressed when I see it playing Go. Unsupervised topological evolution of neural networks has been done quite a bit before (it's chapter 10 in "AI Techniques for Game Programming"), but as far as I know decent Go play remains unsolved.

Youch! Laurimann, you're in for a painful ride. The sheer number of hidden nodes and the combinatorics of their connections mean that the state space of your network is absurdly huge. Learning in that space will take impossibly long amounts of time and training data, and I doubt very much that it would ever learn how to play Go. Have you done any experiments on small networks and analysed their behaviour and performance on simpler problems?
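To put a rough number on this point (my own back-of-the-envelope figures, using the 9-input / 256-hidden configuration mentioned elsewhere in the thread): even counting only *which* connections exist, ignoring their weights, the space of topologies is astronomically large.

```python
import math

n_input, n_hidden = 9, 16 * 16
# every directed input->hidden and hidden->hidden pair is a candidate link
candidates = n_input * n_hidden + n_hidden * (n_hidden - 1)  # 67584
# each candidate link either exists or not: 2**candidates topologies,
# a number with roughly 20,000+ decimal digits
digits = int(math.log10(2 ** candidates)) + 1
```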

Quote:
Original post by laurimann
I don't understand how the brain learns to be its own teacher


There are literally thousands of people devoted entirely to the incremental approach to that understanding. They are still very, very far from that goal.

Computer neural networks are certainly a useful tool in that approach. However, a computational neural network is not the same thing as an actual biological neural network, not least because we don't yet understand even some of the basics of how biological ones function.

Go is a very challenging AI task. Using neural networks to play it is certainly a fun and informative undertaking. Google around for "go neural network". There are plenty of existing implementations.

-me

Hello,

And thanks to everyone who posted something here; it's great to have a talk about this subject. :) It really means something to me.

Anyway, Alrecenk, thanks for the reminder: I've read "AI Techniques for Game Programming" a few times myself, plus a lot of documents on the internet, so I'm aware of most network designs there are.

And Timkin, thanks for sharing my burden of the hard ride ahead with this project. :P I know this might never end up being anything but a failed AI project. Anyway, I've had earlier versions of the network solving this kind of problem:
There is a Pac-Man on a 1D map, i.e. a straight line. At each end of the line there is a wall that stops Pac-Man's movement, and there is a piece of food somewhere on the map. Pac-Man can either move forward or turn, and its goal is to find the food as quickly as possible. That was maybe too easy a problem, for the network learned very quickly how to do it. Then I tried the same problem in 2D with more difficult obstacles, and the results were depressing: Pac-Man just hovered around. It learned to move, but that's it!

Palidine, you're saying true things. And thanks for the idea; I googled some Go & ANN sites and read them through. Nothing new though.

Oh, and SUPPOSE I'm trying to build a network to play Go on a 9*9 board. That would require 9*9 input neurons each for black, white and empty, the same number of output neurons, and probably at least four times as many hidden neurons as inputs. So that's 243 input neurons, 243 output neurons and 972 hidden neurons, 1458 neurons in total. I assume there would be around 200,000 connections between all the neurons in the final network.
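Spelling out that neuron budget (keeping the poster's own rough 4x hidden-layer multiplier, which is a guess rather than a derived figure):

```python
board = 9 * 9                  # 81 points on a 9*9 Go board
n_input = 3 * board            # one unit per point for black/white/empty
n_output = 3 * board           # same encoding on the output side
n_hidden = 4 * n_input         # "at least four times" the input layer
total = n_input + n_output + n_hidden   # 243 + 243 + 972 = 1458
```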

The biggest network I've tested, on a 1.3 GHz computer with 512 MB of memory, had 9 input neurons and 256 hidden neurons. It grew and ran quite fast, until the number of connections reached around 700.

Quote:
Original post by laurimann
The biggest network I've tested, on a 1.3 GHz computer with 512 MB of memory, had 9 input neurons and 256 hidden neurons. It grew and ran quite fast, until the number of connections reached around 700.


I believe what Timkin is saying is not that your simulation will necessarily run slowly (though given your data it will), but that it will take an extremely long time to train. I think what he was hinting at is that because of the vast permutations of your state space you are potentially looking at more than your lifetime to train it. =)

-me

Hi.
I am very interested in artificial intelligence research, and in particular I am studying different types of neural networks and learning models.
I would be very interested in discussing your project (I have built a humanoid robot I could test it on, if something especially good can be worked out).
Could you please PM me your email address?

You know, I think that trying to mimic brain evolution at our current level of understanding is fruitless: you can't create a learning network system with the amount of knowledge we have nowadays.

But I'm trying different approaches. :) Nothing motivates me more (as a hobby) than creating an independent, learning system!

I'll probably try to figure out something more mathematical, i.e. something that could be proven to work mathematically or that follows from plain logical reasoning, like Kohonen networks for the most part.
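A Kohonen map is easy to sketch. Here is a minimal 1D self-organizing map update; the map size, learning rate and neighbourhood radius are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
units = rng.random((10, 2))   # 10 map units with 2D weight vectors

def som_step(units, x, lr=0.1, radius=1):
    """Pull the best-matching unit and its map neighbours toward x."""
    bmu = int(np.argmin(np.linalg.norm(units - x, axis=1)))
    for i in range(len(units)):
        if abs(i - bmu) <= radius:   # hard 1D neighbourhood
            units[i] += lr * (x - units[i])
    return units

units = som_step(units, np.array([0.5, 0.5]))
```

Repeated over many inputs with a shrinking radius and learning rate, neighbouring units end up responding to similar inputs, which is the self-organization in question.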

Does anyone have any ideas? :)
