Announcement: new tutorial online

Started by fup
7 comments, last by fup 21 years, 6 months ago
Just to let you know that I have a new tutorial online covering Kohonen self-organizing maps. I hope you enjoy it. You can find it here: www.ai-junkie.com
ai-junkie.com
Very nice tutorial, fup! Vampire classification... I wonder if they received a research grant for that one???

Cheers,

Timkin
It seems the three of us keep running into each other, eh? Hmmmm...

Anywho, I agree with Timkin - very nice tutorial. Very to the point, straightforward, enough detail to "whet the appetite" but not so much that it detracts from the focus. Nice work!

I have a couple of questions - you knew I would. 8^)

Can you make your 2-D network toroidal? OK, so you can make them toroidal, but how does that impact the network?

How about 3-D networks?

You mention that the topology is maintained from your data space to your network. Is this primarily in the sense that close things in data space stay close in network space and distant things stay distant, or is the true distance-metric separation preserved? Multidimensional scaling is good for projecting from high dimensions to lower ones while maintaining the distances; of course, there's an unavoidable distortion due to the loss of information going from high dimensions to low. How do SOMs respond to this situation?

Just some curiosities...

-Kirk

Glad you both liked the tutorial. Let me see if I can answer your questions, Kirk...

Yes, you can have toroidal networks. In fact, this is usually advantageous if the number of neurons in the SOM is small as it prevents unwanted edge effects. Toroids can create problems though if the radius for the neighbourhood function is set too high, as there may be some overlap during the first few iterations. Care has to be taken to prevent this.
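
If it helps to see it concretely, here is a minimal sketch of the one thing that actually changes in a toroidal SOM: the grid distance used by the neighbourhood function wraps at the edges. (Python, with names I've made up for illustration; this isn't from the tutorial.)

def toroidal_sq_dist(x1, y1, x2, y2, grid_w, grid_h):
    # Squared distance between two nodes on a toroidal lattice of
    # grid_w x grid_h nodes: each axis wraps around at the edge.
    dx = abs(x1 - x2)
    dy = abs(y1 - y2)
    dx = min(dx, grid_w - dx)  # wrap horizontally
    dy = min(dy, grid_h - dy)  # wrap vertically
    return dx * dx + dy * dy

# On a 10x10 torus, nodes (0, 0) and (9, 0) are direct neighbours
# (distance 1), not 9 apart as they would be on a flat grid.

The overlap I mentioned shows up when the neighbourhood radius exceeds half the grid width, because the neighbourhood then wraps around and meets itself on the far side of the torus.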

3D networks are also possible... in theory. In practice they are slow to train and, if using the SOM as a visualization tool, you're still faced with the problem of how to represent the 3D SOM in 2D space (your display screen). A holographic SOM would be cool though!

There is generally no preservation of the distance metric in topological space. The SOM algorithm 'pulls' neighbouring nodes together in terms of input-vector space, so what you see displayed is a distorted representation. Imagine a one-dimensional SOM with contiguous nodes A, B and C. Although A and C may be an equal distance from B in 1D grid space, A and B may lie much further apart than B and C in input-vector space.
The exception is when the input vectors are two- (or one-) dimensional, in which case you can plot each node's weight vector directly to the screen and the true distance metric is maintained. This gives you that 'distorted grid' effect you may have seen in books or in other articles on the internet.
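
If you want to see the 'pulling' in code, here is a minimal sketch of one standard SOM training step, assuming a Gaussian neighbourhood function. None of these names come from the tutorial; it's just an illustration.

import math

def train_step(weights, coords, x, learn_rate, radius):
    # weights[i] is node i's weight vector (in input-vector space);
    # coords[i] is node i's fixed position on the grid.
    # Find the best matching unit: the node nearest to x in input space.
    bmu = min(range(len(weights)),
              key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
    for i in range(len(weights)):
        # Distance measured on the grid (topological space), not data space.
        grid_d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[bmu]))
        influence = math.exp(-grid_d2 / (2.0 * radius * radius))
        # Pull the node's weights towards x, more strongly the closer
        # the node sits to the BMU on the grid.
        weights[i] = [w + learn_rate * influence * (v - w)
                      for w, v in zip(weights[i], x)]

Notice that nothing in the update constrains how far apart neighbouring nodes end up in input-vector space, which is exactly why A-B and B-C can stretch to different lengths even though they are equidistant on the grid.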

I hope that made sense. It's not easy discussing this stuff without diagrams.



ai-junkie.com

Also, I just noticed your subtle hint at an error ;0)

I always believed it was 'wet', as in mouthwatering. It never crossed my mind it was 'whet', as in 'keen', 'sharpen' or 'stimulate'. You learn something new every day, as they say...



ai-junkie.com
I have a little German text about this here: 'click'

and an example of letter classification here: 'click'

Has anyone experimented with dynamically adding neurons to SOMs, or with variable neuron positions?
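
To illustrate what I mean by dynamically added neurons: 'growing' variants such as Fritzke's Growing Grid and Growing Neural Gas insert a new neuron where the map currently fits the data worst. A rough sketch for a simple 1-D chain of nodes follows; the names and the threshold rule are invented for illustration, not taken from either text above.

def maybe_grow(weights, errors, threshold):
    # weights[i] is node i's weight vector in a 1-D chain (needs >= 2 nodes);
    # errors[i] accumulates node i's quantization error over recent inputs.
    worst = max(range(len(weights)), key=lambda i: errors[i])
    if errors[worst] <= threshold:
        return
    # The worse-fitting of the worst node's chain neighbours.
    nbrs = [i for i in (worst - 1, worst + 1) if 0 <= i < len(weights)]
    nb = max(nbrs, key=lambda i: errors[i])
    lo, hi = sorted((worst, nb))
    # Insert a new node between them, halfway in input-vector space,
    # and share out the accumulated error.
    new_w = [(a + b) / 2.0 for a, b in zip(weights[lo], weights[hi])]
    errors[lo] /= 2.0
    errors[hi] /= 2.0
    weights.insert(hi, new_w)
    errors.insert(hi, errors[lo])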

Fup,

thanks for the answers, in particular the detail about maintaining the topological distance. SOMs seem to behave similarly to Multidimensional Scaling in that an effort is made to preserve the relative distances (however your distance metric defines distance), but the actual distances are distorted.

As for my subtle hint at an error, hmmm... I actually hadn't noticed you made an error and used 'wet' rather than 'whet.' I was letting you know that, yes, indeed, I've been starving for a tutorial just like this one. Freudian slip? Not sure...

-kirk
Howdy.

I just saw that the new issue of Neural Networks is online, and it just so happens to be dedicated to SOMs. Small world. You can get to the abstracts from the link below.

-Kirk

http://www.elsevier.com/locate/jnlnr/00841
It's a small world indeed. Thanks for the link.



ai-junkie.com

