fup

Announcement: new tutorial online

It seems the three of us keep running into each other, eh? Hmmmm...

Anywho, I agree with Timkin - very nice tutorial. Very to the point, straightforward, enough detail to "whet the appetite" but not so much that it detracts from the focus. Nice work!

I have a couple of questions - you knew I would. 8^)

Can you make your 2-D network toroidal? OK, so you can make them toroidal, but how does that impact the network?

How about 3-D networks?

You mention that the topology is maintained from your data space to your network. Is this primarily in the sense that close things in data space stay close in network space and distant things remain distant, or is the true distance-metric separation preserved? Multidimensional scaling is good for projecting from high dimensions to lower dimensions while maintaining the distances; of course there's an unavoidable distortion due to the loss of information going from high dimensions to low. How do SOMs respond to this situation?

Just some curiosities...

-Kirk

Glad you both liked the tutorial. Let me see if I can answer your questions Kirk...

Yes, you can have toroidal networks. In fact, this is usually advantageous when the number of neurons in the SOM is small, as it prevents unwanted edge effects. Toroids can create problems, though, if the radius of the neighbourhood function is set too high: during the first few iterations the neighbourhood may wrap far enough around the torus to overlap itself. Care has to be taken to prevent this.
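For the curious, wrap-around on a toroidal lattice just means taking the shorter way round on each axis when measuring the grid distance between two neurons. A minimal sketch (the `toroidal_distance` helper and the (row, col) convention are my own illustration, not from the tutorial):

```python
import math

def toroidal_distance(a, b, width, height):
    # a, b: (row, col) lattice coordinates of two neurons.
    dy = abs(a[0] - b[0])
    dx = abs(a[1] - b[1])
    # Wrap around: take the shorter way round on each axis.
    dy = min(dy, height - dy)
    dx = min(dx, width - dx)
    return math.hypot(dx, dy)

# On a 10x10 torus, neurons on opposite edge columns are neighbours:
print(toroidal_distance((0, 0), (0, 9), 10, 10))  # prints 1.0
```

The overlap problem appears when the neighbourhood radius exceeds half the lattice width: the wrapped distance can never exceed `width / 2`, so a larger radius covers some neurons from two directions at once.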

3D networks are also possible... in theory. In practice they are slow to train and, if using the SOM as a visualization tool, you're still faced with the problem of how to represent the 3D SOM in 2D space (your display screen). A holographic SOM would be cool though!

There is generally no preservation of the distance metric in topological space. The SOM algorithm 'pulls' neighbouring nodes together in terms of input vector space, so what you see displayed is a distorted representation. Imagine a one-dimensional SOM with nodes A, B and C being contiguous. Although A and C are an equal distance from B in 1-D lattice space, A-B may lie much further apart than B-C in input vector space.
The exception is when the input vectors are two- (or one-) dimensional, in which case you can plot each node's weight vector straight to screen and the true distance metric is maintained. This gives you that 'distorted grid' effect you may have seen in books or in other articles on the internet.
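The A/B/C example above can be demonstrated with a toy 1-D SOM. This is a sketch in Python with illustrative parameters (not the tutorial's actual code): three nodes with 2-D weight vectors are trained on random data, and although A-B and B-C are each one lattice step apart, their separations in weight space generally come out different:

```python
import numpy as np

rng = np.random.default_rng(0)

# A one-dimensional SOM: 3 contiguous nodes (A, B, C), 2-D weight vectors.
weights = rng.random((3, 2))

def train(data, weights, iters=1000, lr0=0.5, radius0=1.0):
    n = len(weights)
    for t in range(iters):
        frac = t / iters
        lr = lr0 * (1 - frac)               # decaying learning rate
        radius = radius0 * (1 - frac) + 1e-9  # shrinking neighbourhood
        x = data[rng.integers(len(data))]
        # Best-matching unit: node whose weights lie closest to x.
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        for i in range(n):
            d = abs(i - bmu)                # distance along the 1-D lattice
            h = np.exp(-(d * d) / (2 * radius * radius))
            weights[i] += lr * h * (x - weights[i])
    return weights

data = rng.random((200, 2))
w = train(data, weights)
# Equal 1-D lattice spacing, but unequal weight-space spacing:
print(np.linalg.norm(w[0] - w[1]), np.linalg.norm(w[1] - w[2]))
```

Because the input here happens to be 2-D, plotting the three weight vectors directly would give exactly the 'distorted grid' picture described above.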

I hope that made sense. It's not easy discussing this stuff without diagrams.




ai-junkie.com

[edited by - fup on October 6, 2002 5:40:27 AM]

Also, I just noticed your subtle hint at an error ;0)

I always believed it was 'wet' as in mouthwatering. It never crossed my mind it was 'whet' as in 'keen', 'sharpen' or 'stimulate'. You learn something new every day, as they say...




ai-junkie.com

Fup,

Thanks for the answers, in particular the detail about maintaining the topological distance. SOMs seem to behave similarly to multidimensional scaling in that an effort is made to preserve the relative distances (however your distance metric defines them), but the actual distances are distorted.

As for my subtle hint at an error, hmmm... I actually hadn't noticed you made an error and used 'wet' rather than 'whet.' I was letting you know that, yes, indeed, I've been starving for a tutorial just like this one. Freudian slip? Not sure...

-kirk
