ysg

Having a hard time understanding how a Hopfield ANN works


Hello, I'm reading the 2nd edition of Jeff Heaton's book on artificial neural networks in C#.

I got to this part:

-snip-

We must now compare those weights with the input pattern of 0101:

0 1 0 1

0 -1 1 -1

We will sum only the weights corresponding to the positions that contain a 1 in the input pattern. Therefore, the activation of the first neuron is –1 + –1, or –2. The results of the activation of each neuron are shown below.

N1 = -1 + -1 = -2
N2 = 0 + 1 = 1
N3 = -1 + -1 = -2
N4 = 1 + 0 = 1

Therefore, the output neurons, which are also the input neurons, will report the above activation results. The final output vector will then be –2, 1, –2, 1. These values are meaningless without an activation function. We said earlier that a threshold establishes when a neuron will fire. A threshold is a type of activation function. An activation function determines the range of values that will cause the neuron, in this case the output neuron, to fire. A threshold is a simple activation function that fires when the input is above a certain value.

-snip-

What I don't understand is how and why this addition happens. How does the logic flow in this case? Why do you add together only the negative numbers? I'm having a hard time visualizing this logic.

If anyone could provide some input, I'd greatly appreciate it.

The part that's causing me to scratch my head is on page 88 of the book, if that helps.

I don't have the book, and I am not sure I have enough context from what you gave us, but I think this is what's going on.

Let's call the input pattern I, so I[0]=0, I[1]=1, I[2]=0 and I[3]=1. Similarly the first list of weights is W1, with W1[0]=0, W1[1]=-1, W1[2]=1, W1[3]=-1. N1 is the dot product of W1 and I, which means

N1 = W1[0]*I[0] + W1[1]*I[1] + W1[2]*I[2] + W1[3]*I[3] = 0*0 + (-1)*1 + 1*0 + (-1)*1 = -1 + (-1) = -2

There must be other weights W2, W3 and W4 which are used to compute N2, N3 and N4, but I don't know what they are.
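Here is a minimal sketch of that dot product in Python (not the book's C#), using only the weight row that appears in the excerpt:

```python
# A neuron's raw activation is the dot product of its weight row
# with the input pattern.
def activation(weights, pattern):
    return sum(w * x for w, x in zip(weights, pattern))

I  = [0, 1, 0, 1]     # input pattern 0101
W1 = [0, -1, 1, -1]   # first neuron's weights from the excerpt

print(activation(W1, I))  # -2, which matches N1
```

Because the pattern contains only 0s and 1s, multiplying by it is the same as "summing only the weights at positions that contain a 1", which is how the book phrases it.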


Hello :)

 

The final output vector will then be –2, 1, –2, 1

 

You have to pass this through the activation function, which in a Hopfield network is typically:

f(x)=1 if x>0, -1 otherwise

 

A vector (input/output) of a Hopfield network can only consist of -1 and 1 (at least in the discrete model).
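As a quick Python sketch (an illustration, not the book's code), applying that threshold to the raw output from the book example gives a proper -1/1 vector:

```python
# Typical discrete Hopfield activation: +1 if the net input is
# positive, -1 otherwise.
def threshold(x):
    return 1 if x > 0 else -1

raw = [-2, 1, -2, 1]                # the book's N1..N4
print([threshold(x) for x in raw])  # [-1, 1, -1, 1]
```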

 

Good luck :)

 

In the stochastic version, you randomly pick a unit, compute its output (integrate its inputs, then apply the activation function), and repeat until the network reaches a stable state (vector).
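A rough sketch of that loop in Python. Only the first weight row appears in the excerpt; the other three rows below are assumed for illustration (chosen to be symmetric with a zero diagonal) just so the example runs. Instead of picking one unit at a time forever, this version sweeps all units in random order and stops once a full sweep changes nothing:

```python
import random

W = [
    [ 0, -1,  1, -1],   # row shown in the book excerpt
    [-1,  0, -1,  1],   # assumed for illustration
    [ 1, -1,  0, -1],   # assumed for illustration
    [-1,  1, -1,  0],   # assumed for illustration
]

def run_until_stable(state, W):
    state = list(state)
    while True:
        changed = False
        for i in random.sample(range(len(state)), len(state)):        # random visiting order
            net = sum(W[i][j] * state[j] for j in range(len(state)))  # integrate inputs
            new = 1 if net > 0 else -1                                # activation function
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:  # a full sweep with no change: stable state reached
            return state

# Start from a slightly "noisy" state; it settles into [-1, 1, -1, 1].
print(run_until_stable([1, 1, -1, 1], W))
```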




Ok, dot product, got it. That was the missing piece. Thank you.

