
Having a hard time understanding how a Hopfield ANN works


3 replies to this topic

#1 ysg  Members

Posted 02 May 2013 - 08:24 AM

Hello, I'm reading the 2nd edition of Jeff Heaton's book on artificial neural networks in C#.

I got to this part:

-snip-

We must now compare those weights with the input pattern of 0101:

Input:    0  1  0  1

Weights:  0 -1  1 -1
We will sum only the weights corresponding to the positions that contain a 1 in the input pattern. Therefore, the activation of the first neuron is –1 + –1, or –2. The results of the activation of each neuron are shown below.

N1 = -1 + -1 = -2
N2 = 0 + 1 = 1
N3 = -1 + -1 = -2
N4 = 1 + 0 = 1

Therefore, the output neurons, which are also the input neurons, will report the above activation results. The final output vector will then be –2, 1, –2, 1. These values are meaningless without an activation function. We said earlier that a threshold establishes when a neuron will fire. A threshold is a type of activation function. An activation function determines the range of values that will cause the neuron, in this case the output neuron, to fire. A threshold is a simple activation function that fires when the input is above a certain value.

-snip-

What I don't understand is how and why this addition happens. How does the logic flow in this case? Why do you add together only the negative numbers? I'm having a hard time visualizing this logic.

If anyone could provide some input, I'd greatly appreciate it.

The part that's causing me to scratch my head is on page 88 of the book, if that helps.

Edited by ysg, 02 May 2013 - 08:27 AM.

#2 Álvaro  Members

Posted 02 May 2013 - 09:15 AM

I don't have the book, and I am not sure I have enough context from what you gave us, but I think this is what's going on.

Let's call the input pattern I, so I[0]=0, I[1]=1, I[2]=0 and I[3]=1. Similarly the first list of weights is W1, with W1[0]=0, W1[1]=-1, W1[2]=1, W1[3]=-1. N1 is the dot product of W1 and I, which means

N1 = W1[0]*I[0] + W1[1]*I[1] + W1[2]*I[2] + W1[3]*I[3] = 0*0 + (-1)*1 + 1*0 + (-1)*1 = -1 + (-1) = -2

There must be other weights W2, W3 and W4 which are used to compute N2, N3 and N4, but I don't know what they are.
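That dot-product reading can be sketched in a few lines of Python. W1 below is the weight row given in the book excerpt; the rows W2–W4 are not from the book — they are an assumed reconstruction using the standard Hopfield constraints (symmetric weights, zero diagonal), chosen so that they reproduce the sums N2–N4 quoted above.

```python
I = [0, 1, 0, 1]            # input pattern 0101

W = [
    [ 0, -1,  1, -1],       # W1 (from the book excerpt)
    [-1,  0, -1,  1],       # W2 (assumed, by symmetry)
    [ 1, -1,  0, -1],       # W3 (assumed)
    [-1,  1, -1,  0],       # W4 (assumed)
]

# Activation of each neuron = dot product of its weight row with the input.
# Since I contains only 0s and 1s, this is exactly "sum the weights at the
# positions where the input pattern has a 1".
N = [sum(w * x for w, x in zip(row, I)) for row in W]
print(N)   # → [-2, 1, -2, 1]
```

The zeros in the input simply mask out the corresponding weights, which is why only two weights per row survive into each sum.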

#3 Tournicoti  Prime Members

Posted 02 May 2013 - 09:46 AM

Hello

The final output vector will then be –2, 1, –2, 1

You have to pass this through the activation function, which in a Hopfield network is typically:

f(x) = 1 if x > 0, -1 otherwise

An input/output vector of a Hopfield network can only consist of -1 and 1 (at least in the discrete model).
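Applied to the –2, 1, –2, 1 vector from the excerpt, that rule collapses the raw sums back to a ±1 pattern. A minimal sketch:

```python
def f(x):
    """Threshold activation typical of a discrete Hopfield network."""
    return 1 if x > 0 else -1

raw = [-2, 1, -2, 1]        # raw activations from the book's example
out = [f(x) for x in raw]
print(out)   # → [-1, 1, -1, 1], i.e. the pattern 0101 in the -1/1 encoding
```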

Good luck

In the stochastic version, you randomly choose a unit and compute its output (integrate inputs + activation function), repeating until you get a stable network state (vector).
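A rough sketch of that asynchronous update loop. The weight matrix here is my reconstruction of the book's example (assuming symmetric weights and a zero diagonal), not taken verbatim from the book:

```python
import random

W = [
    [ 0, -1,  1, -1],
    [-1,  0, -1,  1],
    [ 1, -1,  0, -1],
    [-1,  1, -1,  0],
]

def settle(state, steps=100, seed=0):
    """Repeatedly update randomly chosen units (state in -1/1 encoding)."""
    rng = random.Random(seed)
    s = list(state)
    for _ in range(steps):
        i = rng.randrange(len(s))                           # pick a unit at random
        total = sum(W[i][j] * s[j] for j in range(len(s)))  # integrate its inputs
        s[i] = 1 if total > 0 else -1                       # activation function
    return s

# -1/1 encoding of 0101; a stored pattern, so it is a stable state.
print(settle([-1, 1, -1, 1]))   # → [-1, 1, -1, 1]
```

A real implementation would stop when a full sweep changes no unit, rather than after a fixed number of steps; the fixed count here just keeps the sketch short.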

Edited by Tournicoti, 02 May 2013 - 10:37 AM.

#4 ysg  Members

Posted 02 May 2013 - 09:30 PM

I don't have the book, and I am not sure I have enough context from what you gave us, but I think this is what's going on.

Let's call the input pattern I, so I[0]=0, I[1]=1, I[2]=0 and I[3]=1. Similarly the first list of weights is W1, with W1[0]=0, W1[1]=-1, W1[2]=1, W1[3]=-1. N1 is the dot product of W1 and I, which means

N1 = W1[0]*I[0] + W1[1]*I[1] + W1[2]*I[2] + W1[3]*I[3] = 0*0 + (-1)*1 + 1*0 + (-1)*1 = -1 + (-1) = -2

There must be other weights W2, W3 and W4 which are used to compute N2, N3 and N4, but I don't know what they are.

Ok, dot product, got it. That was the missing piece. Thank you.
