
Neural Network Math Help? :)


Hey, I'm trying to make my first neural network program, and it's working reasonably well at the moment; I just need some help with the math.
In my program, the value of a neuron is the value of whichever connected neuron has the highest weight.
So if neuron 1 has a value of 5 and a weight of 2, and neuron 2 has a value of 2 and a weight of 5, and both are connected to the same neuron, then that neuron's value would be 2.
I would like to make it more of an average, but one based on weight:
the average of neurons 1 and 2 would come out to about 3, pulled down from the plain average of 3.5 because neuron 2's weight is bigger than neuron 1's.
Would anyone know of a formula that would achieve this?

In a standard neural network, all neurons influence the neurons they are connected to. When a neuron is activated, it receives a series of inputs that map to a series of weights. The neuron sums each input multiplied by that input's weight (http://en.wikipedia.org/wiki/Dot_product), and then applies its activation function to the result.

http://en.wikipedia.org/wiki/Artificial_neuron
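In code, that weighted-sum-then-activate step could look something like this (a minimal sketch; the function name `activate` and the choice of a logistic sigmoid are illustrative assumptions, not anything from the posts above):

```python
import math

def activate(inputs, weights):
    """One artificial neuron: dot product of inputs and
    weights, passed through a logistic sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs feeding one neuron: sigmoid(5*2 + 2*5) = sigmoid(20),
# which is very close to 1.0
print(activate([5.0, 2.0], [2.0, 5.0]))
```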

Basically, if you have 2 input nodes and 2 hidden nodes:
Each input node has a connection to each hidden node, for 4 connections.
Each connection has a weight. When you pass a value from an input node to a hidden node along a connection, you multiply the input node's value by the weight and add it to the hidden node.

Each hidden node receives input from both input nodes, and the "influence" an input node has on the hidden node is determined by the connection's weight. A weight of 0 would mean "no influence".

To make the network do nifty things you will need to add an activation function to each node. The activation function takes the node's accumulated value as input and spits out a new number, which is then passed on to each node it connects to. Typically people choose tanh or another sigmoid-type function.
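The 2-input / 2-hidden setup above could be sketched like this (the function name `tanh_layer` and the example weights are invented for illustration; tanh is used as the activation, as suggested):

```python
import math

def tanh_layer(inputs, weight_matrix):
    """Forward one layer: each hidden node accumulates
    input * weight over its connections, then applies tanh."""
    outputs = []
    for node_weights in weight_matrix:  # one weight list per hidden node
        total = sum(i * w for i, w in zip(inputs, node_weights))
        outputs.append(math.tanh(total))
    return outputs

# 2 inputs, 2 hidden nodes -> 4 connection weights in total
weights = [[0.5, -0.3],   # weights into hidden node 0
           [0.1,  0.8]]   # weights into hidden node 1
print(tanh_layer([1.0, 2.0], weights))
```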

Hello

Could you please explain what problem you are trying to solve with the ANN?
That way, it will be possible to suggest an ANN type and a learning algorithm.

I hope I can help...

Nico

You pretty much answered your own question: you want a weighted average of the inputs.

So instead of output = (n1 + n2 + ... + nx) / x, you want output = (n1*w1 + n2*w2 + ... + nx*wx) / (w1 + w2 + ... + wx). Dividing by the sum of the weights (rather than by x) is what makes it a proper weighted average: it reduces to the plain average when all weights are 1.
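As a quick sketch (the helper name `weighted_average` is made up), using the values from the original post:

```python
def weighted_average(values, weights):
    """Weighted average: each value contributes in
    proportion to its weight."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Values 5 and 2 with weights 2 and 5, as in the first post:
print(weighted_average([5.0, 2.0], [2.0, 5.0]))  # (10 + 10) / 7 ≈ 2.857
```

The result is pulled toward 2, the value with the larger weight, which is exactly the behaviour asked for.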

Hey guys, I've attempted to solve this three times, and I think I've got it working apart from one thing.
I'm using a sigmoid activation function to convert each node's summed inputs into a floating-point number between 0 and 1.
Each weight is a random floating-point number between -1 and 1.
The output of the neural network is always between 0.4 and 0.6.
I think this is because I'm applying the activation function to every node's output.
Should I only use the activation function for the output nodes, so a hidden node's output is just its weighted sum? (i1*W1 + i2*W2 + ... + In*Wn)
I think that would work because then the node's output wouldn't be too high or too low.

P.S.: the program I am making doesn't use backpropagation to train the networks; it uses a genetic algorithm.
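For what it's worth, small weighted sums do keep a logistic sigmoid's output near 0.5, which may be what you're seeing: with inputs in [0, 1] and weights in [-1, 1], the sums tend to stay small. A quick illustration (assuming the standard logistic function):

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid, output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Weighted sums near zero map to outputs near the midpoint 0.5:
for s in (-0.4, 0.0, 0.4):
    print(s, round(sigmoid(s), 3))
# sigmoid(-0.4) ≈ 0.401, sigmoid(0) = 0.5, sigmoid(0.4) ≈ 0.599
```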

[quote]
The output of the neural network is always between 0.4 and 0.6.
...
(i1*W1 + i2*W2 + ... + In*Wn)
[/quote]
... or maybe it is because you didn't add a bias to your nodes?
(W0 + i1*W1 + i2*W2 + ... + In*Wn), where W0 is a weight with a constant input of 1.

A simple way to add a bias is to add a 1.0 component to the input vector.
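A sketch of that trick (the helper name `neuron_sum` and the example weights are invented for illustration):

```python
def neuron_sum(inputs, weights):
    """Weighted sum with a bias: weights[0] is W0, paired with a
    constant 1.0 prepended to the input vector."""
    biased_inputs = [1.0] + list(inputs)
    return sum(i * w for i, w in zip(biased_inputs, weights))

# W0 = 0.5 is the bias weight; the remaining weights pair
# with the real inputs: 0.5*1 + 0.2*2 + (-0.1)*3 = 0.6
print(neuron_sum([2.0, 3.0], [0.5, 0.2, -0.1]))
```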

[quote]
... or maybe it is because you didn't add a bias to your nodes?
(W0 + i1*W1 + i2*W2 + ... + In*Wn), where W0 is a weight with a constant input of 1.

A simple way to add a bias is to add a 1.0 component to the input vector.
[/quote]
What's the point of the bias?
Is it needed for the neural network to function? Because I haven't put one in :/

[quote]
... or maybe it is because you didn't add a bias to your nodes?
(W0 + i1*W1 + i2*W2 + ... + In*Wn), where W0 is a weight with a constant input of 1.

A simple way to add a bias is to add a 1.0 component to the input vector.

What's the point of the bias?
Is it needed for the neural network to function?
[/quote]

Oh, and does the bias have to be used for every node, or just the input nodes?

[quote]
What's the point of the bias?
Is it needed for the neural network to function?
[/quote]
The point of the bias is to shift the activation function along the x axis (and it can be implemented as a constant input, as I suggested).
It's needed for practical purposes: if you don't use a bias, the function you are approximating (i.e. the problem you are solving) must pass through (0, f(0)), where f is the activation function you chose; otherwise the net won't converge. With a bias you don't have this limitation anymore.
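A quick illustration of that shift (assuming a logistic sigmoid; the weight values are arbitrary examples):

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid, output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Without a bias, sigmoid(w * 0) = 0.5 no matter what the weight is:
print(sigmoid(3.0 * 0.0))         # exactly 0.5 at input 0

# A bias weight W0 shifts the curve along the x axis, so the
# output at input 0 is no longer pinned to 0.5:
print(sigmoid(-2.0 + 3.0 * 0.0))  # ≈ 0.119
```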


[quote]
Oh, and does the bias have to be used for every node, or just the input nodes?
[/quote]
The bias has to be used with any node that does signal integration, so typically all the nodes except the input ones (since those are just slots that feed input into the net).