
Neural Network Math Help ? :)


17 replies to this topic

#1 CryoGenesis   Members   -  Reputation: 484


Posted 09 March 2012 - 03:21 PM

Hey, I'm trying to make my first neural network program, and it's working fairly well at the moment; I just need some help with the math.
In the program, a neuron's value is the value of whichever connected neuron has the highest weight.
So if neuron 1 (value 5, weight 2) and neuron 2 (value 2, weight 5) were connected to a neuron, that neuron's value would be 2.
I would like to make it more of an average, but one based on weight:
the average of neurons 1 and 2 would then be about 3, pulled toward neuron 2's value because neuron 2's weight is bigger than neuron 1's.
Would anyone know of a formula that would achieve this?


#2 lrh9   Members   -  Reputation: 174


Posted 10 March 2012 - 05:09 AM

In a standard neural network, all neurons influence the neurons they are connected to. When a neuron is activated, it receives a series of inputs, each of which maps to a weight. The neuron sums each input multiplied by that input's weight (a dot product: http://en.wikipedia.org/wiki/Dot_product), and then applies its activation function to the result.

http://en.wikipedia.org/wiki/Artificial_neuron
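For example, a minimal sketch of such a neuron in Java (untested; the class and method names are just illustrative):

// One artificial neuron: weighted sum of the inputs, then a sigmoid.
public class SimpleNeuron {
    private final double[] weights;

    public SimpleNeuron(double[] weights) {
        this.weights = weights;
    }

    // Dot product of inputs and weights, passed through the activation function.
    public double activate(double[] inputs) {
        double sum = 0.0;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return sigmoid(sum);
    }

    private static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }
}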

#3 willh   Members   -  Reputation: 160


Posted 10 March 2012 - 09:02 AM

Basically, if you have 2 input nodes and 2 hidden nodes:
each input node has a connection to each hidden node, for 4 connections.
Each connection has a weight. When you pass a value from an input node to a hidden node along a connection, you multiply the input node's value by the connection's weight and add the result to the hidden node.

Each hidden node receives input from both input nodes, and the "influence" an input node has on the hidden node is determined by the connection's weight. A weight of 0 means "no influence".

To make the network do nifty things you will need to add an activation function to each node. The activation function takes the node's accumulated value as input and spits out a new number, which is then passed on to each node it connects to. Typically people choose tanh or another sigmoid-type function.
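A rough sketch of that layout in Java (untested; the weight values and names are made up for illustration):

// 2 input nodes, 2 hidden nodes; weights[i][j] is the connection
// from input node i to hidden node j.
public class TinyNet {
    private final double[][] weights = { { 0.5, -0.3 }, { 0.8, 0.1 } }; // example values

    public double[] feedForward(double[] inputs) {
        double[] hidden = new double[2];
        for (int j = 0; j < hidden.length; j++) {
            double sum = 0.0;
            for (int i = 0; i < inputs.length; i++) {
                sum += inputs[i] * weights[i][j]; // accumulate weighted input
            }
            hidden[j] = Math.tanh(sum); // activation function
        }
        return hidden;
    }
}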

#4 Tournicoti   Prime Members   -  Reputation: 682


Posted 10 March 2012 - 10:18 AM

Hello

Could you please explain what problem you are trying to solve with the ANN?
That way, it will be possible to suggest an ANN type and a learning algorithm.

I hope I can help...

Nico

#5 willpowered   Members   -  Reputation: 479


Posted 15 March 2012 - 06:55 PM

You pretty much answered your own question: you want a weighted average of the inputs.

So instead of output = (n1 + n2 + ... + nx) / x, you will want something like output = (n1*w1 + n2*w2 + ... + nx*wx) / (w1 + w2 + ... + wx).
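In code, that's just a weighted average; a quick untested sketch:

// Weighted average of the input values; assumes the weights don't sum to zero.
static float weightedAverage(float[] values, float[] weights) {
    float sum = 0f, weightSum = 0f;
    for (int i = 0; i < values.length; i++) {
        sum += values[i] * weights[i];
        weightSum += weights[i];
    }
    return sum / weightSum;
}

With your example, weightedAverage(new float[]{5, 2}, new float[]{2, 5}) gives 20/7 ≈ 2.86, pulled toward neuron 2's value because neuron 2's weight is larger.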

#6 CryoGenesis   Members   -  Reputation: 484


Posted 18 March 2012 - 12:29 PM

Hey guys, I've attempted to solve this three times, and I think I've got it working to some extent, apart from one thing.
I'm using a sigmoid activation function to squash the summed inputs into a floating-point number between 0 and 1.
Each weight is a random floating-point number between -1 and 1.
The output of the neural network is always between 0.4 and 0.6.
I think this is because I'm passing every node's output through the activation function.
Should I only use the activation function for the output nodes, so that a hidden node's output is just its raw activation sum (i1*W1 + i2*W2 + ... + In*Wn)?
I think that would work because the node's output wouldn't be too high or too low.

P.S.: The program I'm making doesn't use backpropagation to train the network; it uses a genetic algorithm.

#7 Tournicoti   Prime Members   -  Reputation: 682


Posted 18 March 2012 - 02:04 PM

The output of the neural network is always between 0.4 - 0.6.
...
(i1*W1 + i2*W2 + ... + In*Wn)

...or maybe it's because you didn't add a bias to your nodes?
(W0 + i1*W1 + i2*W2 + ... + In*Wn), where W0 is a weight with a constant input of 1.

A simple way to add a bias is to add a 1.0 component to the input vector.

#8 CryoGenesis   Members   -  Reputation: 484


Posted 18 March 2012 - 02:16 PM

...or maybe it's because you didn't add a bias to your nodes?
(W0 + i1*W1 + i2*W2 + ... + In*Wn), where W0 is a weight with a constant input of 1.
A simple way to add a bias is to add a 1.0 component to the input vector.

What's the point of the bias?
Is it needed for the neural network to function?
Because I haven't put it in :/

#9 CryoGenesis   Members   -  Reputation: 484


Posted 18 March 2012 - 02:18 PM



Oh, and does the bias have to be used for every node, or just the input nodes?

#10 Tournicoti   Prime Members   -  Reputation: 682


Posted 18 March 2012 - 02:37 PM

What's the point of the bias?
Is it needed for the neural network to function?

The point of the bias is to shift the activation function along the x-axis (and, as I suggested, it can be implemented as a constant input).
It's needed for practical purposes: without a bias, the function you are approximating (i.e. the problem you are solving) must pass through (0, f(0)), where f is the activation function you chose; otherwise the net won't converge. For example, a sigmoid node with a zero weighted sum always outputs f(0) = 0.5; the bias lets it output something else. With a bias you no longer have this limitation.

Oh, and does the bias have to be used for every node, or just the input nodes?

The bias has to be used by any node that does signal integration, so typically every node except the input ones (since those are just 'slots' that feed input into the net).

#11 CryoGenesis   Members   -  Reputation: 484


Posted 18 March 2012 - 02:56 PM



Thanks for the info, but now it always returns between 0.6 and 0.7...

Here is the source code, just in case I've done something wrong -_-

package AI;

public class Node {
    public float[] weight;
    public float value;
    public float activation;
    public final float e = 2.7183f; // approximation of Euler's number
    public final float p = 1;

    public boolean in = false;

    public Node(float[] weight) {
        this.weight = weight;
    }

    public static void main(String[] args) {
        NeuralNetwork net = new NeuralNetwork(1, 1, 2, 2);
        net.createNetwork();
        float[] f = { 100f };
        net.input(f);
        System.out.println(net.getOutput(0));
    }

    // Logistic sigmoid: 1 / (1 + e^(-activation / p))
    public float activationSigmoidMethod(float activation) {
        double a = -activation / p;
        double c = Math.pow(e, a);
        double d = 1 + c;
        return (float) (1 / d);
    }

    // Accumulate the weighted values of the incoming nodes, then squash the
    // sum with the sigmoid. num selects, in each source node's weight array,
    // the weight of its connection to this node.
    public void input(Node[] node, int num) {
        if (in = true) {
            activation += 1;
        }
        for (int i = 0; i < node.length; i++) {
            activation += node[i].value * node[i].weight[num];
        }
        value = activationSigmoidMethod(activation);
        activation = 0;
    }

    public float getOutput() {
        return value;
    }
}

package AI;

import java.util.Random;

public class NeuralNetwork {
    public Node[] in;
    public Node[] out;
    public Node[][] node;

    public NeuralNetwork(int ins, int outs, int layers, int num) {
        in = new Node[ins];
        out = new Node[outs];
        node = new Node[layers][num];
    }

    public float[][] returnInWeights() {
        float[][] ini = new float[in.length][node[0].length];
        for (int i = 0; i < in.length; i++) {
            for (int b = 0; b < node[0].length; b++) {
                ini[i][b] = in[i].weight[b];
            }
        }
        return ini;
    }

    public float[][][] returnNodeNormWeights() {
        float[][][] weight = new float[node.length][node[0].length][node[0][0].weight.length];
        for (int i = 0; i < node.length - 1; i++) {
            for (int b = 0; b < node[i].length; b++) {
                for (int a = 0; a < node[i][b].weight.length; a++) {
                    weight[i][b][a] = node[i][b].weight[a];
                }
            }
        }
        return weight;
    }

    public float[][] returnOutNodeWeights() {
        int length = node.length - 1;
        float[][] nodes = new float[node[length].length][node[length][0].weight.length];
        for (int i = 0; i < node[length].length; i++) {
            for (int b = 0; b < node[length][0].weight.length; b++) {
                nodes[i][b] = node[length][i].weight[b];
            }
        }
        return nodes;
    }

    // Random weights in the range (-1, 1)
    public float[] returnRanWeights(int amount) {
        Random a = new Random();
        float[] weight = new float[amount];
        for (int i = 0; i < amount; i++) {
            weight[i] = a.nextFloat() + a.nextFloat() - 1;
        }
        return weight;
    }

    public void createNetwork() {
        for (int i = 0; i < in.length; i++) {
            in[i] = new Node(returnRanWeights(node[0].length));
            in[i].in = true;
        }
        for (int i = 0; i < node.length; i++) {
            for (int b = 0; b < node[i].length; b++) {
                if (i < node.length - 1) {
                    // hidden layer: one weight per node in the next layer
                    node[i][b] = new Node(returnRanWeights(node[i + 1].length));
                } else {
                    // last hidden layer: one weight per output node
                    node[i][b] = new Node(returnRanWeights(out.length));
                }
            }
        }
        for (int i = 0; i < out.length; i++) {
            out[i] = new Node(null);
        }
    }

    public void input(float[] inp) {
        for (int i = 0; i < in.length; i++) {
            in[i].value = inp[i];
        }
        for (int i = 0; i < node.length; i++) {
            for (int b = 0; b < node[i].length; b++) {
                if (i == 0) {
                    node[i][b].input(in, b);
                } else {
                    node[i][b].input(node[i - 1], b);
                }
            }
        }
        for (int i = 0; i < out.length; i++) {
            out[i].input(node[node.length - 1], i);
        }
    }

    public float getOutput(int num) {
        return out[num].getOutput();
    }

    public float[] getOutput() {
        float[] a = new float[out.length];
        for (int i = 0; i < a.length; i++) {
            a[i] = getOutput(i);
        }
        return a;
    }
}


#12 CryoGenesis   Members   -  Reputation: 484


Posted 18 March 2012 - 02:57 PM

Oops, just noticed I put

if (in = true) {
    activation += 1;
}

-_-

#13 CryoGenesis   Members   -  Reputation: 484


Posted 18 March 2012 - 02:58 PM

Still comes up with the same output though -_-...

#14 Tournicoti   Prime Members   -  Reputation: 682


Posted 18 March 2012 - 03:15 PM

(That's not activation += 1 but activation += W0.)

I think you can add a bias to your nodes without modifying your code too much.
Add an extra component to your input array and set it to 1.0 at the start of the program. Your input vectors are now [1.0, i1, i2, ..., in].
Then W0, i.e. the weight associated with the constant 1.0 input, will evolve like any other weight.
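For instance, a small helper along these lines (an untested sketch; the method name is made up):

// Prepend a constant 1.0 to the inputs so the first weight acts as the
// bias W0; the node code itself stays unchanged.
static float[] withBias(float[] inputs) {
    float[] biased = new float[inputs.length + 1];
    biased[0] = 1.0f; // constant bias input
    System.arraycopy(inputs, 0, biased, 1, inputs.length);
    return biased;
}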

#15 CryoGenesis   Members   -  Reputation: 484


Posted 18 March 2012 - 03:20 PM


Finally, adding a bias to a net is just equivalent to adding an extra 1.0 input to it (more precisely, any constant value would do, but 1.0 is the usual choice).


Would the weight of the bias be 1.0 and the input 1.0, or the weight a random float (0 - 1.0) and the input 1.0?

#16 Tournicoti   Prime Members   -  Reputation: 682


Posted 18 March 2012 - 03:46 PM

An example, maybe? Let's say I want to approximate a 2-parameter function (of x and y) with a single node.
So I add a third component to the input, set to 1.0.
The node then has 2+1 inputs (and so 3 weights, too), and the input vector is [1.0, x, y].
The integration is then W0*1 + W1*x + W2*y.

So the code of a node 'without bias' is fine as it is: just add an extra 1.0 to its input and it's done; you have a node with a bias.

#17 CryoGenesis   Members   -  Reputation: 484


Posted 19 March 2012 - 11:31 AM


Thanks so much. I think it works now. I'm not sure, though: when I input -10 it comes up with (on average) low outputs, and when I put in big inputs (100, for example) it usually returns relatively high outputs.
Thanks once again.

#18 Tournicoti   Prime Members   -  Reputation: 682


Posted 19 March 2012 - 01:34 PM

Hello CryoGenesis,
I'm glad I can help :)
NB: it's recommended to 'normalize' your input values so that abs(input) < 1.
Otherwise you will feed huge values into the learning rule, and the weights will oscillate indefinitely instead of stabilizing.

For example, if you know the min and max of the values you provide to the ANN, you can apply something like this to each input:
i' = (i - min) / (max - min)
and provide i' instead of i.
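In Java, that is just (an untested one-liner sketch):

// Min-max normalization: maps [min, max] to [0, 1].
static float normalize(float value, float min, float max) {
    return (value - min) / (max - min);
}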

Bye!
Nico



