Posted 09 March 2012 - 03:21 PM
Posted 18 March 2012 - 02:04 PM
The output of the neural network is always between 0.4 and 0.6.
...
(i1*W1 + i2*W2 + ... + in*Wn)
Posted 18 March 2012 - 02:16 PM
What's the point of bias? ... Or maybe it's because you didn't add a bias to your nodes?
(W0 + i1*W1 + i2*W2 + ... + in*Wn), where W0 is a weight with the constant input 1.
A simple way to add a bias is to add a 1.0 component to the input vector.
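A minimal sketch of that integration (the class and method names here are illustrative, not from the thread): the bias is just a weight W0 applied to a constant input of 1.0, stored as the first element of the weight array.

```java
public class BiasDemo {
    // integrate(): W0*1 + W1*i1 + ... + Wn*in, with weights[0] acting as the bias
    static float integrate(float[] weights, float[] inputs) {
        float activation = weights[0];          // W0 * 1.0 -- the bias term
        for (int k = 0; k < inputs.length; k++) {
            activation += weights[k + 1] * inputs[k];
        }
        return activation;
    }

    public static void main(String[] args) {
        float[] w = {0.5f, 1.0f, -2.0f};        // W0 (bias), W1, W2
        float[] in = {3.0f, 1.0f};              // i1, i2
        System.out.println(integrate(w, in));   // 0.5 + 1.0*3.0 + (-2.0)*1.0 = 1.5
    }
}
```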
Posted 18 March 2012 - 02:18 PM
What's the point of bias? ... Or maybe it's because you didn't add a bias to your nodes?
(W0 + i1*W1 + i2*W2 + ... + in*Wn), where W0 is a weight with the constant input 1.
A simple way to add a bias is to add a 1.0 component to the input vector.
Is it needed for the neural network to function? Because I haven't put it in :/
Posted 18 March 2012 - 02:37 PM
What's the point of bias?
Is it needed for the neural network to function?
Oh, and does the bias have to be used for every node or just the input nodes?
Posted 18 March 2012 - 02:56 PM
What's the point of bias?
Is it needed for the neural network to function?
The point of the bias is to shift the activation function along the x axis (and it can be treated as a constant input in the implementation, as I suggested).
It's needed for practical purposes: if you don't use a bias, the function you are approximating (i.e. the problem you are solving) must pass through (0, f(0)), where f is the activation function you chose. Otherwise, the net won't converge. With a bias you don't have this limitation anymore.
Oh, and does the bias have to be used for every node or just the input nodes?
The bias has to be used with any node that does signal integration, so typically all the nodes except the input ones (since those are just 'slots' that provide input to the net).
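To illustrate the "must pass through (0, f(0))" point: with a sigmoid activation and no bias, a single node always outputs 0.5 at input 0, no matter what its weight is. A small sketch (the class name and the particular weight values are just for illustration):

```java
public class SigmoidPinDemo {
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    public static void main(String[] args) {
        // Without a bias, the output at input 0 is sigmoid(W1*0) = 0.5 for any W1:
        // the function the node computes is pinned to the point (0, 0.5).
        for (double w1 : new double[]{-5.0, 0.1, 42.0}) {
            System.out.println(sigmoid(w1 * 0.0));   // always 0.5
        }
        // With a bias W0, the output at input 0 becomes sigmoid(W0), which can
        // be moved anywhere in (0, 1). For example W0 = 2.0:
        System.out.println(sigmoid(2.0 + 0.0));      // ~0.88
    }
}
```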
// Node.java
package AI;

public class Node {
    public float[] weight;
    public float value;
    public float activation;
    public final float e = 2.7183f;
    public final float p = 1;
    public boolean in = false;

    public Node(float[] weight){
        this.weight = weight;
    }

    public static void main(String[] args){
        NeuralNetwork net = new NeuralNetwork(1, 1, 2, 2);
        net.createNetwork();
        float[] f = {100f};
        net.input(f);
        System.out.println(net.getOutput(0));
    }

    // Sigmoid: 1 / (1 + e^(-activation / p))
    public float activationSigmoidMethod(float activation){
        double a = -activation / p;
        double c = Math.pow(e, a);
        double denom = 1 + c;   // was 'double e = 1 + c', which shadowed the field 'e'
        return (float) (1 / denom);
    }

    public void input(Node[] node, int num){
        if(in){                 // was 'if(in = true)', an assignment that is always true
            activation += 1;
        }
        for(int i = 0; i < node.length; i++){
            activation += node[i].value * node[i].weight[num];
        }
        value = activationSigmoidMethod(activation);
        activation = 0;
    }

    public float getOutput(){
        return value;
    }
}

// NeuralNetwork.java
package AI;

import java.util.Random;

public class NeuralNetwork {
    public Node[] in;
    public Node[] out;
    public Node[][] node;

    public NeuralNetwork(int ins, int outs, int layers, int num){
        in = new Node[ins];
        out = new Node[outs];
        node = new Node[layers][num];
    }

    public float[][] returnInWeights(){
        float[][] ini = new float[in.length][node[0].length];
        for(int i = 0; i < in.length; i++){
            for(int b = 0; b < node[0].length; b++){
                ini[i][b] = in[i].weight[b];
            }
        }
        return ini;
    }

    public float[][][] returnNodeNormWeights(){
        float[][][] weight = new float[node.length][node[0].length][node[0][0].weight.length];
        for(int i = 0; i < node.length - 1; i++){
            for(int b = 0; b < node[i].length; b++){
                for(int a = 0; a < node[i][b].weight.length; a++){
                    weight[i][b][a] = node[i][b].weight[a];
                }
            }
        }
        return weight;
    }

    public float[][] returnOutNodeWeights(){
        int length = node.length - 1;
        // was node[length][node[length].length], which indexes past the end of the layer
        float[][] nodes = new float[node[length].length][node[length][0].weight.length];
        for(int i = 0; i < node[length].length; i++){
            for(int b = 0; b < node[length][0].weight.length; b++){
                nodes[i][b] = node[length][i].weight[b];
            }
        }
        return nodes;
    }

    public float[] returnRanWeights(int amount){
        Random a = new Random();
        float[] weight = new float[amount];
        for(int i = 0; i < amount; i++){
            weight[i] = a.nextFloat() + a.nextFloat() - 1;   // value in (-1, 1)
        }
        return weight;
    }

    public void createNetwork(){
        for(int i = 0; i < in.length; i++){
            in[i] = new Node(returnRanWeights(node[0].length));
            in[i].in = true;
        }
        for(int i = 0; i < node.length; i++){
            for(int b = 0; b < node[i].length; b++){
                if(i < node.length - 1){
                    node[i][b] = new Node(returnRanWeights(node[i + 1].length));
                }else{
                    node[i][b] = new Node(returnRanWeights(out.length));
                }
            }
        }
        for(int i = 0; i < out.length; i++){
            out[i] = new Node(null);
        }
    }

    public void input(float[] inp){
        for(int i = 0; i < in.length; i++){
            in[i].value = inp[i];
        }
        for(int i = 0; i < node.length; i++){
            for(int b = 0; b < node[i].length; b++){
                if(i == 0){
                    node[i][b].input(in, b);
                }else{
                    node[i][b].input(node[i - 1], b);
                }
            }
        }
        for(int i = 0; i < out.length; i++){
            out[i].input(node[node.length - 1], i);
        }
    }

    public float getOutput(int num){
        return out[num].getOutput();
    }

    public float[] getOutput(){
        float[] a = new float[out.length];
        for(int i = 0; i < a.length; i++){
            a[i] = getOutput(i);
        }
        return a;
    }
}
Posted 18 March 2012 - 03:20 PM
(Note: this is not activation += 1 but activation += W0.)
I think you can add a bias to your nodes without modifying your code too much.
Add an extra component to your input array and put 1.0 in it at the start of the program. Your input vectors are now [1.0, i1, i2, ..., in].
Then W0, i.e. the weight associated with your constant input value 1.0, will evolve like any other weight.
Finally, adding a bias to a net is (just) equivalent to adding an extra 1.0 input to it (more precisely, any constant value works, but 1.0 is the usual choice).
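A sketch of that augmentation step (the helper name withBias is hypothetical): prepend a constant 1.0 to the input array before feeding it to the net, so the first weight of each first-layer node becomes W0.

```java
import java.util.Arrays;

public class AugmentInput {
    // Prepend the constant 1.0 so that weight[0] of each node acts as W0, the bias.
    static float[] withBias(float[] input) {
        float[] augmented = new float[input.length + 1];
        augmented[0] = 1.0f;
        System.arraycopy(input, 0, augmented, 1, input.length);
        return augmented;
    }

    public static void main(String[] args) {
        // The thread's example input {100f} becomes [1.0, 100.0]
        System.out.println(Arrays.toString(withBias(new float[]{100f})));
    }
}
```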
Posted 19 March 2012 - 11:31 AM
An example, maybe? Let's say I want to approximate a two-parameter function (of x and y) with a single node.
So I have to add a third component to the input, set to 1.0.
Then the node has 2+1 inputs (and so 3 weights), and the input vector is [1.0, x, y].
So the integration is W0*1 + W1*x + W2*y.
So the code of a node 'without bias' is fine as-is: just add an extra 1.0 to its input and it's done; you have a node with a bias.
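The worked example above can be written out directly; the weight and input values below are made up for illustration, and the integration loop is exactly the "node without bias" code, with the 1.0 simply occupying the first input slot.

```java
public class TwoInputNode {
    public static void main(String[] args) {
        float[] weights = {1.0f, 2.0f, 3.0f};   // W0, W1, W2 (random in a real net)
        float[] input = {1.0f, 0.5f, -0.5f};    // [1.0, x, y] -- bias slot first
        float sum = 0f;
        for (int k = 0; k < input.length; k++) {
            sum += weights[k] * input[k];       // W0*1 + W1*x + W2*y
        }
        System.out.println(sum);                // 1.0*1 + 2.0*0.5 + 3.0*(-0.5) = 0.5
    }
}
```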