My neural net totally misunderstood the meaning of XOR!

A real brain faces far harder challenges than anything a backpropagation algorithm solves. How would a brain even know what the desired output should be? Artificial neural networks are a gigantic oversimplification.

As for biases, it's best to remember that an artificial neural network is described entirely mathematically. Attempts to implement one using object-oriented methodology (creating neuron objects, for example) can impair insight into the calculations being performed. A layered neural network can be concisely implemented using linear algebra constructs such as matrices. In such a representation, a bias can be added to the inputs of any layer by simply appending a constant to the input vector. Other implementations may represent this differently, and optimizations may certainly be possible.
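
For instance, here is a minimal sketch of one layer in NumPy (the function name and weight values are my own invention, purely to illustrate the idea):

import numpy as np

def layer_forward(W, x):
    # Append a constant 1 to the input vector so that the last
    # column of W acts as the bias for each neuron in the layer.
    x_aug = np.append(x, 1.0)
    # One matrix multiply computes every neuron's weighted sum;
    # a sigmoid (one common activation choice) squashes each sum into (0, 1).
    return 1.0 / (1.0 + np.exp(-W @ x_aug))

# A layer of 3 neurons over 2 inputs: each row holds one neuron's
# 2 input weights plus its bias in the last column.
W = np.array([[ 0.5, -0.3,  0.1],
              [ 0.2,  0.8, -0.4],
              [-0.6,  0.1,  0.2]])
print(layer_forward(W, np.array([1.0, 0.0])))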
Quote:Original post by NickGeorgia
Good idea, but a perceptron has 1 node = 1 output.
Suppose we did as you said and had 3 perceptrons. Then you would have 3 outputs. How would you combine those outputs? Ah ha... see, the hidden layer emerges.

Which will work, by the way (but it will have a hidden layer): see the diagram in the second link above.

I was looking at the link you gave on neural networks. I am unclear about something: is the perceptron part of a neuron, or is it a neuron itself?


A perceptron is a type of single-node neural network. The picture with several nodes is a multilayer neural network (which may have nodes that are perceptrons).
[n00b status="super"]
Perceptron is a kind of node (check). So is a node == neuron?
I'm not seeing what exactly a neuron is (or could be).
[/n00b]


Quote:Original post by Alpha_ProgDes
[n00b status="super"]
Perceptron is a kind of node (check). So is a node == neuron?
I'm not seeing what exactly a neuron is (or could be).
[/n00b]


This might help.
Sorry, I may be using terms a little bit loosely and confusing you.

A neuron is a node with input arcs, the input arc weights, and an output arc.
A perceptron is a neuron (since it contains all the above).

w1
---|
w2 |
---|---(node)---> output
w3 |
---|

This is a perceptron = neuron = neural net with one neuron.
A node is part of the graph structure of a neural net.
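
If it helps, here is that same picture as a rough code sketch (a throwaway function of my own, not from any library):

def perceptron(weights, inputs):
    # Weighted sum over the input arcs: w1*x1 + w2*x2 + w3*x3.
    s = sum(w * x for w, x in zip(weights, inputs))
    # Hard-limit activation: fire (1) when the sum reaches 0.
    return 1 if s >= 0 else 0

# The single node from the diagram, with three weighted inputs.
print(perceptron([0.5, -1.0, 0.25], [1, 1, 1]))  # sum = -0.25, so output 0
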
Quote:Original post by Roboguy
Quote:Original post by Alpha_ProgDes
[n00b status="super"]
Perceptron is a kind of node (check). So is a node == neuron?
I'm not seeing what exactly a neuron is (or could be).
[/n00b]


This might help.

I was just there [smile]

@NickGeorgia: I was reading the wiki and answer.com. I guess the confusing part (for me anyway) is that neurons take in inputs, have some inner function that works on those inputs, and finally spit out an output. It seems that perceptrons do the exact same thing.

...hmm... perceptrons (unlike regular neurons) can only take boolean/binary inputs and produce boolean/binary output; also, they can't handle quadratic (x^2) or cubic (x^3) functions, only linear (x + y = 4) ones.

Am I getting closer?


Getting closer. Perceptrons can take inputs that are non-binary. But since a perceptron usually has a hard-limiter activation function (thresholding to 0 or 1), the output is binary.

Think of this example: a perceptron with one input (no bias).

x input ---weight---(node)--- output

the output is:

if x*weight >= 0 then output is 1
if x*weight < 0 then output is 0

Not a very interesting example unless the weight is negative. If the weight is positive, then x >= 0 gives output 1 and x < 0 gives output 0. If the weight is negative, then x > 0 gives output 0 and x <= 0 gives output 1. Either way, the decision boundary is at the origin.
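
A quick check of both cases (a throwaway snippet with weights I picked arbitrarily):

for weight in (2.0, -2.0):
    for x in (3.0, -3.0):
        # hard limit: output 1 when weight*x >= 0, else 0
        print(weight, x, 1 if weight * x >= 0 else 0)
# prints 1, 0 for the positive weight and 0, 1 for the negative weight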

Now consider two inputs:

if x1*weight1 + x2*weight2 >=0 then output is 1
if x1*weight1 + x2*weight2 < 0 then output is 0

The decision boundary is defined by a line: x1*weight1 + x2*weight2 = 0. The learning determines the weights and thus this decision boundary (line).
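
For example, picking weight1 = 1 and weight2 = -1 (values chosen by hand just for illustration) puts the boundary on the line x1 = x2:

w1, w2 = 1.0, -1.0                     # boundary: x1 - x2 = 0
for x1, x2 in [(2.0, 1.0), (1.0, 2.0)]:
    s = w1 * x1 + w2 * x2              # which side of the line are we on?
    print(x1, x2, 1 if s >= 0 else 0)  # (2, 1) -> 1, (1, 2) -> 0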

Going to three inputs, you get a plane as the decision boundary: x1*weight1 + x2*weight2 + x3*weight3 = 0.

If you add a bias term, you get x1*weight1 + x2*weight2 + x3*weight3 = bias, where bias = -weight4*1 if you treat the bias as a fourth weight on a constant input of 1. This moves the boundary so that it doesn't have to pass through the origin.
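
In code, the bias is just a fourth weight on a constant input of 1 (the numbers here are arbitrary):

w = [1.0, 1.0, 1.0, -2.5]     # weight4 = -2.5, so the plane is x1 + x2 + x3 = 2.5
for x in ([1, 1, 1], [1, 0, 0]):
    # the appended constant 1 carries the bias into the weighted sum
    s = sum(wi * xi for wi, xi in zip(w, x + [1]))
    print(x, 1 if s >= 0 else 0)  # sum 0.5 -> 1, then sum -1.5 -> 0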

Going further, you get what are called hyperplanes.

Yes, a perceptron (with a hard-limit activation function, as it is usually implemented) will only produce outputs 0 and 1. That removes the ability to approximate continuous functions.

To approximate arbitrary functions you need to go past perceptrons and use different activation functions, not just a hard limiter. You also need at least one hidden layer (a layer between the input layer and the output layer). Some confusion often arises because the input "values" themselves are considered nodes in a neural net graph.
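
Here is a rough sketch of that structure: two inputs, one hidden layer, and a smooth sigmoid activation throughout (every weight value below is a placeholder I made up, not a trained result):

import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, hidden_weights, output_weights):
    # Hidden layer: each row holds one hidden neuron's input weights
    # plus a bias (the appended-constant-1 trick again).
    h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0])))
         for row in hidden_weights]
    # The output neuron combines the hidden activations the same way.
    return sigmoid(sum(w * v for w, v in zip(output_weights, h + [1.0])))

hidden = [[1.0, -2.0,  0.5],   # 2 hidden neurons: 2 weights + bias each
          [0.3,  0.7, -1.0]]
output = [1.5, -0.8, 0.2]      # 2 hidden weights + bias
print(forward([0.4, 0.9], hidden, output))  # a smooth value in (0, 1)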

Well I better stop before I get you more confused. hehe
Now consider XOR (to confuse you further).

x1 x2 | output
--------------
 0  0 |   0
 0  1 |   1
 1  0 |   1
 1  1 |   0

Since we know XOR has two binary inputs, x1 and x2, you could graph this function as the four corner points of the unit square in the x1-x2 plane, each point labeled with its output.

Can you draw a decision boundary (line--two inputs gives you a line as above) that separates the different outputs? The answer should be "No." Therefore, a single perceptron is not enough to model the XOR function.
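
To see why a hidden layer fixes it, here is one classic hand-picked solution, XOR = AND(OR(x1, x2), NAND(x1, x2)), built from hard-limit perceptrons (the weights are chosen by hand, not learned):

def step(s):
    return 1 if s >= 0 else 0

def xor_net(x1, x2):
    # Hidden layer: two perceptrons, each bias folded in as a constant.
    h_or   = step( x1 + x2 - 0.5)  # fires when either input is on
    h_nand = step(-x1 - x2 + 1.5)  # fires unless both inputs are on
    # Output perceptron: AND of the two hidden outputs.
    return step(h_or + h_nand - 1.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # reproduces the truth table: 0, 1, 1, 0
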
Discrete structures, here I come!

So perceptrons can only accept one-to-one functions; functions classified as many-to-one or one-to-many need a second (or more) layer of perceptrons.

I'll look up bias before asking any more questions.

[n00b status="great" pointGrade="-1"]
Thanks again, Nick (I already rated you up)!
[/n00b]


