# My neural net totally misunderstood the meaning of XOR!


## Recommended Posts

Quite funny actually. I made the simplest neural net imaginable: two inputs, one output, with the goal of learning XOR. However, after around 20 000 training runs, it had learned this:

1 ^ 1 = 1
1 ^ 0 = 0
0 ^ 1 = 0
0 ^ 0 = 1

Not quite what was intended. :P I set up the initial weights like this:
Neuron::Neuron() {
	// Note: seeding here means every Neuron constructed in the same second
	// gets identical weights; srand() is better called once in main().
	srand( (unsigned)time( NULL ) );

	// random weights in roughly [-1, 1]
	w1 = (float)((rand()%2000)-1000)/1000;
	w2 = (float)((rand()%2000)-1000)/1000;
	wb = (float)((rand()%2000)-1000)/1000; // bias

	cout << w1 << endl << w2 << endl << wb << endl;
}


I train it like this:
int Neuron::Train(int i1, int i2, int correctOutput) {
	// sum up inputs times weights, then threshold
	float net = (i1*w1) + (i2*w2) + (1*wb);
	int output = hardlimiter(net);

	// training! (perceptron learning rule)
	int error = correctOutput - output;
	//cout << error << endl;

	w1 = w1+(LEARNRATE*error*i1);
	w2 = w2+(LEARNRATE*error*i2);
	wb = wb+(LEARNRATE*error*1);

	return output;
}


LEARNRATE is defined as 0.0001. Then in main I do:
for(int i = 0; i < TRAINTIMES; i++) {
	cout << "(1, 1) --- " << Net.Train(1, 1, 0) << endl;
	cout << "(1, 0) --- " << Net.Train(1, 0, 1) << endl;
	cout << "(0, 1) --- " << Net.Train(0, 1, 1) << endl;
	cout << "(0, 0) --- " << Net.Train(0, 0, 0) << endl;
}


So... can anyone tell me why my net got the meaning of XOR completely backwards? :P There isn't really much more to it than the code above, but ask and I shall post the entire thing.

##### Share on other sites
You can look at these links to see why:

Scroll down a bit: Click
See how to overcome this: Click

##### Share on other sites
Oh... hehe... I thought I set out to do the simplest thing imaginable, but instead I took on some kind of notorious legend. [smile] Way to go.

I'll make something else then.

By the way, big thanks to you, NickGeorgia, it was your tutorials in your journal that finally got me started on this. Gave me a great laugh too. [smile] Big thanks!

##### Share on other sites
No problem, perceptrons are fun. But remember, when you need a "universal approximator" you need at least one hidden layer. Now if you ask me how many nodes in each layer and such I will plead the 5th.

##### Share on other sites
Maybe I don't completely understand this.... but couldn't you make a network of 3 (or more) of these perceptrons and make it learn XOR? It seems like this would be possible because you could teach it Z = (A. !B) + (!A . B) = A ^ B, which would be possible since perceptrons can learn NOT, AND and OR (first link). Am I missing something here?

##### Share on other sites
Good idea, but a perceptron has 1 node = 1 output.
Suppose we did like you said, have 3 perceptrons. Then you would have 3 outputs. How would you combine these outputs? ah ha... see, the hidden layer emerges.

Which will work by the way (but have a hidden layer): see second link diagram from above.

[Edited by - NickGeorgia on January 4, 2006 1:19:19 PM]

##### Share on other sites
Quote:
Original post by NickGeorgia: Good idea, but a perceptron has 1 node = 1 output. Suppose we did like you said, have 3 perceptrons. Then you would have 3 outputs. How would you combine these outputs? ah ha... see, the hidden layer emerges. Which will work by the way: see the second link.

I think what foreignkid means is to have more than 2 inputs in your perceptron, so you should get XOR working if you feed it A, !A, B, !B; that's 4 inputs, 1 output...

T2k

##### Share on other sites
Well in that case, sounds good to me. But XOR usually takes 2 inputs. Clever nonetheless.

##### Share on other sites
Um, shouldn't outputs of

1 ^ 1 = 1
1 ^ 0 = 0
0 ^ 1 = 0
0 ^ 0 = 1

for a neural network be just as difficult as a correctly working XOR? What weights and transfer functions result in that output? XOR can't be learned without a hidden layer, and I don't think that output can either.

##### Share on other sites
Yep, I see. Still the linearly separable problem. Good catch. Hidden layer is our friend again.

##### Share on other sites
Quote:
Original post by NickGeorgia: Yep, I see. Still the linearly separable problem. Good catch. Hidden layer is our friend again.

The AP's reply (which I believe to be correct) indicates that the original implementation is faulty, since it supposedly did not rely on a hidden layer. My original reply to this would have been to repeatedly try different random initial weights, since the stable pattern stored may occasionally be the inverse of what is desired (depending on the initial conditions and the training algorithm used).

##### Share on other sites
Thanks for all the responses. I gave up on XOR and made it learn AND instead. I started trying to do something else with the net, and of course I ran into problems. :P I'm gonna try some more, then I'm going to put up a thread here about it. Feel free to help some more there. [smile]

##### Share on other sites
Why do people use logical operations in NN tests? Is it just because they are SO simple - because to me the whole idea of using a NN to do such operations is a bad example of why you need them in the first place!

Probably I've missed something and doing entirely analytical stuff with NNs (ie long division, cube roots) is a whole field of its own, but I thought I'd check...

##### Share on other sites
I'll try and make an answer:

Other than fundamental testing... well, for one reason, a neural network would have fewer parameters than a truth table (7 compared to 12 for the AND example).

Now expand the example, say we had a 512 x 512 bitmap image (0 or 1) and we want this image to map to a yes or no that the image is my signature. The truth table would be very large. NN would reduce the number of parameters greatly.

So in conclusion, one reason to use a NN is that the number of parameters is reduced. For small stuff like XOR, maybe not so great, but at least we know something more about NNs--i.e. the required hidden layer. Anyway, anyone else have any thoughts?

We can also wait for Mizipzor's stuff :)

##### Share on other sites
You want me to make an answer? To think? :P

Well, the reason I chose to do a logical operation in my neural net was that it was the simplest operation I could come up with. First, everything is binary, input and output; second, there is only need for a single perceptron with only two inputs.

And why "most people" tend to use them is the same reason, I think. They are simple. And articles and similar write-ups about NNs are often laid out as tutorials, and in tutorials you show a very basic example so the readers can get a grasp. Maybe you finish off with a "this could be used to manage operations as complex as...".

But I agree, it's not in the logical operations that the power of NNs lies.

##### Share on other sites
Out of interest, can inputs/outputs of a NN be vector in nature or must they be scalar? So if we're dealing in 3D coordinates must we triple the number of perceptrons etc?

And on a tangent to my other question - CAN you do long division/cube roots etc in a NN; it's the closest model to our brains, but maybe we're so inefficient that you need billions of neurons to do arithmetic in a net?!

##### Share on other sites
Yep, sqrt, sine, etc. You can do that with a neural network within an interval. See my journal for an example. How many nodes is a question though.

As for the other question, I've only seen scalar, though you can view the input as a vector, and a dot product with the input weights is what is normally done first. You could also view the input as a vector feeding a series of different neural networks, I suppose, if you were to do it that way.

##### Share on other sites
What about a complex value? It's the kind of weird thing I wouldn't be surprised if it were true in our brains!

##### Share on other sites
You could feed a complex value to a neural network as two inputs. A vector of length n could be given as n inputs. As far as what really happens in our brains, real neurons are a lot more complicated than the artificial neurons in an artificial neural network. It takes a supercomputer just to simulate one.
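A minimal sketch of that flattening, assuming the network simply takes a list of scalar inputs (the flatten name is illustrative, not from any library):

```cpp
#include <complex>
#include <vector>

// A complex input is just two scalar inputs to the network: real and
// imaginary part. Any length-n vector flattens the same way.
std::vector<float> flatten(std::complex<float> z) {
    return { z.real(), z.imag() };
}
```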

And complex numbers in our brain? Brains work on chemistry, not math. It may be possible to model things using math, but ultimately it's all chemical reactions. Usually the only places complex numbers show up in modeling real world systems is in places where the imaginary parts ultimately cancel out. If you have a model of a system that has imaginary numbers as part of a physical quantity, and there's no sensible alternative way to interpret the imaginary part, something is wrong.

##### Share on other sites
In a real brain there must be some mechanism for error back-propagation on a dendrite-by-dendrite basis.

Anyone know anything about how this works?

P.S. Anyone know yet if the Hidden Layer needs a bias or if it can just inherit from an input layer bias neuron?

##### Share on other sites
There are more difficult challenges faced by a real brain than backpropagation algorithms. How would a brain know what the desired output would be? Artificial neural networks are a gigantic oversimplification.

As far as biases go, it's best to remember that the artificial neural network is described entirely mathematically. Attempts to implement them using object oriented methodology (creating neuron objects, for example) can serve to impair insight into the calculations that are being performed. A layered neural network can be concisely implemented using linear algebra constructs such as matrices. Using such a representation, a bias can be added to the inputs of any layer by simply appending a constant to the input vector. Other implementations may represent this differently, and optimizations may certainly be possible.
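A minimal sketch of that matrix view, with the bias handled by appending a constant 1 to the input vector (the forward name and sigmoid choice here are illustrative, not from any particular library):

```cpp
#include <cmath>
#include <vector>

// One layer = weight matrix W times input vector x, where x gets a
// constant 1 appended so the last column of W acts as the bias.
std::vector<float> forward(const std::vector<std::vector<float>>& W,
                           std::vector<float> x) {
    x.push_back(1.0f); // bias input
    std::vector<float> y(W.size());
    for (size_t i = 0; i < W.size(); i++) {
        float s = 0.0f;
        for (size_t j = 0; j < x.size(); j++)
            s += W[i][j] * x[j];          // dot product of row i with input
        y[i] = 1.0f / (1.0f + std::exp(-s)); // sigmoid activation
    }
    return y;
}
```

For example, a one-node layer with weights {1, 1} and bias -0.5 is forward({{1.0f, 1.0f, -0.5f}}, x); on input {0, 1} the weighted sum is 0.5, so the sigmoid output is a bit above 0.5.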

##### Share on other sites
Quote:
Original post by NickGeorgia: Good idea, but a perceptron has 1 node = 1 output. Suppose we did like you said, have 3 perceptrons. Then you would have 3 outputs. How would you combine these outputs? ah ha... see, the hidden layer emerges. Which will work by the way (but have a hidden layer): see second link diagram from above.

I was looking at the link you gave on neural networks. I am unclear about something: is the perceptron part of the neuron, or is it a neuron?

##### Share on other sites
A perceptron is a type of single node neural network. The picture with several nodes is a multilayer neural network (which may have nodes that are perceptrons).

##### Share on other sites
[n00b status="super"]
perceptron is a kind of node (check). so is a node == neuron?
i'm not seeing what exactly a neuron is (or could be).
[/n00b]

##### Share on other sites
Quote:
Original post by Alpha_ProgDes: [n00b status="super"] perceptron is a kind of node (check). so is a node == neuron? i'm not seeing what exactly a neuron is (or could be). [/n00b]

This might help.