Artificial Neural Networks

Started by
71 comments, last by Victor-Victor 14 years, 6 months ago
I answered your question.

"Go implement XOR using an ANN".

Not the answer you wanted?
Quote:Original post by willh
I answered your question.

"Go implement XOR using an ANN".

Not the answer you wanted?


"And with all those papers published, can you tell me what is the purpose of bias?"


Obviously not, which was my point.



Anyway, I just implemented this in parallel with OpenGL using plain old multitexturing, though I would most likely switch to an off-screen FBO and simple color blending. Not sure about performance yet, but basically you could have 4 layers of 1,048,576 nodes (a 1024x1024 texture) and process all 4 million in one pass, even on 10-year-old graphics cards. The speed increase can be huge. It can turn days of ANN training into hours.
Quote:Original post by Victor-Victor

"And with all those papers published, can you tell me what is the purpose of bias?"


Obviously not, which was my point.


Of course I can. I won't though because of your attitude. How old are you?


Quote:
Anyway, I just implemented this in parallel with OpenGL using plain old multitexturing, though I would most likely switch to an off-screen FBO and simple color blending. Not sure about performance yet, but basically you could have 4 layers of 1,048,576 nodes (a 1024x1024 texture) and process all 4 million in one pass, even on 10-year-old graphics cards. The speed increase can be huge. It can turn days of ANN training into hours.



I like your enthusiasm.
Since any neural net can be implemented as a computer program on my laptop (if I had unbounded memory), do the claims of the neural net being able to model human intuition also follow for my laptop (and all the other claims too)?
Quote:
Of course I can. I won't though because of your attitude. How old are you?


I'm 47. Can you please give us some links to articles you are referring to?

1. What is the purpose of bias?
2. What is the difference between bias and threshold?
3. What kind of networks require bias and/or threshold and why?



Meanwhile...
//--- ANN-X/O ----------------------------------------------------------
#include <stdio.h>
#include <stdlib.h>

#define st(q) a&(1<<q)?'x':b&(1<<q)?'o':'.'
#define win(q) !(q&7&&q&56&&q&448&&q&73&&q&146&&q&292&&q&273&&q&84)
#define prt printf("\n  %c %c %c\n  %c %c %c\n  %c %c %c  Move[1-9]: ",st(0),st(1),st(2),st(3),st(4),st(5),st(6),st(7),st(8),t)

static int i,j,m,a,b,t,Out[9],Wgt[9][9]={
     0,43,21,43,44, 2,21, 2,42,
    29, 0,29, 2,41, 2, 6,37, 5,
    21,43, 0, 2,44,43,42, 2,21,
    29, 2, 5, 0,41,37,29, 2, 5,
     4, 2, 4, 2, 0, 2, 4, 2, 4,
     5, 2,29,37,41, 0, 5, 2,29,
    21, 2,42,43,44, 2, 0,43,21,
     5,37, 5, 2,41, 2,29, 0,29,
    42, 2,21, 2,44,43,21,43, 0};

void NetMove(int *m){
    for(i=0;i<9;i++) for(j=0;j<9;j++)
        if(a&(1<<i)){
            Out[j]+=Wgt[i][j];
            if(Out[j]==25||Out[j]==41||Out[j]==46||Out[j]==50) Out[j]+=Out[j];
        }
    for(i=0,j=-9;i<9;i++){
        if(j<Out[i] && !(a&(1<<i)||b&(1<<i)))
            j=Out[i], *m=i;
        Out[i]=0;
    }
}

void main(){
BEGIN:
    m=a=b=t=0;
    printf("\n\n\n\n TIC TAC TOE   --- New Game ---\n"); prt;
    do{
        scanf("%2c",&m); a|=1<<(m-'1');
        if(win(~a))
            prt, printf("Net win!"), t=9;
        else if(++t<9){
            NetMove(&m); b|=1<<m; prt;
            if(win(~b)) printf("Net lose!"), t=9;
        }
    }while(++t<9);
    goto BEGIN;
}
//----------------------------------------------------------------------

Tournament: 10,000 games - performance...

ANN vs REC     REC vs ANN     ANN vs ANN     REC vs REC
----------     ----------     ----------     ----------
W: 0  L: 0     W: 0  L: 0     W: 0  L: 0     W: 0  L: 0
Draw:10000     Draw:10000     Draw:10000     Draw:10000
Time:1m28s     Time:1m29s     Time:   52s    Time:1m42s

I found several solutions, some involving extra neurons, some extra layers... but I like this solution from above since the number of weights and neurons is the same as before. What do you think, Willh? Is that bias or threshold there? Or something else? Can you compile the program on your system?

[Edited by - Victor-Victor on October 3, 2009 4:52:25 PM]
This thread is just too painful to watch and not reply.

Bias is so your output can be non-zero even when all your inputs are zero. For example, how would you make an ANN that can compute NOT, with 1 input and 1 output?

Assuming no threshold or bias, then each 'neuron' in the network just outputs a linear combination of its inputs. So, the entire network's outputs can be reduced to a linear combination of its inputs (ie. a big matrix multiply). A network like this could never learn the function x^2, or 1-x.

Adding a bias, the ANN can represent arbitrary affine functions. So, 1-x can be reproduced by an ANN with bias, but x^2 is still not possible. Notice that a threshold and a bias are not the same, because a threshold would not help with this problem.

I don't know too much about thresholds, but let me take a stab. A threshold is just another 'activation' or 'transfer' function. Quite often, you'll see the sigmoid function used for this purpose. The sigmoid function adds enough functionality that the ANN can approximate any function, for example x^2. I suspect that using threshold as the activation function makes the network's output a piecewise combination of affine functions. This will also let you approximate any function (ie. x^2), but only with straight segments.

To answer your last question: your network almost always requires bias and threshold. The cases where you can get away without them are the special ones.

I do strongly encourage you to think about how you would implement NOT, AND, OR, and XOR with an ANN. They are very small networks you can do in your head or on scrap paper, but they should help explain some of these issues.

-Essex
Quote:Original post by essexedwards
I do strongly encourage you to think about how you would implement NOT, AND, OR, and XOR with an ANN. They are very small networks you can do in your head or on scrap paper, but they should help explain some of these issues.

-Essex


That's two people telling you the same thing Victor-Victor. It really is the best way to understand it.
willh,

In case you missed it, the world has just gotten a solution for Tic-Tac-Toe, you know? It's a bit more complex than XOR, don't you think? You're not saying much, but it's clear you're confusing THRESHOLD and BIAS. You should go and look at AND, OR and XOR yourself. But pay attention, because what you will find there is no BIAS, but THRESHOLD. The purpose of a threshold is to mimic the 'activation potential' of biological neurons. A neuron is either firing or not; it's a step function suitable for binary separation, such as AND, OR and XOR.

Bias is an implementation method used in multilayer perceptron networks, the purpose of which is to replace individual neuron thresholds with some non-linear function such as the sigmoid. It's supposed to be an optimization technique, perhaps even an attempt at modeling the temporal distribution of a neuron's firing rate. In any case, sigmoid functions are popular because their derivatives are easy to calculate, which is helpful for some training algorithms.



Essex,

How can you strongly suggest anything and at the same time have a sentence starting with "I don't know too much about thresholds"? First there was the threshold; only later came bias. The purpose of bias is to substitute for a threshold. Bias is completely optional, if not unnecessary. I do strongly encourage you to think about how you would implement NOT, AND, OR, and XOR with an ANN. They are very small networks you can do in your head or on scrap paper, but they will help you learn about thresholds.

[Edited by - Victor-Victor on October 5, 2009 1:53:46 AM]
Quote:Original post by Victor-Victor
willh,

In case you missed it, the world has just gotten a solution for Tic-Tac-Toe, you know?



Fantastic! Another one to add to my ever growing collection! I think I'll put it next to the Tic-Tac-Toe computer made out of Tinker Toy(tm).

http://www.retrothing.com/2006/12/the_tinkertoy_c.html


Quote:Original post by Victor-Victor
It's a bit more complex than XOR, don't you think?


Not really. In any case it's totally irrelevant because you don't have a learning algorithm. How do you expect your ANN to become sentient without one? Without a learning algorithm your ANN will never be able to ask itself 'who made me?', and therefore is not an ANN. It's just an AN. All ANNs must possess the ability to take over mankind, otherwise they are not an ANN. Haven't you ever heard of SkyNet? Case closed.


Quote:Original post by Victor-Victor
In any case sigmoid functions are popular because their derivatives are easy to calculate, which is helpful for some training algorithms.


That's what I said a few posts back. 'Sigmoid helps them play nice with back propagation learning algorithm'. I think you're plagiarizing my work.

You sir, have been permanently banned from the Society of Artificial Neural Networks League of Intelligent Designers. Don't bother writing in to complain to the applications committee, as they'll side with me.


Quote:Original post by Victor-Victor
Essex,

How can you strongly suggest anything and in the same time have a sentence starting with "I don't know too much about thresholds"? First there was a threshold, only later came bias.


Did you know that I'm building a robot that will run as president in the year 2036? You had better be nice to me now because once it takes office you will have to do what it says and it will want to make everyone who said things to me apologize or else they will go to iraq and have to live in the greenbelt unless you like that because it will know because of its precognitive ANN and then it will send you to clean latrines in gitmo or else you will be my friend and will get to ride around in a helicopter and the pilot will be a robot too.


[Edited by - willh on October 5, 2009 11:41:08 AM]
Quote:Original post by Emergent
Let's say I want to learn the XOR function -- a classic example from the early days of ANN research. One method might be to use a multi-layer neural network which "learns" XOR. Fine. Here's another (this is an intentionally-simple example): I'll store a 2d array and say that my estimate of "a XOR b" is simply given by the value in the array, "array[a][b]." Then, here's my learning algorithm:

Given training input (a,b) and training output c, I'll perform the following update rule:

array[a][b] = gamma*c + (1 - gamma)*array[a][b]

where gamma is a real number between 0 and 1 which is my "learning rate." (I'm assuming "array" is an array of doubles or some other approximations of real numbers.)

This will learn XOR.

Why should I use a neural network instead of this?

;-)


Where did you find out about this, or how did you come up with it? What method is that, some kind of Q-learning? It looks a lot like an ANN to me, a single-layer perceptron, but I don't see any thresholds. Are you sure that matrix can learn XOR? Can you explain a bit more how that works? Do you think your technique could be useful for Tic-Tac-Toe?

