Posted 02 October 2009 - 05:35 PM
Posted 02 October 2009 - 10:44 PM
Quote:
Original post by willh
I answered your question.
"Go implement XOR using an ANN".
Not the answer you wanted?
Posted 03 October 2009 - 05:02 AM
Quote:
Original post by Victor-Victor
"And with all those papers published, can you tell me what is the purpose of bias?"
Obviously not, which was my point.
Quote:
Anyway, I just implemented this in parallel with OpenGL using plain old multitexturing, but I would most likely switch to an off-screen FBO and simple color blending. Not sure about performance yet, but basically you could have 4 layers of 1,048,576 nodes each (a 1024x1024 texture) and process all 4 million in one pass, even on 10-year-old graphics cards. The speed increase can be enormous; it can turn days of ANN training into hours.
Posted 03 October 2009 - 06:21 AM
Posted 03 October 2009 - 09:52 AM
Quote:
Of course I can. I won't though because of your attitude. How old are you?
I found several solutions, some involving extra neurons, some extra layers... but I like the solution above, since the number of weights and neurons stays the same as before. What do you think, willh? Is that a bias or a threshold there? Or something else? Can you compile the program on your system?
//--- ANN-X/O ----------------------------------------------------------
#include <stdio.h>
#include <stdlib.h>
#define st(q) a&(1<<q)?'x':b&(1<<q)?'o':'.'
#define win(q) !(q&7&&q&56&&q&448&&q&73&&q&146&&q&292&&q&273&&q&84)
#define prt printf("\n %c %c %c\n %c %c %c\n %c %c %c Move[1-9]: ", \
st(0),st(1),st(2),st(3),st(4),st(5),st(6),st(7),st(8))
static int i,j,m,a,b,t,Out[9],Wgt[9][9]={0,43,21,43,44,2,21,2,42,29,0,29,
2,41,2,6,37,5,21,43,0,2,44,43,42,2,21,29,2,5,0,41,37,29,2,5,4,2,4,2,0,2,
4,2,4,5,2,29,37,41,0,5,2,29,21,2,42,43,44,2,0,43,21,5,37,5,2,41,2,29,0,
29,42,2,21,2,44,43,21,43,0};
void NetMove(int *m){
/* Accumulate each square's weighted sum, driven by the occupied input cells */
for(i=0;i<9;i++) for(j=0;j<9;j++)
 if(a&(1<<i)){
  Out[j]+= Wgt[i][j];
  /* When a running sum hits one of these values, double it */
  if(Out[j]==25||Out[j]==41||Out[j]==46||Out[j]==50)
   Out[j]+= Out[j];
 }
/* Pick the free square with the highest output, clearing each sum as we go */
for(i=0,j=-9;i<9;i++){
 if(j<Out[i] && !(a&(1<<i)||b&(1<<i)))
  j= Out[i], *m=i;
 Out[i]= 0;
}
}
int main(void){
BEGIN: m=a=b=t=0;
 printf("\n\n\n\n TIC TAC TOE --- New Game ---\n"); prt;
 do{
  int c; /* read the next digit key; the old scanf("%2c",&m) trick was undefined behavior */
  do c=getchar(); while(c!=EOF && (c<'1'||c>'9'));
  if(c==EOF) return 0;
  m= c-'1'; a|= 1<<m;
  if(win(~a))
   prt, printf("You win!"), t=9;
  else if(++t<9){
   NetMove(&m); b|= 1<<m; prt;
   if(win(~b)) printf("Net wins!"), t=9;
  }
 }while(++t<9);
 goto BEGIN;
}
//----------------------------------------------------------------------
Tournament: 10,000 games - performance...
ANN vs REC REC vs ANN ANN vs ANN REC vs REC
---------- ---------- ---------- ----------
W: 0 L: 0 W: 0 L: 0 W: 0 L: 0 W: 0 L: 0
Draw:10000 Draw:10000 Draw:10000 Draw:10000
Time:1m28s Time:1m29s Time: 52s Time:1m42s
Posted 04 October 2009 - 06:15 AM
Posted 04 October 2009 - 08:29 AM
Quote:
Original post by essexedwards
I do strongly encourage you to think about how you would implement NOT, AND, OR, and XOR with an ANN. They are very small networks you can do in your head or on scrap paper, but they should help explain some of these issues.
-Essex
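Essex's exercise is worth actually doing. As one illustrative sketch (hand-picked weights and thresholds, not a trained network), step-threshold units in C handle the linearly separable gates directly, and XOR falls out of a single hidden layer:

```c
/* A step-threshold unit: fires when the weighted input sum reaches the threshold. */
int unit(int x0, int x1, int w0, int w1, int threshold){
    return (w0*x0 + w1*x1) >= threshold;
}

/* NOT, AND and OR are linearly separable, so one unit each suffices. */
int NOT(int a)        { return unit(a, 0, -1, 0, 0); }
int AND(int a, int b) { return unit(a, b,  1, 1, 2); }
int OR (int a, int b) { return unit(a, b,  1, 1, 1); }

/* XOR is not linearly separable: use a hidden layer computing (a OR b)
   and (a AND b), then an output unit that fires only for OR-but-not-AND. */
int XOR(int a, int b){
    int h0 = OR(a, b), h1 = AND(a, b);
    return unit(h0, h1, 1, -1, 1);
}
```

The output unit's negative weight on the AND hidden unit is what carves out the (1,1) corner that a single-layer perceptron cannot separate.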
Posted 04 October 2009 - 06:53 PM
Posted 05 October 2009 - 02:41 AM
Quote:
Original post by Victor-Victor
willh,
In case you missed it, the world has just gotten a solution for Tic-Tac-Toe, you know?
Quote:
Original post by Victor-Victor
It's a bit more complex than XOR, don't you think?
Quote:
Original post by Victor-Victor
In any case sigmoid functions are popular because their derivatives are easy to calculate, which is helpful for some training algorithms.
Quote:
Original post by Victor-Victor
Essex,
How can you strongly suggest anything and at the same time have a sentence starting with "I don't know too much about thresholds"? First there was the threshold; only later came bias.
Posted 05 October 2009 - 04:33 AM
Quote:
Original post by Emergent
Let's say I want to learn the XOR function -- a classic example from the early days of ANN research. One method might be to use a multi-layer neural network which "learns" XOR. Fine. Here's another (this is an intentionally-simple example): I'll store a 2d array and say that my estimate of "a XOR b" is simply given by the value in the array, "array[a][b]." Then, here's my learning algorithm:
Given training input (a,b) and training output c, I'll perform the following update rule:
array[a][b] = gamma*c + (1 - gamma)*array[a][b]
where gamma is a real number between 0 and 1 which is my "learning rate." (I'm assuming "array" is an array of doubles or some other approximations of real numbers.)
This will learn XOR.
Why should I use a neural network instead of this?
;-)
Posted 05 October 2009 - 05:47 AM
Posted 05 October 2009 - 08:01 AM
Quote:
Original post by Victor-Victor Quote:
Original post by Emergent
Let's say I want to learn the XOR function [...]
Where did you find out about this, or how did you come up with it? What method is that, some kind of Q-learning? It looks a lot like an ANN to me, a single-layer perceptron, but I don't see any thresholds. Are you sure that matrix can learn XOR? Can you explain a bit more how that works? Do you think your technique could be useful for Tic-Tac-Toe?
Posted 05 October 2009 - 12:41 PM
Posted 05 October 2009 - 02:07 PM
Quote:
Original post by Victor-Victor
I don't understand what you mean by "toy example" and "not practical suggestion".
Quote:
Is it true or not?
Quote:
ANN research was frozen for 30 years just because everyone assumed ANNs can't do XOR. If your "matrix", or whatever you call it, and your learning method can learn XOR, then of course it is practical. Not only that, it's pretty important too, as there is nothing like it on the whole internet.
Quote:
So please, are you sure it can learn XOR? Can you demonstrate the procedure?
Posted 05 October 2009 - 06:17 PM
Initial table:
0.5000 0.5000
0.5000 0.5000
"Training" table...
--- Iteration 1 -------
Input = (0, 1) Output = 1
New table:
0.5000 0.7500
0.5000 0.5000
--- Iteration 2 -------
Input = (1, 1) Output = 0
New table:
0.5000 0.7500
0.5000 0.2500
--- Iteration 3 -------
Input = (0, 0) Output = 0
New table:
0.2500 0.7500
0.5000 0.2500
--- Iteration 4 -------
Input = (0, 1) Output = 1
New table:
0.2500 0.8750
0.5000 0.2500
--- Iteration 5 -------
Input = (0, 1) Output = 1
New table:
0.2500 0.9375
0.5000 0.2500
--- Iteration 6 -------
Input = (0, 1) Output = 1
New table:
0.2500 0.9688
0.5000 0.2500
--- Iteration 7 -------
Input = (0, 1) Output = 1
New table:
0.2500 0.9844
0.5000 0.2500
--- Iteration 8 -------
Input = (1, 1) Output = 0
New table:
0.2500 0.9844
0.5000 0.1250
--- Iteration 9 -------
Input = (1, 0) Output = 1
New table:
0.2500 0.9844
0.7500 0.1250
--- Iteration 10 -------
Input = (1, 0) Output = 1
New table:
0.2500 0.9844
0.8750 0.1250
--- Iteration 11 -------
Input = (0, 1) Output = 1
New table:
0.2500 0.9922
0.8750 0.1250
--- Iteration 12 -------
Input = (1, 0) Output = 1
New table:
0.2500 0.9922
0.9375 0.1250
--- Iteration 13 -------
Input = (1, 1) Output = 0
New table:
0.2500 0.9922
0.9375 0.0625
--- Iteration 14 -------
Input = (1, 1) Output = 0
New table:
0.2500 0.9922
0.9375 0.0312
--- Iteration 15 -------
Input = (1, 1) Output = 0
New table:
0.2500 0.9922
0.9375 0.0156
-- Done ----
Learned (thresholded) values:
0 1
1 0
gamma = 0.5; % Set "Learning rate"
v = 0.5*ones(2,2); % Initialize table
fprintf(1, 'Initial table:\n');
disp(v);
% "Train" table
fprintf(1, '"Training" table...\n');
for k=1:15
fprintf(1, '--- Iteration %g -------\n', k);
% Get a random input/output pair
a = (rand(1) > 0.5);
b = (rand(1) > 0.5);
f = (a | b) & ~(a & b);
% Update table
v(a+1,b+1) = (1 - gamma)*v(a+1,b+1) + gamma*f;
fprintf(1, 'Input = (%g, %g) Output = %g\nNew table:\n', a, b, f);
disp(v);
end
% Threshold the result
v_t = (v > 0.5);
fprintf(1, '-- Done ----\nLearned (thresholded) values:\n');
disp(v_t);
Posted 05 October 2009 - 10:29 PM
//--- OpenGL ANN-X/O ----------------------------------------------------
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <GL/GLee.h>
#include <GL/glut.h>
#pragma comment(lib, "glee.lib")
#define win(q) !(q&7&&q&56&&q&448&&q&73&&q&146&&q&292&&q&273&&q&84)
GLuint In[9]; static int sw,sh,i,j,k,a,b,t=9,m=99,Out[9],Wgt[9][9]=
{0,43,21,43,44,2,21,2,42,29,0,29,2,41,2,6,37,5,21,43,0,2,44,43,42,2,
21,29,2,5,0,41,37,29,2,5,4,2,4,2,0,2,4,2,4,5,2,29,37,41,0,5,2,29,21,
2,42,43,44,2,0,43,21,5,37,5,2,41,2,29,0,29,42,2,21,2,44,43,21,43,0};
char say[100]; GLubyte Board[16];
void NetMove(int *m){
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
for(i=0;i<9;i++) if(a&(1<<i)){
glBindTexture(GL_TEXTURE_2D, In[i]);
glBegin(GL_QUADS);
glTexCoord2f(0,1); glVertex2f(0, 0);
glTexCoord2f(1,1); glVertex2f(4, 0 );
glTexCoord2f(1,0); glVertex2f(4, 4);
glTexCoord2f(0,0); glVertex2f(0, 4);
glEnd();
}
glReadPixels(0,sh-4, 4, 4,
GL_RED, GL_UNSIGNED_BYTE, Board);
for(i=0,j=0; i<9; i++){
if(i%3==0) j++; Out[i]= Board[i+j];
if(Out[i]==25||Out[i]==41
|| Out[i]==46||Out[i]==50)Out[i]+=Out[i];
}
for(i=0,j=-9;i<9;i++) if(j<Out[i]
&& !(a&(1<<i)||b&(1<<i))) j= Out[i], *m=i;
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
}
static void TxtOut(int x, int y, char *str){
int n, len= (int)strlen(str);
glRasterPos2f(x,y); for(n=0; n<len; n++)
glutBitmapCharacter(GLUT_BITMAP_9_BY_15, str[n]);
}
static void display(){
glClear(GL_COLOR_BUFFER_BIT);
TxtOut(10,30,"TIC - TAC - TOE");
if(m<99 && t<9){
a|= 1<<m; t++; if(t++<9)
NetMove(&m), b|= 1<<m; m=99;
}else if(t>=9 || win(~a) || win(~b)){
if(win(~a)) TxtOut(120,100,"You win!");
else if(win(~b)) TxtOut(120,100,"You lose!");
else TxtOut(120,100,"Draw..."); t= 99;
}
for(i=0,j=0,k=0; i<9; i++){
k++; if(i%3==0) j++, k=0;
if(a&(1<<i)) TxtOut(30+k*20, 50+j*20, "x");
else if(b&(1<<i)) TxtOut(30+k*20, 50+j*20, "o");
else TxtOut(30+k*20, 50+j*20, ".");
TxtOut(20,150, "Move[1-9]");
} glutSwapBuffers();
}
void init(int w, int h){
glViewport(0,0,sw=w,sh=h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity(); glOrtho(0,w,h, 0,0,1);
glMatrixMode(GL_MODELVIEW); glLoadIdentity();
glDisable(GL_DEPTH_TEST); glClearColor(0,0,0, 0);
glEnable(GL_COLOR_MATERIAL); glShadeModel(GL_FLAT);
glBlendFunc(GL_ONE, GL_ONE); glBlendEquation(GL_FUNC_ADD);
for(t=0;t<9;t++){ for(i=0,j=0; i < 9; i++){
if(i%3==0) j++; Board[i+j]= Wgt[t][i];}
glGenTextures(1,&In[t]); glBindTexture(GL_TEXTURE_2D,In[t]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D,0,1, 4, 4,0, GL_RED,GL_UNSIGNED_BYTE, Board);
}
}
void kbd_loop(unsigned char key, int x, int y){
switch(key){
case 27: exit(0); break;
} m=key-'1'; if(m<0||m>8||(t<9
&& (a&(1<<m)||b&(1<<m)))) m=99;
else if(t>=9 && m<99) a=b=t= 0;
display(); glutPostRedisplay();
}
int main(int argc, char **argv){
 glutInit(&argc, argv); /* must be called before any other GLUT function */
 glutInitDisplayMode(GLUT_DOUBLE);
 glutInitWindowSize(320, 200);
 glutCreateWindow("X/O-ANN");
 glutKeyboardFunc(kbd_loop);
 glutDisplayFunc(display);
 glutReshapeFunc(init);
 glutMainLoop();
 return 0;
}
//-----------------------------------------------------------------------
Posted 06 October 2009 - 04:06 AM
Quote:
Original post by Victor-Victor
f = (a | b) & ~(a & b); What are you trying to model, ZX Spectrum? Your function is more complex than XOR, why not: f = a^b?
Quote:
I take it you're joking.
Quote:
You took the learning technique from ANNs, you have weights and you have thresholds, and you ask: "Why should I use a neural network instead of this (toy example)?"
Quote:
Anyway, can you provide one of those minimax algorithms for Tic-Tac-Toe that gives instant solutions? Do you think any of them can beat my ANN-X/O in memory usage or speed?
Posted 06 October 2009 - 06:31 AM
Quote:
Original post by Emergent Quote:
Original post by Victor-Victor
f = (a | b) & ~(a & b); What are you trying to model, ZX Spectrum? Your function is more complex than XOR, why not: f = a^b?
This is XOR! :-) MATLAB, the language I used, does have a "xor" function, but I had forgotten about it when I whipped this up, so I just wrote the above, which means "f = (a OR b) AND NOT(a AND b)"; hopefully you can see that this is precisely equivalent.
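A four-row truth table settles it; in C (with 0/1 operands, so the bitwise operators act as logical ones) the two forms agree everywhere:

```c
/* Emergent's boolean form of XOR versus the ^ operator. */
int xor_bool(int a, int b){ return (a | b) & ~(a & b); }
int xor_op(int a, int b)  { return a ^ b; }
```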
Quote:
Well, although as I've stated above there are a number of things which make just storing a big table impractical for large problems, for small problems there's really nothing wrong with it. And representing a function as a table is the simplest possible example of representing a function as a sum of basis functions; here the bases are just the Kronecker delta functions.
Quote:
So it gives a nice segue into some slightly more sophisticated (albeit still quite simple) methods based on basis expansions (which generalize better and are more applicable to higher-dimensional problems), and this is what I'd hoped my post would hint at.
Quote:
It's true that if you insist you can think of my example as a special case of fancier learning algorithms.
Quote:
For instance, a variety of Q-learning reduces to this for a one-step plan with only terminal cost (but then you've really eliminated the whole reason to use Q-learning, which is that it exploits Bellman's optimality principle). But I don't think this point of view is particularly productive.
What I described is also somewhat close to a Kohonen map, but not exactly. Kohonen maps are sometimes described as neural networks, though again I do not think this is a particularly useful way to think about them, because they bear little resemblance to the sigmoid-of-weighted-sum networks that people usually mean when they say "ANN."
The "learning" technique I described is just a first-order lowpass filter... Similar ideas are used in a billion different places.
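That lowpass reading is easy to verify: the update rule is one step of an exponential moving average, and iterating it reproduces the table trace above (e.g. cell (0,1) moving 0.5 -> 0.75 -> 0.875 -> ... toward 1). A sketch:

```c
/* One step of a first-order lowpass filter / exponential moving average:
   the state moves a fraction gamma of the remaining distance to the sample. */
double lowpass_step(double state, double sample, double gamma){
    return (1.0 - gamma)*state + gamma*sample;
}
```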
Posted 06 October 2009 - 09:12 AM
private int MakePerfectTicTacToeMoveWithoutNeuralNetwork ( int[][] board, int toPlay)
{
int moveNum = 0;
// Count how many moves have been played so far //
for ( int x = 0; x < 3; x++)
{
for ( int y = 0; y < 3; y++)
{
if (board[x][y] != 0)
moveNum++;
}
}
// On first move, always take the center square //
if ( moveNum == 0)
return 4;
// On second move, take the center if possible, otherwise take a corner //
if ( moveNum == 1)
{
if ( board[1][1] == 0)
return 4;
else
return 0;
}
// Third move, always take a corner
if ( moveNum == 2)
{
if ( board[0][0] == 0)
return 0;
else if ( board[2][0] == 0)
return 2;
else if ( board[0][2] == 0)
return 6;
return 8;
}
// All other moves //
// First take the move that wins //
int moveIndex = 0;
for ( int y = 0; y < 3; y++)
{
for ( int x = 0; x < 3; x++)
{
if ( board[x][y] == 0)
{
// Check horizontal //
if ( x == 0)
{
if ( board[x+1][y] == toPlay && board[x+2][y] == toPlay)
return moveIndex;
}
else if ( x == 1)
{
if ( board[x-1][y] == toPlay && board[x+1][y] == toPlay)
return moveIndex;
}
else
{
if ( board[x-2][y] == toPlay && board[x-1][y] == toPlay)
return moveIndex;
}
// Check vertical //
if ( y == 0)
{
if ( board[x][y + 1] == toPlay && board[x][y + 2] == toPlay)
return moveIndex;
}
else if ( y == 1)
{
if ( board[x][y - 1] == toPlay && board[x][y + 1] == toPlay)
return moveIndex;
}
else
{
if ( board[x][y - 2] == toPlay && board[x][y - 1] == toPlay)
return moveIndex;
}
// Check diagonal //
if ( x == 0 && y == 0)
{
if ( board[1][1] == toPlay && board[2][2] == toPlay)
return moveIndex;
}
else if ( x == 2 && y == 0)
{
if ( board[1][1] == toPlay && board[0][2] == toPlay)
return moveIndex;
}
else if ( x == 0 && y == 2)
{
if ( board[1][1] == toPlay && board[2][0] == toPlay)
return moveIndex;
}
else if ( x == 2 && y == 2)
{
if ( board[1][1] == toPlay && board[0][0] == toPlay)
return moveIndex;
}
}
moveIndex++;
}
}
// Make a move that avoids losing //
if ( toPlay == 1)
toPlay = 2;
else
toPlay = 1;
moveIndex = 0;
for ( int y = 0; y < 3; y++)
{
for ( int x = 0; x < 3; x++)
{
if ( board[x][y] == 0)
{
// Check horizontal //
if ( x == 0)
{
if ( board[x+1][y] == toPlay && board[x+2][y] == toPlay)
return moveIndex;
}
else if ( x == 1)
{
if ( board[x-1][y] == toPlay && board[x+1][y] == toPlay)
return moveIndex;
}
else
{
if ( board[x-2][y] == toPlay && board[x-1][y] == toPlay)
return moveIndex;
}
// Check vertical //
if ( y == 0)
{
if ( board[x][y + 1] == toPlay && board[x][y + 2] == toPlay)
return moveIndex;
}
else if ( y == 1)
{
if ( board[x][y - 1] == toPlay && board[x][y + 1] == toPlay)
return moveIndex;
}
else
{
if ( board[x][y - 2] == toPlay && board[x][y - 1] == toPlay)
return moveIndex;
}
// Check diagonal //
if ( x == 0 && y == 0)
{
if ( board[1][1] == toPlay && board[2][2] == toPlay)
return moveIndex;
}
else if ( x == 2 && y == 0)
{
if ( board[1][1] == toPlay && board[0][2] == toPlay)
return moveIndex;
}
else if ( x == 0 && y == 2)
{
if ( board[1][1] == toPlay && board[2][0] == toPlay)
return moveIndex;
}
else if ( x == 2 && y == 2)
{
if ( board[1][1] == toPlay && board[0][0] == toPlay)
return moveIndex;
}
}
moveIndex++;
}
}
// Otherwise just take the next possible move //
if ( board[0][0] == 0)
return 0;
if ( board[2][0] == 0)
return 2;
if ( board[0][2] == 0)
return 6;
if ( board[2][2] == 0)
return 8;
// Scan in the same y-major order as above, counting every cell,
// so the returned index matches the move numbering used elsewhere //
moveIndex = 0;
for ( int y = 0; y < 3; y++)
{
for ( int x = 0; x < 3; x++)
{
if ( board[x][y] == 0)
return moveIndex;
moveIndex++;
}
}
return -1;
}
Posted 06 October 2009 - 09:36 AM
Quote:
Original post by Emergent
Here I want to point you to a particular Java applet I once saw, but I can't seem to find it, so for now I'll point you to the Wikipedia article. I also have some (embarrassingly messy) source code for minimax Tic Tac Toe kicking around; I'll need to get to my laptop before I can post it. It will beat a large ANN in terms of memory usage or gameplay (it plays perfectly), but probably not in execution time (though this is still small).
This program never loses, but it sometimes fails to win when it could have. ANN-X/O makes the same mistakes, since it is not aware of its own moves; however, the neural network version is about two times faster, as you can measure yourself.
#define f(X,g,p,N,M)M{return X?a&b?a&b&1<<i&&m(b,a^1<<i,9)>8?i:M:0:9;}main(){
int a=511,b=a,i=4;for(;X;b^=1<<N)a^=1<<g-'1',g,p(N+'1'),p('\n');} /* 123 */
f(i--&&b&7&&b&56&&b&448&&b&73&&b&146&&b&292&&b /* John Rickard */ /* 456 */
&273&&b&84,getchar(),putchar,m(b,a,9),m(a,b,i)) /* xxx@xxxx.xx.xx */ /* 789 */
a(X){/*/X=- a(X){/*/X=-
-1;F;X=- -1;F;X=-
-1;F;}/*/ -1;F;}/*/
char*z[]={"char*z[]={","a(X){/*/X=-","-1;F;X=-","-1;F;}/*/","9999999999 :-| ",
"int q,i,j,k,X,O=0,H;S(x)int*x;{X+=X;O+=O;*x+1?*x+2||X++:O++;*x=1;}L(n){for(*",
"z[i=1]=n+97;i<4;i++)M(256),s(i),M(128),s(i),M(64),N;X*=8;O*=8;}s(R){char*r=z",
"[R];for(q&&Q;*r;)P(*r++);q&&(Q,P(44));}M(m){P(9);i-2||P(X&m?88:O&m?48:32);P(",
"9);}y(A){for(j=8;j;)~A&w[--j]||(q=0);}e(W,Z){for(i-=i*q;i<9&&q;)y(W|(1<<i++&",
"~Z));}R(){for(k=J[*J-48]-40;k;)e(w[k--],X|O);}main(u,v)char**v;{a(q=1);b(1);",
"c(1);*J=--u?O?*J:*v[1]:53;X|=u<<57-*v[u];y(X);K=40+q;q?e(O,X),q&&(K='|'),e(X",
",O),R(),O|=1<<--i:J[*J-48+(X=O=0)]--;L(q=0);for(s(i=0);q=i<12;)s(i++),i>4&&N",
";s(q=12);P(48);P('}');P(59);N;q=0;L(1);for(i=5;i<13;)s(i++),N;L(2);}",0};
b(X){/*/X=- b(X){/*/X=-
-1;F;X=- -1;F;X=-
-1;F;}/*/ -1;F;}/*/
int q,i,j,k,X,O=0,H;S(x)int*x;{X+=X;O+=O;*x+1?*x+2||X++:O++;*x=1;}L(n){for(*
z[i=1]=n+97;i<4;i++)M(256),s(i),M(128),s(i),M(64),N;X*=8;O*=8;}s(R){char*r=z
[R];for(q&&Q;*r;)P(*r++);q&&(Q,P(44));}M(m){P(9);i-2||P(X&m?88:O&m?48:32);P(
9);}y(A){for(j=8;j;)~A&w[--j]||(q=0);}e(W,Z){for(i-=i*q;i<9&&q;)y(W|(1<<i++&
~Z));}R(){for(k=J[*J-48]-40;k;)e(w[k--],X|O);}main(u,v)char**v;{a(q=1);b(1);
c(1);*J=--u?O?*J:*v[1]:53;X|=u<<57-*v[u];y(X);K=40+q;q?e(O,X),q&&(K='|'),e(X
,O),R(),O|=1<<--i:J[*J-48+(X=O=0)]--;L(q=0);for(s(i=0);q=i<12;)s(i++),i>4&&N
;s(q=12);P(48);P('}');P(59);N;q=0;L(1);for(i=5;i<13;)s(i++),N;L(2);}
c(X){/*/X=- c(X){/*/X=-
-1;F;X=- -1;F;X=-
-1;F;}/*/ -1;F;}/*/
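For completeness, since the minimax program mentioned earlier never made it into the thread: a compact negamax player fits comfortably in C. This is an illustrative sketch, not anyone's posted code; it reuses the same 9-bit board encoding and the eight line masks (7, 56, 448, 73, 146, 292, 273, 84) from ANN-X/O, and it plays perfectly at the cost of searching instead of table lookups.

```c
/* The eight winning lines of the 3x3 board, one bit per cell. */
const int lines[8] = {7, 56, 448, 73, 146, 292, 273, 84};

int won(int q){
    int i;
    for(i = 0; i < 8; i++)
        if((q & lines[i]) == lines[i]) return 1;
    return 0;
}

/* Negamax: 'me' is the side to move. Returns +1/0/-1 for a win/draw/loss
   under perfect play; *best receives the chosen cell index (0-8). */
int negamax(int me, int opp, int *best){
    int i, dummy, score = -2;
    if(won(opp)) return -1;           /* the previous move completed a line */
    if((me | opp) == 511) return 0;   /* all nine cells taken: draw */
    for(i = 0; i < 9; i++){
        if((me | opp) & (1 << i)) continue;
        int s = -negamax(opp, me | (1 << i), &dummy);
        if(s > score){ score = s; if(best) *best = i; }
    }
    return score;
}
```

From the empty board this returns 0, Tic-Tac-Toe being a draw under perfect play, and the full search still finishes in well under a second, which is why a lookup table or an ANN mainly buys speed here rather than strength.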