# ANN problem


## Recommended Posts

I made an ANN to generate an image. Before the image-generating code was implemented, it worked just fine. But now the outputs are always 0. Here's my code:
```cpp
// bitgen.cpp : Defines the entry point for the console application.
//

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include "ann.h"
#include "bitmap.h"

void generate_input( double * input, int x, int y )
{
    int i = 0;

    for( int n = 0; n < 7; n++ )
        input[i++] = ( ( x & ( 1 << n ) ) == 0 ? 0.0 : 1.0 );

    for( int n = 0; n < 7; n++ )
        input[i++] = ( ( y & ( 1 << n ) ) == 0 ? 0.0 : 1.0 );
}

Pixel24 translate_output( bool * output )
{
    Pixel24 result = { 0, 0, 0 };
    int i = 0;

    for( int n = 0; n < 8; n++ )
        if( output[i++] ) result.r &= ( 1 << n );

    for( int n = 0; n < 8; n++ )
        if( output[i++] ) result.g &= ( 1 << n );

    for( int n = 0; n < 8; n++ )
        if( output[i++] ) result.b &= ( 1 << n );

    return result;
}

int main( int argc, char ** argv )
{
    // initialize
    srand( GetTickCount( ) );

    ann::network n;
    n.init( 14, 24, 10, 10, 0.1 );
    n.randomize( );

    Image image( 128, 128 );
    double input[14];
    bool output[24];

    // generate image
    for( int y = 0; y < 128; y++ )
    {
        for( int x = 0; x < 128; x++ )
        {
            n.reset( );
            generate_input( input, x, y );
            n.write_input( input );
            n.execute( );
            n.read_output( output );
            image( x, y ) = translate_output( output );
        }

        printf( "." );
        fflush( stdout );
    }

    image.SaveBitmapFile( "image.bmp" );

    // clean up
    n.kill( );
    return 0;
}
```


There are 14 inputs and 24 outputs, with 10 layers of 10 nodes in between. There are 7 inputs that represent the bits in x, and 7 that represent the bits in y. And there are 24 outputs for the bits in a pixel. The only thing I can think of is that there's something wrong with my generate_input function, but I can't tell because ANNs are very difficult to debug. Any help is appreciated.

##### Share on other sites
I found the problem. I was using the &= operator instead of |=.

##### Share on other sites
Hi, I've never heard of ANN before (which is why I like this site) - I looked it up a bit and found a lot of general/abstract definitions, but I'd be grateful if you could give an example of a practical use for it. Is it purely restricted to processing images (matrices), or could it be used in, for example, the broad phase of a collision system?

Also does your source in the original post still have the error - i.e. what you have posted should be altered to use the |= operator in translate_output?

##### Share on other sites
Neural networks, in their broadest sense, are function approximators.

For a given set of inputs and outputs, you declare some relation y = f(x).

But since you do not know what f is, a neural network provides one way of simulating it.

A simple example would be giving a table:

| x | y |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |

We assume that the relation between x and y is y = f(x). We then train the neural network using our table. With any luck, and a proper choice of function, we'll then be able to input f(6), and receive 12 as a result.

Needless to say, an incorrect choice of function or an incorrect model will result in completely bogus results. The above trivial model doesn't work for y = x^2, for example.

But rather than guessing the future, neural networks have practical applications when dealing with inaccurate input. You can, for example, train them to recognize hand-writing, or speech, or even shapes.

##### Share on other sites
Ah, okay - I thought it was Approximate Nearest Neighbour - is this the same thing?

##### Share on other sites
```cpp
n.reset( );
generate_input( input, x, y );
n.write_input( input );
n.execute( );
n.read_output( output );
image( x, y ) = translate_output( output );
```

This is a rather clumsy interface. (The 'init' and 'kill' member functions are especially suspect; that's what destructors are for.) Do you really need such fine control? Why not structure it like this?

```cpp
generate_input(input, x, y);
image( x, y ) = translate_output(n.run_on(input));
```

In fact, with the appropriate definitions in ann::network, you could be doing things like:

```cpp
// Our declarations look like:
namespace ann {
    typedef vector<double> input;
    typedef vector<bool> output;

    class network {
        // lots of hidden stuff
        public:
        network(int, int, int, int, double); // sets data in initializer list,
                                             // and does "randomize" work in the constructor body
        output operator()(const input& i);   // resets, writes input, runs and returns output.
        ~network(); // clean up
    };
}

ann::input generate_input(int x, int y) {
    ann::input input(14);

    for (int n = 0; n < 7; n++) {
        input[n] = ((x & (1 << n)) ? 1.0 : 0.0);
        input[n + 7] = ((y & (1 << n)) ? 1.0 : 0.0);
    }
    return input;
}

Pixel24 translate_output(const ann::output& output) {
    Pixel24 result = { 0, 0, 0 };

    for (int r = 0, g = 8, b = 16; r < 8; ++r, ++g, ++b) {
        if (output[r]) result.r |= 1 << r;
        if (output[g]) result.g |= 1 << r;
        if (output[b]) result.b |= 1 << r;
    }
    return result;
}

int main(int argc, char ** argv) {
    srand(GetTickCount());

    ann::network n(14, 24, 10, 10, 0.1);
    Image image(128, 128);

    for (int y = 0; y < 128; y++) {
        for (int x = 0; x < 128; x++) {
            image(x, y) = translate_output(n(generate_input(x, y)));
        }
        cerr << "."; // the error stream is unbuffered by default
    }

    image.SaveBitmapFile( "image.bmp" );
}
```

I showed a few ways to simplify the loops, too. Or were you already trying to optimize for cache coherency or something? Did you profile it?

##### Share on other sites
I pretty much just threw this program together in 2 hours. And on top of that, I had never worked with ANN programming before. Someone just described to me the basic theory and I decided to play around and try to make something cool. But I think I have the inputs and outputs completely wrong. The inputs are just width/height bits in the form of 0.0 or 1.0, and the outputs are bits for the color. I think the outputs might be fine, but I need to rethink the inputs. The resulting image always seems to have some kind of grid, and there's normally only 2-3 different colors in it. I tried mixing it up by XORing the width and height and by NOTing every other bit, and I've tried more nodes (100x100 at the most), but the result is always very restricted.

##### Share on other sites
I came up with a possible solution, but I'm not home at the moment so I can't try it out. I'll have a hidden input layer that comes after the first one. The first layer will accept any value. It will use some complex algorithm to combine the width and height values into a fairly unpredictable (but not random) value. Particular neurons in the hidden input layer will fire depending on what range this value is in. Hopefully this will improve the program and produce less predictable images.

##### Share on other sites
First off, this might be better placed in the Artificial Intelligence Forum. Can I get a moderator??

I assume what you're trying to do is to store an image in the ANN and then reproduce it later, correct? I didn't see it in your post, but do you start with a particular image and then train the NN to reproduce that image based upon X and Y inputs mapping to the appropriate 24-bit pixel detail?

You mention in your post that the net has 10 layers. This seems like an extreme number of layers. I believe it has been proven that 2 hidden layers provide a universal function approximator - this may be associated with classification only, but I think it still holds for continuous problems like this one. The extra layers add an enormous degree of complexity to your problem.

-Kirk