# When programming an ANN, do you just store the weights or...


## Recommended Posts

Hi, I'm learning about neural networks from the book "C++ Neural Networks and Fuzzy Logic" by Valluru B. Rao. There was an example of programming a Hopfield network with 4 neurons, so I thought I would program one myself to make sure I had a good understanding of it. When I made it, I stored the weights in a 4 by 4 matrix. Later on, the author actually writes a program for the same example, but instead of storing the weights in a matrix he makes a Neuron class that stores 4 connections (including one to itself, with a weight of 0). So basically I was wondering what people usually do: do you store the weights in a matrix or individually in a Neuron class? Thanks. By the way, if anyone's read this book, I think the author's C++ coding style is atrocious!

##### Share on other sites
I prefer to have an actual Neuron class which stores the weights of its inputs. Something like this is appropriate:

```cpp
#include <vector>

class Neuron;  // forward declaration so Connection can refer to it

class Connection {
public:
    float weight;
    Neuron* source;
};

class Neuron {
private:
    std::vector<Connection> inputs;
    bool fireState;

public:
    Neuron(std::vector<Connection>&);
    bool getState();
    void processInput();
    void addInput(Neuron*, float);
    void dropInput(Neuron*);
    void adjustWeight(Neuron*, float);
};
```

##### Share on other sites
Actually, I guess that makes a lot more sense than just storing the weights in a matrix, because later on when I'm doing feedforward and other networks, it won't be as simple as the Hopfield network.

thx

##### Share on other sites
There are some definite advantages to using matrices. Namely, it makes it easier to take advantage of high-performance matrix operation libraries like BLAS, so that your code uses hardware acceleration where possible. It's also nice to have all of the weights in one place if you ever plan on coding training algorithms other than backprop (like conjugate gradient methods or quasi-Newton methods) that use global information.
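For concreteness, here is a minimal sketch of the matrix approach the OP described: a 4-neuron Hopfield update step with the weights in a 4x4 matrix. The names (`Weights`, `Pattern`, `update`) and the 0-threshold binary activation are my own illustrative choices, not taken from the book.

```cpp
#include <array>

// Weights in one 4x4 matrix; states are binary 0/1.
using Weights = std::array<std::array<float, 4>, 4>;
using Pattern = std::array<int, 4>;

// One synchronous Hopfield update: each neuron fires if its
// weighted input sum meets the threshold (0 here).
Pattern update(const Weights& w, const Pattern& state) {
    Pattern next{};
    for (int i = 0; i < 4; ++i) {
        float sum = 0.0f;
        for (int j = 0; j < 4; ++j)
            sum += w[i][j] * state[j];  // w[i][i] is 0, so no self-input
        next[i] = (sum >= 0.0f) ? 1 : 0;
    }
    return next;
}
```

With everything in one matrix like this, the inner loops could be swapped for a single BLAS matrix-vector call.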

The big advantage of using classes for everything is that it lets you construct arbitrary topologies and use the same data structures for all kinds of different networks.

##### Share on other sites
That is exactly why using a Neuron class is such a good idea. If you have reason to prefer a matrix representation later on, you can store the weights in a static matrix member of the class and just rewrite the accessors to use the matrix, so none of the code that actually uses the Neurons needs to change.

##### Share on other sites
That is a good point, but unless I misunderstand you, you run into all of the disadvantages associated with static members (namely, not being able to create multiple networks at a time). If performance is important, I think a good compromise between a fine-grained OO design and storing everything in one big matrix is to have a Layer class that stores all of the weights for that layer. This lets you keep some of the nice OO design, for example you could have subclasses for output/hidden layers, but also lets you feed forward quickly (through matrix multiplication).
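A minimal sketch of that Layer compromise might look like the following. The names (`Layer`, `feedForward`) and the weight layout are illustrative assumptions, not from any particular library.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One layer holds the whole weight matrix for its neurons;
// weights[i][j] is the weight from input j to neuron i.
class Layer {
public:
    explicit Layer(std::vector<std::vector<float>> weights)
        : weights_(std::move(weights)) {}

    // Plain matrix-vector product; a real implementation could
    // call a BLAS gemv routine here instead of the inner loops.
    std::vector<float> feedForward(const std::vector<float>& input) const {
        std::vector<float> out(weights_.size(), 0.0f);
        for (std::size_t i = 0; i < weights_.size(); ++i)
            for (std::size_t j = 0; j < input.size(); ++j)
                out[i] += weights_[i][j] * input[j];
        return out;
    }

private:
    std::vector<std::vector<float>> weights_;
};
```

Output and hidden layers could then subclass this (say, to apply different activation functions) while the feedforward core stays one matrix operation per layer.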

I should point out, though, that it really all depends on your requirements. In my implementation I used a Neuron class and a Link class, just because it seemed more natural to me and I didn't care about performance at the time. Something else that occurred to me: if you use the Neuron/Link class design, one way to implement conventional optimization techniques would be to have a method in your Network class that dumps pointers to all of the Links into one big matrix, which you could then manipulate.
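A bare-bones sketch of that "gather the weights" idea, assuming a Link-style design: the network exposes pointers to every weight, so a global optimizer can read and write all of them through one flat array. All names here (`Link`, `Network`, `weightPointers`) are hypothetical.

```cpp
#include <vector>

// Each connection's weight lives in its own Link object.
struct Link {
    float weight;
};

struct Network {
    std::vector<Link> links;

    // Collect a pointer to every weight in the network, so an
    // optimizer can treat the whole network as one parameter vector.
    std::vector<float*> weightPointers() {
        std::vector<float*> ptrs;
        ptrs.reserve(links.size());
        for (Link& l : links)
            ptrs.push_back(&l.weight);
        return ptrs;
    }
};
```

Writing through those pointers updates the Links in place, so the OO feedforward code sees the new weights immediately.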

##### Share on other sites
All of these questions really boil down to what you want to get out of your neural network. Do you want it to be re-usable? Is performance important? Are you looking to solve a specific problem or just write a generic neural network library?

Some designs will be great for one set of needs but poor for another set of needs.

##### Share on other sites
I guess right now I'm pretty much just writing generic ANNs, so I may just go with the weights stored in the Neuron class for now.