

Neural Network link/weight genes


SFMonster    122
Hi-ho, all! First time posting here; sorry if I screw this up somehow.

After reading this week's gamedev article on linked lists, I got to thinking about using them with neural networks. It's working, and I've set up neurons capable of being used, theoretically, in a true free-form network (as opposed to a layered one). The problem I have now is how to efficiently store, for each neuron, (a) which other destinations it outputs to and (b) the weight of each output.

The idea is to have a network that can evolve its own proper number of inputs and neurons and its own *arrangement* of connections between them. (This is an a-life project, so I'm looking to make the evolution as open-ended as possible.) The information must be encoded in a format that can be mutated between generations. Some things I've considered:

1. Each network has a num_neurons variable that determines the number of neurons in the network, plus two arrays/vectors of values: one listing the output specifications for each neuron in the net, and one with the weights for each of those outputs. The problem here is that I can't find a good way to mutate the info between generations that allows the bounds to change and gives new bounds valid values automatically.

2. Each network has a num_neurons variable, as above, and a single "seed" value. The configuration of the neurons and their output weights is a function of the seed and the value of num_neurons. This would allow a simulation of genes affecting one another; a change in num_neurons would not only add extra neurons, but change the relationships of the whole network. The problem here is that I have no idea how to make an effective algorithm for it. My first few ad-hoc attempts (based on what I know of rand()'s structure, but with static values) led to repeating loops after three or so iterations.

3. Each network has a num_neurons variable, and also a single "string" of digits stored as a long long. For each output of each neuron, take the appropriate number of trailing digits (as many digits as num_neurons has, so for example num_neurons == 4 means the last digit, num_neurons == 12 means the last two digits). (That number % num_neurons) is the neuron for each connection; when the result is the neuron in question, stop and move to the next neuron. I hope that was clear; if not, an example: say there are 6 neurons. For each connection of, say, neuron 4, take the last digit of the long long % 6 (+1), and that is the *other* neuron to which it outputs. When a 4 comes up, stop and move to neuron 5. (There's a rough sketch of this decoding below.)

Of course, I came up with all these just this afternoon, so they're probably all crappy, which is why I'm writing this. Any thoughts?

- Sean
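A rough sketch of how the decoding in scheme 3 could work, under a few assumptions: decode and digits_in are just illustrative names, and neuron indices are 0-based here rather than the 1-based numbering in the example above.

    #include <vector>

    // How many decimal digits num_neurons has; this sets the group size
    // of digits consumed from the genome per connection.
    static int digits_in(int n) {
        int d = 1;
        while (n >= 10) { n /= 10; ++d; }
        return d;
    }

    // Decode a digit-string genome into, for each neuron, the list of
    // neuron indices it outputs to. A neuron's own index acts as the
    // stop marker for that neuron's connection list.
    std::vector<std::vector<int>> decode(long long genome, int num_neurons) {
        long long divisor = 1;
        for (int i = 0; i < digits_in(num_neurons); ++i) divisor *= 10;

        std::vector<std::vector<int>> outputs(num_neurons);
        for (int n = 0; n < num_neurons; ++n) {
            while (genome > 0) {
                int target = static_cast<int>(genome % divisor) % num_neurons;
                genome /= divisor;            // consume those digits
                if (target == n) break;       // own index: done with this neuron
                outputs[n].push_back(target);
            }
        }
        return outputs;
    }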

Zahlman    1682
Try:

Each network has a num_neurons variable and an array of that many Neuron objects. Each Neuron keeps track of which other Neurons it is connected to, and of the weight of each connection. It does this by keeping a vector of Connection objects, each of which in turn contains a 'weight' value and a 'destination' pointer to another Neuron.
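A minimal sketch of that layout, assuming the neurons live in a std::vector, with one deviation from the description above: 'destination' is stored as an index into the network's neuron array rather than a raw pointer, so the array can grow or shrink between generations without invalidating the links.

    #include <cstddef>
    #include <vector>

    struct Connection {
        double weight;             // strength of this link
        std::size_t destination;   // index of the neuron this output feeds into
    };

    struct Neuron {
        double activation = 0.0;           // current output value
        std::vector<Connection> outputs;   // outgoing links and their weights
    };

    struct Network {
        std::vector<Neuron> neurons;       // num_neurons is just neurons.size()
    };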

Neural nets are basically graphs. What I've described is an adjacency-list sort of way to store one. If connectivity in your network is likely to be high, you might instead want the network to store an adjacency matrix (for n neurons, an n-by-n table of values giving the weight of the connection between the row neuron and the column neuron).
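For the dense case, an adjacency-matrix version could be as simple as the following sketch (MatrixNetwork and connect are illustrative names; a weight of 0.0 stands in for "no connection"):

    #include <vector>

    struct MatrixNetwork {
        int num_neurons;
        std::vector<std::vector<double>> weights;  // weights[from][to]

        explicit MatrixNetwork(int n)
            : num_neurons(n), weights(n, std::vector<double>(n, 0.0)) {}

        void connect(int from, int to, double w) { weights[from][to] = w; }
    };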
