potential neuron output values for evolving modular 'nervous systems'

Started by
1 comment, last by mattmrunson 13 years, 4 months ago
I am in the process of formulating a design for a program to solve problems of agent control/decision making in a simulated world/computational ecology via evolution of 'networks of networks', AKA 'nervous systems'. The nervous systems should be able to evolve to process multiple 'sensory' inputs and enact multiple behaviors in a way that effectively achieves given goals. As I presently imagine it, inputs to the system would most naturally occur in different forms, such as binary (0 and 1), graded 0 to n (e.g. 1000), signed binary (-1 and 1), etc.

What I am trying to decide is whether I should give evolution the option of landing on more than one form of output, such as those above, for a group of neurons, or just give it one form to work with--and in either case, which form(s)? The disadvantage of just letting evolution decide what's best is that (I'm assuming) providing too many useless options can hinder evolutionary speed, and perhaps even success, and is also, in this case, substantially more work to code and more code to deal with.
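One way to picture the choice: each output form is just a different squashing function applied to a neuron group's raw activation, and the genome could carry a per-group index selecting among them. The following is a minimal sketch of that idea; all names and the logistic squash for the graded form are my own illustrative assumptions, not part of any existing design.

```python
import math

# Hypothetical output encodings an evolutionary search could choose among;
# each maps a raw weighted sum to one of the forms described above.
def binary(x):
    """Binary 0/1 output."""
    return 1 if x >= 0 else 0

def signed_binary(x):
    """Binary +/- output: -1 or 1."""
    return 1 if x >= 0 else -1

def graded(x, n=1000):
    """Graded 0..n output via a logistic squash (squash choice is assumed)."""
    return round(n / (1 + math.exp(-x)))

# A genome could carry an index into this list per neuron group, so a
# mutation can flip which encoding a group uses.
OUTPUT_FORMS = [binary, signed_binary, graded]
```

Whether exposing all three to evolution pays off is exactly the open question: more forms means a larger search space, but each extra form is only one function here, so the coding cost stays small.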
You have to consider sustainability. The system has to have one cycle that runs, records its states, records outcomes, and then links outcomes to states. One way to do that is to create many weights between outputs and memory slots, incrementing a weight when: a) the value matches a certain output value, and b) the reward is positive.
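The scheme above can be sketched in a few lines. This is only an illustration under assumed names (a weight matrix linking each output slot to each memory slot), with weights growing only when both conditions hold:

```python
# Minimal sketch of the reply's scheme: weights[i][j] links output slot i
# to memory slot j, reinforced only on a value match with positive reward.
def update_weights(weights, outputs, memory, reward, lr=0.1):
    if reward <= 0:
        return  # condition (b): only positive outcomes reinforce links
    for i, out in enumerate(outputs):
        for j, mem in enumerate(memory):
            if out == mem:                     # condition (a): value match
                weights[i][j] += lr * reward   # strengthen the link
```

The increment scales with the reward here; a fixed-step increment would also fit the description.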

I find from my experiments with artificial neural networks that there needs to be a reward: a variable that signals whether the logic is performing better or worse. That will be the main guide of adaptation.

Also, have you considered how much memory and computational time would be required to do anything useful with an ANN? According to my calculations, creating a system that dynamic takes a crapload of computing power.

But anyway, your project sounds interesting; ANNs have intrigued me for some time. If you are interested, I'm starting a project soon to build a neural network on a distributed computing model: running pieces of the network on many different computers (like SETI@home), each donating a fraction of its hard drive and processing power to process neural data.
My biggest inspiration for this project comes from the Polyworld project:
Polyworld

Essentially, reward in this system is determined not by Hebbian learning or any similar training approach, but by reproductive success, as in biological evolution. Weights, then, are modified by mutations at the 'birth' of a new agent, and the best weights are selected and perpetuated by evolution (reproductive success). However, once the project advances far enough, I plan to implement the potential for evolved reward-based weight optimization via some sort of Hebbian learning.
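The birth-time mutation and fitness-proportional selection described above can be sketched as follows. Function names, the Gaussian perturbation, and the mutation rate are all illustrative assumptions, not the project's actual code:

```python
import random

def mutate(parent_weights, rate=0.05, scale=0.1):
    """Child weights: each weight has a small chance (rate) of receiving a
    Gaussian perturbation at 'birth'; no learning occurs during life."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in parent_weights]

def next_generation(population, fitness, n_offspring):
    """Parents reproduce in proportion to fitness, i.e. reproductive
    success stands in for reward; the best weights propagate."""
    parents = random.choices(population, weights=fitness, k=n_offspring)
    return [mutate(p) for p in parents]
```

In the actual simulation, fitness would be implicit (agents that survive and mate reproduce) rather than an explicit score, but the selection pressure is the same.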

As far as how much memory and speed this will take, I am trying to account for that as much as possible, but I don't have an excellent handle on it. Also, because evolutionary algorithms benefit from having as many agents as possible, it's basically inevitable that it will max out the capacity of any system it runs on. I anticipate running day- or week-long simulations.

As for the cycle thing, I'm not sure it applies with an evolutionary approach. However, because each agent consists of multiple ANNs organized in a semi-hierarchical fashion, where the output from one net becomes the input to another, it is going to be necessary to save the output of each net into a buffer, from which it can be retrieved as input to the next net during the next brain cycle.
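That buffering amounts to a double-buffer scheme: every net reads the outputs its upstream nets produced on the previous cycle, so nets can be evaluated in any order within a cycle. A minimal sketch, with all names assumed for illustration:

```python
# Each net reads upstream outputs from the *previous* brain cycle, so
# evaluation order within a cycle never matters (double buffering).
def brain_cycle(nets, buffers):
    """nets: dict name -> (input_names, net_fn); buffers: dict name ->
    that net's output from the previous cycle."""
    new_buffers = {}
    for name, (input_names, net_fn) in nets.items():
        inputs = [buffers[src] for src in input_names]  # last cycle's outputs
        new_buffers[name] = net_fn(inputs)
    return new_buffers  # becomes the read buffer for the next cycle
```

A consequence worth noting: a signal takes one brain cycle per level to propagate down the hierarchy, which matches how the semi-hierarchical layout is described.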

[Edited by - mattmrunson on December 4, 2010 8:00:36 PM]

This topic is closed to new replies.
