
Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

No replies to this topic

#1  BoySinister  Members

Posted 21 May 2000 - 09:51 PM

I am having trouble crystallising a thought. I was reading through the very long thread below about (what I believe is often called) digital evolution. My understanding is that in digital evolution, you take two behaviour sets and cross-pollinate instructions, in much the same way as random rearrangement and crossing over during meiotic cell division. However, as was pointed out, most binary versions of this procedure run the processes after each evolutionary stage and examine the end result with respect to some predefined goal condition.

Take, for example, a robot designed to automatically remove objects from a certain area. Basic instructions would obviously include "pick up object", "move out of area", and "put down object". There would also need to be other searching-movement instructions. One would generate two lists of instructions and see how many iterations each took to clear the area, then cross-pollinate and see if the new process takes fewer moves. If so, use this one as a base model, do another random rearrangement, cross-pollination, and so on.

This example demonstrates that the goal condition is predefined, and therefore a "fully evolved" instruction set (that is, one which achieves the goal in a very efficient manner) will only be efficient with respect to this goal condition. I believe the missing component is some kind of feedback model. Keep the binary equivalent of chromosomal random rearrangement and crossing over, but rather than examining the new instruction set in its entirety with respect to some predefined goal, perhaps some kind of feedback could be provided after each individual instruction was carried out. This is the point I haven't really thought through yet. How could one provide some kind of "generic" feedback? The only way I can think of at the moment would be to have a number of static parallel instruction sets providing feedback to our evolving set.
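A minimal sketch of the goal-directed crossover loop the post describes, using a toy instruction alphabet for the area-clearing robot. The instruction names, the fitness rule, and the program length are all invented here for illustration; a real system would score a program by actually simulating it against the area-clearing goal.

```python
import random

# Toy instruction alphabet for the area-clearing robot (illustrative names).
INSTRUCTIONS = ["pick_up", "move_out", "put_down", "search_left", "search_right"]

def random_program(length=8):
    """Generate a random list of instructions."""
    return [random.choice(INSTRUCTIONS) for _ in range(length)]

def fitness(program):
    """Placeholder fitness standing in for 'moves needed to clear the area':
    here we simply reward the pick_up -> move_out -> put_down pattern."""
    score = 0
    for a, b, c in zip(program, program[1:], program[2:]):
        if (a, b, c) == ("pick_up", "move_out", "put_down"):
            score += 1
    return score

def crossover(parent_a, parent_b):
    """Single-point crossover, analogous to chromosomal crossing over."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(program, rate=0.1):
    """Random rearrangement: occasionally swap an instruction at random."""
    return [random.choice(INSTRUCTIONS) if random.random() < rate else op
            for op in program]

# Evolve against the predefined goal: keep whichever of parent/child scores
# at least as well, so fitness never decreases across generations.
random.seed(0)
best = random_program()
for _ in range(200):
    child = mutate(crossover(best, random_program()))
    if fitness(child) >= fitness(best):
        best = child
```

Note that the loop only ever compares programs against the one fixed `fitness` function, which is exactly the limitation the rest of the post is getting at.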
Given that the only variable in our environment is the evolving instruction set, the performance of these fixed sets with respect to any task would change only due to changes in the evolving set. The fixed sets could then provide feedback to the evolutionary process itself. If a fixed set reported completing its goal more efficiently this time than last time, then the most recently evolved set should be used as the basis for further evolution. We can then evolve the variable set in several different ways, dictated by whichever fixed parallel sets we use in the evolutionary process.

In the end, our variable set may not be able to actually perform any high-level tasks on its own; however, one could write a simple fixed set to perform the task, and then run the evolved set in parallel as a helper. This idea of parallel sets is not the focus of my post here; it's just one example of feedback. I think varied feedback, rather than feedback with respect to one specific goal, and if possible per-instruction feedback, could be a large benefit to the process of digital evolution.

Edited by - BoySinister on 5/22/00 3:54:13 AM
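One way the fixed-parallel-sets idea might be sketched: instead of a single predefined goal, several fixed "judge" functions each score the evolving set, and a change is accepted whenever any judge reports an improvement. Everything here is invented to illustrate the shape of the idea; the judges stand in for the fixed parallel instruction sets, and each incremental change stands in for per-instruction feedback.

```python
import random

OPS = ["a", "b", "c", "d"]

def random_set(n=6):
    """Generate a random evolving instruction set."""
    return [random.choice(OPS) for _ in range(n)]

# Fixed parallel sets, each judging the evolving set from a different angle.
# (Both scoring rules are placeholders for the post's fixed helper sets.)
def judge_diversity(s):
    return len(set(s))                                  # rewards varied instructions

def judge_repetition(s):
    return sum(1 for x, y in zip(s, s[1:]) if x == y)   # rewards repeated instructions

FIXED_JUDGES = [judge_diversity, judge_repetition]

def accept(old, new):
    """Feedback per change: keep the new set if any fixed judge
    reports doing better with it than with the old set."""
    return any(judge(new) > judge(old) for judge in FIXED_JUDGES)

# Evolve one instruction at a time, accepting changes on judges' feedback
# rather than against one predefined end goal.
random.seed(1)
current = random_set()
for _ in range(100):
    point = random.randrange(len(current))
    candidate = current[:point] + [random.choice(OPS)] + current[point + 1:]
    if accept(current, candidate):
        current = candidate
```

Swapping in a different subset of `FIXED_JUDGES` steers the evolving set in a different direction, which mirrors the post's point that the choice of fixed parallel sets dictates how the variable set evolves.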
