Personal Project - Biogenesis and QLearning

Last reply by rwill128, 10 years, 10 months ago
Hey all,
I wanted to share the results of the project I've been working on for the last few days. It finally reached a milestone where I have a working proof-of-concept program, and now I want to tweak it so it looks prettier and has lots of interesting features. I've been using Biogenesis (specifically, a mod of it called "Biogenesis Color Mod": https://sourceforge.net/projects/biogenesiscolor/) and a QLearning framework I found at http://elsy.gdan.pl/
Check out some of the samples on the Elsy website. He's actually got a nifty little library there. He implemented a type of reinforcement learning that uses neural networks, and the "Wanderbot" and "Apollo Lander" examples both show what kind of tasks it's effective for.

Anyway -- I decided I wanted to combine Biogenesis, which I've always found to be a fascinating program because of its relative simplicity (but cool results), with this other guy's QLearning framework.

In Biogenesis, each creature's lines have different functions based on their color, and each creature's "genes" just store the overall shape of the creature, so more effective shapes are selected for, but movement patterns are largely unintelligent. Now, as of about 5 PM yesterday, I've got a working prototype of a Biogenesis mod where each creature is connected to its own (surprisingly effective) neural net and can make movement decisions based on the information passed to that net.
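For anyone who hasn't used a Q-learning framework before, the per-step update each creature's brain performs looks roughly like this. Elsy approximates the Q function with a neural net rather than a table, but the learning rule is the same; everything here (class names, constants, the tabular representation) is an illustrative sketch of mine, not actual Biogenesis or Elsy code.

```java
import java.util.Arrays;

public class QUpdateSketch {
    static final double ALPHA = 0.1; // learning rate (assumed value)
    static final double GAMMA = 0.9; // discount factor (assumed value)

    // Standard Q-learning update: nudge Q(s, a) toward the observed reward
    // plus the discounted value of the best action in the next state.
    static double qUpdate(double[][] q, int s, int a, double reward, int sNext) {
        double maxNext = Arrays.stream(q[sNext]).max().orElse(0.0);
        q[s][a] += ALPHA * (reward + GAMMA * maxNext - q[s][a]);
        return q[s][a];
    }

    public static void main(String[] args) {
        double[][] q = new double[2][3]; // 2 states, 3 movement actions
        // Suppose the creature just ate something: reward 1.0.
        double updated = qUpdate(q, 0, 1, 1.0, 1);
        System.out.println(updated); // 0.1 * (1.0 + 0.9 * 0 - 0) = 0.1
    }
}
```

Swapping the table for a neural net is what lets this scale to the large, evolving input vectors discussed below, at the cost of noisier learning.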

---

I thought I'd come and ask if anyone's experimented with either of these programs before, and if they have, do they have any input on this concept? Any ideas of how you'd like to see it implemented? How do you think each creature's brain should be given reward feedback for the reinforcement learning framework? Should their "preferences" be stored as hereditary information?

Right now each creature's net only receives information from the "eye" segments it has evolved to have. Each eye will feed in what color of segment it sees and how far away that segment is.

But I haven't yet tested the performance of this framework. If it can handle larger numbers of inputs without slowing down, I'd like to give each creature more information: each frame it could get an array of doubles representing its current structure (that way, when the structure of the creature changes over evolution, the brain has inputs letting it know). It could also be given input on its own health, its reproductive success, etc.
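One way the sensory state described above could be flattened into the array of doubles a neural net expects: a one-hot color code plus a normalized distance per eye, then the body-shape values, then internal state. The color count, sight range, and normalization are all assumptions of mine, not actual Biogenesis values.

```java
public class SensorVector {
    static final int NUM_COLORS = 3;     // assumed: e.g. red, green, blue segments
    static final double MAX_SIGHT = 100.0; // assumed sight radius

    // One eye contributes a one-hot color code plus a normalized distance.
    // seenColor == -1 means the eye sees nothing this frame.
    static double[] encodeEye(int seenColor, double distance) {
        double[] v = new double[NUM_COLORS + 1];
        if (seenColor >= 0) v[seenColor] = 1.0;
        v[NUM_COLORS] = Math.min(distance, MAX_SIGHT) / MAX_SIGHT;
        return v;
    }

    // Full input vector: all encoded eyes, then body shape, then health
    // and reproductive success.
    static double[] buildInputs(double[][] eyes, double[] shape,
                                double health, double offspring) {
        int n = eyes.length * (NUM_COLORS + 1) + shape.length + 2;
        double[] in = new double[n];
        int i = 0;
        for (double[] eye : eyes)
            for (double x : eye) in[i++] = x;
        for (double x : shape) in[i++] = x;
        in[i++] = health;
        in[i] = offspring;
        return in;
    }
}
```

Because the vector length depends on how many eyes and segments the creature has, a creature whose shape mutates would need its net resized (or a fixed maximum number of slots), which is one of the design questions raised above.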

Essentially, the brain could be fed all kinds of different inputs, and the reward system for the learned behaviors could vary just as much. There are tons of cool possibilities, but I'm thinking about implementing a system where each creature's genetic code holds information on how to wire up that creature's brain: that way the wiring of the rewards and inputs can adapt over time to whatever's most effective. I'd just add an energy cost per input for each organism to limit unnecessary complexity, and then let it go!
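The per-input energy tax could be as simple as a linear upkeep charged every frame. The constant and the linear form are assumptions for illustration; the point is just that bigger brains drain energy faster, so selection prunes inputs that don't pay for themselves.

```java
public class BrainCost {
    static final double BASE_COST_PER_INPUT = 0.01; // assumed tuning constant

    // Energy drained each frame by a brain with numInputs sensory inputs.
    static double upkeep(int numInputs) {
        return BASE_COST_PER_INPUT * numInputs;
    }

    // Apply one frame of brain upkeep to a creature's energy reserve.
    static double tick(double energy, int numInputs) {
        return energy - upkeep(numInputs);
    }
}
```

A nonlinear cost (say, quadratic in the input count) would punish runaway brain growth harder, which might matter if mutations can add inputs freely.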

I suppose it would be very similar to other approaches I've read about for training neural networks with genetic algorithms, except that here the genetic algorithm plays out visually, and rather than scoring each network with an explicit fitness function (the far more common approach), ineffective networks are pruned incidentally, through the survival of the organisms.

Anyway, I won't be working on it more until after Friday -- work's getting in the way. But I'm excited to see the results after that, and I remember GameDev being a great community for discussion last time I was here. (Many years ago... possibly as many as 10, at this point. Wow.)

