An AI that creates other AIs

11 comments, last by myahmac 20 years, 2 months ago
Well, with Skynet-like AIs it's not that they are programmed for world domination or what have you. The idea is that you have a defense system that is aware of itself. Its knowledge of how to defend an objective or attack a target is built in. The weird part is when you have an AI that knows it is itself, and I think that is still a long way off. I wouldn't mind being surprised, though.

Decision trees would be great if I were programming for one game, one genre, or even limiting it to just the game. The idea is an actual concept that I intend to use for a thesis later, but I would like to try it out now in a game while I have time as an undergrad doing mostly basic coursework. The idea came from the fix-and-continue option in Xcode, where the program can load recompiled files on the fly. I just thought it would make for a nice idea to have a program edit itself on the fly, not that it would be the most efficient way to do it yet.
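To make that concrete, here is a minimal Python sketch of the "edit itself on the fly" idea, assuming the agent's decision logic lives in a separate module that gets rewritten and reloaded while the game keeps running (the ai_behavior module and choose_action function are invented names, not part of any real framework):

# Hypothetical sketch: hot-swapping an agent's decision logic at runtime,
# roughly in the spirit of Xcode's fix-and-continue.
import importlib

import ai_behavior  # assumed module exposing choose_action(state); rewritten on disk by the program itself

def run_game_loop(get_state, apply_action, should_reload):
    while True:
        if should_reload():
            importlib.reload(ai_behavior)  # pick up whatever was rewritten since last frame
        state = get_state()
        apply_action(ai_behavior.choose_action(state))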

In terms of GAs, that's kind of what the framework I'm building is set up to do. It's a GA that builds NNs from a framework I already set up. It's not meant for a single PC, though; it's designed to run a couple hundred NNs at once. But I have to finish the whole thing and write up a proposal before I can get to play with it on a cluster.
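As a rough sketch of what a GA over NNs can look like (purely illustrative: the fitness function is a stub, and a real setup would farm evaluate() out to a cluster rather than loop on one machine):

# Toy GA evolving fixed-length weight vectors that would parameterize small NNs.
import random

POP_SIZE, GENOME_LEN, MUT_RATE = 200, 64, 0.05   # ~a couple hundred candidates per generation

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def evaluate(genome):
    # Placeholder fitness: in practice, build an NN from the genome and score it in the game.
    return -sum(w * w for w in genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    return [w + random.gauss(0, 0.1) if random.random() < MUT_RATE else w for w in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):
    ranked = sorted(population, key=evaluate, reverse=True)
    parents = ranked[:POP_SIZE // 4]                 # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children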

But yeah, part concept, part what is feasible now.
Dual 867 G4, 1GB RAM, dual video cards, dual DVD burners, and dual HDs. My dad always said: be prepared.
More specifically, you may want to look into Grammatical Evolution, part of the field of Genetic Programming. To get a good idea of what this is, research something called the "Santa Fe Trail"; this objective is commonly used to test Genetic Programming paradigms.
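For anyone unfamiliar with it, the core of Grammatical Evolution is the genotype-to-phenotype mapping: a genome of integer codons selects production rules from a BNF-style grammar, and the GA evolves the genomes. A stripped-down Python illustration follows (the tiny grammar is invented, not the actual Santa Fe Trail ant grammar, and the output is just a token string for readability):

# Minimal genotype-to-phenotype mapping in the style of grammatical evolution.
GRAMMAR = {
    "<code>": [["<op>"], ["<op>", "<code>"]],
    "<op>":   [["move()"], ["turn_left()"], ["turn_right()"], ["if_food_ahead:", "<code>"]],
}

def map_genome(genome, start="<code>", max_wraps=2):
    """Expand the start symbol; each codon (mod rule count) picks a production."""
    output, stack, used = [], [start], 0
    while stack and used < len(genome) * max_wraps:
        symbol = stack.pop(0)
        if symbol in GRAMMAR:
            rules = GRAMMAR[symbol]
            stack = list(rules[genome[used % len(genome)] % len(rules)]) + stack
            used += 1
        else:
            output.append(symbol)
    # If the codon budget runs out, the expansion is simply truncated here;
    # real GE would keep wrapping or mark the individual invalid.
    return " ".join(output)

print(map_genome([7, 3, 12, 0, 5, 9, 2]))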
-Lucas
Quote: Original post by Anonymous Poster
However, you also seem to be interested in "machine teaching", that is, one AI communicating its models to another AI to enhance the second AI's models.

Instead of simply replacing the subject's models (which would be simply retraining...), one method I've seen builds "sample cases" based on the trainer's models, then communicates these sample cases to the subject, which "learns" them, adding them to its own model.
This has the advantage of not destroying any domain knowledge that the subject had gained itself...
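As a rough sketch of that sample-case approach (the trainer/subject interfaces, predict() and learn(), are assumed here, not from any particular library): the trainer labels states using its own model, and the subject just adds those cases to what it already knows.

# Illustrative "machine teaching" via sample cases; trainer and subject are
# any objects exposing predict(state) and learn(state, label) respectively.
def generate_sample_cases(trainer, num_cases, sample_state):
    """Have the trainer's model label states sampled from the domain."""
    return [(state, trainer.predict(state))
            for state in (sample_state() for _ in range(num_cases))]

def teach(trainer, subject, num_cases, sample_state):
    # The subject keeps its own domain knowledge and merely learns the extra
    # cases, instead of having its model replaced wholesale.
    for state, label in generate_sample_cases(trainer, num_cases, sample_state):
        subject.learn(state, label)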



This sounds like the "Talking Heads" experiment I heard of once; Google it.

However, for the original post, I think you may have a chance if you consider the semantics (objects and classes), the axiology (built around the goal, the supervising set), and the topology (weighted relations between semantic elements, derived from the axiology).
You can even make the axiology set be treated like a semantic element that can be rewritten on the fly. These three terms, I think, help you see more clearly how to implement such a system. (I'm studying a similar case for social agents, where the problem is slightly more complex because there are multiple goals in competition, which leads to conflict, but conflict is the spice of drama.)
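Very roughly, and only as one possible reading of those three terms (all class and field names below are invented), the structure might look like this in Python: semantic elements, a goal set that values them, and a weighted topology rebuilt from that valuation, with the goal set itself being plain data that can be rewritten on the fly.

# Rough data sketch of semantics / axiology / topology; purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Semantic:
    name: str
    kind: str                                   # e.g. "object" or "class"

@dataclass
class Axiology:
    goals: dict                                 # goal name -> weight (the supervising set)

    def value_of(self, element):
        return self.goals.get(element.name, 0.0)

@dataclass
class Topology:
    edges: dict = field(default_factory=dict)   # (name, name) -> weight

    def rebuild(self, elements, axiology):
        # Weight each relation by how much the current goals value its endpoints;
        # because the axiology is just data, it can itself be rewritten on the fly.
        self.edges = {(a.name, b.name): axiology.value_of(a) + axiology.value_of(b)
                      for a in elements for b in elements if a is not b}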

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

