Learning AI

Started by
29 comments, last by BeerNutts 22 years, 8 months ago
quote:Original post by Mathematix
I'm just brainstorming here!

If you are using a network, I would carefully select a few synaptic weights at random and re-initialise them to some other value. This should have the effect of the network either completely or partially forgetting what it has learned from previous experiences. This doesn't really follow any specific methodology with respect to AI, but it may make your game play that bit more realistically.
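A minimal sketch of the idea described above, re-initialising a small fraction of weights at random (the function name, the flat weight list, and the init range are illustrative assumptions, not part of the original post):

```python
import random

def perturb_weights(weights, fraction=0.05, low=-1.0, high=1.0):
    """Randomly re-initialise a fraction of synaptic weights so the
    network partially 'forgets' what it has learned.

    weights  -- flat list of weight values (illustrative representation)
    fraction -- share of weights to reset, e.g. 0.05 = 5%
    low/high -- range for the fresh random values (assumed init range)
    """
    weights = list(weights)  # work on a copy
    n = max(1, int(len(weights) * fraction))
    # pick n distinct weight indices and overwrite each with a new value
    for i in random.sample(range(len(weights)), n):
        weights[i] = random.uniform(low, high)
    return weights
```

Resetting only a few weights degrades the network gracefully rather than wiping it, which is the "partial forgetting" effect described above.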


Would you be so kind as to tell us which neural paradigm you use?
Because if it's a backprop MLP, there are methods named OBD (Optimal Brain Damage) and OBS (Optimal Brain Surgeon). They produce an effect of "forgetting" by pruning the network's weights. If it's a Hopfield network, there are also methods for increasing the size of its memory (for example, Reznik's work). Any of these leads to partial forgetting, but is there any reason for the "forgetting" itself? Realistic play could be achieved by good generalisation.
I'd really appreciate it if you answered.
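A minimal sketch of saliency-based pruning in the spirit of OBD, which removes the weights with the smallest saliency s_i = H_ii * w_i^2 / 2 (H_ii being the diagonal of the Hessian of the error). Computing the Hessian is network-specific, so here the saliencies are assumed to be supplied by the caller; the function and parameter names are illustrative:

```python
def prune_weights(weights, saliencies, prune_fraction=0.1):
    """Zero out the lowest-saliency weights, OBD-style.

    weights        -- flat list of weight values (illustrative)
    saliencies     -- one saliency per weight, e.g. H_ii * w_i**2 / 2,
                      assumed precomputed by the caller
    prune_fraction -- share of weights to remove
    """
    weights = list(weights)  # work on a copy
    n = int(len(weights) * prune_fraction)
    # indices of the n weights with the smallest saliency
    order = sorted(range(len(weights)), key=lambda i: saliencies[i])
    for i in order[:n]:
        weights[i] = 0.0  # pruning = clamping the weight to zero
    return weights
```

Pruning low-saliency weights removes the connections that contribute least to the error, which yields the gradual "forgetting" mentioned above while keeping most of the network's behaviour intact.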
Best wishes,
Halloween.



Edited by - Halloween on August 13, 2001 3:56:59 AM

This topic is closed to new replies.