Nice Coder

Generalisation algorithm


This is a simple generalisation algorithm.

First you have the "learning" side of it: you simply add the input value and the correct output value to a storage array.

Now the generaliser. You are given an input, which can be a single value or a vector of values; it doesn't matter much either way (just different functions). You use the comparison function to compare the input with every input in the storage array, feed those similarities into a weighted average function, and arrive at a number. That number is your output.

The comparison function takes two arguments and returns how similar they are. The weighted average function takes a set of values and their corresponding weights and returns an average that is pulled more strongly toward values with large weights (e.g. an input with a weight of 30 would change the output much more than one with a weight of, say, 2).

Is this a nice little algorithm?

From,
Nice Coder
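The description above can be sketched in a few lines of Python. This is only an illustration of the idea, not the original poster's code: the function names are made up, and a Gaussian similarity is assumed for the comparison function (any similarity measure would work).

```python
import math

# "storage" holds (input, output) pairs added during the learning phase.
storage = []

def learn(x, y):
    """Learning side: add the input and its correct output to storage."""
    storage.append((x, y))

def similarity(a, b):
    """Comparison function: how similar two scalar inputs are.
    A Gaussian kernel is assumed here; the post leaves this open."""
    return math.exp(-(a - b) ** 2)

def generalise(x):
    """Compare x with every stored input, then return the weighted
    average of the stored outputs, weighted by similarity."""
    weights = [similarity(x, xi) for xi, _ in storage]
    total = sum(weights)
    return sum(w * yi for w, (_, yi) in zip(weights, storage)) / total

learn(0.0, 0.0)
learn(1.0, 1.0)
learn(2.0, 2.0)
print(generalise(1.0))  # nearby stored examples dominate the average
```

Querying at a stored input returns roughly its stored output, since that example carries the largest weight, while inputs between stored examples get a smooth blend.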

Guest Anonymous Poster
It's a nice algorithm, but I'm afraid you're not the first person to think of it. In machine learning, what you described is called kernel regression ("kernel" referring to the comparison function). In statistics it is called Parzen window density estimation. There is also a neural network interpretation called general regression neural networks.

There are lots of varieties as well. For example, instead of taking a weighted average over all of the instances, take the average of only the k closest instances (this is called the k nearest neighbors algorithm), or use a comparison function whose weight for far-away instances is zero (restricting your average to a certain range).
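The k-nearest-neighbors variant mentioned above might look like this; the names here are illustrative, not from either post:

```python
def knn_predict(examples, x, k=3):
    """Average the outputs of the k stored examples closest to x.
    examples: list of (input, output) pairs; x: query input."""
    nearest = sorted(examples, key=lambda pair: abs(pair[0] - x))[:k]
    return sum(y for _, y in nearest) / k

data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (10.0, 20.0)]
print(knn_predict(data, 1.5, k=2))  # averages the outputs at 1.0 and 2.0
```

Sorting every query is O(n log n); the KD-tree structures mentioned below are what make this practical for large stored sets.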

There are also ways to make this almost as fast as traditional learning algorithms using clever data structures; see the work of Andrew Moore at Carnegie Mellon, especially his tutorials on KD-trees.
