bashung

RBF vs. Mixture models



I'm wondering about something. What could be the reasons to prefer a Radial Basis Function (RBF) neural network over a (Gaussian) mixture model, and vice versa? From my point of view, the GMM seems to offer more accuracy, since you can have a covariance matrix for each component, but couldn't that also hurt generalization? Do you know if these kinds of classification methods have been successfully used in any games? They look quite interesting for things like economic simulations (among other possible kinds of games).

Guest Anonymous Poster
What do you mean by Gaussian mixture models having better accuracy? Gaussian mixture models are usually used for unsupervised clustering of data, while RBF networks are used for function approximation, so I'm not sure how you are comparing them. Also, there's no reason why you can't use covariance matrices within RBF networks. The standard training procedure for an RBF network is to use some clustering algorithm (you can even use EM with Gaussian mixtures) to learn the basis-function centers and possibly the covariances, and then learn the weights connecting the basis-function nodes to the output node using standard backprop.
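To make the recipe above concrete, here is a minimal NumPy sketch of that two-stage training: k-means places the basis-function centers, then the linear output weights are solved directly by least squares (equivalent to backprop for a single linear output node). All function names and the shared width `sigma` are illustrative choices, not from any particular library.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Simple k-means to place the RBF centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def rbf_design(X, centers, sigma):
    """Matrix of Gaussian basis activations (shared spherical width)."""
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def train_rbf(X, y, k=8, sigma=1.0):
    # stage 1: unsupervised clustering picks the centers
    centers = kmeans(X, k)
    # stage 2: linear output layer, solved in closed form
    Phi = rbf_design(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

def predict_rbf(X, centers, w, sigma=1.0):
    return rbf_design(X, centers, sigma) @ w
```

Swapping k-means for EM with a Gaussian mixture would additionally give per-basis covariances, as the post notes.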

Quote:
What do you mean by Gaussian mixture models having better accuracy? Gaussian mixture models are usually used for unsupervised clustering of data, while RBF networks are used for function approximation, so I'm not sure how you are comparing them.


Both can be used to describe a distribution and hence estimate class probabilities under supervision (if you know the class labels). What I read about RBF networks is that the variance is fixed for all components and only the amplitudes are modified during training. In a GMM, everything can change: variances, means, and amplitudes, all optimized using EM.
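The supervised use of a density model mentioned here can be sketched as a generative classifier: fit one Gaussian per class (the one-component special case of a GMM; a multi-component fit would use EM for each class) and classify with Bayes' rule. This is a minimal illustration, not a full GMM implementation; the function names are made up for this example.

```python
import numpy as np

def fit_class_gaussians(X, y):
    """Fit mean, full covariance, and prior for each class label."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        # small ridge keeps the covariance invertible
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[c] = (mu, cov, len(Xc) / len(X))
    return params

def log_gauss(X, mu, cov):
    """Log density of a full-covariance Gaussian at each row of X."""
    d = X - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', d, inv, d)
    return -0.5 * (quad + logdet + X.shape[1] * np.log(2 * np.pi))

def classify(X, params):
    labels = list(params)
    # posterior is proportional to prior times class-conditional density
    scores = np.stack([log_gauss(X, mu, cov) + np.log(p)
                       for mu, cov, p in params.values()], axis=1)
    return np.array(labels)[scores.argmax(axis=1)]
```

Replacing the single Gaussian per class with an EM-fitted mixture is exactly where the extra flexibility (and the generalization risk raised in the original question) comes in.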

Quote:
Also, there's no reason why you can't use covariance matrices within RBF networks. The standard training procedure for an RBF network is to use some clustering algorithm (you can even use EM with Gaussian mixtures) to learn the basis-function centers and possibly the covariances, and then learn the weights connecting the basis-function nodes to the output node using standard backprop.


OK, in the lecture I had, they said that for an RBF network the user sets the centers and their number before training; the weights are the only parameters affected by training. But I understand that you can mix both methods. I was just wondering about the exact definition of an RBF network.
