RBF vs. Mixture models



I'm wondering about something. What could be the reasons to prefer a Radial Basis Function (RBF) neural network over a (Gaussian) mixture model, and vice versa? From my point of view, the GMM seems to offer more accuracy, since you can have a covariance matrix for each component, but that could also make good generalization harder, right? Do you know if these kinds of classification methods have been used successfully in any games? They look quite interesting for things like economic simulations (among other possible kinds of games).

What do you mean by Gaussian mixture models having better accuracy? Gaussian mixture models are usually used for unsupervised clustering of data, while RBF networks are used for function approximation, so I'm not sure how you are comparing them. Also, there's no reason why you can't use covariance matrices within RBF networks. The standard training procedure for RBF networks is to use some clustering algorithm (you can even use EM with Gaussian mixtures) to learn the basis-function centers and possibly covariances, then learn the weights connecting the basis-function nodes to the output node using standard backprop.
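To make that two-stage procedure concrete, here is a minimal NumPy sketch, with all data, the number of centers, and the shared width invented for illustration. A crude k-means pass stands in for "some clustering algorithm" to pick the centers, and the linear output weights are then solved in closed form by least squares (gradient descent / backprop would work equally well):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) plus noise (hypothetical example).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Stage 1: pick basis-function centers with a clustering pass.
# A bare-bones k-means stands in here; EM with Gaussian mixtures would
# additionally give per-center covariances.
def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

centers = kmeans(X, k=10)
sigma = 0.5  # shared spherical width; a full covariance per center is also possible

# Stage 2: learn the output weights.  With a linear output layer this is
# an ordinary least-squares problem.
def design(X):
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

Phi = design(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

mse = np.mean((Phi @ w - y) ** 2)
```

The split matters in practice: the clustering stage is unsupervised and cheap, and once the centers are frozen the weight fit is convex, so there are no local minima in stage 2.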

Quote:
What do you mean by Gaussian mixture models having better accuracy? Gaussian mixture models are usually used for unsupervised clustering of data, while RBF networks are used for function approximation, so I'm not sure how you are comparing them.

Both can be used to model a distribution and hence estimate probabilities in a supervised setting (if you know the classes). What I read about RBF networks is that the variance is fixed for all components and only the amplitudes are modified during training. In a GMM, everything can change: variance, mean, and amplitude, all optimized with EM.
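To make the contrast concrete, here is a minimal EM loop for a 1-D, two-component GMM in which all three parameter sets are re-estimated at every iteration: the mixing weights (amplitudes), the means, and the variances. The data and initial values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D data drawn from two Gaussians: 30% around -2, 70% around +2.
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 1.0, 700)])

pi = np.array([0.5, 0.5])      # mixing weights ("amplitudes")
mu = np.array([-1.0, 1.0])     # means
var = np.array([1.0, 1.0])     # variances

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    r = pi[None, :] * normal_pdf(x[:, None], mu[None, :], var[None, :])
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form updates for ALL parameters, not just the weights.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
```

After convergence the two components recover the distinct widths (0.5 vs. 1.0) of the generating clusters, which is exactly what a fixed-variance RBF layer cannot do.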

Quote:
Also, there's no reason why you can't use covariance matrices within RBF networks. The standard training procedure for RBF networks is to use some clustering algorithm (you can even use EM with Gaussian mixtures) to learn the basis-function centers and possibly covariances, then learn the weights connecting the basis-function nodes to the output node using standard backprop.

OK, in the lecture I had, they said that for an RBF network the user sets the number of centers and their locations before training; the weights are the only parameters affected by training. But I understand you can mix both methods. I was just wondering about the exact definition of an RBF network.
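That textbook definition can be sketched in a few lines, assuming a hypothetical 1-D regression task: the centers and the shared width are fixed by hand before any data is seen, and training touches only the linear output weights:

```python
import numpy as np

# "Lecture" RBF: centers on a hand-picked grid, one fixed width for every
# basis function.  Neither is touched by training.
centers = np.linspace(-3, 3, 7)[:, None]
sigma = 1.0

def features(X):
    # Gaussian activations of each fixed basis function, shape (n, 7).
    return np.exp(-((X - centers.T) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(150, 1))
y = np.cos(X[:, 0])

# Training = fitting the output weights only (a linear least-squares solve).
w, *_ = np.linalg.lstsq(features(X), y, rcond=None)
mse = np.mean((features(X) @ w - y) ** 2)
```

Under this definition the model is linear in its trainable parameters, which is what makes the "fixed centers" variant so easy to fit; the hybrid approaches discussed above give up that linearity in exchange for adaptive centers and covariances.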
