Justin Rising has already explained the situation where the reference to RBF is in the context of a kernel. I will focus on the differences between GMMs and Gaussian RBF networks, since the two are quite similar apart from the probabilistic constraints imposed on GMMs.
- GMMs are probability distributions used in clustering and density estimation.
- RBFs are real-valued rotation-invariant functions (localized, e.g. Gaussian, or non-localized, e.g. multiquadric) used in a wide variety of contexts. Two of the most important ML contexts are: a) RBF networks, where a linear combination of RBFs is used for regression; b) any kernelized ML algorithm, where RBFs serve as similarity functions. In fact, an SVM with a Gaussian RBF kernel is a particular case of an RBF network in which you learn only the weights of the linear combination at the output layer, while the hidden-layer functions are fixed as Gaussian RBFs centered at training points (the support vectors); the second sketch after this list verifies this numerically.
- GMM:
- [math]P(x_i) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x_i \mid \mu_k, \Sigma_k), \qquad \pi_k \ge 0, \; \sum_{k=1}^{K} \pi_k = 1[/math]
- [math]P(x_i)[/math] is the density of a data point, expressed as a weighted sum of Gaussian PDFs whose mixture weights are normalized appropriately.
- RBF Network:
- [math]O(x_i) = \sum_{k=1}^{K} w_k \, \phi(\lVert x_i - c_k \rVert)[/math]
- [math]\phi[/math] is typically the Gaussian radial basis function, which approximates non-linear functions locally. [math]O(x_i)[/math] is the output of the RBF network: a linear combination of RBFs centered at the [math]c_k[/math], with unconstrained weights [math]w_k[/math].
- Note how similar the two formulae are: the additional probabilistic interpretation (normalized components, weights that form a probability distribution) shapes GMMs, while the choice of a rotation-invariant basis function with unconstrained weights defines RBF networks. The first sketch below makes the parallel concrete.
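To make the parallel concrete, here is a minimal NumPy sketch evaluating both formulas on the same points; the centers, mixture weights, output weights, and shared bandwidth are made-up illustrative values, not fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))                    # data points (1-D for simplicity)
centers = np.array([[-1.0], [0.0], [1.0]])       # shared centers mu_k = c_k
sq_dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (200, 3)
sigma = 0.7                                      # shared bandwidth

# GMM density: mixture weights are non-negative and sum to 1, and each
# component is a properly normalized Gaussian PDF.
pi = np.array([0.2, 0.5, 0.3])
gauss_pdf = np.exp(-sq_dists / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
p = gauss_pdf @ pi          # P(x_i) = sum_k pi_k N(x_i | mu_k, sigma^2)

# RBF network output: the same weighted sum of Gaussian bumps, but the
# weights are unconstrained and the basis functions unnormalized.
w = np.array([1.3, -0.4, 2.0])
phi = np.exp(-sq_dists / (2 * sigma**2))
o = phi @ w                 # O(x_i) = sum_k w_k phi(||x_i - c_k||)
```

The two final lines differ only in the normalization of the basis functions and the constraints on the weights, which is exactly the probabilistic imposition described above.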
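The SVM remark can also be checked numerically: with scikit-learn's SVC, the learned decision function is reproducible by hand as a weighted sum of Gaussian RBFs centered at the support vectors, i.e. a fixed-basis RBF network. The toy dataset and gamma below are arbitrary illustrative choices:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)   # non-linear boundary

gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)

# Rebuild decision_function by hand: a linear combination of Gaussian
# RBFs exp(-gamma ||x - sv||^2) centered at the support vectors.
x = rng.normal(size=(5, 2))
sq_dists = ((x[:, None, :] - clf.support_vectors_[None, :, :]) ** 2).sum(-1)
manual = (clf.dual_coef_ @ np.exp(-gamma * sq_dists).T).ravel() + clf.intercept_

print(np.allclose(manual, clf.decision_function(x)))  # True
```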