{{Short description|Machine learning kernel function}}
In [[machine learning]], the '''[[radial basis function]] kernel''', or '''RBF kernel''', is a popular [[Positive-definite kernel|kernel function]] used in various [[kernel method|kernelized]] learning algorithms. In particular, it is commonly used in [[support vector machine]] [[statistical classification|classification]].<ref name="Chang2010">{{cite journal | last1 = Chang | first1 = Yin-Wen | last2 = Hsieh | first2 = Cho-Jui | last3 = Chang | first3 = Kai-Wei | last4 = Ringgaard | first4 = Michael | last5 = Lin | first5 = Chih-Jen | year = 2010 | title = Training and testing low-degree polynomial data mappings via linear SVM | journal = Journal of Machine Learning Research | volume = 11 | pages = 1471–1490 }}</ref>
The RBF kernel on two samples <math>\mathbf{x},\mathbf{x'}\in \mathbb{R}^{k}</math>, represented as feature vectors in some ''input space'', is defined as<ref name="primer">Jean-Philippe Vert, Koji Tsuda, and Bernhard Schölkopf (2004). "A primer on kernel methods". ''Kernel Methods in Computational Biology''.</ref>
:<math>K(\mathbf{x}, \mathbf{x'}) = \exp\left(-\frac{\|\mathbf{x} - \mathbf{x'}\|^2}{2\sigma^2}\right)</math>
where <math>\textstyle\|\mathbf{x} - \mathbf{x'}\|^2</math> is the squared [[Euclidean distance]] between the two feature vectors and <math>\sigma</math> is a free parameter. An equivalent definition involves a parameter <math>\textstyle\gamma = \tfrac{1}{2\sigma^2}</math>:
:<math>K(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma\|\mathbf{x} - \mathbf{x'}\|^2)</math>
Since the value of the RBF kernel decreases with distance and ranges between zero (in the infinite-distance limit) and one (when {{math|'''x''' {{=}} '''x''''}}), it has a ready interpretation as a [[similarity measure]].<ref name="primer"/>
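For illustration, both parameterizations can be evaluated directly; the following minimal [[NumPy]] sketch (not drawn from the cited sources; the vectors and <math>\sigma</math> are arbitrary) computes the same kernel value from <math>\sigma</math> and from <math>\gamma</math>:
<syntaxhighlight lang="python">
import numpy as np

def rbf_kernel(x, x_prime, sigma=1.0):
    """K(x, x') = exp(-||x - x'||^2 / (2 * sigma^2))."""
    sq_dist = np.sum((x - x_prime) ** 2)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

x = np.array([1.0, 2.0, 3.0])
x_prime = np.array([1.5, 1.0, 2.5])
sigma = 1.0
gamma = 1.0 / (2.0 * sigma ** 2)

k_from_sigma = rbf_kernel(x, x_prime, sigma)
k_from_gamma = np.exp(-gamma * np.sum((x - x_prime) ** 2))
print(k_from_sigma, k_from_gamma)  # equal; identical inputs would give 1.0
</syntaxhighlight>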
The [[feature space]] of the kernel has an infinite number of dimensions; for <math>\sigma = 1</math>, its expansion using the [[multinomial theorem]] is:<ref>{{cite arXiv
|last=Shashua
|first=Amnon
|year=2009
|title=Introduction to Machine Learning: Class Notes 67577
|eprint=0904.3664
|class=cs.LG
}}</ref>
:<math>
\begin{alignat}{2}
\exp\left(-\frac{1}{2}\|\mathbf{x} - \mathbf{x'}\|^2\right)
&= \exp(\frac{2}{2}\mathbf{x}^\top \mathbf{x'} - \frac{1}{2}\|\mathbf{x}\|^2 - \frac{1}{2}\|\mathbf{x'}\|^2)\\[5pt]
&= \exp(\mathbf{x}^\top \mathbf{x'}) \exp( - \frac{1}{2}\|\mathbf{x}\|^2) \exp( - \frac{1}{2}\|\mathbf{x'}\|^2) \\[5pt]
&= \sum_{j=0}^\infty \frac{(\mathbf{x}^\top \mathbf{x'})^j}{j!} \exp\left(-\frac{1}{2}\|\mathbf{x}\|^2\right) \exp\left(-\frac{1}{2}\|\mathbf{x'}\|^2\right)\\[5pt]
&= \sum_{j=0}^\infty \quad \sum_{n_1+n_2+\dots +n_k=j}
\exp\left(-\frac{1}{2}\|\mathbf{x}\|^2\right)
\frac{x_1^{n_1}\cdots x_k^{n_k} }{\sqrt{n_1! \cdots n_k! }}
\exp\left(-\frac{1}{2}\|\mathbf{x'}\|^2\right)
\frac{{x'}_1^{n_1}\cdots {x'}_k^{n_k} }{\sqrt{n_1! \cdots n_k! }} \\[5pt]
&=\langle \varphi(\mathbf{x}), \varphi(\mathbf{x'}) \rangle
\end{alignat}
</math>
where
:<math>
\varphi(\mathbf{x}) =
\exp\left(-\frac{1}{2}\|\mathbf{x}\|^2\right)
\left(a^{(0)}_{\ell_0}, a^{(1)}_1, \dots, a^{(1)}_{\ell_1}, \dots, a^{(j)}_1, \dots, a^{(j)}_{\ell_j}, \dots\right)
</math>
where <math>\textstyle \ell_j = \tbinom{k+j-1}{j}</math> and
:<math>
a^{(j)}_{\ell} = \frac{x_1^{n_1}\cdots x_k^{n_k}}{\sqrt{n_1!\cdots n_k!}} \quad\Big|\quad n_1+n_2+\dots+n_k = j \ \text{ and }\ 1 \leq \ell \leq \ell_j .
</math>
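The truncated series can be checked numerically; the sketch below (illustrative only, with arbitrary vectors and <math>\sigma = 1</math>) compares partial sums over <math>j</math> against the closed-form kernel value:
<syntaxhighlight lang="python">
import numpy as np
from math import factorial

x = np.array([0.3, -0.2, 0.5])
x_prime = np.array([0.1, 0.4, -0.3])

# Closed-form kernel value for sigma = 1
exact = np.exp(-0.5 * np.sum((x - x_prime) ** 2))

# Factor exp(-||x||^2/2) * exp(-||x'||^2/2) shared by every term of the series
envelope = np.exp(-0.5 * (x @ x)) * np.exp(-0.5 * (x_prime @ x_prime))

partial = 0.0
for j in range(8):
    partial += (x @ x_prime) ** j / factorial(j)   # j-th term of exp(x^T x')
    print(j, abs(envelope * partial - exact))      # error shrinks toward zero
</syntaxhighlight>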
==Approximations==
Because support vector machines and other models employing the [[kernel trick]] do not scale well to large numbers of training samples or large numbers of features in the input space, several approximations to the RBF kernel (and similar kernels) have been introduced.<ref>Andreas Müller (2012). "Kernel Approximations for Efficient SVMs (and other feature extraction methods)".</ref>
Typically, these take the form of a function ''z'' that maps a single vector to a vector of higher dimensionality, approximating the kernel:
:<math>\langle z(\mathbf{x}), z(\mathbf{x'})\rangle \approx \langle \varphi(\mathbf{x}), \varphi(\mathbf{x'})\rangle = K(\mathbf{x}, \mathbf{x'})</math>
where <math>\textstyle\varphi</math> is the implicit mapping embedded in the RBF kernel.
=== Fourier random features ===
{{Main|Random Fourier feature}}
One way to construct such a ''z'' is to randomly sample from the [[Fourier transformation]] of the kernel<ref>{{Cite journal |last1=Rahimi |first1=Ali |last2=Recht |first2=Benjamin |year=2007 |title=Random features for large-scale kernel machines |journal=Advances in Neural Information Processing Systems |volume=20}}</ref>
:<math>\varphi(x) = \frac{1}{\sqrt{D}}\left[\cos\langle w_1, x\rangle, \sin\langle w_1, x\rangle, \ldots, \cos\langle w_D, x\rangle, \sin\langle w_D, x\rangle\right]^\top</math>
where <math>w_1, \dots, w_D</math> are independent samples from the normal distribution <math>N(0, \sigma^{-2}I)</math>.
'''Theorem:''' <math>\operatorname E[\langle \varphi(x), \varphi(y)\rangle] = e^{-\|x - y\|^2/(2\sigma^2)}</math>
'''Proof:''' It suffices to prove the case of <math>D=1</math>. Use the trigonometric identity <math>\cos(a-b) = \cos(a)\cos(b) + \sin(a)\sin(b)</math>, the spherical symmetry of the Gaussian distribution, and then evaluate the integral
: <math>\int_{-\infty}^\infty \frac{\cos (k x) e^{-x^2 / 2}}{\sqrt{2 \pi}} d x=e^{-k^2 / 2}. </math>
'''Theorem:''' The variance of the estimator is <math>\propto D^{-1}</math>. (Appendix A.2<ref>{{Cite journal |last1=Peng |first1=Hao |last2=Pappas |first2=Nikolaos |last3=Yogatama |first3=Dani |last4=Schwartz |first4=Roy |last5=Smith |first5=Noah A. |last6=Kong |first6=Lingpeng |date=2021-03-19 |title=Random Feature Attention |url=http://arxiv.org/abs/2103.02143 |journal=arXiv:2103.02143 [cs]}}</ref>).
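A minimal NumPy sketch of this construction (illustrative only; <math>D</math>, <math>\sigma</math> and the test vectors are arbitrary, and the cosines and sines are stored in two blocks rather than interleaved, which leaves the inner product unchanged):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
k, D, sigma = 3, 2000, 1.0

x = np.array([0.3, -0.2, 0.5])
y = np.array([0.1, 0.4, -0.3])

# w_1, ..., w_D drawn independently from N(0, sigma^{-2} I)
W = rng.normal(scale=1.0 / sigma, size=(D, k))

def z(v):
    """Random Fourier feature map (cosine block followed by sine block)."""
    proj = W @ v
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

approx = z(x) @ z(y)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
print(approx, exact)  # close for large D; the error variance shrinks like 1/D
</syntaxhighlight>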
=== Nyström method ===
Another approach uses the [[Nyström method]] to approximate the [[eigendecomposition]] of the [[Gramian matrix|Gram matrix]] ''K'', using only a random sample of the training set.<ref>{{cite journal |last1=Williams |first1=C. K. I. |last2=Seeger |first2=M. |year=2001 |title=Using the Nyström method to speed up kernel machines |journal=Advances in Neural Information Processing Systems |volume=13}}</ref>
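A minimal sketch of the idea (illustrative only; the landmark count <math>m</math> and the value of <math>\gamma</math> are arbitrary) approximates the full Gram matrix <math>K</math> from a random subset of <math>m</math> columns as <math>K \approx K_{nm} K_{mm}^{+} K_{nm}^\top</math>:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, k, m, gamma = 500, 5, 50, 0.5

X = rng.normal(size=(n, k))

def rbf(A, B):
    """Pairwise RBF kernel matrix exp(-gamma * ||a - b||^2) between rows of A and B."""
    sq = np.sum(A ** 2, axis=1)[:, None] + np.sum(B ** 2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

# Nystrom approximation: sample m landmark points, then K ~ K_nm K_mm^+ K_nm^T
idx = rng.choice(n, size=m, replace=False)
K_nm = rbf(X, X[idx])
K_mm = rbf(X[idx], X[idx])
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

K_exact = rbf(X, X)
print(np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact))  # small relative error
</syntaxhighlight>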
==See also==
* [[Radial basis function]]
* [[Radial basis function network]]
* [[Obst
==References==