[[Kernel density estimation]] is a [[nonparametric]] technique for [[density estimation]], i.e., estimation of [[probability density function]]s, which is one of the fundamental questions in [[statistics]]. It can be viewed as a generalisation of [[histogram]] density estimation with improved statistical properties. Apart from histograms, other types of density estimators include [[parametric statistics|parametric]], [[spline interpolation|spline]], [[wavelet]] and [[Fourier series]]. Kernel density estimators were first introduced in the scientific literature for [[univariate]] data in the 1950s and 1960s<ref>{{Cite journal| doi=10.1214/aoms/1177728190 | last=Rosenblatt | first=M.| title=Remarks on some nonparametric estimates of a density function | journal=Annals of Mathematical Statistics | year=1956 | volume=27 | pages=832–837}}</ref><ref>{{Cite journal| doi=10.1214/aoms/1177704472| last=Parzen | first=E.| title=On estimation of a probability density function and mode | journal=Annals of Mathematical Statistics| year=1962 | volume=33 | pages=1065–1076}}</ref> and have subsequently been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to [[multivariate statistics]]. Based on research carried out in the 1990s and 2000s, '''multivariate kernel density estimation''' has reached a level of maturity comparable to that of its univariate counterparts.<ref name="simonoff1996">{{Cite book| author=Simonoff, J.S. | title=Smoothing Methods in Statistics | publisher=Springer | year=1996 | isbn=0-387-94716-7}}</ref>
==Motivation==
* '''H''' is the bandwidth (or smoothing) ''d×d'' matrix which is [[symmetric matrix|symmetric]] and [[positive definite matrix|positive definite]];
* ''K'' is the [[kernel (statistics)|kernel]] function which is a symmetric multivariate density;
* {{nowrap|''K''<sub>'''H'''</sub>('''x''') {{=}} {{!}}'''H'''{{!}}<sup>−1/2</sup> ''K''('''H'''<sup>−1/2</sup>'''x''')}} is the scaled kernel.
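For concreteness, here is a minimal R sketch of this estimator with a normal kernel, in which case the scaled kernel ''K''<sub>'''H'''</sub> is simply the ''N''('''0''', '''H''') density; it assumes the ''mvtnorm'' package, and the function name <code>fhat</code> is illustrative:
<source lang="r">
# Minimal sketch: evaluate the kernel density estimate at a point x, given
# an n-by-d data matrix X and a bandwidth matrix H. With a normal kernel,
# the scaled kernel K_H is the N(0, H) density.
library(mvtnorm)
fhat <- function(x, X, H) {
  mean(dmvnorm(sweep(X, 2, x), sigma = H))  # n^-1 * sum_i K_H(x - X_i)
}
</source>
This direct sum costs O(''n'') per evaluation point; packages such as ''ks'' provide faster and more complete implementations.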
The choice of the kernel function ''K'' is not crucial to the accuracy of kernel density estimators, so we use the standard [[multivariate normal distribution|multivariate normal]] kernel throughout: {{nowrap|''K''('''x''') {{=}} (2''π'')<sup>−''d''/2</sup> exp(−{{frac|1|2}}'''x'''<sup>''T''</sup>'''x''')}}, where '''H''' plays the role of the [[covariance matrix]]. On the other hand, the choice of the bandwidth matrix '''H''' is the single most important factor affecting the accuracy of the estimate, since it controls the amount and orientation of the smoothing induced.<ref name="WJ1995">{{Cite book| author1=Wand, M.P | author2=Jones, M.C. | title=Kernel Smoothing | publisher=Chapman & Hall/CRC | ___location=London | year=1995 | isbn = 0-412-55270-1}}</ref>{{rp|36–39}} That the bandwidth matrix also induces an orientation is a basic difference between multivariate kernel density estimation and its univariate analogue, since orientation is not defined for 1D kernels. This leads to the question of how to parametrise the bandwidth matrix. The three main parametrisation classes (in increasing order of complexity) are ''S'', the class of positive scalars times the identity matrix; ''D'', diagonal matrices with positive entries on the main diagonal; and ''F'', symmetric positive definite matrices. The ''S'' class kernels apply the same amount of smoothing in all coordinate directions, ''D'' kernels allow different amounts of smoothing in each coordinate, and ''F'' kernels allow arbitrary amounts and orientations of smoothing. Historically, ''S'' and ''D'' kernels have been the most widespread for computational reasons, but research indicates that important gains in accuracy can be obtained using the more general ''F'' class kernels.<ref>{{cite journal | author1=Wand, M.P. | author2=Jones, M.C. | title=Comparison of smoothing parameterizations in bivariate kernel density estimation | journal=Journal of the American Statistical Association | year=1993 | volume=88 | pages=520–528 | doi=10.1080/01621459.1993.10476303 | jstor=2290332}}</ref><ref name="DH2003">{{Cite journal| doi=10.1080/10485250306039 | author1=Duong, T. | author2=Hazelton, M.L. | title=Plug-in bandwidth matrices for bivariate kernel density estimation | journal=Journal of Nonparametric Statistics | year=2003 | volume=15 | pages=17–30}}</ref>
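To illustrate the difference between the ''D'' and ''F'' classes in practice, here is a minimal R sketch assuming the ''ks'' package, whose plug-in selectors <code>Hpi.diag</code> and <code>Hpi</code> return diagonal and unconstrained bandwidth matrices respectively:
<source lang="r">
# Minimal sketch: fit the same data with a diagonal (class D) and an
# unconstrained (class F) plug-in bandwidth matrix and compare.
library(ks)
H.D <- Hpi.diag(x = faithful)   # diagonal bandwidth matrix (class D)
H.F <- Hpi(x = faithful)        # symmetric positive definite matrix (class F)
fhat.D <- kde(x = faithful, H = H.D)
fhat.F <- kde(x = faithful, H = H.F)
</source>
The off-diagonal entries of <code>H.F</code> orient the smoothing along the correlation structure of the data, which no ''D'' class matrix can represent.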
Silverman's rule of thumb suggests using <math>\sqrt{\mathbf{H}_{ii}} = \left(\frac{4}{d+2}\right)^{\frac{1}{d+4}} n^{\frac{-1}{d+4}} \sigma_i</math>, where <math>\sigma_i</math> is the standard deviation of the ''i''th variable and <math>\mathbf{H}_{ij} = 0</math> for <math>i\neq j</math>. Scott's rule is <math>\sqrt{\mathbf{H}_{ii}} = n^{\frac{-1}{d+4}} \sigma_i</math>.
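As a worked example, here is a minimal R sketch of Silverman's rule (the function name is illustrative) for a data matrix <code>X</code> with one observation per row:
<source lang="r">
# Minimal sketch of Silverman's rule of thumb: sqrt(H_ii) scales with the
# standard deviation of variable i, and all off-diagonal entries are zero.
silverman.diag <- function(X) {
  n <- nrow(X); d <- ncol(X)
  h <- (4 / (d + 2))^(1 / (d + 4)) * n^(-1 / (d + 4)) * apply(X, 2, sd)
  diag(h^2, d)   # H_ii = h_i^2
}
</source>
Scott's rule is obtained by dropping the <code>(4 / (d + 2))^(1 / (d + 4))</code> factor.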
=== Objective and data-driven kernel selection ===
[[File:Empirical Characteristic Function.jpg|alt=An x-shaped region of empirical characteristic function in Fourier space.|thumb|Demonstration of the filter function <math>I_{\vec{A}}(\vec{t})</math>. The square of the empirical characteristic function <math>|\hat{\varphi}|^2</math> from ''N''=10,000 samples of the ‘transition distribution’; only the contiguous region of frequencies surrounding the origin that exceeds the acceptance threshold is retained by the filter.]]
Recent research has shown that the kernel and its bandwidth can both be optimally and objectively chosen from the input data itself without making any assumptions about the functional form of the PDF. The resulting estimate filters the empirical characteristic function <math>\hat{\varphi}(\vec{t})</math> of the data with an optimal kernel whose Fourier transform is
<math>\hat{\psi}_h(\vec{t}) \equiv \frac{N}{2(N-1)} \left[ 1 + \sqrt{1 - \frac{4(N-1)}{N^2 |\hat{\varphi}(\vec{t})|^2}} \right] I_{\vec{A}}(\vec{t}).</math><ref name=":22" /> The density estimate is then recovered by transforming back to real space:
<math>\hat{f}(\vec{x}) = \frac{1}{(2\pi)^d} \int \hat{\varphi}(\vec{t})\,\hat{\psi}_h(\vec{t})\, e^{-i\vec{t} \cdot \vec{x}}\,d\vec{t},</math>
where ''N'' is the number of data points, ''d'' is the number of dimensions (variables), and <math>I_{\vec{A}}(\vec{t})</math> is a filter that is equal to 1 for 'accepted frequencies' and 0 otherwise. There are various ways to define this filter function, and a simple one that works for univariate or multivariate samples is called the 'lowest contiguous hypervolume filter'; <math>I_{\vec{A}}(\vec{t})</math> is chosen such that the only accepted frequencies are a contiguous subset of frequencies surrounding the origin for which <math>|\hat{\varphi}(\vec{t})|^2 \ge 4(N-1)N^{-2}</math>.<ref name=":2" />
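As an illustration of these formulas, the following minimal univariate R sketch is a direct, slow transcription of the equations above (not the fastKDE implementation): the example data, the frequency grid limits, and the grid resolution are arbitrary choices, and the filter shown is the simple threshold without the contiguity constraint.
<source lang="r">
# Minimal univariate sketch of the self-consistent estimate: compute the
# empirical characteristic function (ECF) on a frequency grid, apply the
# threshold filter I_A, form the optimal kernel psi, and invert.
x <- rnorm(1000)                                    # example data
N <- length(x)
t.grid <- seq(-30, 30, length.out = 2048)           # frequency grid (arbitrary)
phi <- sapply(t.grid, function(t) mean(exp(1i * t * x)))  # ECF
I.A <- Mod(phi)^2 >= 4 * (N - 1) / N^2              # simplified threshold filter
psi <- N / (2 * (N - 1)) *
  (1 + sqrt(pmax(1 - 4 * (N - 1) / (N^2 * Mod(phi)^2), 0))) * I.A
dt <- t.grid[2] - t.grid[1]
fhat <- function(x0)                                # inverse Fourier transform
  Re(sum(phi * psi * exp(-1i * t.grid * x0)) * dt) / (2 * pi)
</source>
Building <code>phi</code> requires a sum over all samples at every frequency, which is the direct-transform cost that the nuFFT approximation described below avoids.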
Note that direct calculation of the ''empirical characteristic function'' (ECF) is slow, since it essentially involves a direct Fourier transform of the data samples. However, it has been found that the ECF can be approximated accurately using a [[Non-uniform discrete Fourier transform|non-uniform fast Fourier transform]] (nuFFT) method,<ref name=":1" /><ref name=":2" /> which increases the calculation speed by several orders of magnitude (depending on the dimensionality of the problem). The combination of this objective KDE method and the nuFFT-based ECF approximation has been referred to as ''[https://bitbucket.org/lbl-cascade/fastkde fastKDE]'' in the literature.<ref name=":2" />
[[File:FastKDE_example.jpg|alt=A demonstration of fastKDE relative to a sample PDF. (a) True PDF, (b) a good representation with fastKDE, and (c) a slightly blurry representation.|none|thumb|664x664px|A non-trivial mixture of normal distributions: (a) the underlying PDF, (b) a fastKDE estimate on 1,000,000 samples, and (c) a fastKDE estimate on 10,000 samples.]]
==Asymptotic analysis==
<source lang="r">
library(ks)                            # assumed setup for this excerpt
fhat <- kde(x=faithful)                # plug-in bandwidth selected by default
plot(fhat, display="filled.contour2")  # filled contour plot of the estimate
points(faithful, cex=0.5, pch=16)      # overlay the data points
</source>
==Density estimation with a diagonal bandwidth matrix==
[[File:Bivariate example.png|thumb|250px|alt=Kernel density estimate with diagonal bandwidth for synthetic normal mixture data. |Kernel density estimate with diagonal bandwidth for synthetic normal mixture data.]]