{{Short description|Concept in statistics}}
[[Kernel density estimation]] is a [[nonparametric]] technique for [[density estimation]] i.e., estimation of [[probability density function]]s, which is one of the fundamental questions in [[statistics]]. It can be viewed as a generalisation of [[histogram]] density estimation with improved statistical properties. Apart from histograms, other types of density estimators include [[parametric statistics|parametric]], [[spline interpolation|spline]], [[wavelet]] and [[Fourier series]]. Kernel density estimators were first introduced in the scientific literature for [[univariate]] data in the 1950s and 1960s<ref>{{Cite journal| doi=10.1214/aoms/1177728190 | last=Rosenblatt | first=M.| title=Remarks on some nonparametric estimates of a density function | journal=Annals of Mathematical Statistics | year=1956 | volume=27 | issue=3 | pages=832–837| doi-access=free }}</ref><ref>{{Cite journal| doi=10.1214/aoms/1177704472| last=Parzen | first=E.| title=On estimation of a probability density function and mode | journal=Annals of Mathematical Statistics| year=1962 | volume=33 | issue=3 | pages=1065–1076| doi-access=free }}</ref> and subsequently have been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to [[multivariate statistics]]. Based on research carried out in the 1990s and 2000s, '''multivariate kernel density estimation''' has reached a level of maturity comparable to its univariate counterparts.
==Motivation==
We take an illustrative [[Synthetic data|synthetic]] [[bivariate data|bivariate]] data set of 50 points to illustrate the construction of histograms. This requires the choice of an anchor point (the lower left corner of the histogram grid). For the histogram on the left, we choose (−1.5, −1.5); for the one on the right, we shift the anchor point by 0.125 in both directions to (−1.625, −1.625). Both histograms have a binwidth of 0.5, so any differences are due to the change in the anchor point only. The colour-coding indicates the number of data points which fall into a bin: 0=white, 1=pale yellow, 2=bright yellow, 3=orange, 4=red. The left histogram appears to indicate that the upper half has a higher density than the lower half, whereas the reverse is the case for the right histogram, confirming that histograms are highly sensitive to the placement of the anchor point.
[[File:Synthetic data 2D histograms.png|thumb|center|500px|alt=Left. Histogram with anchor point at (−1.5, −1.5). Right. Histogram with anchor point at (−1.625, −1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.|Comparison of 2D histograms. Left. Histogram with anchor point at (−1.5, −1.5). Right. Histogram with anchor point at (−1.625, −1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.]]
One possible solution to this anchor point placement problem is to remove the histogram binning grid completely. In the left figure below, a kernel (represented by the grey lines) is centred at each of the 50 data points above. The result of summing these kernels is given on the right figure, which is a kernel density estimate. The most striking difference between kernel density estimates and histograms is that the former are easier to interpret since they do not contain artifices induced by a binning grid.
[[File:Synthetic data 2D KDE.png|thumb|center|500px|alt=Left. Individual kernels. Right. Kernel density estimate.|Construction of 2D kernel density estimate. Left. Individual kernels. Right. Kernel density estimate.]]
The goal of density estimation is to take a finite sample of data and to make inferences about the underlying probability density function everywhere, including where no data are observed. In kernel density estimation, the contribution of each data point is smoothed out from a single point into a region of space surrounding it. Aggregating the individually smoothed contributions gives an overall picture of the structure of the data and its density function. In the details to follow, we show that this approach leads to a reasonable estimate of the underlying density function.
==Definition==
The previous figure is a graphical representation of kernel density estimate, which we now define in an exact manner. Let '''x'''<sub>1</sub>, '''x'''<sub>2</sub>, ..., '''x'''<sub>''n''</sub> be a sample of ''d''-variate [[random vector]]s drawn from a common distribution described by the [[probability density function|density function]] ''ƒ''. The kernel density estimate is defined to be
: <math>
\hat{f}_\mathbf{H}(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^n K_\mathbf{H} (\mathbf{x} - \mathbf{x}_i)
</math>
where
* '''x''' = (''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x''<sub>''d''</sub>)<sup>''T''</sup>, '''x'''<sub>''i''</sub> = (''x''<sub>''i''1</sub>, ''x''<sub>''i''2</sub>, ..., ''x''<sub>''id''</sub>)<sup>''T''</sup>, ''i'' = 1, 2, ..., ''n'' are ''d''-vectors;
* '''H''' is the bandwidth (or smoothing) ''d×d'' matrix which is [[symmetric matrix|symmetric]] and [[positive definite matrix|positive definite]];
* ''K'' is the [[kernel (statistics)|kernel]] function which is a symmetric multivariate density;
* <math>K_\mathbf{H}(\mathbf{x})=|\mathbf{H}|^{-1/2}K(\mathbf{H}^{-1/2}\mathbf{x} )</math>.
The choice of the kernel function ''K'' is not crucial to the accuracy of kernel density estimators, so we use the standard [[multivariate normal distribution|multivariate normal]] kernel throughout: <math display="inline">K_\mathbf{H}(\mathbf{x}) = (2\pi)^{-d/2} |\mathbf{H}|^{-1/2} e^{-\frac{1}{2}\mathbf{x}^T \mathbf{H}^{-1} \mathbf{x}}</math>, where '''H''' plays the role of the [[covariance matrix]]. On the other hand, the choice of the bandwidth matrix '''H''' is the single most important factor affecting the accuracy of the estimator, since it controls the amount and orientation of the smoothing induced. The three main parametrisation classes of bandwidth matrices, in increasing order of complexity, are ''S'', positive scalars times the identity matrix; ''D'', diagonal matrices with positive entries on the main diagonal; and ''F'', symmetric positive definite matrices. The ''S'' class applies the same amount of smoothing in all coordinate directions, ''D'' kernels allow different amounts of smoothing in each coordinate, and ''F'' kernels allow arbitrary amounts and orientations of smoothing.
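The definition above translates directly into code. The following is a minimal sketch in [[R (programming language)|R]] (the function <code>kde_at</code> and its arguments are illustrative, not part of any package); optimised implementations such as the ks package discussed below should be preferred in practice.
<syntaxhighlight lang="r">
# Evaluate the kernel density estimate at a point x directly from the
# definition, using the standard multivariate normal kernel.
# X: n x d data matrix; H: symmetric positive definite d x d bandwidth
# matrix; x: evaluation point (length-d vector).
kde_at <- function(x, X, H) {
  d <- ncol(X)
  Hinv <- solve(H)
  # K_H(u) = (2*pi)^(-d/2) |H|^(-1/2) exp(-u' H^(-1) u / 2)
  k <- apply(X, 1, function(xi) {
    u <- x - xi
    (2 * pi)^(-d / 2) * det(H)^(-1 / 2) * exp(-0.5 * t(u) %*% Hinv %*% u)
  })
  mean(k)   # (1/n) * sum of the kernel contributions
}

set.seed(1)
X <- matrix(rnorm(100), ncol = 2)   # 50 bivariate normal points
kde_at(c(0, 0), X, diag(0.5, 2))    # scalar bandwidth (class S)
</syntaxhighlight>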
[[File:Kernel parametrisation class.png|thumb|center|500px|alt=Comparison of the three main bandwidth matrix parametrisation classes. Left. S positive scalar times the identity matrix. Centre. D diagonal matrix with positive entries on the main diagonal. Right. F symmetric positive definite matrix.|Comparison of the three main bandwidth matrix parametrisation classes. Left. ''S'' positive scalar times the identity matrix. Centre. ''D'' diagonal matrix with positive entries on the main diagonal. Right. ''F'' symmetric positive definite matrix.]]
==Optimal bandwidth matrix selection==
The most commonly used optimality criterion for selecting a bandwidth matrix is the MISE or [[mean integrated squared error]]
: <math>\operatorname{MISE} (\mathbf{H}) = \operatorname{E}\!\left[\, \int (\hat{f}_\mathbf{H} (\mathbf{x}) - f(\mathbf{x}))^2 \, d\mathbf{x} \right].</math>
This in general does not possess a [[closed-form expression]], so it is usual to use its asymptotic approximation (AMISE) as a proxy
: <math>\operatorname{AMISE} (\mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2
(\operatorname{vec}^T \mathbf{H}) \mathbf{\Psi}_4 (\operatorname{vec} \, \mathbf{H})</math>
where
* <math>R(K) = \int K(\mathbf{x})^2 \, d\mathbf{x}</math>, with {{nowrap|''R''(''K'') {{=}} (4''π'')<sup>''−d''/2</sup>}} when ''K'' is a normal kernel
* <math>\int \mathbf{x} \mathbf{x}^T K(\mathbf{x}) \, d\mathbf{x} = m_2(K) \mathbf{I}_d</math>,
:with '''I'''<sub>''d''</sub> being the ''d × d'' [[identity matrix]], with ''m''<sub>2</sub> = 1 for the normal kernel
* <math>\operatorname{D}^2 f</math> is the ''d × d'' Hessian matrix of second order partial derivatives of ''f''
* <math>\mathbf{\Psi}_4 = \int (\operatorname{vec} \, \operatorname{D}^2 f(\mathbf{x})) (\operatorname{vec}^T \operatorname{D}^2 f(\mathbf{x})) \, d\mathbf{x}</math> is a ''d''<sup>2</sup> ''× d''<sup>2</sup> matrix of integrated fourth order partial derivatives of ''f''
* vec is the vector operator which stacks the columns of a matrix into a single vector e.g. <math>\operatorname{vec}\begin{bmatrix}a & c \\ b & d\end{bmatrix} = \begin{bmatrix}a & b & c & d\end{bmatrix}^T.</math>
The quality of the AMISE approximation to the MISE<ref name="WJ1995"/>{{rp|97}} is given by
: <math>\operatorname{MISE} (\mathbf{H}) = \operatorname{AMISE} (\mathbf{H}) + o(n^{-1} |\mathbf{H}|^{-1/2} + \operatorname{tr} \, \mathbf{H}^2)</math>
where ''o'' indicates the usual [[big O notation|small o notation]]. Heuristically this statement implies that the AMISE is a 'good' approximation of the MISE as the sample size ''n'' → ∞.
It can be shown that any reasonable bandwidth selector '''H''' has '''H''' = ''O''(''n''<sup>−2/(''d''+4)</sup>), where the [[big O notation]] is applied elementwise. Substituting this into the MISE formula yields that the optimal MISE is ''O''(''n''<sup>−4/(''d''+4)</sup>), so the kernel density estimate converges in mean square, and thus also in probability, to the true density ''f'' as ''n'' → ∞. An ideal optimal bandwidth selector is
: <math>\mathbf{H}_{\operatorname{AMISE}} = \operatorname{argmin}_{\mathbf{H} \in F} \, \operatorname{AMISE} (\mathbf{H}).</math>
Since this ideal selector contains the unknown density function ''ƒ'', it cannot be used directly. The many different varieties of data-based bandwidth selectors arise from the different estimators of the AMISE. We concentrate on two classes of selectors which have been shown to be the most widely applicable in practice: smoothed cross validation and plug-in selectors.
===Plug-in===
The plug-in (PI) estimate of the AMISE is formed by replacing '''Ψ'''<sub>4</sub> by its estimator <math>\hat{\mathbf{\Psi}}_4</math>
: <math>\operatorname{PI}(\mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2
(\operatorname{vec}^T \mathbf{H}) \hat{\mathbf{\Psi}}_4 (\mathbf{G}) (\operatorname{vec} \, \mathbf{H})</math>
where <math>\hat{\mathbf{\Psi}}_4 (\mathbf{G}) = n^{-2}\sum_{i=1}^n
\sum_{j=1}^n [(\operatorname{vec} \, \operatorname{D}^2) (\operatorname{vec}^T \operatorname{D}^2)] K_\mathbf{G} (\mathbf{X}_i - \mathbf{X}_j)</math>. Thus <math>\hat{\mathbf{H}}_{\operatorname{PI}} = \operatorname{argmin}_{\mathbf{H} \in F} \, \operatorname{PI} (\mathbf{H})</math> is the plug-in selector.
===Smoothed cross validation===
Smoothed cross validation (SCV) is a subset of a larger class of [[cross-validation (statistics)|cross validation]] techniques. The SCV estimator differs from the plug-in estimator in the second term
: <math>\operatorname{SCV}(\mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) +
n^{-2} \sum_{i=1}^n \sum_{j=1}^n (K_{2\mathbf{H} + 2\mathbf{G}} - 2 K_{\mathbf{H} + 2\mathbf{G}}
+ K_{2\mathbf{G}}) (\mathbf{X}_i - \mathbf{X}_j)</math>
Thus <math>\hat{\mathbf{H}}_{\operatorname{SCV}} = \operatorname{argmin}_{\mathbf{H} \in F} \, \operatorname{SCV} (\mathbf{H})</math> is the SCV selector.
These references also contain algorithms on optimal estimation of the pilot bandwidth matrix '''G''' and establish that <math>\hat{\mathbf{H}}_{\operatorname{SCV}}</math> converges in probability to <math>\mathbf{H}_{\operatorname{AMISE}}</math>.
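Both selectors are implemented in the ks package for R as the functions <code>Hpi()</code> and <code>Hscv()</code>; a minimal usage sketch (the data matrix <code>x</code> is illustrative):
<syntaxhighlight lang="r">
library(ks)
x <- matrix(rnorm(200), ncol = 2)   # illustrative 100 x 2 data matrix
H.pi  <- Hpi(x = x)    # plug-in bandwidth matrix
H.scv <- Hscv(x = x)   # smoothed cross validation bandwidth matrix
</syntaxhighlight>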
=== Rule of thumb ===
Silverman's rule of thumb suggests using <math>\sqrt{\mathbf{H}_{ii}} = \left(\frac{4}{d+2}\right)^{\frac{1}{d+4}} n^{\frac{-1}{d+4}} \sigma_i</math>, where <math>\sigma_i</math> is the standard deviation of the ''i''th variable and <math>d</math> is the number of dimensions, and <math>\mathbf{H}_{ij} = 0, i\neq j</math>. Scott's rule is <math>\sqrt{\mathbf{H}_{ii}} = n^{\frac{-1}{d+4}} \sigma_i</math>.
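Both rules yield diagonal bandwidth matrices and can be implemented in a few lines; the following R sketch (the function name <code>rot_bandwidth</code> is illustrative) returns '''H''' itself, i.e. the squares of the formulas above on the diagonal.
<syntaxhighlight lang="r">
# Rule-of-thumb diagonal bandwidth matrix for an n x d data matrix X.
rot_bandwidth <- function(X, rule = c("silverman", "scott")) {
  rule <- match.arg(rule)
  n <- nrow(X); d <- ncol(X)
  s <- apply(X, 2, sd)       # per-coordinate standard deviations
  scale <- if (rule == "silverman") (4 / (d + 2))^(1 / (d + 4)) else 1
  h <- scale * n^(-1 / (d + 4)) * s   # sqrt(H[i,i]) from the rules above
  diag(h^2, d)               # diagonal H, zero off-diagonal entries
}

X <- matrix(rnorm(1000), ncol = 2)
rot_bandwidth(X, "silverman")
</syntaxhighlight>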
==Asymptotic analysis==
In the optimal bandwidth selection section, we introduced the MISE. Its construction relies on the [[expected value]] and the [[variance]] of the density estimator<ref name="WJ1995"/>{{rp|97}}
:<math>\operatorname{E} \hat{f}(\mathbf{x};\mathbf{H}) = K_\mathbf{H} * f (\mathbf{x}) = f(\mathbf{x}) + \frac{1}{2} m_2(K) \operatorname{tr} (\mathbf{H} \operatorname{D}^2 f(\mathbf{x})) + O(\operatorname{tr} \, \mathbf{H}^2)</math>
where * is the [[convolution]] operator between two functions, and
:<math>\operatorname{Var} \hat{f}(\mathbf{x};\mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) f(\mathbf{x}) + o(n^{-1} |\mathbf{H}|^{-1/2}).</math>
For these two expressions to be well-defined, we require that all elements of '''H''' tend to 0 and that ''n''<sup>−1</sup>|'''H'''|<sup>−1/2</sup> tends to 0 as ''n'' tends to infinity. Assuming these two conditions, we see that the expected value tends to the true density ''f'', i.e. the kernel density estimator is asymptotically [[bias of an estimator|unbiased]], and that the variance tends to zero. Using the standard mean squared value decomposition
:<math>\operatorname{MSE} \, \hat{f}(\mathbf{x};\mathbf{H}) = \operatorname{Var} \hat{f}(\mathbf{x};\mathbf{H}) + [\operatorname{E} \hat{f}(\mathbf{x};\mathbf{H}) - f(\mathbf{x})]^2</math>
we have that the MSE tends to 0, implying that the kernel density estimator is (mean square) consistent and hence converges in probability to the true density ''f''. The rate of convergence of the MSE to 0 is necessarily the same as the MISE rate noted previously, ''O''(''n''<sup>−4/(''d''+4)</sup>), hence the rate of convergence of the density estimator to ''f'' is ''O<sub>p</sub>''(''n''<sup>−2/(''d''+4)</sup>), where ''O<sub>p</sub>'' denotes order in probability. This establishes pointwise convergence. The functional convergence is established similarly, considering the behaviour of the MISE and noting that, under sufficient regularity, integration does not affect the convergence rates.
For the data-based bandwidth selectors considered, the target is the AMISE bandwidth matrix. We say that a data-based selector converges to the AMISE selector at relative rate ''O<sub>p</sub>''(''n''<sup>−''α''</sup>), ''α'' > 0, if
:<math>\operatorname{vec} (\hat{\mathbf{H}} - \mathbf{H}_{\operatorname{AMISE}}) = O(n^{-2\alpha}) \operatorname{vec} \mathbf{H}_{\operatorname{AMISE}}.</math>
It has been established that the plug-in and smoothed cross validation selectors (given a single pilot bandwidth '''G''') both converge at a relative rate of ''O<sub>p</sub>''(''n''<sup>−2/(''d''+6)</sup>), i.e., both these data-based selectors are consistent estimators.
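These rates can be illustrated by simulation. The sketch below (illustrative, using the ks package and a standard bivariate normal target) approximates the integrated squared error on a grid for increasing sample sizes; the fitted log-log slope should be near −4/(''d''+4) = −2/3 for ''d'' = 2.
<syntaxhighlight lang="r">
library(ks)
# Integrated squared error of a plug-in KDE of the standard bivariate normal,
# approximated by a Riemann sum on a grid.
ise <- function(n) {
  X <- matrix(rnorm(2 * n), ncol = 2)
  g <- seq(-4, 4, length.out = 51)
  ev <- as.matrix(expand.grid(g, g))                 # evaluation grid
  fhat <- kde(x = X, H = Hpi(X), eval.points = ev)$estimate
  f <- dnorm(ev[, 1]) * dnorm(ev[, 2])               # true density
  sum((fhat - f)^2) * diff(g)[1]^2
}
n <- c(100, 400, 1600)
mise <- sapply(n, function(m) mean(replicate(5, ise(m))))
coef(lm(log(mise) ~ log(n)))[2]   # slope, approximately -4/(d+4)
</syntaxhighlight>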
==Density estimation with a full bandwidth matrix==
[[File:Old Faithful Geyser KDE with plugin bandwidth.png|thumb|250px|alt=Old Faithful Geyser data kernel density estimate with plug-in bandwidth matrix.|Old Faithful Geyser data kernel density estimate with plug-in bandwidth matrix.]]
The [https://cran.r-project.org/web/packages/ks/index.html ks package] in [[R (programming language)|R]] implements the plug-in and smoothed cross validation selectors (amongst others). The Old Faithful dataset (included in the base distribution of R) contains
272 records with two measurements each: the duration time of an eruption (minutes) and the
waiting time until the next eruption (minutes) of the [[Old Faithful Geyser]] in Yellowstone National Park, USA.
The code fragment below computes the kernel density estimate with the plug-in bandwidth matrix <math>\hat{\mathbf{H}}_{\operatorname{PI}}</math> and plots the resulting filled contours. To compute the SCV selector instead, <code>Hpi</code> is replaced with <code>Hscv</code>; the result is not displayed here since it is mostly similar to the plug-in estimate for this example.
<syntaxhighlight lang="r">
library(ks)                    # kernel smoothing package
data(faithful)                 # Old Faithful data, included in base R
H <- Hpi(x=faithful)           # plug-in bandwidth matrix
fhat <- kde(x=faithful, H=H)   # kernel density estimate
plot(fhat, display="filled.contour")   # filled contour plot
</syntaxhighlight>
==Density estimation with a diagonal bandwidth matrix==
[[File:Bivariate example.png|thumb|250px|alt=Kernel density estimate with diagonal bandwidth for synthetic normal mixture data. |Kernel density estimate with diagonal bandwidth for synthetic normal mixture data.]]
We consider estimating the density of the Gaussian mixture
{{nowrap|(4''π'')<sup>−1</sup> exp(−{{frac|2}} (''x''<sub>1</sub><sup>2</sup> + ''x''<sub>2</sub><sup>2</sup>))
+ (4''π'')<sup>−1</sup> exp(−{{frac|2}} ((''x''<sub>1</sub> − 3.5)<sup>2</sup> + ''x''<sub>2</sub><sup>2</sup>))}},
from 500 randomly generated points. We employ the Matlab routine for [http://www.mathworks.com/matlabcentral/fileexchange/17204-kernel-density-estimation 2-dimensional data], an automatic bandwidth selection method designed for a second order Gaussian kernel.<ref>{{cite journal
| author1 = Botev, Z.I.
| author2 = Grotowski, J.F.
| author3 = Kroese, D.P.
| title = Kernel density estimation via diffusion
| journal = Annals of Statistics
| volume = 38
| issue = 5
| pages = 2916–2957
| year = 2010
| doi = 10.1214/10-AOS799
| arxiv = 1011.2602}}
</ref>
The figure shows the joint density estimate that results from using the automatically selected bandwidth.
Type the following commands in Matlab after downloading and saving the function kde2d.m in the current directory.
<syntaxhighlight lang="matlab">
% Gaussian mixture data with modes at (0,0) and (3.5,0)
data=[randn(500,2); randn(500,1)+3.5, randn(500,1)];
% automatic bandwidth selection and density estimate
[bandwidth,density,X,Y]=kde2d(data);
% plot the density estimate together with the data
contour3(X,Y,density,50), hold on
plot(data(:,1),data(:,2),'r.','MarkerSize',5)
</syntaxhighlight>
==Alternative optimality criteria==
The MISE is the expected integrated ''L''<sub>2</sub> distance between the density estimate and the true density function ''f''. It is the most widely used criterion, mostly due to its tractability, and most software implements MISE-based bandwidth selectors.
There are alternative optimality criteria, which attempt to cover cases where MISE is not an appropriate measure.<ref name="simonoff1996" /> The equivalent ''L''<sub>1</sub> measure, Mean Integrated Absolute Error, is
: <math>\operatorname{MIAE} (\mathbf{H}) = \operatorname{E} \int |\hat{f}_\mathbf{H} (\mathbf{x}) - f(\mathbf{x})| \, d\mathbf{x}.</math>
Its mathematical analysis is considerably more difficult than that of the MISE. In practice, the gain appears not to be significant. The ''L''<sub>∞</sub> norm is the Mean Uniform Absolute Error
: <math>\operatorname{MUAE} (\mathbf{H}) = \operatorname{E} \, \operatorname{sup}_{\mathbf{x}} |\hat{f}_\mathbf{H} (\mathbf{x}) - f(\mathbf{x})|,</math>
which has been investigated only briefly.<ref>{{cite journal | author1=Cao, R. | author2=Cuevas, A. | author3=Manteiga, W.G.| title=A comparative study of several smoothing methods in density estimation | journal = Computational Statistics and Data Analysis | year=1994 | volume=17 | issue=2 | pages=153–176 | doi=10.1016/0167-9473(92)00066-Z}}</ref> Likelihood error criteria include those based on the Mean [[Kullback–Leibler divergence]]
: <math>\operatorname{MKL} (\mathbf{H}) = \int f(\mathbf{x}) \, \log [f(\mathbf{x})] \, d\mathbf{x} - \operatorname{E} \int f(\mathbf{x}) \, \log [\hat{f}_\mathbf{H}(\mathbf{x})] \, d\mathbf{x}</math>
and the Mean [[Hellinger distance]]
: <math>\operatorname{MH} (\mathbf{H}) = \operatorname{E} \int (\hat{f}_\mathbf{H} (\mathbf{x})^{1/2} - f(\mathbf{x})^{1/2})^2 \, d\mathbf{x}.</math>
The KL can be estimated using a cross-validation method, although KL cross-validation selectors can be sub-optimal even if they remain [[Consistent estimator|consistent]] for bounded density functions.<ref>{{cite journal | author=Hall, P. | title=On Kullback-Leibler loss and density estimation | journal=Annals of Statistics | volume=15 | issue=4 | year=1989 | pages=589–605 | doi=10.1214/aos/1176350606| doi-access=free }}</ref> MH selectors have been briefly examined in the literature.<ref>{{cite journal | author1=Ahmad, I.A. | author2=Mugdadi, A.R. | title=Weighted Hellinger distance as an error criterion for bandwidth selection in kernel estimation | journal=Journal of Nonparametric Statistics | volume=18 | issue=2 | year=2006 | pages=215–226 | doi=10.1080/10485250600712008}}</ref>
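As an illustration of the cross-validation idea for the KL criterion, the following univariate R sketch (function names are illustrative) selects the bandwidth maximising the leave-one-out log-likelihood, which estimates the second term of the MKL up to a constant that does not depend on '''H''':
<syntaxhighlight lang="r">
# Leave-one-out likelihood cross validation for a univariate Gaussian KDE.
loo_loglik <- function(h, x) {
  n <- length(x)
  K <- dnorm(outer(x, x, "-"), sd = h)   # kernel evaluations K_h(x_i - x_j)
  diag(K) <- 0                           # leave observation i out of fhat_{-i}
  sum(log(rowSums(K) / (n - 1)))         # sum_i log fhat_{-i}(x_i)
}

x <- rnorm(200)
h.kl <- optimize(loo_loglik, interval = c(0.05, 2), x = x,
                 maximum = TRUE)$maximum
</syntaxhighlight>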
All these optimality criteria are distance-based measures, and do not always correspond to more intuitive notions of closeness, so more visual criteria have been developed in response to this concern.<ref>{{cite journal | author1=Marron, J.S. | author2=Tsybakov, A. | title=Visual error criteria for qualitative smoothing | journal = Journal of the American Statistical Association | year=1996 | volume=90 | issue=430 | pages=499–507}}</ref>
== Objective and data-driven kernel selection ==
[[File:Empirical Characteristic Function.jpg|alt=An x-shaped region of empirical characteristic function in Fourier space.|thumb|Demonstration of the filter function <math>I_{\vec{A}}(\vec{t})</math>. The square of the empirical characteristic function <math>|\hat{\varphi}|^2</math> from ''N''=10,000 samples of the ‘transition distribution’ discussed in Section 3.2 (and shown in Fig. 4), for <math>|\hat{\varphi}|^2 \ge 4(N-1)N^{-2}</math>. There are two color schemes present in this figure. The predominantly dark, multicolored ‘X-shaped’ region in the center corresponds to values of <math>|\hat{\varphi}|^2</math> for the lowest contiguous hypervolume (the area containing the origin); the colorbar at right applies to colors in this region. The lightly colored, monotone areas away from the first contiguous hypervolume correspond to additional contiguous hypervolumes (areas) with <math>|\hat{\varphi}|^2 \ge 4(N-1)N^{-2}</math>. The colors of these areas are arbitrary and only serve to visually differentiate nearby contiguous areas from one another.]]
Recent research has shown that the kernel and its bandwidth can both be optimally and objectively chosen from the input data itself without making any assumptions about the form of the distribution.<ref name=":0">{{Cite journal|last = Bernacchia|first = Alberto|last2 = Pigolotti|first2 = Simone|date = 2011-06-01|title = Self-consistent method for density estimation|journal = Journal of the Royal Statistical Society, Series B|language = en|volume = 73|issue = 3|pages = 407–422|doi = 10.1111/j.1467-9868.2011.00772.x|issn = 1467-9868|arxiv = 0908.3856}}</ref> The resulting kernel density estimate converges rapidly to the true probability distribution as samples are added: at a rate close to the <math>n^{-1}</math> expected for parametric estimators.<ref name=":0" /><ref name=":1">{{Cite journal|last = O’Brien|first = Travis A.|last2 = Collins|first2 = William D.|last3 = Rauscher|first3 = Sara A.|last4 = Ringler|first4 = Todd D.|date = 2014-11-01|title = Reducing the computational cost of the ECF using a nuFFT: A fast and objective probability density estimation method|journal = Computational Statistics & Data Analysis|volume = 79|pages = 222–234|doi = 10.1016/j.csda.2014.06.002|doi-access = free}}</ref><ref name=":22">{{Cite journal|last = O’Brien|first = Travis A.|last2 = Kashinath|first2 = Karthik|last3 = Cavanaugh|first3 = Nicholas R.|last4 = Collins|first4 = William D.|last5 = O’Brien|first5 = John P.|title = A fast and objective multidimensional kernel density estimation method: fastKDE|journal = Computational Statistics & Data Analysis|volume = 101|pages = 148–160|doi = 10.1016/j.csda.2016.02.014|year = 2016|url = https://escholarship.org/content/qt9g56181p/qt9g56181p.pdf?t=p7qvyp|doi-access = free}}</ref> This kernel estimator works for univariate and multivariate samples alike. The optimal kernel is defined in Fourier space, as the optimal damping function <math>\hat{\psi_h}(\vec{t})</math> (the Fourier transform of the kernel <math>K(\vec{x})</math>), in terms of the Fourier transform of the data <math>\hat{\varphi}(\vec{t})</math>, the ''[[Characteristic function (probability theory)|empirical characteristic function]]'' (see [[Kernel density estimation]]):
<math>\hat{\psi_h}(\vec{t}) \equiv \frac{N}{2(N-1)} \left[ 1 + \sqrt{1 - \frac{4(N-1)}{N^2 |\hat{\varphi}(\vec{t})|^2}} I_{\vec{A}}(\vec{t}) \right]</math> <ref name=":22"/>
<math>\hat{f}(\vec{x}) = \frac{1}{(2\pi)^d} \int \hat\varphi(\vec{t}) \, \hat{\psi_h}(\vec{t}) \, e^{-i\vec{t} \cdot \vec{x}} \, d\vec{t}</math>
where ''N'' is the number of data points, ''d'' is the number of dimensions (variables), and <math>I_{\vec{A}}(\vec{t})</math> is a filter that is equal to 1 for 'accepted frequencies' and 0 otherwise. There are various ways to define this filter function, and a simple one that works for univariate or multivariate samples is called the 'lowest contiguous hypervolume filter'; <math>I_{\vec{A}}(\vec{t})</math> is chosen such that the only accepted frequencies are a contiguous subset of frequencies surrounding the origin for which <math>|\hat{\varphi}(\vec{t})|^2 \ge 4(N-1)N^{-2}</math> (see <ref name=":22"/> for a discussion of this and other filter functions).
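As a rough univariate sketch of these formulas in R (illustrative, not the fastKDE implementation; the simple threshold filter below ignores the contiguity refinement), the ECF and the damping function can be computed by direct summation on a frequency grid:
<syntaxhighlight lang="r">
set.seed(1)
x <- rnorm(1000)                         # sample
N <- length(x)
freq <- seq(-20, 20, length.out = 512)   # frequency grid
# Empirical characteristic function by direct summation (slow; fastKDE
# approximates this step with a non-uniform FFT).
phi <- sapply(freq, function(t1) mean(exp(1i * t1 * x)))

thresh <- 4 * (N - 1) / N^2
A <- Mod(phi)^2 >= thresh                # simple threshold filter I_A(t)
# Damping function; frequencies outside the filter are discarded.
psi <- ifelse(A, N / (2 * (N - 1)) *
                 (1 + sqrt(pmax(0, 1 - thresh / Mod(phi)^2))), 0)
# The density estimate is the inverse Fourier transform of phi * psi.
</syntaxhighlight>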
Note that direct calculation of the ''empirical characteristic function'' (ECF) is slow, since it essentially involves a direct Fourier transform of the data samples. However, it has been found that the ECF can be approximated accurately using a [[Non-uniform discrete Fourier transform|non-uniform fast Fourier transform]] (nuFFT) method,<ref name=":1" /><ref name=":22"/> which increases the calculation speed by several orders of magnitude (depending on the dimensionality of the problem). The combination of this objective KDE method and the nuFFT-based ECF approximation has been referred to as ''[https://github.com/LBL-EESA/fastkde fastKDE]'' in the literature.<ref name=":22"/>
[[File:FastKDE_example.jpg|alt=A demonstration of fastKDE relative to a sample PDF. (a) True PDF, (b) a good representation with fastKDE, and (c) a slightly blurry representation.|none|thumb|664x664px|A non-trivial mixture of normal distributions: (a) the underlying PDF, (b) a fastKDE estimate on 1,000,000 samples, and (c) a fastKDE estimate on 10,000 samples.]]
==See also==
* [[Kernel density estimation]] – univariate kernel density estimation.
* [[Variable kernel density estimation]] – estimation of multivariate densities using the kernel with variable bandwidth
==References==
{{Reflist}}
==External links==
* [http://www.mvstat.net/mvksa <em>Multivariate Kernel Smoothing and Its Applications</em>] is a comprehensive book on many topics in kernel smoothing, including density estimation. Includes [https://cran.r-project.org/web/packages/ks/index.html ks package] code snippets in [[R (programming language)|R]].
* [http://www.mathworks.com/matlabcentral/fileexchange/17204-kernel-density-estimation kde2d.m] A [[Matlab]] function for bivariate kernel density estimation.
* [http://libagf.sf.net libagf] A [[C++]] library for multivariate, [[variable bandwidth kernel density estimation]].
* [http://www.mathworks.com/matlabcentral/fileexchange/58312-kernel-density-estimator-for-high-dimensions akde.m] A [[Matlab]] m-file for multivariate, [[variable bandwidth kernel density estimation]].
* [https://github.com/thaines/helit/tree/master/ms helit] and [http://pythonhosted.org/PyQt-Fit/mod_kde.html pyqt_fit.kde Module] in the [https://pypi.python.org/packages/source/P/PyQt-Fit/PyQt-Fit-1.3.4.tar.gz PyQt-Fit package] are [[Python (programming language)|Python]] libraries for multivariate kernel density estimation.
[[Category:Estimation of densities]]
[[Category:Nonparametric statistics]]
[[Category:Computational statistics]]
[[Category:Multivariate statistics]]
[[Category:Articles with example MATLAB/Octave code]]