{{mergeto|Kernel density estimation|date=September 2010}}
[[Kernel density estimation]] is a [[nonparametric]] technique for [[density estimation]], i.e., the estimation of [[probability density function]]s, which is one of the fundamental questions in [[statistics]]. It can be viewed as a generalisation of [[histogram]] density estimation with improved statistical properties. Apart from histograms, other types of density estimators include [[parametric statistics|parametric]], [[spline interpolation|spline]], [[wavelet]] and [[Fourier series]] estimators. Kernel density estimators were first introduced in the scientific literature for [[univariate]] data in the 1950s and 1960s<ref>{{Cite journal| doi=10.1214/aoms/1177728190 | last=Rosenblatt | first=M. | title=Remarks on some nonparametric estimates of a density function | journal=Annals of Mathematical Statistics | year=1956 | volume=27 | pages=832–837}}</ref><ref>{{Cite journal| doi=10.1214/aoms/1177704472 | last=Parzen | first=E. | title=On estimation of a probability density function and mode | journal=Annals of Mathematical Statistics | year=1962 | volume=33 | pages=1065–1076}}</ref> and have subsequently been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to [[multivariate statistics]]. Based on research carried out in the 1990s and 2000s, multivariate kernel density estimation has reached a level of maturity comparable to that of its univariate counterparts.<ref>{{Cite book| author=Simonoff, J.S. | title=Smoothing Methods in Statistics | publisher=Springer | date=1996 | isbn=0387947167}}</ref>
==Motivation==
We take an illustrative synthetic bivariate data set of 50 points to illustrate the construction of histograms. This requires the choice of an anchor point (the lower left corner of the histogram grid). For the histogram on the left the anchor point is (−1.5, −1.5), whereas for the histogram on the right it is shifted by 0.125 in both directions to (−1.625, −1.625). Both histograms have a bin width of 0.5, so any differences in appearance are due solely to the placement of the anchor point. This sensitivity to the anchor point is a fundamental weakness of histograms.
[[Image:Synthetic data 2D histograms.png|thumb|center|500px|alt=Left. Histogram with anchor point at (−1.5, −1.5). Right. Histogram with anchor point at (−1.625, −1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.|Comparison of 2D histograms. Left. Histogram with anchor point at (−1.5, −1.5). Right. Histogram with anchor point at (−1.625, −1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.]]
One possible solution to this anchor point placement problem is to remove the histogram binning grid completely. In the left figure below, a kernel (represented by the dashed grey lines) is centred at each of the 50 data points above. The result of summing these kernels is given in the right figure, which is a kernel density estimate. The most striking difference between kernel density estimates and histograms is that the former are easier to interpret since they do not contain artefacts induced by a binning grid.
The coloured contours correspond to the smallest regions which contain the respective probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%, thus indicating that a single central region contains the highest density.
[[Image:Synthetic data 2D KDE.png|thumb|center|500px|alt=Left. Individual kernels. Right. Kernel density estimate.|Construction of 2D kernel density estimate. Left. Individual kernels. Right. Kernel density estimate.]]
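To make this construction concrete, the following is a minimal sketch in Python (the synthetic sample, bandwidth ''h'' = 0.5 and evaluation grid are illustrative stand-ins, not the article's actual 50-point data set). It sums a standard bivariate normal kernel centred at each data point and then locates the density heights whose upper level sets contain 25%, 50% and 75% of the probability mass, as in the coloured contours above.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))        # stand-in for the 50-point data set

h = 0.5                                 # scalar bandwidth, i.e. H = h^2 * I
xs = np.linspace(-4, 4, 161)
dx = xs[1] - xs[0]
gx, gy = np.meshgrid(xs, xs)
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

# Kernel density estimate: average of normal kernels centred at the data.
diff = (grid[:, None, :] - X[None, :, :]) / h
dens = np.exp(-0.5 * (diff ** 2).sum(axis=2)).sum(axis=1)
dens /= len(X) * 2 * np.pi * h ** 2     # (2*pi)^(-d/2) = 1/(2*pi) for d = 2

# Heights of the smallest regions containing 25/50/75% of the mass:
# sort density values from high to low and accumulate probability.
order = np.sort(dens)[::-1]
cum = np.cumsum(order) * dx * dx
levels = [order[np.searchsorted(cum, p)] for p in (0.25, 0.50, 0.75)]
</syntaxhighlight>
Contouring <code>dens</code> at <code>levels</code> reproduces the nested 25%/50%/75% regions of the right-hand figure.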
==Definition==
The previous figure is a graphical representation of a kernel density estimate, which we now define in an exact manner. Let '''X'''<sub>1</sub>, '''X'''<sub>2</sub>, …, '''X'''<sub>''n''</sub> be a sample of ''d''-variate random vectors drawn from a common distribution described by the density function ''ƒ''. The kernel density estimate is defined to be
: <math>\hat{f}_\bold{H}(\bold{x}) = n^{-1} \sum_{i=1}^n K_\bold{H} (\bold{x} - \bold{X}_i)</math>
where
* {{nowrap|'''x''' {{=}} (''x''<sub>1</sub>, ''x''<sub>2</sub>, …, ''x''<sub>''d''</sub>)<sup>''T''</sup>}} and {{nowrap|'''X'''<sub>''i''</sub> {{=}} (''X''<sub>''i''1</sub>, ''X''<sub>''i''2</sub>, …, ''X''<sub>''id''</sub>)<sup>''T''</sup>, ''i'' {{=}} 1, 2, …, ''n''}} are ''d''-vectors;
* '''H''' is the bandwidth (or smoothing) ''d''×''d'' matrix which is [[symmetric matrix|symmetric]] and [[positive-definite matrix|positive definite]];
* ''K'' is the [[kernel (statistics)|kernel]] function which is a symmetric multivariate density;
* {{nowrap|''K''<sub>'''H'''</sub>('''x''') {{=}} {{!}}'''H'''{{!}}<sup>−1/2</sup> ''K''('''H'''<sup>−1/2</sup>'''x''')}}.
The choice of the kernel function ''K'' is not crucial to the accuracy of kernel density estimators, so we use the standard [[multivariate normal distribution|multivariate normal]] kernel throughout: {{nowrap|''K''('''x''') {{=}} (2''π'')<sup>−''d''/2</sup> exp(−{{frac|2}}'''x'''<sup>''T''</sup>'''x''')}}. In contrast, the choice of the bandwidth matrix '''H''' is the single most important factor affecting accuracy, since it controls the amount and orientation of the smoothing induced.<ref name="WJ1995">{{Cite book| author1=Wand, M.P | author2=Jones, M.C. | title=Kernel Smoothing | publisher=Chapman & Hall/CRC | ___location=London | date=1995 | isbn = 0412552701}}</ref>{{rp|36–39}} That the bandwidth matrix also induces an orientation is a basic difference between multivariate kernel density estimation and its univariate analogue, since orientation is not defined for 1D kernels. This leads to the choice of the parametrisation of the bandwidth matrix. The three main parametrisation classes (in increasing order of complexity) are ''S'', the class of positive scalars times the identity matrix; ''D'', diagonal matrices with positive entries on the main diagonal; and ''F'', symmetric positive definite matrices. The ''S'' class kernels have the same amount of smoothing applied in all coordinate directions, ''D'' kernels allow different amounts of smoothing in each of the coordinates, and ''F'' kernels allow arbitrary amounts and orientation of the smoothing. Historically ''S'' and ''D'' kernels have been the most widespread for computational reasons, but research indicates that important gains in accuracy can be obtained using the more general ''F'' class kernels,<ref>{{cite journal | author1=Wand, M.P. | author2=Jones, M.C. | title=Comparison of smoothing parameterizations in bivariate kernel density estimation | journal=Journal of the American Statistical Association | year=1993 | volume=88 | pages=520–528 | url=http://www.jstor.org/stable/2290332}}</ref><ref name="DH2005">{{Cite journal| doi=10.1080/10485250306039 | author1=Duong, T. | author2=Hazelton, M.L. | title=Plug-in bandwidth matrices for bivariate kernel density estimation | journal=Journal of Nonparametric Statistics | year=2003 | volume=15 | pages=17–30}}</ref> so we concentrate on this latter class.
[[Image:Kernel parametrisation class.png |thumb|center|500px|alt=Comparison of the three main bandwidth matrix parametrisation classes. Left. S positive scalar times the identity matrix. Centre. D diagonal matrix with positive entries on the main diagonal. Right. F symmetric positive definite matrix.|Comparison of the three main bandwidth matrix parametrisation classes. Left. ''S'' positive scalar times the identity matrix. Centre. ''D'' diagonal matrix with positive entries on the main diagonal. Right. ''F'' symmetric positive definite matrix.]]
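As a small illustration (the numerical bandwidth values below are arbitrary examples, not recommended choices), the scaled kernel ''K''<sub>'''H'''</sub> and one member of each parametrisation class can be written as:
<syntaxhighlight lang="python">
import numpy as np

def K_H(x, H):
    """Scaled normal kernel K_H(x) = |H|^(-1/2) K(H^(-1/2) x).

    For symmetric positive definite H, (H^(-1/2) x)^T (H^(-1/2) x) equals
    x^T H^(-1) x, so the matrix square root is never formed explicitly.
    """
    d = len(x)
    quad = x @ np.linalg.inv(H) @ x
    return np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(H) * (2 * np.pi) ** d)

H_S = 0.25 * np.eye(2)                      # S: equal smoothing in all directions
H_D = np.diag([0.25, 1.0])                  # D: per-coordinate amounts of smoothing
H_F = np.array([[0.25, 0.30],
                [0.30, 1.00]])              # F: off-diagonal entries add orientation
</syntaxhighlight>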
==Optimal bandwidth matrix selection==
The most commonly used optimality criterion for selecting a bandwidth matrix is the MISE or [[mean integrated squared error]]
: <math>\operatorname{MISE} (\bold{H}) = \operatorname{E} \int [\hat{f}_\bold{H} (\bold{x}) - f(\bold{x})]^2 \, d\bold{x}.</math>
This in general does not possess a [[closed-form expression]], so it is usual to use its asymptotic approximation (AMISE) as a proxy
: <math>\operatorname{AMISE} (\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2 (\operatorname{vec}^T \bold{H}) \bold{\Psi}_4 (\operatorname{vec} \, \bold{H})</math>
where
* {{nowrap|''R''(''K'') {{=}} ∫ ''K''('''x''')<sup>2</sup> ''d'''''x'''}}, with {{nowrap|''R''(''K'') {{=}} (4''π'')<sup>−''d''/2</sup>}} when ''K'' is a normal kernel;
* {{nowrap|∫ '''x''' '''x'''<sup>''T''</sup> ''K''('''x''') ''d'''''x''' {{=}} ''m''<sub>2</sub>(''K'') '''I'''<sub>''d''</sub>}}, with '''I'''<sub>''d''</sub> the ''d''×''d'' [[identity matrix]] and {{nowrap|''m''<sub>2</sub>(''K'') {{=}} 1}} for the normal kernel;
* D<sup>2</sup>''ƒ'' is the ''d''×''d'' [[Hessian matrix]] of second order partial derivatives of ''ƒ'';
* <math>\bold{\Psi}_4 = \int (\operatorname{vec} \, \operatorname{D}^2 f(\bold{x})) (\operatorname{vec}^T \operatorname{D}^2 f(\bold{x})) \, d\bold{x}</math> is a ''d''<sup>2</sup>×''d''<sup>2</sup> matrix of integrated fourth order partial derivatives of ''ƒ'';
* vec is the vector operator which stacks the columns of a matrix into a single vector.
The quality of the AMISE approximation to the MISE is given by
: <math>\operatorname{MISE} (\bold{H}) = \operatorname{AMISE} (\bold{H}) + o(n^{-1} |\bold{H}|^{-1/2} + \operatorname{tr} \, \bold{H}^2)</math>
where ''o'' indicates the usual [[big O notation|small o notation]]. Heuristically this statement implies that the AMISE is a 'good' approximation of the MISE as the sample size ''n'' → ∞. An ideal optimal bandwidth selector is
: <math>\bold{H}_{\operatorname{AMISE}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{AMISE} (\bold{H}).</math>
Since this ideal selector contains the unknown density function ''ƒ'', it cannot be used directly. The many different varieties of data-based bandwidth selectors arise from the different estimators of the AMISE. We concentrate on two classes of selectors which have been shown to be the most widely applicable in practice: smoothed cross validation and plug-in selectors.
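To make the minimisation over the class ''F'' concrete, the sketch below (assuming Python with numpy and scipy; the function name <code>lscv</code> and starting values are illustrative) minimises least-squares cross validation, a third well-known data-based criterion, rather than the smoothed cross validation or plug-in criteria detailed below; all of these selectors share the same argmin structure over symmetric positive definite matrices.
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def lscv(params, X):
    """Least-squares cross-validation criterion over the F class.

    H = L L^T is parametrised by its Cholesky factor (with log-scaled
    diagonal), so the optimiser ranges over symmetric positive definite
    matrices only.
    """
    n, d = X.shape
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = params
    L[np.diag_indices(d)] = np.exp(np.diag(L))          # keep diagonal positive
    H = L @ L.T
    u = (X[:, None, :] - X[None, :, :]).reshape(-1, d)  # all pairs X_i - X_j
    mean = np.zeros(d)
    # Integral of the squared estimate: for normal kernels K_H * K_H = K_{2H}.
    term1 = multivariate_normal(mean, 2 * H).pdf(u).sum() / n ** 2
    # Leave-one-out term: drop the i == j pairs.
    phi = multivariate_normal(mean, H).pdf(u).reshape(n, n)
    np.fill_diagonal(phi, 0.0)
    term2 = 2.0 * phi.sum() / (n * (n - 1))
    return term1 - term2

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2))
x0 = np.array([np.log(0.5), 0.0, np.log(0.5)])  # Cholesky factor of a starting H
res = minimize(lscv, x0, args=(X,), method="Nelder-Mead")
</syntaxhighlight>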
===Plug-in===
The plug-in (PI) estimate of the AMISE is formed by replacing '''Ψ'''<sub>4</sub> with its estimator <math>\hat{\bold{\Psi}}_4</math>
: <math>\operatorname{PI} (\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2 (K)^2 (\operatorname{vec}^T \bold{H}) \hat{\bold{\Psi}}_4 (\bold{G}) (\operatorname{vec} \, \bold{H})</math>
where <math>\hat{\bold{\Psi}}_4 (\bold{G}) = n^{-2} \sum_{i=1}^n \sum_{j=1}^n [(\operatorname{vec} \, \operatorname{D}^2) (\operatorname{vec}^T \operatorname{D}^2)] K_\bold{G} (\bold{X}_i - \bold{X}_j)</math> and '''G''' is a pilot bandwidth matrix. Thus <math>\hat{\bold{H}}_{\operatorname{PI}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{PI} (\bold{H})</math> is the plug-in selector.<ref>{{Cite journal| author1=Wand, M.P. | author2=Jones, M.C. | title=Multivariate plug-in bandwidth selection | journal=Computational Statistics | year=1994 | volume=9 | pages=97–177}}</ref><ref name="DH2005"/>
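For the normal kernel the differential operator expression above admits a closed form: since {{nowrap|(vec<sup>''T''</sup> '''H''')(vec D<sup>2</sup>''f'') {{=}} tr('''H''' D<sup>2</sup>''f'')}}, evaluating the quadratic form {{nowrap|(vec<sup>''T''</sup> '''H''') <math>\hat{\bold{\Psi}}_4 (\bold{G})</math> (vec '''H''')}} amounts to applying the operator tr('''H''' D<sup>2</sup>) twice to each Gaussian ''K''<sub>'''G'''</sub>('''X'''<sub>''i''</sub> − '''X'''<sub>''j''</sub>). The sketch below evaluates PI('''H''') this way under the simplifying assumption of a fixed, user-supplied pilot matrix '''G''' (a real plug-in selector estimates '''G''' from the data, as in the cited references); the function name <code>pi_criterion</code> is illustrative.
<syntaxhighlight lang="python">
import numpy as np

def pi_criterion(H, X, G):
    """Plug-in criterion PI(H) for the normal kernel.

    Applying tr(H D^2) twice to the Gaussian phi_G gives, at u = X_i - X_j,
        phi_G(u) * [(q - t)^2 + 2 tr(H B) - 4 u^T G^{-1} H B u],
    with B = G^{-1} H G^{-1}, q = u^T B u and t = tr(H G^{-1}), which is the
    contribution of each pair to (vec^T H) Psi4_hat(G) (vec H).
    """
    n, d = X.shape
    Ginv = np.linalg.inv(G)
    B = Ginv @ H @ Ginv
    t = np.trace(H @ Ginv)
    u = (X[:, None, :] - X[None, :, :]).reshape(-1, d)   # all pairs X_i - X_j
    phi = np.exp(-0.5 * np.einsum("ij,jk,ik->i", u, Ginv, u))
    phi /= np.sqrt((2 * np.pi) ** d * np.linalg.det(G))
    q = np.einsum("ij,jk,ik->i", u, B, u)
    cross = np.einsum("ij,jk,ik->i", u, Ginv @ H @ B, u)
    quad = (phi * ((q - t) ** 2 + 2 * np.trace(H @ B) - 4 * cross)).sum() / n ** 2
    RK = (4 * np.pi) ** (-d / 2)          # R(K) for the normal kernel; m_2(K) = 1
    return RK / (n * np.sqrt(np.linalg.det(H))) + 0.25 * quad
</syntaxhighlight>
Minimising this criterion over ''F'', for example with the same Nelder–Mead search over a Cholesky factor used in the cross-validation sketch above, yields an approximation to <math>\hat{\bold{H}}_{\operatorname{PI}}</math>.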
===Smoothed cross validation===