Multivariate kernel density estimation
 
[[Kernel density estimation]] is one of the most popular techniques for density estimation. It can be viewed as a generalisation of [[histogram]] density estimation with improved statistical properties.
Kernel density estimators were first introduced in the scientific literature for [[univariate]] data in the 1950s and 1960s<ref>{{cite journal | doi=10.1214/aoms/1177728190 | last=Rosenblatt | first=M. | title=Remarks on some nonparametric estimates of a density function | url=http://projecteuclid.org/euclid.aoms/1177728190 | journal=[[Annals of Mathematical Statistics]] | year=1956 | volume=27 | pages=832-837}}</ref><ref>{{cite journal | doi=10.1214/aoms/1177704472 | last=Parzen | first=E. | title=On estimation of a probability density function and mode | url=http://projecteuclid.org/euclid.aoms/1177704472 | journal=[[Annals of Mathematical Statistics]] | year=1962 | volume=33 | pages=1065-1076}}</ref> and have subsequently been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to [[multivariate statistics]]. Based on research carried out in the 1990s and 2000s, multivariate kernel density estimation has reached a level of maturity comparable to that of its univariate counterparts.
 
 
</ul>
 
The choice of the kernel function <em>K</em> is not crucial to the accuracy of kernel density estimators, whereas the choice of the bandwidth matrix <strong>H</strong> is the single most important factor affecting their accuracy<ref name="WJ1995">{{cite book | author1=Wand, M.P. | author2=Jones, M.C. | title=Kernel Smoothing | publisher=Chapman & Hall | ___location=London | date=1995 | pages=36-39 | isbn=0412552701}}</ref> (pp. 36-39). Thus we use the standard [[multivariate normal distribution|multivariate normal]] or Gaussian density function as our kernel <em>K</em>:
 
<math>K (\bold{x}) = (2\pi)^{-d/2} \exp(-\tfrac{1}{2} \, \bold{x}^T \bold{x}).</math>
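
This kernel is straightforward to evaluate numerically. The following is a minimal sketch in Python (the function names, and the use of a Cholesky factor of <strong>H</strong> to compute <math>\bold{H}^{-1/2}\bold{x}</math>, are illustrative assumptions; the scaling <math>K_{\bold{H}}(\bold{x}) = |\bold{H}|^{-1/2} K(\bold{H}^{-1/2}\bold{x})</math> is the usual one):

<syntaxhighlight lang="python">
# Minimal sketch: the standard normal kernel K and the kernel density
# estimate with a bandwidth matrix H. Function names are illustrative.
import numpy as np

def gaussian_kernel(x):
    """Standard d-variate normal density K(x) = (2 pi)^(-d/2) exp(-x'x / 2)."""
    d = x.shape[-1]
    return (2 * np.pi) ** (-d / 2) * np.exp(-0.5 * np.sum(x * x, axis=-1))

def kde(x, data, H):
    """Kernel density estimate at x from an (n, d) sample and SPD bandwidth H."""
    L = np.linalg.cholesky(H)                # H = L L', so u'u = (x-Xi)' H^{-1} (x-Xi)
    u = np.linalg.solve(L, (x - data).T).T   # rows are L^{-1} (x - X_i)
    return gaussian_kernel(u).mean() / np.sqrt(np.linalg.det(H))

# Example: 100 bivariate points, diagonal bandwidth matrix
rng = np.random.default_rng(0)
sample = rng.normal(size=(100, 2))
print(kde(np.zeros(2), sample, H=np.diag([0.25, 0.25])))
</syntaxhighlight>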
 
<math>\operatorname{AMISE} (\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2 (\operatorname{vec}^T \bold{H}) \bold{R} (\operatorname{vec} \, \operatorname{D}^2 f) (\operatorname{vec} \, \bold{H})</math>
 
where
<ul>
<li><math>m_2(K) \bold{I}_d = \int \bold{x} \bold{x}^T K(\bold{x}) \, d\bold{x}</math>. For the normal kernel <math>K</math>, <math>m_2(K) = 1</math>
<li><math>R(K) = \int K(\bold{x})^2 \, d\bold{x}</math>. For the normal kernel <math>K</math>, <math>R(K) = (4 \pi)^{-d/2}</math>
<li><math>\operatorname{D}^2 f</math> is the <em>d</em> × <em>d</em> Hessian matrix of second-order partial derivatives of <math>f</math>
<li><math>\bold{R}(\operatorname{D}^2 f) = \int (\operatorname{vec} \, \operatorname{D}^2 f (\bold{x})) (\operatorname{vec} \, \operatorname{D}^2 f (\bold{x}))^T \, d\bold{x}</math>
<li>vec is the vector operator which stacks the columns of a matrix into a single vector (a numerical illustration follows this list), e.g.
<math>\operatorname{vec} \begin{bmatrix} a & c & e\\ b & d & f\end{bmatrix} = \begin{bmatrix} a & b & c & d & e & f\end{bmatrix}^T</math>.
</ul>
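
The vec operator corresponds to column-major flattening of an array. The following numerical check (an illustration added here with numpy, not drawn from the cited sources) also verifies the identity <math>\operatorname{tr}(\bold{H}\bold{A}) = (\operatorname{vec} \, \bold{H})^T (\operatorname{vec} \, \bold{A})</math> for symmetric matrices, which connects this vec form of the AMISE to the trace form used for the plug-in criterion below:

<syntaxhighlight lang="python">
import numpy as np

# vec stacks matrix columns: column-major ('F') flattening in numpy
A = np.array([[1, 3, 5],
              [2, 4, 6]])
print(A.flatten(order='F'))   # [1 2 3 4 5 6], matching the example above

# For symmetric H and A (such as a Hessian), tr(H A) = vec(H)' vec(A)
rng = np.random.default_rng(1)
S = rng.normal(size=(3, 3))
H = S @ S.T                   # symmetric positive definite
M = rng.normal(size=(3, 3))
A = M + M.T                   # symmetric, like D^2 f(x)
assert np.isclose(np.trace(H @ A), H.flatten(order='F') @ A.flatten(order='F'))
</syntaxhighlight>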
This formula of the AMISE is due to Wand & Jones<ref name="WJ1995"/> (p. 97) and Chacón & Duong<ref name="CD2010"/>. The quality of the AMISE approximation to the MISE is given by
 
<math>\operatorname{MISE} (\bold{H}) = \operatorname{AMISE} (\bold{H}) + o(n^{-1} |\bold{H}|^{-1/2} + \operatorname{tr} \, \bold{H}^2)</math>
where <em>o</em> indicates the usual small [[big O notation|o notation]]. Heuristically this statement implies that the AMISE is a 'good' approximation of the MISE as the sample size <em>n</em> → ∞.
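
As a concrete illustration (a standard special case, added here for exposition), restricting to <math>\bold{H} = h^2 \bold{I}</math> reduces the AMISE to

<math>\operatorname{AMISE}(h) = n^{-1} h^{-d} R(K) + \tfrac{1}{4} m_2(K)^2 \, h^4 \int (\operatorname{tr} \, \operatorname{D}^2 f(\bold{x}))^2 \, d\bold{x},</math>

since <math>(\operatorname{vec} \, \bold{I})^T \operatorname{vec} \, \bold{A} = \operatorname{tr} \, \bold{A}</math>. Minimising over <math>h</math> gives <math>h_{\operatorname{AMISE}} = O(n^{-1/(d+4)})</math>, so that <math>\operatorname{AMISE}(h_{\operatorname{AMISE}}) = O(n^{-4/(d+4)})</math> while the remainder terms above are of strictly smaller order.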
 
The ideal optimal bandwidth selector is

<math>\bold{H}_{\operatorname{AMISE}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{AMISE} (\bold{H})</math>

where <em>F</em> is the space of all symmetric, positive definite matrices. Since this ideal selector contains the unknown density function <em>f</em>, it cannot be used directly. The many different varieties of data-based bandwidth selectors arise from the different estimators of the MISE or AMISE. We concentrate on two classes of selectors which have been shown to be the most widely applicable in practice: smoothed cross validation and plug-in selectors.
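
A minimal computational sketch of this constrained minimisation follows (the Cholesky parametrisation, the Nelder–Mead optimiser and the function names are assumptions for illustration, not part of the cited selectors). Optimising over a lower-triangular factor <math>\bold{L}</math> and setting <math>\bold{H} = \bold{L}\bold{L}^T</math> keeps <math>\bold{H}</math> symmetric and positive semi-definite:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

def select_bandwidth(criterion, d, h0=1.0):
    """Minimise criterion(H) over symmetric positive (semi-)definite d x d
    matrices H by optimising the lower-triangular factor of H = L L'."""
    tril = np.tril_indices(d)

    def objective(theta):
        L = np.zeros((d, d))
        L[tril] = theta
        return criterion(L @ L.T)

    theta0 = (h0 * np.eye(d))[tril]           # start from H = h0^2 * I
    res = minimize(objective, theta0, method='Nelder-Mead')
    L = np.zeros((d, d))
    L[tril] = res.x
    return L @ L.T

# Usage, e.g. with the plug-in criterion PI sketched below:
# H_hat = select_bandwidth(lambda H: PI(H, data, G, grid_pts, cell), d=2)
</syntaxhighlight>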
 
=== Plug-in ===
The plug-in (PI) estimate of the AMISE is formed by replacing the Hessian matrix <math>\operatorname{D}^2 f</math> by its estimator <math>\widehat{\operatorname{D}^2 f}</math>
 
<math>\operatorname{PI}(\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2 \int \operatorname{tr}^2 (\bold{H} \, \widehat{\operatorname{D}^2 f} (\bold{x})) \, d\bold{x}</math>
 
and <math>\hat{\bold{H}}_{\operatorname{PI}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{PI} (\bold{H})</math> is the plug-in selector.<ref>{{cite journal | author1=Wand, M.P. | author2=Jones, M.C. | title=Multivariate plug-in bandwidth selection | journal=Computational Statistics | year=1994 | volume=9 | pages=97-116}}</ref><ref>{{cite journal | doi=10.1080/10485250306039 | author1=Duong, T. | author2=Hazelton, M.L. | title=Plug-in bandwidth matrices for bivariate kernel density estimation | journal=Journal of Nonparametric Statistics | year=2003 | volume=15 | pages=17-30}}</ref><ref name="CD2010">{{cite journal | doi=10.1007/s11749-009-0168-4 | author1=Chacón, J.E. | author2=Duong, T. | title=Multivariate plug-in bandwidth selection with unconstrained pilot bandwidth matrices | journal=[[Test (journal)|Test]] | year=2010 | volume=19 | pages=375-398}}</ref>
 
 
 
<!--- Categories --->
[[Category:Articles created via the Article Wizard]]