{{Orphan|date=September 2010}}
{{New unreviewed article|date=September 2010}}
[[Kernel density estimation]] is one of the most popular techniques for [[density estimation]], i.e., the estimation of [[probability density function|probability density functions]], which is one of the fundamental questions in [[statistics]].
It can be viewed as a generalisation of [[histogram]] density estimation with improved statistical properties.
Kernel density estimators were first introduced in the scientific literature for [[univariate]] data in the 1950s and 1960s<ref>{{cite journal | doi=10.1214/aoms/1177728190 | last=Rosenblatt | first=M.| title=Remarks on some nonparametric estimates of a density function | journal=Annals of Mathematical Statistics | year=1956 | volume=27 | pages=832-837}}</ref><ref>{{cite journal | doi=10.1214/aoms/1177704472| last=Parzen | first=E.| title=On estimation of a probability density function and mode | journal=Annals of Mathematical Statistics| year=1962 | volume=33 | pages=1065-1076}}</ref> and have subsequently been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to [[multivariate statistics]]. Based on research carried out in the 1990s and 2000s, multivariate kernel density estimation has reached a level of maturity comparable to that of its univariate counterparts.<ref>{{cite book | author=Simonoff, J.S. | title=Smoothing Methods in Statistics | publisher=Springer | date=1996 | isbn=0387947167}}</ref>
== Motivation ==
We use an illustrative synthetic [[bivariate]] data set of 50 points to illustrate the construction of histograms. This requires the choice of an anchor point (the lower left corner of the histogram grid). For the histogram on the left, we choose (-1.5, -1.5); for the one on the right, we shift the anchor point by 0.125 in both directions to (-1.625, -1.625). Both histograms have a binwidth of 0.5, so any differences are due to the change in the anchor point only. The colour coding indicates the number of data points which fall into a bin: 0=white, 1=pale yellow, 2=bright yellow, 3=orange, 4=red. The left histogram appears to indicate that the upper half has a higher density than the lower half, whereas the reverse is the case for the right-hand histogram, confirming that histograms are highly sensitive to the placement of the anchor point.<ref>{{cite book | author=Silverman, B.W. | title=Density Estimation for Statistics and Data Analysis | publisher=Chapman & Hall/CRC | date=1986 | isbn=0412246201 | pages=7-11}}</ref>
[[Image:Synthetic data 2D histograms.png|center|500px|alt=Left. Histogram with anchor point at (-1.5, -1.5). (Right) Histogram with anchor point at (-1.625, -1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.|Left. Histogram with anchor point at (-1.5, -1.5). (Right) Histogram with anchor point at (-1.625, -1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.]]
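This sensitivity can be reproduced with a few lines of base R. The sketch below uses its own synthetic data (not the 50 points shown in the figures) and an illustrative helper function <code>bin.counts</code>; it bins bivariate data with binwidth 0.5 for the two anchor points used above, and the bin counts differ even though the data and binwidth are identical.
<pre style="overflow:auto;">
set.seed(1)
x <- matrix(rnorm(100, sd=0.75), ncol=2)    # 50 synthetic bivariate points (illustrative only)

bin.counts <- function(x, anchor, h) {
  ix <- floor((x[,1] - anchor[1]) / h)      # bin index in the first coordinate
  iy <- floor((x[,2] - anchor[2]) / h)      # bin index in the second coordinate
  table(ix, iy)                             # number of points in each occupied bin
}

bin.counts(x, anchor=c(-1.5, -1.5), h=0.5)
bin.counts(x, anchor=c(-1.625, -1.625), h=0.5)
</pre>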
One possible solution to this anchor point placement problem is to remove the histogram binning grid completely. In the left figure below, a kernel (represented by the dashed grey lines) is centred at each of the 50 data points above. The result of summing these kernels is given in the right figure, which is a kernel density estimate. The most striking difference between kernel density estimates and histograms is that the former are easier to interpret since they do not contain artifacts induced by a binning grid.
The coloured contours correspond to the smallest regions which contain the corresponding probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%, thus indicating that a single central region contains the highest density.
[[Image:Synthetic data 2D KDE.png|center|500px|alt=Left. Individual kernels. Right. Kernel density estimate.|Left. Individual kernels. Right. Kernel density estimate.]]
== Definition ==
The previous figure is a graphical representation of a kernel density estimate, which we now define in an exact manner. Let <math>\bold{X}_1, \bold{X}_2, \dots, \bold{X}_n</math> be a <em>d</em>-variate random sample drawn from a common density function <em>f</em>. The kernel density estimate is defined to be
<math>\hat{f}_\bold{H}(\bold{x})= n^{-1} \sum_{i=1}^n K_\bold{H} (\bold{x} - \bold{X}_i)</math>
where
<ul>
<li><math>\bold{x} = (x_1, x_2, \dots, x_d)^T</math>, <math>\bold{X}_i = (X_{i1}, X_{i2}, \dots, X_{id})^T, i=1, 2, \dots, n</math> are <em>d</em>-dimensional vectors;
<li><em>K</em> is the kernel function, a symmetric multivariate density, and <math>K_\bold{H}(\bold{x}) = |\bold{H}|^{-1/2} K(\bold{H}^{-1/2} \bold{x})</math> is its scaled version;
<li><strong>H</strong> is the bandwidth (or smoothing) matrix, which is a symmetric, [[positive definite matrix|positive definite]] <em>d</em> × <em>d</em> matrix.
</ul>
The choice of the kernel function <em>K</em> is not crucial to the accuracy of kernel density estimators, so we use the standard [[multivariate normal distribution|multivariate normal]] kernel throughout. In contrast, the choice of the bandwidth matrix <strong>H</strong> is the single most important factor affecting accuracy, since it controls the amount and orientation of smoothing applied to the data.
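The definition can be transcribed into a few lines of base R. The following sketch evaluates the estimate at a single point with the normal kernel; the function name <code>kde.at</code>, the synthetic data and the hand-picked bandwidth matrix are illustrative only.
<pre style="overflow:auto;">
# Evaluate the kernel density estimate at a point x, for a data matrix X
# (one observation per row) and a bandwidth matrix H, using the normal kernel.
kde.at <- function(x, X, H) {
  n <- nrow(X); d <- ncol(X)
  Hinv <- solve(H)
  kvals <- apply(X, 1, function(Xi) {
    v <- x - Xi
    (2*pi)^(-d/2) * exp(-0.5 * drop(t(v) %*% Hinv %*% v))   # K(H^(-1/2) v)
  })
  sum(kvals) / (n * sqrt(det(H)))    # n^(-1) * sum of K_H(x - X_i)
}

set.seed(1)
X <- matrix(rnorm(100), ncol=2)      # 50 synthetic bivariate points (illustrative only)
H <- diag(c(0.25, 0.25))             # hand-picked bandwidth, not an optimal choice
kde.at(c(0, 0), X, H)
</pre>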
== Optimal bandwidth matrix selection ==
The most commonly used optimality criterion for selecting a bandwidth matrix is the MISE or [[mean integrated squared error]]
<math>\operatorname{MISE} (\bold{H}) = E \int [\hat{f}_\bold{H} (\bold{x}) - f(\bold{x})]^2 \, d\bold{x} .</math>
This in general does not possess a closed form expression, so it is usual to use its asymptotic approximation (AMISE) as a proxy
<math>\operatorname{AMISE} (\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2
(\operatorname{vec}^T \bold{H}) \bold{\Psi}_4 (\operatorname{vec} \, \bold{H})</math>
where
<ul>
<li><math>R(K) = \int K(\bold{x})^2 \, d\bold{x}</math>, with <math>R(K) = (4 \pi)^{-d/2}</math> when <math>K</math> is a normal kernel
<li><math>\int \bold{x} \bold{x}^T K(\bold{x}) \, d\bold{x} = m_2(K) \bold{I}_d</math>,
with <strong>I</strong><sub>d</sub> being the <em>d</em> × <em>d</em> [[identity matrix]]
<li><math>\operatorname{D}^2 f</math> is the <em>d</em> × <em>d</em> [[Hessian matrix]] of second order partial derivatives of <em>f</em>
<li><math>\bold{\Psi}_4 = \int (\operatorname{vec} \, \operatorname{D}^2 f(\bold{x})) (\operatorname{vec}^T \operatorname{D}^2 f(\bold{x})) \, d\bold{x}</math> is a <em>d</em><sup>2</sup> × <em>d</em><sup>2</sup> matrix of integrated fourth order
partial derivatives of <em>f</em>
<li>vec is the vector operator which stacks the columns of a matrix into a single vector e.g. <math>\operatorname{vec}\begin{bmatrix}a & c \\ b & d\end{bmatrix} = \begin{bmatrix}a & b & c & d\end{bmatrix}^T.</math>
</ul>
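For instance, in the univariate case <em>d</em> = 1 with <strong>H</strong> = <math>h^2</math>, the matrix <math>\bold{\Psi}_4</math> reduces to the scalar <math>R(f'') = \int f''(x)^2 \, dx</math> and the expression above becomes the classical univariate formula
<math>\operatorname{AMISE}(h) = (nh)^{-1} R(K) + \tfrac{1}{4} m_2(K)^2 h^4 R(f'').</math>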
The quality of the AMISE approximation to the MISE<ref name="WJ1995">{{cite book | author=Wand, M.P. and Jones, M.C. | title=Kernel Smoothing | publisher=Chapman & Hall/CRC | date=1995 | isbn=0412552701}}</ref> is given by
<math>\operatorname{MISE} (\bold{H}) = \operatorname{AMISE} (\bold{H}) + o(n^{-1} |\bold{H}|^{-1/2} + \operatorname{tr} \, \bold{H}^2)</math>
where <em>o</em> indicates the usual small <em>o</em> notation, implying that the AMISE is a good approximation of the MISE as the sample size <math>n \to \infty</math>. The ideal optimal bandwidth selector is
<math>\bold{H}_{\operatorname{AMISE}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{AMISE} (\bold{H})</math>
where <math>F</math> denotes the class of symmetric, positive definite <em>d</em> × <em>d</em> matrices over which the minimisation is carried out.
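For instance, in the univariate case with a normal kernel, if <em>f</em> were known to be a normal density with variance <math>\sigma^2</math>, this minimisation would have the closed form <math>h_{\operatorname{AMISE}} = [4/(3n)]^{1/5} \sigma \approx 1.06 \, \sigma n^{-1/5}</math>, the well-known normal reference bandwidth.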
Since this ideal selector contains the unknown density function <em>f</em>, it cannot be used directly. The many different varieties of data-based bandwidth selectors arise from the different estimators of the AMISE. We concentrate on two classes of selectors which have been shown to be the most widely applicable in practice: plug-in and smoothed cross validation selectors.
=== Plug-in ===
The plug-in (PI) estimate of the AMISE is formed by replacing <math>\bold{\Psi}_4</math> by its estimator <math>\hat{\bold{\Psi}}_4</math>
<math>\operatorname{PI}(\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2
(\operatorname{vec}^T \bold{H}) \hat{\bold{\Psi}}_4 (\bold{G}) (\operatorname{vec} \, \bold{H})</math>
where <math>\hat{\bold{\Psi}}_4 (\bold{G}) = n^{-2} \sum_{i=1}^n
\sum_{j=1}^n [(\operatorname{vec} \, \operatorname{D}^2) (\operatorname{vec}^T \operatorname{D}^2)] K_\bold{G} (\bold{X}_i - \bold{X}_j)</math>. Thus <math>\hat{\bold{H}}_{\operatorname{PI}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{PI} (\bold{H})</math> is the plug-in selector.<ref>{{cite journal | author=Wand, M.P. and Jones, M.C. | title=Multivariate plug-in bandwidth selection | journal=Computational Statistics | year=1994 | volume=9 | pages=97-116}}</ref>
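In the [http://cran.r-project.org/web/packages/ks/index.html ks package] used in the example below, this plug-in selector is implemented as the function <code>Hpi</code>. A minimal sketch, where the data matrix <code>x</code> is synthetic and purely illustrative:
<pre style="overflow:auto;">
library(ks)
x <- matrix(rnorm(200), ncol=2)   # 100 synthetic bivariate points (illustrative only)
Hpi(x=x)                          # plug-in bandwidth matrix for these data
</pre>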
=== Smoothed cross validation ===
Smoothed cross validation (SCV) is a subset of a larger class of [[cross-validation (statistics)|cross validation]] techniques. The SCV estimator differs from the plug-in estimator in the second term
<math>\operatorname{SCV}(\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) +
n^{-2} \sum_{i=1}^n \sum_{j=1}^n (K_{2\bold{H} +2\bold{G}} - 2K_{\bold{H} +2\bold{G}}
+ K_{2\bold{G}}) (\bold{X}_i - \bold{X}_j)</math>
Thus <math>\hat{\bold{H}}_{\operatorname{SCV}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{SCV} (\bold{H})</math> is the SCV selector.<ref>{{cite journal | author=Hall, P., Marron, J.S. and Park, B.U. | title=Smoothed cross-validation | journal=Probability Theory and Related Fields | year=1992 | volume=92 | pages=1-20}}</ref><ref>{{cite journal | author=Duong, T. and Hazelton, M.L. | title=Cross-validation bandwidth matrices for multivariate kernel density estimation | journal=Scandinavian Journal of Statistics | year=2005 | volume=32 | pages=485-506}}</ref>
These references also contain algorithms on optimal estimation of the pilot bandwidth matrix <strong>G</strong> and establish that <math>\hat{\bold{H}}_{\operatorname{SCV}}</math> converges in probability to <math>\bold{H}_{\operatorname{AMISE}}</math>.
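For the normal kernel, <math>K_{2\bold{H}+2\bold{G}}</math>, <math>K_{\bold{H}+2\bold{G}}</math> and <math>K_{2\bold{G}}</math> are zero-mean normal densities with the indicated covariance matrices, so the SCV criterion can be transcribed almost directly into base R. The sketch below is illustrative only: the function names <code>dmvn0</code> and <code>scv</code>, the synthetic data and the fixed pilot matrix <strong>G</strong> are not part of the selectors described above, which estimate <strong>G</strong> from the data and optimise over all symmetric, positive definite matrices rather than only diagonal ones.
<pre style="overflow:auto;">
dmvn0 <- function(v, S) {               # N(0, S) density evaluated at v
  d <- length(v)
  (2*pi)^(-d/2) * det(S)^(-1/2) * exp(-0.5 * drop(t(v) %*% solve(S) %*% v))
}

scv <- function(H, X, G) {              # SCV(H) as defined above, normal kernel
  n <- nrow(X); d <- ncol(X)
  RK <- (4*pi)^(-d/2)                   # R(K) for the normal kernel
  term2 <- 0
  for (i in 1:n) for (j in 1:n) {
    v <- X[i,] - X[j,]
    term2 <- term2 + dmvn0(v, 2*H + 2*G) - 2*dmvn0(v, H + 2*G) + dmvn0(v, 2*G)
  }
  RK / (n * sqrt(det(H))) + term2 / n^2
}

set.seed(1)
X <- matrix(rnorm(60), ncol=2)          # 30 synthetic bivariate points (illustrative only)
G <- diag(0.3, 2)                       # hand-picked pilot bandwidth (illustrative only)
opt <- optim(c(-1, -1), function(p) scv(diag(exp(p)), X, G))  # diagonal H = diag(exp(p))
diag(exp(opt$par))                      # approximate SCV bandwidth matrix (diagonal)
</pre>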
== Example ==
[[Image:Old Faithful Geyser KDE with plugin bandwidth.png|thumb|250px|alt=Old Faithful Geyser data kernel density estimate with plug-in bandwidth matrix.|Old Faithful Geyser data kernel density estimate with plug-in bandwidth matrix.]]
The [http://cran.r-project.org/web/packages/ks/index.html ks package]<ref>{{cite journal | author=Duong, T. | title=ks: Kernel density estimation and kernel discriminant analysis for multivariate data in R | journal=Journal of Statistical Software | year=2007 | volume=21 | issue=7}}</ref> in [[R (programming language)|R]] implements the plug-in and smoothed cross validation selectors. This example uses the Old Faithful dataset (included in the base distribution of R), which contains
272 records with two measurements each: the duration time of an eruption (minutes) and the
waiting time until the next eruption (minutes) of the [[Old Faithful Geyser]] in Yellowstone National Park, USA.
The code fragment below computes the kernel density estimate with the plug-in bandwidth matrix <math>\hat{\bold{H}}_\operatorname{PI} = \begin{bmatrix}0.052 & 0.510 \\ 0.510 & 8.882\end{bmatrix}.</math> Again, the coloured contours correspond to the smallest regions which contain the corresponding probability mass: red = 25%, orange + red = 50%, yellow + orange + red = 75%. To compute the SCV selector, <code>Hpi</code> is replaced with <code>Hscv</code>; the result is not displayed here since it is very similar to the plug-in estimate for this example.
<pre style="overflow:auto;">
library(ks)
data(faithful)                        # 272 bivariate records: eruption and waiting times
H <- Hpi(x=faithful)                  # plug-in bandwidth matrix
fhat <- kde(x=faithful, H=H)          # kernel density estimate
plot(fhat, display="filled.contour")  # filled contour plot (25%, 50%, 75% regions)
</pre>
== References ==
{{Reflist}}
== External links ==
* [http://www.mvstat.net/tduong/research www.mvstat.net/tduong/research] A collection of peer-reviewed articles covering the mathematical details of multivariate kernel density estimation and its bandwidth selectors.