Multivariate kernel density estimation

 
==Motivation==
We use an illustrative synthetic [[bivariate]] data set of 50 points to illustrate the construction of histograms. This requires the choice of an anchor point (the lower left corner of the histogram grid). For the histogram on the left, we choose (&minus;1.5, &minus;1.5); for the one on the right, we shift the anchor point by 0.125 in both directions to (&minus;1.625, &minus;1.625). Both histograms have a binwidth of 0.5, so any differences are due to the change in the anchor point only. The colour coding indicates the number of data points which fall into a bin: 0=white, 1=pale yellow, 2=bright yellow, 3=orange, 4=red. The left histogram appears to indicate that the upper half has a higher density than the lower half, whereas the reverse is the case for the right-hand histogram, confirming that histograms are highly sensitive to the placement of the anchor point.<ref>{{Cite book| author=Silverman, B.W. | title=Density Estimation for Statistics and Data Analysis | publisher=Chapman & Hall/CRC | date=1986 | isbn=0412246201 | pages=7–11}}</ref>
 
[[Image:Synthetic data 2D histograms.png|center|500px|alt=(Left) Histogram with anchor point at (&minus;1.5, &minus;1.5). (Right) Histogram with anchor point at (&minus;1.625, &minus;1.625). Both histograms have a bin width of 0.5, so differences in the appearance of the two histograms are due to the placement of the anchor point.|(Left) Histogram with anchor point at (&minus;1.5, &minus;1.5). (Right) Histogram with anchor point at (&minus;1.625, &minus;1.625). Both histograms have a bin width of 0.5, so differences in the appearance of the two histograms are due to the placement of the anchor point.]]
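The change shown in the figure can be reproduced directly. The following is a minimal sketch, with an arbitrary synthetic sample standing in for the 50 points and illustrative variable names, which bins the same data on two grids whose anchor points differ by 0.125.

<syntaxhighlight lang="python">
import numpy as np

# A hypothetical 50-point bivariate sample standing in for the data set above.
rng = np.random.default_rng(0)
data = rng.uniform(-1.0, 2.5, size=(50, 2))

binwidth = 0.5
# Two binning grids that differ only in their anchor point, as in the figure.
edges_a = np.arange(-1.5, 3.1, binwidth)      # anchor point (-1.5, -1.5)
edges_b = np.arange(-1.625, 3.1, binwidth)    # anchor point (-1.625, -1.625)

hist_a, _, _ = np.histogram2d(data[:, 0], data[:, 1], bins=[edges_a, edges_a])
hist_b, _, _ = np.histogram2d(data[:, 0], data[:, 1], bins=[edges_b, edges_b])

# The bin counts, and hence the visual impression of where the density is
# concentrated, change even though only the anchor point has moved by 0.125.
print(hist_a)
print(hist_b)
</syntaxhighlight>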
 
One possible solution to this anchor point placement problem is to remove the histogram binning grid completely. In the left figure below, a kernel (represented by the dashed grey lines) is centred at each of the 50 data points above. The result of summing these kernels is given in the right figure, which is a kernel density estimate. The most striking difference between kernel density estimates and histograms is that the former are easier to interpret since they do not contain artifices induced by a binning grid.
The previous figure is a graphical representation of a kernel density estimate, which we now define in an exact manner. Let <math>\bold{X}_1, \bold{X}_2, \dots, \bold{X}_n</math> be a ''d''-variate random sample drawn from a common density function ''f''. The kernel density estimate is defined to be
 
: <math>\hat{f}_\bold{H}(\bold{x})= n^{-1} |\bold{H}|^{-1/2} \sum_{i=1}^n K_\bold{H} (\bold{x} - \bold{X}_i)</math>
 
where
</ul>
 
The choice of the kernel function ''K'' is not crucial to the accuracy of kernel density estimators, so we use the standard [[multivariate normal distribution|multivariate normal]] or Gaussian density function as our kernel ''K'' throughout: <math>K (\bold{x}) = (2\pi)^{-d/2} \exp(-\tfrac{1}{2} \, \bold{x}^T \bold{x})</math>. In contrast, the choice of the bandwidth matrix '''H''' is the single most important factor affecting the accuracy of the estimator, since it controls the amount and orientation of the smoothing induced.<ref name="WJ1995">{{Cite book| author1=Wand, M.P. | author2=Jones, M.C. | title=Kernel Smoothing | publisher=Chapman & Hall/CRC | ___location=London | date=1995 | isbn=0412552701}}</ref>{{rp|36&ndash;39}}
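The estimator can be evaluated directly from the definition above. The following is a minimal sketch with the Gaussian kernel, assuming a data array <code>X</code> of shape (''n'', ''d'') and a symmetric positive definite bandwidth matrix <code>H</code>; the function and variable names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def gaussian_kde(x, X, H):
    """Evaluate the kernel density estimate at the point x, using the
    standard multivariate normal kernel K and bandwidth matrix H."""
    n, d = X.shape
    H_inv = np.linalg.inv(H)
    det_H = np.linalg.det(H)
    diffs = x - X                                    # rows are x - X_i
    # Each summand is |H|^(-1/2) K(H^(-1/2) (x - X_i))
    #   = (2*pi)^(-d/2) |H|^(-1/2) exp(-0.5 (x - X_i)^T H^(-1) (x - X_i)).
    quad = np.einsum('ij,jk,ik->i', diffs, H_inv, diffs)
    terms = (2 * np.pi) ** (-d / 2) * det_H ** (-0.5) * np.exp(-0.5 * quad)
    return terms.mean()                              # the n^(-1) factor
</syntaxhighlight>

For example, <code>gaussian_kde(np.zeros(2), X, 0.25 * np.eye(2))</code> evaluates the estimate at the origin with a diagonal bandwidth matrix.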
 
==Optimal bandwidth matrix selection==
where ''o'' indicates the usual [[big O notation|small o notation]]. Heuristically, this statement implies that the AMISE is a 'good' approximation of the MISE as the sample size ''n'' → ∞. An ideal optimal bandwidth selector is
 
: <math>\bold{H}_{\operatorname{AMISE}} = \operatorname{argmin}_{\bold{H} \in F} \, \operatorname{AMISE} (\bold{H})</math>
 
where ''F'' is the space of all symmetric, positive definite matrices.
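Since the AMISE-optimal bandwidth is not available in closed form for general densities, the data-based selectors described below minimise an estimate of the AMISE numerically over ''F''. One common way to keep every candidate matrix in ''F'' during such a search is to optimise over a Cholesky factor, as in the following sketch; the function and argument names are illustrative, and <code>criterion</code> stands for any bandwidth criterion to be minimised.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

def minimise_over_spd(criterion, d, H0=None):
    """Minimise a bandwidth criterion over symmetric positive definite
    d x d matrices by searching over Cholesky factors H = L L^T."""
    if H0 is None:
        H0 = np.eye(d)
    idx = np.tril_indices(d)              # free parameters: lower triangle of L
    x0 = np.linalg.cholesky(H0)[idx]

    def objective(params):
        L = np.zeros((d, d))
        L[idx] = params
        return criterion(L @ L.T)         # L L^T is symmetric positive semi-definite

    res = minimize(objective, x0, method='Nelder-Mead')
    L = np.zeros((d, d))
    L[idx] = res.x
    return L @ L.T
</syntaxhighlight>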
 
===Plug-in===
The plug-in (PI) estimate of the AMISE is formed by replacing <math>\bold{\Psi}_4</math> by its estimator <math>\hat{\bold{\Psi}}_4</math>
 
: <math>\operatorname{PI}(\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2
(\operatorname{vec}^T \bold{H}) \hat{\bold{\Psi}}_4 (\bold{G}) (\operatorname{vec} \, \bold{H})</math>
 
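As a worked illustration of this formula for the Gaussian kernel, for which ''R''(''K'') = (4π)<sup>−''d''/2</sup> and ''m''<sub>2</sub>(''K'') = 1, the following sketch evaluates PI('''H''') given a pilot estimate of <math>\bold{\Psi}_4</math> supplied as a precomputed ''d''<sup>2</sup> × ''d''<sup>2</sup> matrix; the name <code>Psi4_hat</code> is a placeholder for that estimate.

<syntaxhighlight lang="python">
import numpy as np

def pi_criterion(H, Psi4_hat, n):
    """Plug-in estimate PI(H) of the AMISE for the Gaussian kernel, given a
    pilot estimate Psi4_hat of Psi_4 as a d^2 x d^2 matrix."""
    d = H.shape[0]
    RK = (4 * np.pi) ** (-d / 2)        # R(K) for the standard normal kernel
    m2K = 1.0                           # m_2(K) for the standard normal kernel
    vecH = H.reshape(-1, order='F')     # vec H: stack the columns of H
    return n ** -1 * np.linalg.det(H) ** (-0.5) * RK \
        + 0.25 * m2K ** 2 * vecH @ Psi4_hat @ vecH
</syntaxhighlight>

Minimising this function over ''F'' (for example with the Cholesky parameterisation sketched earlier) yields the plug-in bandwidth.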
Smoothed cross validation (SCV) is a subset of a larger class of [[cross-validation (statistics)|cross validation]] techniques. The SCV estimator differs from the plug-in estimator in the second term
 
: <math>\operatorname{SCV}(\bold{H}) = n^{-1} |\bold{H}|^{-1/2} R(K) +
n^{-2} \sum_{i=1}^n \sum_{j=1}^n (K_{2\bold{H} +2\bold{G}} - 2K_{\bold{H} +2\bold{G}}
+ K_{2\bold{G}}) (\bold{X}_i - \bold{X}_j)</math>
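
The following is a minimal sketch of evaluating this criterion with the Gaussian kernel, taking ''K''<sub>'''Σ'''</sub> to be the normal density with covariance '''Σ''' and assuming a data array <code>X</code> of shape (''n'', ''d'') and a pilot bandwidth matrix <code>G</code>; both names are illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def scv_criterion(H, G, X):
    """Smoothed cross validation criterion SCV(H) for the Gaussian kernel,
    with pilot bandwidth matrix G."""
    n, d = X.shape
    RK = (4 * np.pi) ** (-d / 2)                              # R(K) for the normal kernel
    diffs = (X[:, None, :] - X[None, :, :]).reshape(-1, d)    # all pairs X_i - X_j
    zero = np.zeros(d)
    term = (multivariate_normal.pdf(diffs, mean=zero, cov=2 * H + 2 * G)
            - 2 * multivariate_normal.pdf(diffs, mean=zero, cov=H + 2 * G)
            + multivariate_normal.pdf(diffs, mean=zero, cov=2 * G))
    return n ** -1 * np.linalg.det(H) ** (-0.5) * RK + term.sum() / n ** 2
</syntaxhighlight>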