Multivariate kernel density estimation

[[Kernel density estimation]] is a [[nonparametric]] technique for [[density estimation]], i.e., the estimation of [[probability density function]]s, which is one of the fundamental questions in [[statistics]]. It can be viewed as a generalisation of [[histogram]] density estimation with improved statistical properties. Apart from histograms, other types of density estimators include [[parametric statistics|parametric]], [[spline interpolation|spline]], [[wavelet]] and [[Fourier series]] estimators. Kernel density estimators were first introduced in the scientific literature for [[univariate]] data in the 1950s and 1960s<ref>{{Cite journal| doi=10.1214/aoms/1177728190 | last=Rosenblatt | first=M.| title=Remarks on some nonparametric estimates of a density function | journal=Annals of Mathematical Statistics | year=1956 | volume=27 | issue=3 | pages=832–837}}</ref><ref>{{Cite journal| doi=10.1214/aoms/1177704472| last=Parzen | first=E.| title=On estimation of a probability density function and mode | journal=Annals of Mathematical Statistics| year=1962 | volume=33 | issue=3 | pages=1065–1076}}</ref> and have subsequently been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to [[multivariate statistics]]. Based on research carried out in the 1990s and 2000s, '''multivariate kernel density estimation''' has reached a level of maturity comparable to that of its univariate counterparts.<ref name="simonoff1996">{{Cite book| author=Simonoff, J.S. | title=Smoothing Methods in Statistics | publisher=Springer | year=1996 | isbn=978-0-387-94716-7}}</ref>
 
==Motivation==
We use an illustrative [[Synthetic data|synthetic]] [[bivariate data|bivariate]] data set of 50 points to demonstrate the construction of histograms. This requires the choice of an anchor point (the lower left corner of the histogram grid). For the histogram on the left, we choose (−1.5,&nbsp;−1.5); for the one on the right, we shift the anchor point by 0.125 in both directions to (−1.625,&nbsp;−1.625). Both histograms have a binwidth of 0.5, so any differences are due to the change in the anchor point only. The colour-coding indicates the number of data points which fall into a bin: 0=white, 1=pale yellow, 2=bright yellow, 3=orange, 4=red. The left histogram appears to indicate that the upper half has a higher density than the lower half, whereas the reverse is the case for the right-hand histogram, confirming that histograms are highly sensitive to the placement of the anchor point.<ref>{{Cite book| author=Silverman, B.W. | title=Density Estimation for Statistics and Data Analysis | publisher=Chapman & Hall/CRC | year=1986 | isbn=978-0-412-24620-3 | pages=7–11}}</ref>
 
[[File:Synthetic data 2D histograms.png|thumb|center|500px|alt=Left. Histogram with anchor point at (−1.5,&nbsp;−1.5). Right. Histogram with anchor point at (−1.625,&nbsp;−1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.|Comparison of 2D histograms. Left. Histogram with anchor point at (−1.5,&nbsp;−1.5). Right. Histogram with anchor point at (−1.625,&nbsp;−1.625). Both histograms have a bin width of 0.5, so differences in appearances of the two histograms are due to the placement of the anchor point.]]
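
The anchor-point sensitivity is straightforward to reproduce numerically. The following is a minimal Python/NumPy sketch; the random sample and the <code>hist2d</code> helper are illustrative stand-ins, not the actual 50-point data set used in the figures above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Illustrative synthetic bivariate sample (not the figure's data set)
data = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=50)

def hist2d(data, anchor, binwidth=0.5, nbins=8):
    """Bin counts for a 2D histogram whose grid starts at `anchor`."""
    edges = [anchor[k] + binwidth * np.arange(nbins + 1) for k in (0, 1)]
    counts, _, _ = np.histogram2d(data[:, 0], data[:, 1], bins=edges)
    return counts

# Same data and binwidth; only the anchor point shifts by 0.125
h_left = hist2d(data, anchor=(-1.5, -1.5))
h_right = hist2d(data, anchor=(-1.625, -1.625))
# h_left and h_right typically disagree about where the counts concentrate
</syntaxhighlight>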
Given a sample <math>\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_n</math> drawn from a common density ''f'', the kernel density estimate is defined as <math display="inline">\hat{f}_\mathbf{H}(\mathbf{x}) = n^{-1} \sum_{i=1}^n K_\mathbf{H}(\mathbf{x} - \mathbf{X}_i)</math>, where the scaled kernel is defined in terms of the unscaled kernel ''K'' and the bandwidth matrix '''H''' as
* <math>K_\mathbf{H}(\mathbf{x})=|\mathbf{H}|^{-1/2}K(\mathbf{H}^{-1/2}\mathbf{x} )</math>.
 
The choice of the kernel function ''K'' is not crucial to the accuracy of kernel density estimators, so we use the standard [[multivariate normal distribution|multivariate normal]] kernel throughout: <math display="inline">K_\mathbf{H}(\mathbf{x})={(2 \pi)^{-d/2}} \mathbf{|H|}^{-1/2} e^{ -\frac{1}{2}\mathbf{x^T}\mathbf{H^{-1}}\mathbf{x} }</math>, where '''H''' plays the role of the [[covariance matrix]]. On the other hand, the choice of the bandwidth matrix '''H''' is the single most important factor affecting the accuracy of the estimate, since it controls the amount and orientation of the smoothing induced.<ref name="WJ1995">{{Cite book| author1=Wand, M.P. | author2=Jones, M.C. | title=Kernel Smoothing | publisher=Chapman & Hall/CRC | ___location=London | year=1995 | isbn=978-0-412-55270-0}}</ref>{{rp|36–39}} That the bandwidth matrix also induces an orientation is a basic difference between multivariate kernel density estimation and its univariate analogue, since orientation is not defined for 1D kernels. This motivates the choice of a parametrisation for the bandwidth matrix. The three main parametrisation classes (in increasing order of complexity) are ''S'', the class of positive scalars times the identity matrix; ''D'', diagonal matrices with positive entries on the main diagonal; and ''F'', symmetric positive definite matrices. The ''S'' class kernels have the same amount of smoothing applied in all coordinate directions, ''D'' kernels allow different amounts of smoothing in each of the coordinates, and ''F'' kernels allow arbitrary amounts and orientations of the smoothing. Historically ''S'' and ''D'' kernels have been the most widespread for computational reasons, but research indicates that important gains in accuracy can be obtained using the more general ''F'' class kernels.<ref>{{cite journal | author1=Wand, M.P. | author2=Jones, M.C. | title=Comparison of smoothing parameterizations in bivariate kernel density estimation | journal=Journal of the American Statistical Association | year=1993 | volume=88 | issue=422 | pages=520–528 | doi=10.1080/01621459.1993.10476303 | jstor=2290332}}</ref><ref name="DH2003">{{Cite journal| doi=10.1080/10485250306039 | author1=Duong, T. | author2=Hazelton, M.L. | title=Plug-in bandwidth matrices for bivariate kernel density estimation | journal=Journal of Nonparametric Statistics | year=2003 | volume=15 | pages=17–30}}</ref>
 
[[File:Kernel parametrisation class.png|thumb|center|500px|alt=Comparison of the three main bandwidth matrix parametrisation classes. Left. S positive scalar times the identity matrix. Centre. D diagonal matrix with positive entries on the main diagonal. Right. F symmetric positive definite matrix.|Comparison of the three main bandwidth matrix parametrisation classes. Left. ''S'' positive scalar times the identity matrix. Centre. ''D'' diagonal matrix with positive entries on the main diagonal. Right. ''F'' symmetric positive definite matrix.]]
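
To make the estimator and the three bandwidth classes concrete, here is a minimal NumPy sketch of the Gaussian kernel density estimate with a general bandwidth matrix; the function name <code>kde</code> and the example matrices are hypothetical choices for illustration, not taken from the references.

<syntaxhighlight lang="python">
import numpy as np

def kde(x, data, H):
    """Evaluate the kernel density estimate at the rows of x, using the
    multivariate normal kernel with bandwidth matrix H."""
    n, d = data.shape
    H_inv = np.linalg.inv(H)
    norm = (2.0 * np.pi) ** (-d / 2.0) / np.sqrt(np.linalg.det(H))
    diff = x[:, None, :] - data[None, :, :]                # (m, n, d): x - X_i
    maha = np.einsum('mnd,de,mne->mn', diff, H_inv, diff)  # quadratic forms x'H^{-1}x
    return norm * np.exp(-0.5 * maha).mean(axis=1)         # mean of K_H(x - X_i)

# The three parametrisation classes for d = 2 (example values):
H_S = 0.3 * np.eye(2)                   # S: positive scalar times the identity
H_D = np.diag([0.3, 0.7])               # D: diagonal with positive entries
H_F = np.array([[0.3, 0.2],
                [0.2, 0.7]])            # F: symmetric positive definite
</syntaxhighlight>

For instance, <code>kde(np.array([[0.0, 0.0]]), data, H_F)</code> evaluates the estimate at the origin with a full ''F''-class bandwidth; only ''F'' matrices can rotate the smoothing away from the coordinate axes.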
== Objective and data-driven kernel selection ==
[[File:Empirical Characteristic Function.jpg|alt=An x-shaped region of empirical characteristic function in Fourier space.|thumb|Demonstration of the filter function <math>I_{\vec{A}}(\vec{t})</math>. The square of the empirical characteristic function <math>|\hat{\varphi}|^2</math> from ''N''=10,000 samples of the ‘transition distribution’ discussed in Section 3.2 (and shown in Fig. 4), for <math>|\hat{\varphi}|^2 \ge 4(N-1)N^{-2}</math>. There are two color schemes present in this figure. The predominantly dark, multicolored ‘X-shaped’ region in the center corresponds to values of <math>|\hat{\varphi}|^2</math> for the lowest contiguous hypervolume (the area containing the origin); the colorbar at right applies to colors in this region. The lightly-colored, monotone areas away from the first contiguous hypervolume correspond to additional contiguous hypervolumes (areas) with <math>|\hat{\varphi}|^2 \ge 4(N-1)N^{-2}</math>. The colors of these areas are arbitrary and only serve to visually differentiate nearby contiguous areas from one another.]]
Recent research has shown that the kernel and its bandwidth can both be chosen optimally and objectively from the input data itself, without making any assumptions about the form of the distribution.<ref name=":0">{{Cite journal|last = Bernacchia|first = Alberto|last2 = Pigolotti|first2 = Simone|date = 2011-06-01|title = Self-consistent method for density estimation|url = http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2011.00772.x/abstract|journal = Journal of the Royal Statistical Society, Series B|language = en|volume = 73|issue = 3|pages = 407–422|doi = 10.1111/j.1467-9868.2011.00772.x|issn = 1467-9868|arxiv = 0908.3856}}</ref> The resulting kernel density estimate converges rapidly to the true probability distribution as samples are added, at a rate close to the <math>n^{-1}</math> rate expected for parametric estimators.<ref name=":0" /><ref name=":1">{{Cite journal|last = O’Brien|first = Travis A.|last2 = Collins|first2 = William D.|last3 = Rauscher|first3 = Sara A.|last4 = Ringler|first4 = Todd D.|date = 2014-11-01|title = Reducing the computational cost of the ECF using a nuFFT: A fast and objective probability density estimation method|url = http://www.sciencedirect.com/science/article/pii/S016794731400173X|journal = Computational Statistics & Data Analysis|volume = 79|pages = 222–234|doi = 10.1016/j.csda.2014.06.002}}</ref><ref name=":22">{{Cite journal|last = O’Brien|first = Travis A.|last2 = Kashinath|first2 = Karthik|last3 = Cavanaugh|first3 = Nicholas R.|last4 = Collins|first4 = William D.|last5 = O’Brien|first5 = John P.|title = A fast and objective multidimensional kernel density estimation method: fastKDE|url = http://www.sciencedirect.com/science/article/pii/S0167947316300408|journal = Computational Statistics & Data Analysis|volume = 101|pages = 148–160|doi = 10.1016/j.csda.2016.02.014|year = 2016}}</ref> This kernel estimator works for univariate and multivariate samples alike. The optimal kernel is defined in Fourier space as the optimal damping function <math>\hat{\psi_h}(\vec{t})</math> (the Fourier transform of the kernel <math>K(\vec{x})</math>), expressed in terms of the Fourier transform of the data <math>\hat{\varphi}(\vec{t})</math>, the ''[[Characteristic function (probability theory)|empirical characteristic function]]'' (see [[Kernel density estimation]]):
 
<math>\hat{\psi_h}(\vec{t}) \equiv \frac{N}{2(N-1)} \left[ 1 + \sqrt{1 - \frac{4(N-1)}{N^2 |\hat{\varphi}(\vec{t})|^2}} I_{\vec{A}}(\vec{t}) \right]</math> <ref name=":22"/>
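
In one dimension, this estimator can be sketched directly from the formula above. The snippet below is a simplified illustration: the contiguous-hypervolume indicator <math>I_{\vec{A}}(\vec{t})</math> is replaced with a plain threshold on <math>|\hat{\varphi}|^2</math>, the frequency grid is a heuristic choice, and the function name is hypothetical. It is not the fastKDE implementation, which uses a non-uniform FFT for efficiency.

<syntaxhighlight lang="python">
import numpy as np

def self_consistent_kde_1d(samples, grid, n_freq=2048):
    """Sketch of the ECF-based self-consistent density estimate (1D)."""
    N = samples.size
    # Heuristic frequency grid scaled to the sample spread
    t = np.linspace(-20.0, 20.0, n_freq) / samples.std()
    dt = t[1] - t[0]
    # Empirical characteristic function: average of exp(i*t*X_j) over the sample
    ecf = np.exp(1j * np.outer(t, samples)).mean(axis=1)
    # Simplified I_A: keep only frequencies above the stability threshold
    thresh = 4.0 * (N - 1) / N**2
    mask = np.abs(ecf) ** 2 >= thresh
    # Damping function psi_hat from the formula above; zero where masked out
    psi = np.zeros(n_freq)
    psi[mask] = (N / (2.0 * (N - 1))) * (
        1.0 + np.sqrt(1.0 - thresh / np.abs(ecf[mask]) ** 2))
    # Invert the damped transform: f(x) = (1/(2*pi)) * integral of psi*ecf*e^{-itx} dt
    f = (np.exp(-1j * np.outer(grid, t)) @ (psi * ecf)).real * dt / (2.0 * np.pi)
    return np.clip(f, 0.0, None)  # clip small negative ripples from truncation

# Example: recover a standard normal density from 1000 draws
x = np.linspace(-4.0, 4.0, 200)
f_hat = self_consistent_kde_1d(np.random.default_rng(1).standard_normal(1000), x)
</syntaxhighlight>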