Silverman's rule of thumb suggests using <math>\sqrt{\mathbf{H}_{ii}} = \left(\frac{4}{d+2}\right)^{\frac{1}{d+4}} n^{\frac{-1}{d+4}} \sigma_i</math> where <math>\sigma_i</math> is the standard deviation of the ith variable and <math>\mathbf{H}_{ij} = 0, i\neq j</math>. Scott's rule is <math>\sqrt{\mathbf{H}_{ii}} = n^{\frac{-1}{d+4}} \sigma_i</math>.
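Both rules of thumb are straightforward to compute directly from the data. The following sketch (an illustrative helper, not taken from any particular library) builds the diagonal bandwidth matrix <math>\mathbf{H}</math> for either rule from an ''n'' × ''d'' data array:

```python
import numpy as np

def rule_of_thumb_bandwidths(data, rule="silverman"):
    """Diagonal bandwidth matrix H from Silverman's or Scott's rule of thumb.

    data: array of shape (n, d). Returns the (d, d) matrix H with
    sqrt(H_ii) given by the chosen rule and H_ij = 0 for i != j.
    """
    n, d = data.shape
    sigma = data.std(axis=0, ddof=1)  # standard deviation of each variable
    if rule == "silverman":
        # sqrt(H_ii) = (4 / (d + 2))^(1/(d+4)) * n^(-1/(d+4)) * sigma_i
        h = (4.0 / (d + 2)) ** (1.0 / (d + 4)) * n ** (-1.0 / (d + 4)) * sigma
    elif rule == "scott":
        # sqrt(H_ii) = n^(-1/(d+4)) * sigma_i
        h = n ** (-1.0 / (d + 4)) * sigma
    else:
        raise ValueError("rule must be 'silverman' or 'scott'")
    return np.diag(h ** 2)
```

Note that for ''d'' = 2 the Silverman prefactor <math>(4/(d+2))^{1/(d+4)}</math> equals 1, so the two rules coincide in two dimensions.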
=== Objective and data-driven kernel selection ===
[[File:Empirical Characteristic Function.jpg|alt=An x-shaped region of empirical characteristic function in Fourier space.|thumb|Demonstration of the filter function <math>I_{\vec{A}}(\vec{t})</math>. The square of the empirical characteristic function <math>|\hat{\varphi}|^2</math> from ''N''=10,000 samples of the ‘transition distribution’ discussed in Section 3.2 (and shown in Fig. 4), for <math>|\hat{\varphi}|^2 \ge 4(N-1)N^{-2}</math>. There are two color schemes present in this figure. The predominantly dark, multicolored ‘X-shaped’ region in the center corresponds to values of <math>|\hat{\varphi}|^2</math> for the lowest contiguous hypervolume (the area containing the origin); the colorbar at right applies to colors in this region. The lightly-colored, monotone areas away from the first contiguous hypervolume correspond to additional contiguous hypervolumes (areas) with <math>|\hat{\varphi}|^2 \ge 4(N-1)N^{-2}</math>. The colors of these areas are arbitrary and only serve to visually differentiate nearby contiguous areas from one another.]]
Recent research has shown that the kernel and its bandwidth can both be optimally and objectively chosen from the input data itself without making any assumptions about the form of the distribution<ref name=":0">{{Cite journal|last = Bernacchia|first = Alberto|last2 = Pigolotti|first2 = Simone|date = 2011-06-01|title = Self-consistent method for density estimation|url = http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2011.00772.x/abstract|journal = Journal of the Royal Statistical Society: Series B (Statistical Methodology)|language = en|volume = 73|issue = 3|pages = 407–422|doi = 10.1111/j.1467-9868.2011.00772.x|issn = 1467-9868}}</ref>. The resulting kernel density estimate converges rapidly to the true probability distribution as samples are added: at a rate close to the <math>n^{-1}</math> expected for parametric estimators <ref name=":0" /><ref name=":1">{{Cite journal|last = O’Brien|first = Travis A.|last2 = Collins|first2 = William D.|last3 = Rauscher|first3 = Sara A.|last4 = Ringler|first4 = Todd D.|date = 2014-11-01|title = Reducing the computational cost of the ECF using a nuFFT: A fast and objective probability density estimation method|url = http://www.sciencedirect.com/science/article/pii/S016794731400173X|journal = Computational Statistics & Data Analysis|volume = 79|pages = 222–234|doi = 10.1016/j.csda.2014.06.002}}</ref><ref name=":2">{{Cite journal|last = O’Brien|first = Travis A.|last2 = Kashinath|first2 = Karthik|last3 = Cavanaugh|first3 = Nicholas R.|last4 = Collins|first4 = William D.|last5 = O’Brien|first5 = John P.|title = A fast and objective multidimensional kernel density estimation method: fastKDE|url = http://www.sciencedirect.com/science/article/pii/S0167947316300408|journal = Computational Statistics & Data Analysis|doi = 10.1016/j.csda.2016.02.014}}</ref>. This kernel estimator works for univariate and multivariate samples alike.
The optimal kernel is defined in Fourier space, as the optimal damping function <math>\hat{\psi_h}(\vec{t})</math> (the Fourier transform of the kernel <math>K(\vec{x})</math>), in terms of the Fourier transform of the data <math>\hat{\varphi}(\vec{t})</math>, the ''[[Characteristic function (probability theory)|empirical characteristic function]]'' (see [[Kernel density estimation]]):
<math>\hat{\psi_h}(\vec{t}) \equiv \frac{N}{2(N-1)} \left[ 1 + \sqrt{1 - \frac{4(N-1)}{N^2 |\hat{\varphi}(\vec{t})|^2}} I_{\vec{A}}(\vec{t}) \right]</math> <ref name=":2" />
<math>\hat{f}(\vec{x}) = \frac{1}{(2\pi)^d} \int \hat\varphi(\vec{t})\hat{\psi_h}(\vec{t}) e^{-i\vec{t} \cdot \vec{x}}d\vec{t}</math>
where ''N'' is the number of data points, ''d'' is the number of dimensions (variables), and <math>I_{\vec{A}}(\vec{t})</math> is a filter function that equals 1 for 'accepted frequencies' and 0 otherwise. There are various ways to define this filter function; a simple one that works for univariate or multivariate samples is the 'lowest contiguous hypervolume filter', in which the only accepted frequencies are a contiguous subset of frequencies surrounding the origin for which <math>|\hat{\varphi}(\vec{t})|^2 \ge 4(N-1)N^{-2}</math> (see <ref name=":2" /> for a discussion of this and other filter functions).
Note that direct calculation of the ''empirical characteristic function'' (ECF) is slow, since it essentially involves a direct Fourier transform of the data samples. However, it has been found that the ECF can be approximated accurately using a [[Non-uniform discrete Fourier transform|non-uniform fast Fourier transform]] (nuFFT) method<ref name=":1" /><ref name=":2" />, which increases the calculation speed by several orders of magnitude (depending on the dimensionality of the problem). The combination of this objective KDE method and the nuFFT-based ECF approximation has been referred to as ''[https://bitbucket.org/lbl-cascade/fastkde fastKDE]'' in the literature<ref name=":2" />.
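The method can be sketched concretely in one dimension. The code below is an illustrative numpy sketch, not the fastKDE implementation: it uses the slow, direct ECF rather than a nuFFT, and the frequency grid extent is a heuristic choice. It computes the ECF, applies the lowest contiguous hypervolume filter, damps the ECF with <math>\hat{\psi_h}</math>, and inverts the transform:

```python
import numpy as np

def self_consistent_kde(x, num_t=1025, t_max=None):
    """1-D sketch of the objective, data-driven KDE described above.

    Computes the empirical characteristic function (ECF) directly (the
    slow route; fastKDE uses a nuFFT instead), applies the lowest
    contiguous hypervolume filter, damps the ECF with the optimal
    psi-hat, and inverts the transform to recover the density.
    """
    x = np.asarray(x, float)
    N = x.size
    if t_max is None:
        t_max = 10.0 * np.pi / x.std()  # heuristic frequency extent
    t = np.linspace(-t_max, t_max, num_t)  # odd num_t puts t = 0 on the grid
    # ECF: phi(t) = (1/N) * sum_j exp(i * t * x_j)
    ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)
    power = np.abs(ecf) ** 2
    thresh = 4.0 * (N - 1) / N ** 2  # accept where |phi|^2 >= 4(N-1)N^-2
    # Lowest contiguous hypervolume filter: walk outward from the origin
    accept = np.zeros(num_t, dtype=bool)
    origin = num_t // 2
    for step in (1, -1):
        i = origin
        while 0 <= i < num_t and power[i] >= thresh:
            accept[i] = True
            i += step
    # Optimal damping function psi-hat(t) on the accepted frequencies
    psi = np.zeros(num_t)
    psi[accept] = (N / (2.0 * (N - 1))) * (
        1.0 + np.sqrt(1.0 - thresh / power[accept]))
    # Inverse transform: f(x) = (1/2pi) * integral phi(t) psi(t) e^{-itx} dt
    xs = np.linspace(x.min(), x.max(), 512)
    dt = t[1] - t[0]
    f = (np.exp(-1j * np.outer(xs, t)) @ (ecf * psi)).real * dt / (2 * np.pi)
    return xs, np.clip(f, 0.0, None)  # clip tiny negative ringing
```

On a well-behaved sample the returned estimate is non-negative and integrates to approximately one over the sample range; the direct ECF sums make this O(''N'' × ''num_t''), which is exactly the cost the nuFFT approximation avoids.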
[[File:FastKDE_example.jpg|link=https://en.wikipedia.org/wiki/File:FastKDE_example.jpg|alt=A demonstration of fastKDE relative to a sample PDF. (a) True PDF, (b) a good representation with fastKDE, and (c) a slightly blurry representation.|none|thumb|664x664px|A non-trivial mixture of normal distributions: (a) the underlying PDF, (b) a fastKDE estimate on 1,000,000 samples, and (c) a fastKDE estimate on 10,000 samples.]]
==Asymptotic analysis==
In the optimal bandwidth selection section, we introduced the MISE. Its construction relies on the [[expected value]] and the [[variance]] of the density estimator<ref name="WJ1995" />{{rp|97}}
:<math>\operatorname{E} \hat{f}(\bold{x};\bold{H}) = K_\bold{H} * f (\bold{x}) = f(\bold{x}) + \frac{1}{2} m_2(K) \operatorname{tr} (\bold{H} \operatorname{D}^2 f(\bold{x})) + O(\operatorname{tr} \, \bold{H}^2)</math>
:<math>\operatorname{vec} (\hat{\bold{H}} - \bold{H}_{\operatorname{AMISE}}) = O(n^{-2\alpha}) \operatorname{vec} \bold{H}_{\operatorname{AMISE}}.</math>
It has been established that the plug-in and smoothed cross validation selectors (given a single pilot bandwidth '''G''') both converge at a relative rate of ''O<sub>p</sub>(n<sup>−2/(d+6)</sup>)'' <ref name="DH2005" /><ref>{{Cite journal| doi=10.1016/j.jmva.2004.04.004 | author1=Duong, T. | author2=Hazelton, M.L. | title=Convergence rates for unconstrained bandwidth matrix selectors in multivariate kernel density estimation | journal=Journal of Multivariate Analysis | year=2005 | volume=93 | pages=417–433}}</ref> i.e., both these data-based selectors are consistent estimators.
==Density estimation with a full bandwidth matrix==
===Alternative optimality criteria===
The MISE is the expected integrated ''L<sub>2</sub>'' distance between the density estimate and the true density function ''f''. It is the most widely used criterion, mostly due to its tractability, and most software implements MISE-based bandwidth selectors.
There are alternative optimality criteria, which attempt to cover cases where MISE is not an appropriate measure.<ref name="simonoff1996" />{{rp|34–37,78}} The equivalent ''L<sub>1</sub>'' measure, Mean Integrated Absolute Error, is
: <math>\operatorname{MIAE} (\bold{H}) = \operatorname{E}\, \int |\hat{f}_\bold{H} (\bold{x}) - f(\bold{x})| \, d\bold{x}.</math>
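The MIAE generally has no closed form, but it can be approximated by Monte Carlo: average the integrated absolute error over repeated samples drawn from a known density. The sketch below is illustrative only, using a fixed-bandwidth univariate Gaussian KDE with a standard normal as the true ''f'':

```python
import numpy as np

def miae_gaussian_kde(n=200, h=0.4, reps=50, seed=0):
    """Monte Carlo approximation of the MIAE for a fixed-bandwidth
    Gaussian KDE when the true density f is standard normal."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-5.0, 5.0, 801)
    dg = grid[1] - grid[0]
    f_true = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
    errs = []
    for _ in range(reps):
        sample = rng.normal(size=n)
        # Gaussian KDE evaluated on the grid
        f_hat = np.exp(-(grid[:, None] - sample) ** 2 / (2 * h ** 2)).sum(axis=1)
        f_hat /= n * h * np.sqrt(2 * np.pi)
        errs.append(np.abs(f_hat - f_true).sum() * dg)  # integral of |f_hat - f|
    return float(np.mean(errs))  # sample mean approximates the expectation
```

The average over replicates approximates the outer expectation in the MIAE definition; the inner integral is approximated by a Riemann sum on the grid.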