Multivariate kernel density estimation

Statistics is a field of quantitative analysis concerned with quantifying uncertainty. The main building block of statistical analysis is the random variable: a mathematical function which assigns a numerical value to each possible outcome of the phenomenon of interest. The complete behaviour of a random variable is contained in its distribution function, and for continuous random variables the derivative of the distribution function is known as the probability density function, or pdf. Density estimation is thus a fundamental question in statistics.

Kernel density estimation is one of the most popular techniques for density estimation. It can be viewed as a generalisation of histogram density estimation with improved statistical properties. Kernel density estimators were first introduced in the scientific literature for univariate data in the 1950s and 1960s[1][2] and have subsequently been widely adopted. It was soon recognised that analogous estimators for multivariate data would be an important addition to multivariate statistics. Based on research carried out in the 1990s and 2000s, multivariate kernel density estimation has reached a level of maturity comparable to that of its univariate counterparts.


Motivation

To motivate the definition of multivariate kernel density estimators, we consider an illustrative bivariate data set drawn from ....

Bivariate histograms suffer from the same problems as their univariate counterparts: their appearance depends heavily on the choice of anchor point and bin width, and the resulting estimates are discontinuous.


Definition

Let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ be a d-variate random sample drawn from a common density function f. The kernel density estimate is defined to be

$$\hat{f}_\mathbf{H}(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^n K_\mathbf{H}(\mathbf{x} - \mathbf{x}_i)$$

where

  • $\mathbf{x} = (x_1, x_2, \ldots, x_d)^T$ and $\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{id})^T$, $i = 1, 2, \ldots, n$, are d-vectors
  • K is the kernel function, which is a symmetric density function, with $K_\mathbf{H}(\mathbf{x}) = |\mathbf{H}|^{-1/2} K(\mathbf{H}^{-1/2} \mathbf{x})$
  • H is the bandwidth (or smoothing) matrix, which is a symmetric, positive definite d × d matrix.

The choice of the kernel function K is not crucial to the accuracy of kernel density estimators, whereas the choice of the bandwidth matrix H is the single most important factor affecting their accuracy.[3](pp. 36–39) Thus we use the standard multivariate normal (Gaussian) density function as our kernel K:

$$K(\mathbf{x}) = (2\pi)^{-d/2} \exp\left(-\tfrac{1}{2} \mathbf{x}^T \mathbf{x}\right)$$
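To make the definition concrete, the following is a minimal numerical sketch in Python with NumPy (the function name gaussian_kde and its interface are illustrative, not taken from the cited literature) that evaluates $\hat{f}_\mathbf{H}$ with the Gaussian kernel:

    import numpy as np

    def gaussian_kde(x, data, H):
        """Evaluate the kernel density estimate f_H at the rows of x.

        x    : (m, d) array of evaluation points
        data : (n, d) array holding the sample x_1, ..., x_n
        H    : (d, d) symmetric, positive definite bandwidth matrix
        """
        n, d = data.shape
        H_inv = np.linalg.inv(H)
        # K_H(t) = |H|^{-1/2} (2*pi)^{-d/2} exp(-t^T H^{-1} t / 2)
        norm = (2 * np.pi) ** (-d / 2) / np.sqrt(np.linalg.det(H))
        diff = x[:, None, :] - data[None, :, :]                # (m, n, d)
        quad = np.einsum('mnd,de,mne->mn', diff, H_inv, diff)  # t^T H^{-1} t
        return norm * np.exp(-0.5 * quad).mean(axis=1)         # average over x_i

    # Example: density at the origin from 200 bivariate standard normal points
    rng = np.random.default_rng(0)
    sample = rng.standard_normal((200, 2))
    print(gaussian_kde(np.zeros((1, 2)), sample, 0.25 * np.eye(2)))  # ~ 1/(2*pi)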

Optimal bandwidth matrix selection

The most commonly used optimality criterion for selecting a bandwidth matrix is the MISE, or Mean Integrated Squared Error:

$$\operatorname{MISE}(\mathbf{H}) = \operatorname{E} \int \left[\hat{f}_\mathbf{H}(\mathbf{x}) - f(\mathbf{x})\right]^2 \, d\mathbf{x}$$
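Since the MISE is defined as an expectation, it can be approximated by simulation when the true density f is known. The sketch below is an illustration only: it reuses the gaussian_kde function defined above, and the bivariate standard normal target and quadrature grid are assumptions made for this example.

    # Monte Carlo approximation of MISE(H): average the integrated squared
    # error over repeated samples of size n from a N(0, I_2) target.
    def mise(H, n, reps=50, rng=np.random.default_rng(1)):
        t = np.linspace(-4, 4, 81)
        X, Y = np.meshgrid(t, t)
        grid = np.column_stack([X.ravel(), Y.ravel()])
        f = np.exp(-0.5 * (grid ** 2).sum(axis=1)) / (2 * np.pi)  # true density
        cell = (t[1] - t[0]) ** 2                                 # quadrature weight
        ise = []
        for _ in range(reps):
            data = rng.standard_normal((n, 2))
            fhat = gaussian_kde(grid, data, H)
            ise.append(((fhat - f) ** 2).sum() * cell)
        return np.mean(ise)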

In general this does not possess a closed-form expression, so it is usual to use its asymptotic approximation (AMISE) as a proxy:

$$\operatorname{AMISE}(\mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2 \int \operatorname{tr}^2 \left\{ \mathbf{H} \operatorname{D}^2 f(\mathbf{x}) \right\} \, d\mathbf{x}$$

where

  • $R(K) = \int K(\mathbf{x})^2 \, d\mathbf{x}$. For the normal kernel K, $R(K) = (4\pi)^{-d/2}$
  • $\int \mathbf{x} \mathbf{x}^T K(\mathbf{x}) \, d\mathbf{x} = m_2(K) \mathbf{I}_d$, where $\mathbf{I}_d$ is the d × d identity matrix; for the normal kernel, $m_2(K) = 1$
  • $\operatorname{D}^2 f$ is the d × d Hessian matrix of second order partial derivatives of f

This formula for the AMISE is due to [3](p. 97). The quality of the AMISE approximation to the MISE is given by

$$\operatorname{MISE}(\mathbf{H}) = \operatorname{AMISE}(\mathbf{H}) + o\left( n^{-1} |\mathbf{H}|^{-1/2} + \operatorname{tr} \mathbf{H}^2 \right)$$

where o indicates the usual small o notation. Heuristically this statement implies that the AMISE is a 'good' approximation of the MISE as the sample size n → ∞.
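As a concrete illustration, the AMISE can be computed numerically when f is known. The sketch below (Python with NumPy; the bivariate standard normal target and the quadrature grid are assumptions made for illustration) uses the fact that, for the standard normal density, $\operatorname{D}^2 f(\mathbf{x}) = f(\mathbf{x})(\mathbf{x}\mathbf{x}^T - \mathbf{I})$, so that $\operatorname{tr}\{\mathbf{H} \operatorname{D}^2 f(\mathbf{x})\} = f(\mathbf{x})(\mathbf{x}^T \mathbf{H} \mathbf{x} - \operatorname{tr} \mathbf{H})$:

    import numpy as np

    # AMISE(H) with the Gaussian kernel, for a bivariate standard normal
    # target density f.  The bias integral is approximated by quadrature.
    def amise(H, n, d=2):
        RK = (4 * np.pi) ** (-d / 2)                   # R(K) for the normal kernel
        var_term = RK / (n * np.sqrt(np.linalg.det(H)))
        t = np.linspace(-5, 5, 201)
        X, Y = np.meshgrid(t, t)
        pts = np.column_stack([X.ravel(), Y.ravel()])
        f = np.exp(-0.5 * (pts ** 2).sum(axis=1)) / (2 * np.pi)
        quad = np.einsum('md,de,me->m', pts, H, pts)   # x^T H x
        tr_HD2f = f * (quad - np.trace(H))             # tr{H D^2 f(x)}
        cell = (t[1] - t[0]) ** 2
        bias_term = 0.25 * (tr_HD2f ** 2).sum() * cell # m_2(K) = 1 for this kernel
        return var_term + bias_term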

The ideal optimal bandwidth selector is $\mathbf{H}_{\operatorname{AMISE}} = \operatorname{argmin}_{\mathbf{H} \in F} \operatorname{AMISE}(\mathbf{H})$, where F is the space of all symmetric, positive definite d × d matrices. Since this ideal selector contains the unknown density function f, it cannot be used directly. The many different varieties of data-based bandwidth selectors arise from the different estimators of the AMISE. We concentrate on two classes of selectors which have been shown to be the most widely applicable in practice: smoothed cross validation and plug-in selectors.
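Minimising over the whole space F requires matrix optimisation; as a simplified sketch (restricting attention to the one-parameter family H = h²I, an assumption made purely for illustration), the amise function above can be minimised with a scalar optimiser:

    from scipy.optimize import minimize_scalar

    # Minimise AMISE over H = h^2 * I only; the ideal selector searches all of F.
    n = 1000
    res = minimize_scalar(lambda h: amise(h ** 2 * np.eye(2), n),
                          bounds=(0.05, 2.0), method='bounded')
    print(res.x)  # approx. n**(-1/6) = 0.316 for this target

For this target the exact minimiser over the family H = h²I is h = n^{-1/6}, so the numerical answer can be checked directly.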

Plug-in

The plug-in (PI) estimate of the AMISE is formed by replacing the Hessian matrix $\operatorname{D}^2 f$ by its kernel estimator $\widehat{\operatorname{D}^2 f}(\cdot\,; \mathbf{G})$, computed from the data with a pilot bandwidth matrix G:

$$\operatorname{PI}(\mathbf{H}) = n^{-1} |\mathbf{H}|^{-1/2} R(K) + \tfrac{1}{4} m_2(K)^2 \int \operatorname{tr}^2 \left\{ \mathbf{H} \, \widehat{\operatorname{D}^2 f}(\mathbf{x}; \mathbf{G}) \right\} \, d\mathbf{x}$$

and $\hat{\mathbf{H}}_{\operatorname{PI}} = \operatorname{argmin}_{\mathbf{H} \in F} \operatorname{PI}(\mathbf{H})$ is the plug-in selector.[4][5]
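The first step, estimating the Hessian, can be sketched as follows (Python with NumPy; the function name hessian_estimate and the vectorised layout are illustrative). It uses the identity $\operatorname{D}^2 K_\mathbf{G}(\mathbf{t}) = K_\mathbf{G}(\mathbf{t})(\mathbf{G}^{-1}\mathbf{t}\mathbf{t}^T\mathbf{G}^{-1} - \mathbf{G}^{-1})$ for the Gaussian kernel:

    import numpy as np

    def hessian_estimate(x, data, G):
        """Kernel estimator of D^2 f at the rows of x, with pilot bandwidth G."""
        n, d = data.shape
        G_inv = np.linalg.inv(G)
        norm = (2 * np.pi) ** (-d / 2) / np.sqrt(np.linalg.det(G))
        diff = x[:, None, :] - data[None, :, :]               # (m, n, d)
        u = np.einsum('de,mne->mnd', G_inv, diff)             # G^{-1} (x - x_i)
        k = norm * np.exp(-0.5 * np.einsum('mnd,mnd->mn', diff, u))
        outer = np.einsum('mnd,mne->mnde', u, u) - G_inv      # u u^T - G^{-1}
        return (k[..., None, None] * outer).mean(axis=1)      # (m, d, d)

The resulting estimate $\widehat{\operatorname{D}^2 f}(\mathbf{x}; \mathbf{G})$ can then be substituted into the quadrature used for the AMISE above, and PI(H) minimised numerically over F. In practice, plug-in selectors are available in statistical software, for example in the R package ks.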

Smoothed cross validation

Smoothed cross validation (SCV) is a subset of a larger class of cross validation techniques.[6]


References

  1. ^ Rosenblatt, M. (1956). "Remarks on some nonparametric estimates of a density function". Annals of Mathematical Statistics. 27: 832–837. doi:10.1214/aoms/1177728190.
  2. ^ Parzen, E. (1962). "On estimation of a probability density function and mode". Annals of Mathematical Statistics. 33: 1065–1076. doi:10.1214/aoms/1177704472.
  3. ^ a b Wand, M.P.; Jones, M.C. (1995). Kernel Smoothing. London: Chapman & Hall. ISBN 0412552701.
  4. ^ Wand, M.P.; Jones, M.C. (1994). "Multivariate plug-in bandwidth selection". Computational Statistics. 9: 97–177.
  5. ^ Duong, T.; Hazelton, M.L. (2003). "Plug-in bandwidth matrices for bivariate kernel density estimation". Journal of Nonparametric Statistics. 15: 17–30. doi:10.1080/10485250306039.
  6. ^ Chacón, J.E.; Duong, T. (2010). "Multivariate plug-in bandwidth selection with unconstrained pilot bandwidth matrices". Test. 19: 375–398. doi:10.1007/s11749-009-0168-4.
