Spectral clustering

In multivariate statistics and the clustering of data, spectral clustering techniques make use of the spectrum (eigenvalues) of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset.
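
As a rough illustration of the whole pipeline, the sketch below clusters toy data with scikit-learn's SpectralClustering, passing the similarity matrix in as a precomputed affinity. The two-blob data, the Gaussian kernel width, and the choice of scikit-learn are assumptions made for this example, not prescribed by the article.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    # Toy data: two well-separated groups of 2-D points.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                   rng.normal(3.0, 0.1, (10, 2))])

    # Gaussian similarity matrix (one common choice of similarity).
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    S = np.exp(-sq_dists / (2 * 0.5 ** 2))

    # Cluster using the spectrum of the precomputed similarity matrix.
    labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                                random_state=0).fit_predict(S)
    print(labels)  # the first ten and last ten points land in different clusters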

Algorithms

Given a set of data points $A$, the similarity matrix may be defined as a matrix $S$, where $S_{ij}$ represents a measure of the similarity between points $i, j \in A$.
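
A common concrete choice in practice (an illustrative assumption here, not something the article prescribes) is the Gaussian similarity $S_{ij} = \exp(-\|x_i - x_j\|^2 / (2\sigma^2))$, where $\sigma$ is a bandwidth parameter controlling how quickly similarity decays with distance.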

One spectral clustering technique is the normalized cuts algorithm or Shi–Malik algorithm introduced by Jianbo Shi and Jitendra Malik,[1] commonly used for image segmentation. It partitions points into two sets $(B_1, B_2)$ based on the eigenvector $v$ corresponding to the second-smallest eigenvalue of the Laplacian matrix

$L = I - D^{-1/2} S D^{-1/2}$

of $S$, where $D$ is the diagonal matrix

$D_{ii} = \sum_{j} S_{ij}.$

This partitioning may be done in various ways, such as by taking the median $m$ of the components in $v$, and placing all points whose component in $v$ is greater than $m$ in $B_1$, and the rest in $B_2$. The algorithm can be used for hierarchical clustering by repeatedly partitioning the subsets in this fashion.
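
A minimal NumPy sketch of one such split is given below. It follows the description above (normalized Laplacian, eigenvector of the second-smallest eigenvalue, median threshold) but is an illustration, not the authors' reference implementation.

    import numpy as np

    def shi_malik_split(S):
        """One Shi-Malik normalized-cuts bipartition of a similarity matrix S."""
        d = S.sum(axis=1)                       # degrees D_ii = sum_j S_ij
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        L = np.eye(len(S)) - D_inv_sqrt @ S @ D_inv_sqrt  # L = I - D^{-1/2} S D^{-1/2}

        # eigh returns eigenvalues in ascending order for symmetric matrices,
        # so column 1 holds the eigenvector of the second-smallest eigenvalue.
        _, eigvecs = np.linalg.eigh(L)
        v = eigvecs[:, 1]

        m = np.median(v)       # threshold at the median component
        return v > m           # True -> B1, False -> B2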

A related algorithm is the Meila–Shi algorithm,[2] which takes the eigenvectors corresponding to the $k$ largest eigenvalues of the matrix $P = D^{-1} S$ for some $k$, and then invokes another algorithm (e.g. k-means clustering) to cluster points by their respective $k$ components in these eigenvectors.
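
The following is a short sketch of that procedure, again as an illustration under the description above; the final clustering step uses scikit-learn's k-means for brevity.

    import numpy as np
    from sklearn.cluster import KMeans

    def meila_shi(S, k):
        """Cluster by the top-k eigenvectors of P = D^{-1} S, then k-means."""
        d = S.sum(axis=1)
        P = S / d[:, None]                    # row-normalized: P = D^{-1} S
        eigvals, eigvecs = np.linalg.eig(P)   # P is not symmetric in general
        top = np.argsort(-eigvals.real)[:k]   # indexes of the k largest eigenvalues
        embedding = eigvecs[:, top].real      # each point's k components
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)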

An efficiency improvement of spectral clustering is the spectral neighborhood (SPAN) algorithm,[3] which performs spectral clustering without explicitly computing the similarity matrix, and therefore dramatically improves the scalability of the standard spectral clustering algorithm.

Relationship with k-means

The kernel k-means problem is an extension of the k-means problem where the input data points are mapped non-linearly into a higher-dimensional feature space via a kernel function $k(x_i, x_j) = \phi^T(x_i) \phi(x_j)$. The weighted kernel k-means problem further extends this problem by defining a weight $w_r$ for each cluster as the reciprocal of the number of elements in the cluster,

$\max_{\{C_s\}} \sum_{r=1}^{k} w_r \sum_{x_i, x_j \in C_r} k(x_i, x_j).$

Suppose $F$ is a matrix of the normalizing coefficients for each point for each cluster: $F_{ij} = w_r$ if $i, j \in C_r$ and zero otherwise. Suppose $K$ is the kernel matrix for all points. The weighted kernel k-means problem with $n$ points and $k$ clusters is given as,

$\max_F \operatorname{trace}(KF)$

such that,

$F = G_{n \times k} G_{n \times k}^T,$
$G^T G = I$

such that $\operatorname{rank}(G) = k$. In addition, there are identity constraints on $F$ given by,

$F \cdot \mathbb{I} = \mathbb{I}$ and $F^T \mathbb{I} = \mathbb{I},$

where $\mathbb{I}$ represents a vector of ones.

This problem can be recast as,

$\max_G \operatorname{trace}(G^T K G).$

This problem is equivalent to the spectral clustering problem when the identity constraints on $F$ are relaxed. In particular, the weighted kernel k-means problem can be reformulated as a spectral clustering (graph partitioning) problem and vice versa. The outputs of the algorithms are eigenvectors, which do not satisfy the identity constraints on $F$ required of indicator variables. Hence, post-processing of the eigenvectors is required for the equivalence between the problems.[4] Transforming the spectral clustering problem into a weighted kernel k-means problem greatly reduces the computational burden.[5]
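
A compact sketch of the relaxed problem and its post-processing step is shown below. Dropping the identity constraints, $\max_G \operatorname{trace}(G^T K G)$ subject to $G^T G = I$ is solved by the $k$ leading eigenvectors of $K$ (a standard trace-maximization fact); running k-means on those eigenvectors is one common post-processing choice, assumed here for illustration rather than taken from [4].

    import numpy as np
    from sklearn.cluster import KMeans

    def relaxed_kernel_kmeans(K, k):
        """Spectral relaxation of weighted kernel k-means on kernel matrix K."""
        # eigh: ascending eigenvalues, so the last k columns are the leading ones.
        _, eigvecs = np.linalg.eigh(K)
        G = eigvecs[:, -k:]                  # maximizes trace(G^T K G) with G^T G = I

        # The columns of G are not cluster indicators; post-process to get labels.
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(G)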

References

  1. Jianbo Shi and Jitendra Malik, "Normalized Cuts and Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 8, August 2000.
  2. Marina Meilă and Jianbo Shi, "Learning Segmentation by Random Walks", Neural Information Processing Systems 13 (NIPS 2000), 2001, pp. 873–879.
  3. Liangcai Shu, Aiyou Chen, Ming Xiong, and Weiyi Meng, "Efficient Spectral Neighborhood Blocking for Entity Resolution", IEEE International Conference on Data Engineering (ICDE), pp. 1067–1078, Hannover, Germany, April 2011.
  4. Inderjit S. Dhillon, Yuqiang Guan, and Brian Kulis (2004). "Kernel k-means: spectral clustering and normalized cuts". Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 551–556.
  5. Inderjit S. Dhillon, Yuqiang Guan, and Brian Kulis (November 2007). "Weighted Graph Cuts without Eigenvectors: A Multilevel Approach". IEEE Transactions on Pattern Analysis and Machine Intelligence, 29 (11): 1–14.