Given a set of data points A, the [[similarity matrix]] may be defined as a matrix <math>S</math>, where <math>S_{ij}</math> represents a measure of the similarity between points <math>i, j\in A</math>. '''Spectral clustering''' techniques make use of the [[Spectrum of a matrix|spectrum]] of the similarity matrix of the data to perform [[dimensionality reduction]] for clustering in fewer dimensions.
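A common choice (one of many) is a Gaussian similarity <math>S_{ij} = \exp\left(-\|x_i - x_j\|^2 / 2\sigma^2\right)</math>. The following sketch (Python/NumPy, with an illustrative bandwidth parameter <code>sigma</code> that is not prescribed by the definition above) shows one way such a similarity matrix might be built.

<syntaxhighlight lang="python">
import numpy as np

def gaussian_similarity(X, sigma=1.0):
    """Illustrative similarity matrix with S_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

# Example: similarity matrix for five random 2-D points
X = np.random.default_rng(0).random((5, 2))
S = gaussian_similarity(X, sigma=0.5)
</syntaxhighlight>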
== Algorithms ==
One such technique is the '''[[Segmentation_based_object_categorization#Normalized_Cuts|Normalized Cuts algorithm]]''' or ''Shi–Malik algorithm'' introduced by Jianbo Shi and Jitendra Malik,<ref>Jianbo Shi and Jitendra Malik, [http://www.cs.berkeley.edu/~malik/papers/SM-ncut.pdf "Normalized Cuts and Image Segmentation"], IEEE Transactions on PAMI, Vol. 22, No. 8, Aug 2000.</ref> commonly used for [[segmentation (image processing)|image segmentation]]. It partitions points into two sets <math>(S_1,S_2)</math> based on the [[eigenvector]] <math>v</math> corresponding to the second-smallest [[eigenvalue]] of the [[Laplacian matrix]]
:<math>L = I - D^{-1/2}SD^{-1/2} \, </math>
of <math>S</math>, where <math>D</math> is the diagonal matrix
:<math>D_{ii} = \sum_j S_{ij}.</math>
This partitioning may be done in various ways, such as by taking the median <math>m</math> of the components in <math>v</math>, and placing all points whose component in <math>v</math> is greater than <math>m</math> in <math>S_1</math>, and the rest in <math>S_2</math>. The algorithm can be used for hierarchical clustering by repeatedly partitioning the subsets in this fashion.
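A minimal NumPy sketch of this bipartition step, assuming a dense symmetric similarity matrix <code>S</code> with strictly positive row sums (an illustration, not a full implementation of the Shi–Malik algorithm):

<syntaxhighlight lang="python">
import numpy as np

def shi_malik_bipartition(S):
    """Split indices into two sets using the eigenvector of the second-smallest
    eigenvalue of L = I - D^{-1/2} S D^{-1/2}, thresholded at its median."""
    d = S.sum(axis=1)                            # degrees D_ii = sum_j S_ij
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(S)) - D_inv_sqrt @ S @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    v = eigvecs[:, 1]                            # eigenvector of the second-smallest eigenvalue
    m = np.median(v)
    return np.where(v > m)[0], np.where(v <= m)[0]   # (S1, S2)
</syntaxhighlight>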
A related algorithm is the '''[[Meila–Shi algorithm]]''',<ref>Marina Meilă & Jianbo Shi, "[http://www.citeulike.org/user/mpotamias/article/498897 Learning Segmentation by Random Walks]", Neural Information Processing Systems 13 (NIPS 2000), 2001.</ref> which takes the [[eigenvector]]s corresponding to the ''k'' largest [[eigenvalue]]s of the random-walk matrix <math>P = D^{-1}S</math> for some ''k'', and then clusters the points by their respective ''k'' components in these eigenvectors using another algorithm such as ''k''-means.
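A sketch of this random-walk variant, again in NumPy, with scikit-learn's <code>KMeans</code> used for the final clustering step (''k''-means is one possible choice here, not the only one):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.cluster import KMeans

def meila_shi_clustering(S, k):
    """Embed points with the eigenvectors of the k largest eigenvalues of
    P = D^{-1} S, then cluster the embedded rows with k-means (a sketch)."""
    d = S.sum(axis=1)
    P = S / d[:, None]                          # row-stochastic random-walk matrix
    eigvals, eigvecs = np.linalg.eig(P)         # P is generally non-symmetric
    top = np.argsort(-eigvals.real)[:k]         # indices of the k largest eigenvalues
    embedding = eigvecs[:, top].real            # one k-dimensional row per point
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
</syntaxhighlight>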
== Relationship with ''k''-means ==
The kernel ''k''-means problem is an extension of the ''k''-means problem in which the input data points are mapped non-linearly into a higher-dimensional feature space via a kernel function <math>k(x_i,x_j) = \phi^T(x_i)\phi(x_j)</math>. The weighted kernel ''k''-means problem further assigns each cluster <math>C_r</math> a weight <math>w_r</math>, taken as the reciprocal of the number of elements in the cluster, and maximizes the weighted within-cluster similarity
:<math>
\max_{\{C_r\}} \sum_{r=1}^k w_r \sum_{x_i,x_j \in C_r} k(x_i,x_j).
</math>
Suppose <math>F</math> is the matrix of normalizing coefficients for each point and each cluster, with <math>F_{ij} = w_r</math> if <math>i,j \in C_r</math> and zero otherwise, and let <math>K</math> be the kernel matrix for all points. The weighted kernel ''k''-means problem with ''n'' points and ''k'' clusters can then be written as
:<math>
\max_{F} \operatorname{trace}\left(KF\right)
</math>
such that <math>F = GG^T</math>, where <math>G</math> is an <math>n \times k</math> matrix whose <math>r</math>-th column is the indicator vector of cluster <math>C_r</math> scaled by <math>\sqrt{w_r}</math>, so that <math>G^TG = I</math>, together with identity constraints ensuring that <math>F</math> encodes a valid assignment of points to clusters. Since <math>\operatorname{trace}(KGG^T) = \operatorname{trace}(G^TKG)</math>, the problem can be recast as
:<math>
\max_G \operatorname{trace}\left(G^TKG\right).
</math>
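A small numerical check of this reformulation, assuming hard cluster assignments with weights <math>w_r = 1/|C_r|</math> and an arbitrary symmetric positive semi-definite kernel matrix (both chosen purely for illustration): it verifies that <math>G^TG = I</math> and that <math>\operatorname{trace}(KF) = \operatorname{trace}(G^TKG)</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
labels = np.array([0, 0, 0, 1, 1, 1])            # an example hard assignment of points to clusters

A = rng.standard_normal((n, n))
K = A @ A.T                                      # any symmetric positive semi-definite kernel matrix

# G_{ir} = sqrt(w_r) if point i is in cluster r, with w_r = 1 / |C_r|
G = np.zeros((n, k))
for r in range(k):
    members = labels == r
    G[members, r] = 1.0 / np.sqrt(members.sum())

F = G @ G.T                                      # normalizing-coefficient matrix F
print(np.allclose(G.T @ G, np.eye(k)))                       # True: G^T G = I
print(np.isclose(np.trace(K @ F), np.trace(G.T @ K @ G)))    # True: the two objectives agree
</syntaxhighlight>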
This problem is equivalent to the spectral clustering problem when the identity constraints on <math>F</math> are relaxed. In particular, the weighted kernel ''k''-means problem can be reformulated as a spectral clustering (graph partitioning) problem and vice versa; the eigenvectors produced by the spectral relaxation do not directly satisfy the indicator structure required of <math>F</math>, so post-processing of the eigenvectors is needed to recover a clustering.<ref>{{cite conference
 | author = Dhillon, I.S. and Guan, Y. and Kulis, B.
 | year = 2004
 | title = Kernel ''k''-means: spectral clustering and normalized cuts
 | booktitle = Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining
 | pages = 551–556
}}</ref>