===[[Laplacian_matrix#Laplacian_matrix_normalization_2|Laplacian matrix normalization]]===
The goal of normalization is to make the diagonal entries of the Laplacian matrix all equal to one, scaling the off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree due to a small number of incident edges with large weights, just as well as due to a large number of incident edges with unit weights.
A popular related spectral clustering technique is the '''[[Segmentation based object categorization#Normalized cuts|normalized cuts algorithm]]''' or ''Shi–Malik algorithm'' introduced by Jianbo Shi and Jitendra Malik,<ref name=":0">Jianbo Shi and Jitendra Malik, [http://www.cs.berkeley.edu/~malik/papers/SM-ncut.pdf "Normalized Cuts and Image Segmentation"], IEEE Transactions on PAMI, Vol. 22, No. 8, Aug 2000.</ref> commonly used for [[segmentation (image processing)|image segmentation]]. It partitions points into two sets <math>(B_1,B_2)</math> based on the [[eigenvector]] <math>v</math> corresponding to the second-smallest [[eigenvalue]] of the [[Laplacian matrix#Symmetric normalized Laplacian|symmetric normalized Laplacian]] defined as
: <math>L^\text{norm}:=I-D^{-1/2}AD^{-1/2}.</math>
The vector <math>v</math> is also the [[eigenvector]] corresponding to the second-largest [[eigenvalue]] of the symmetrically normalized [[adjacency matrix]] <math>D^{-1/2}AD^{-1/2}.</math>
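As a minimal sketch of this construction (the toy graph, its edge weights, and all variable names below are illustrative, not from the source), the symmetric normalized Laplacian of a small weighted graph and the sign-based bipartition from the eigenvector of its second-smallest eigenvalue can be computed with NumPy:

```python
import numpy as np

# Toy weighted graph: two triangles with edge weight 2, joined by one
# weak bridge edge of weight 0.1 between vertices 0 and 3.
A = np.array([
    [0.0, 2.0, 2.0, 0.1, 0.0, 0.0],
    [2.0, 0.0, 2.0, 0.0, 0.0, 0.0],
    [2.0, 2.0, 0.0, 0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.0, 2.0, 2.0],
    [0.0, 0.0, 0.0, 2.0, 0.0, 2.0],
    [0.0, 0.0, 0.0, 2.0, 2.0, 0.0],
])

d = A.sum(axis=1)                      # weighted vertex degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
# Symmetric normalized Laplacian: I - D^{-1/2} A D^{-1/2}
L_norm = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt

# eigh returns eigenvalues of a symmetric matrix in ascending order
eigvals, eigvecs = np.linalg.eigh(L_norm)
v = eigvecs[:, 1]                      # eigenvector of second-smallest eigenvalue

# Bipartition by the sign of v; the weak bridge edge is cut.
B1 = np.where(v < 0)[0]
B2 = np.where(v >= 0)[0]
```

Here the sign pattern of <math>v</math> separates the two tightly connected triangles, which is the two-way cut the Shi–Malik algorithm seeks.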
The '''random walk (or left) normalized Laplacian''' is defined as
: <math>L^\text{rw} := D^{-1} L = I - D^{-1} A</math>
and can also be used for spectral clustering. A mathematically equivalent algorithm <ref>Marina Meilă & Jianbo Shi, "[http://www.citeulike.org/user/mpotamias/article/498897 Learning Segmentation by Random Walks]", Neural Information Processing Systems 13 (NIPS 2000), 2001, pp. 873–879.</ref> takes the [[eigenvector]] <math>u</math> corresponding to the largest [[eigenvalue]] of the [[adjacency matrix|random walk normalized adjacency]] matrix <math>P = D^{-1}A</math>.
The eigenvector <math>v</math> of the symmetrically normalized Laplacian and the eigenvector <math>u</math> of the left normalized Laplacian are related by the identity <math>D^{-1/2} v = u.</math>
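This identity can be checked numerically; the random weighted graph and the variable names below are illustrative assumptions, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random symmetric weighted adjacency matrix of a small graph (no self-loops).
W = rng.random((5, 5))
A = (W + W.T) / 2.0
np.fill_diagonal(A, 0.0)

d = A.sum(axis=1)                       # weighted degrees (all positive here)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(5) - D_inv_sqrt @ A @ D_inv_sqrt   # symmetric normalized Laplacian
L_rw = np.eye(5) - np.diag(1.0 / d) @ A           # random walk (left) normalized Laplacian

# v: eigenvector of L_sym for its second-smallest eigenvalue
vals, vecs = np.linalg.eigh(L_sym)
v = vecs[:, 1]

# The identity D^{-1/2} v = u gives an eigenvector u of L_rw
# with the same eigenvalue.
u = D_inv_sqrt @ v
assert np.allclose(L_rw @ u, vals[1] * u)
```

The check works because <math>L^\text{rw} = D^{-1/2} L^\text{norm} D^{1/2}</math>, so the two matrices are similar and share their spectrum, with eigenvectors related by <math>D^{-1/2}</math>.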
===[[Cluster analysis]] via spectral embedding===