{{Short description|Stack-based method for clustering}}
{{good article}}
In the theory of [[cluster analysis]], the '''nearest-neighbor chain algorithm''' is a method that can be used to perform several types of [[agglomerative hierarchical clustering]], in which a hierarchy of clusters is created by repeatedly merging pairs of smaller clusters to form larger clusters. In particular it can be used for [[Ward's method]], [[complete-linkage clustering]], and [[single-linkage clustering]], which all work by merging the closest two clusters under different definitions of the distance between clusters.
The main idea of the algorithm is to find pairs of clusters to merge by following [[Path (graph theory)|paths]] in the [[nearest neighbor graph]] of the clusters.
By following these paths, the nearest-neighbor chain algorithm merges clusters in a different order than methods that always merge the closest pair of clusters, but for many common choices of cluster distance it produces the same hierarchy of clusters, using an amount of time that is quadratic in the number of points and an amount of memory that is only linear.
==Background==
[[File:Hierarchical clustering diagram.png|thumb|upright=1.35|A hierarchical clustering of six points. The points to be clustered are at the top of the diagram, and the nodes below them represent clusters.]]
Many problems in [[data analysis]] concern [[Cluster analysis|clustering]], grouping data items into clusters of closely related items. [[Hierarchical clustering]] is a version of cluster analysis in which the clusters form a hierarchy or tree-like structure rather than a strict partition of the data items. In some cases, this type of clustering may be performed as a way of performing cluster analysis at multiple different scales simultaneously. In others, the data to be analyzed naturally has an unknown tree structure and the goal is to recover that structure by performing the analysis. Both of these kinds of analysis can be seen, for instance, in the application of hierarchical clustering to [[Taxonomy (biology)|biological taxonomy]]. In this application, different living things are grouped into clusters at different scales or levels of similarity ([[Taxonomic rank|species, genus, family, etc]]). This analysis simultaneously gives a multi-scale grouping of the organisms of the present age, and aims to accurately reconstruct the [[branching process]] or [[Phylogenetic tree|evolutionary tree]] that in past ages produced these organisms.<ref>{{citation
| last = Gordon | first = Allan D.
| editor1-last = Arabie | editor1-first = P.
| editor2-last = Hubert | editor2-first = L. J.
| editor3-last = De Soete | editor3-first = G.
| contribution = Hierarchical clustering
| contribution-url = https://books.google.com/books?id=HbfsCgAAQBAJ&pg=PA65
| isbn = 9789814504539
| ___location = River Edge, NJ
| pages = 65–121
| publisher = World Scientific
| title = Clustering and Classification
| year = 1996}}.</ref>
The input to a clustering problem consists of a set of points.<ref name="murtagh-tcj"/> A ''cluster'' is any proper subset of the points, and a hierarchical clustering is a [[maximal element|maximal]] family of clusters with the property that any two clusters in the family are either nested or [[disjoint set|disjoint]].
Alternatively, a hierarchical clustering may be represented as a [[binary tree]] with the points at its leaves; the clusters of the clustering are the sets of points in subtrees descending from each node of the tree.<ref>{{citation|title=Clustering|volume=10|series=IEEE Press Series on Computational Intelligence|first1=Rui|last1=Xu|first2=Don|last2=Wunsch|publisher=John Wiley & Sons|year=2008|isbn=978-0-470-38278-3|page=31|contribution-url=https://books.google.com/books?id=kYC3YCyl_tkC&pg=PA31|contribution=3.1 Hierarchical Clustering: Introduction}}.</ref>
In agglomerative clustering methods, the input also includes a distance function defined on the points, or a numerical measure of their dissimilarity.
The distance or dissimilarity should be symmetric: the distance between two points does not depend on which of them is considered first.
However, unlike the distances in a [[metric space]], it is not required to satisfy the [[triangle inequality]].<ref name="murtagh-tcj"/>
Depending on the method, the dissimilarity function is extended in different ways to pairs of clusters; for instance, in the [[single-linkage clustering]] method, the distance between two clusters is defined to be the minimum distance between any two points from each cluster. Given this distance between clusters, a hierarchical clustering may be defined by a [[greedy algorithm]] that initially places each point in its own single-point cluster and then repeatedly merges the [[closest pair]] of clusters.<ref name="murtagh-tcj"/>

The bottleneck of this greedy algorithm is the subproblem of finding which two clusters to merge in each step.
Known methods for repeatedly finding the closest pair of clusters in a dynamic set of clusters either require superlinear space to maintain a [[data structure]] that can find closest pairs quickly, or they take greater than linear time to find each closest pair.<ref name="e-jea">{{citation
| last = Eppstein | first = David | authorlink = David Eppstein
| arxiv = cs.DS/9912014
| issue = 1
 | journal = Journal of Experimental Algorithmics
| pages = 1–23
| publisher = ACM
 | title = Fast hierarchical clustering and other applications of dynamic closest pairs
| url = http://www.jea.acm.org/2000/EppsteinDynamic/
| volume = 5
| year = 2000| doi = 10.1145/351827.351829 | bibcode = 1999cs.......12014E | s2cid = 1357701 }}.</ref><ref name="day-edels">{{citation
| last1 = Day | first1 = William H. E.
| last2 = Edelsbrunner | first2 = Herbert | author2-link = Herbert Edelsbrunner
 | journal = Journal of Classification
 | title = Efficient algorithms for agglomerative hierarchical clustering methods
| url = http://www.cs.duke.edu/~edels/Papers/1984-J-05-HierarchicalClustering.pdf
| volume = 1
 | year = 1984 | s2cid = 121201396}}.</ref> The nearest-neighbor chain algorithm avoids this subproblem by merging pairs of clusters in a different order, so that it never needs to search for the overall closest pair; for many choices of cluster distance it nevertheless produces the same hierarchy of clusters as the greedy algorithm.
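The following Python sketch illustrates this greedy procedure for the single-linkage distance. It is an illustration of the description above rather than code from the cited sources; the function names and the representation of points as hashable tuples of coordinates are assumptions made for the example.

<syntaxhighlight lang="python">
from itertools import combinations


def single_linkage_distance(a, b, dist):
    """Single-linkage dissimilarity: the minimum distance over all point pairs."""
    return min(dist(p, q) for p in a for q in b)


def greedy_clustering(points, dist):
    """Repeatedly merge the closest pair of clusters (naive greedy algorithm)."""
    clusters = [frozenset([p]) for p in points]  # one single-point cluster per point
    merges = []                                  # record of merges; encodes the hierarchy
    while len(clusters) > 1:
        # Bottleneck step: scan all pairs for the closest pair of clusters.
        a, b = min(combinations(clusters, 2),
                   key=lambda pair: single_linkage_distance(pair[0], pair[1], dist))
        clusters.remove(a)
        clusters.remove(b)
        clusters.append(a | b)
        merges.append((a, b))
    return merges
</syntaxhighlight>

Each merge in this sketch rescans all remaining pairs of clusters, so it performs far more distance computations than the quadratic bound achieved by the nearest-neighbor chain algorithm described below.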
==The algorithm==
[[File:Nearest-neighbor chain algorithm animated.gif|frame|Animation of the nearest-neighbor chain algorithm, following chains of nearest-neighbor clusters and merging pairs of mutual nearest neighbors.]]
Intuitively, the nearest neighbor chain algorithm repeatedly follows a chain of clusters {{math|''A'' → ''B'' → ''C'' → ...}} where each cluster is the nearest neighbor of the previous one, until reaching a pair of clusters that are mutual nearest neighbors.<ref name="murtagh-tcj">{{citation
| last = Murtagh | first = Fionn
 | journal = The Computer Journal
 | title = A survey of recent advances in hierarchical clustering algorithms
| volume = 26 | issue = 4 | pages = 354–359
| year = 1983
| doi = 10.1093/comjnl/26.4.354| doi-access = free
}}.</ref><ref>{{citation
 | last = Murtagh | first = Fionn
| editor1-last = Abello | editor1-first = James M.
| editor2-last = Pardalos | editor2-first = Panos M.
| editor3-last = Resende | editor3-first = Mauricio G. C. | editor3-link = Mauricio Resende
| contribution = Clustering in massive data sets
| isbn = 978-1-4020-0489-6
| series = Massive Computing
| title = Handbook of massive data sets
| contribution-url = https://books.google.com/books?id=_VI0LITp3ecC&pg=PA513
| volume = 4
| year = 2002
}}.</ref> In more detail, the algorithm performs the following steps:
*Initialize the set of active clusters to consist of {{mvar|n}} one-point clusters, one for each input point.
*Let {{mvar|S}} be a [[Stack (data structure)|stack data structure]], initially empty, the elements of which will be active clusters.
*While there is more than one cluster in the set of clusters:
**If {{mvar|S}} is empty, choose an active cluster arbitrarily and push it onto {{mvar|S}}.
**Let {{mvar|C}} be the active cluster on the top of {{mvar|S}}. Compute the distances from {{mvar|C}} to all other active clusters, and let {{mvar|D}} be the nearest other cluster.
**If {{mvar|D}} is already in {{mvar|S}}, it must be the immediate predecessor of {{mvar|C}}. Pop both clusters from {{mvar|S}}, merge them, and replace them in the set of active clusters by the merged cluster.
**Otherwise, if {{mvar|D}} is not already in {{mvar|S}}, push it onto {{mvar|S}}.
When a cluster has several equal nearest neighbors, the algorithm requires a consistent tie-breaking rule. One way to obtain such a rule is to assign arbitrary index numbers to the clusters and then select (among the equal nearest neighbors) the one with the smallest index number. This rule prevents certain kinds of inconsistent behavior in the algorithm; for instance, without such a rule, the neighboring cluster {{mvar|D}} might occur earlier in the stack than as the predecessor of {{mvar|C}}.<ref>For this tie-breaking rule, and an example of how tie-breaking is needed to prevent cycles in the nearest neighbor graph, see {{citation|contribution=Figure 20.7|page=244|title=Algorithms in Java, Part 5: Graph Algorithms|first=Robert|last=Sedgewick|authorlink=Robert Sedgewick (computer scientist)|edition=3rd|publisher=Addison-Wesley|year=2004|isbn=0-201-36121-3}}.</ref>
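The steps above may be rendered in Python as the following sketch. It is an illustration of the pseudocode rather than an implementation taken from the sources; the function name, the representation of clusters as frozen sets of points, and the use of a sorted point list as a deterministic tie-breaker are assumptions made for the example, and <code>cluster_distance</code> is assumed to be a reducible dissimilarity such as the single-linkage distance.

<syntaxhighlight lang="python">
def nearest_neighbor_chain(points, cluster_distance):
    """Agglomerative clustering by the nearest-neighbor chain algorithm.

    cluster_distance(a, b) is assumed to be a reducible dissimilarity between
    two clusters, each represented as a frozenset of points."""
    active = {frozenset([p]) for p in points}  # the set of active clusters
    stack = []    # chain of clusters, each the nearest neighbor of the one below it
    merges = []   # record of merges; encodes the hierarchy
    while len(active) > 1:
        if not stack:
            stack.append(next(iter(active)))   # start a new chain arbitrarily
        c = stack[-1]
        # Nearest neighbor of C among the other active clusters,
        # with ties broken deterministically by the sorted point list.
        d = min((x for x in active if x != c),
                key=lambda x: (cluster_distance(c, x), sorted(x)))
        if len(stack) >= 2 and d == stack[-2]:
            # C and D are mutual nearest neighbors: pop and merge them.
            stack.pop()
            stack.pop()
            active.remove(c)
            active.remove(d)
            active.add(c | d)
            merges.append((c, d))
        else:
            # For a reducible distance, D cannot appear elsewhere in the chain,
            # so it is safe to extend the chain with it.
            stack.append(d)
    return merges
</syntaxhighlight>

Calling <code>nearest_neighbor_chain</code> with the single-linkage distance from the earlier sketch produces the same hierarchy of clusters as the greedy algorithm, although generally in a different merge order.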
==Time and space analysis==
Each iteration of the loop performs a single search for the nearest neighbor of a cluster, and either adds one cluster to the stack or removes two clusters from it. Every cluster is only ever added once to the stack, because when it is removed again it is immediately made inactive and merged. There are a total of {{math|2''n'' − 2}} clusters that ever get added to the stack: {{math|''n''}} single-point clusters in the initial set, and {{math|''n'' − 2}} internal nodes other than the root in the binary tree representing the clustering. Therefore, the algorithm performs {{math|2''n'' − 2}} pushing iterations and {{math|''n'' − 1}} popping iterations.<ref name="murtagh-tcj"/>
Each of these iterations may spend time scanning as many as {{math|''n'' − 1}} inter-cluster distances to find the nearest neighbor.
The total number of distance calculations it makes is therefore less than {{math|3''n''<sup>2</sup>}}.
For the same reason, the total time used by the algorithm outside of these distance calculations is {{math|O(''n''<sup>2</sup>)}}.<ref name="murtagh-tcj"/>
Since the only data structure is the set of active clusters and the stack containing a subset of the active clusters, the space required is linear in the number of input points.<ref name="murtagh-tcj"/>
==Correctness==
For the algorithm to be correct, it must be the case that popping and merging the top two clusters from the algorithm's stack preserves the property that the remaining clusters on the stack form a chain of nearest neighbors.
Additionally, it should be the case that all of the clusters produced during the algorithm are the same as the clusters produced by a [[greedy algorithm]] that always merges the closest two clusters, even though the greedy algorithm
will in general perform its merges in a different order than the nearest-neighbor chain algorithm. Both of these properties depend on the specific choice of how to measure the distance between clusters.<ref name="murtagh-tcj"/>

The correctness of this algorithm relies on a property of its distance function called ''reducibility'', identified by {{harvtxt|Bruynooghe|1977}} in connection with an earlier clustering method that used mutual nearest neighbor pairs but not chains of nearest neighbors.<ref name="b77">{{citation|first=Michel|last=Bruynooghe|title=Méthodes nouvelles en classification automatique de données taxinomiques nombreuses|journal=Statistique et Analyse des Données|volume=3|pages=24–42|year=1977|url=http://www.numdam.org/item?id=SAD_1977__2_3_24_0}}.</ref> A distance function {{mvar|d}} on clusters is defined to be reducible if, for every three clusters {{mvar|A}}, {{mvar|B}} and {{mvar|C}} in the greedy hierarchical clustering such that {{mvar|A}} and {{mvar|B}} are mutual nearest neighbors, the following inequality holds:
:{{math|''d''(''A'' ∪ ''B'', ''C'') ≥ min(''d''(''A'',''C''), ''d''(''B'',''C''))}}.
If a distance function has the reducibility property, then merging two clusters {{mvar|C}} and {{mvar|D}} can only cause the nearest neighbor of a third cluster {{mvar|E}} to change if that nearest neighbor was one of {{mvar|C}} and {{mvar|D}}. This has two important consequences for the nearest neighbor chain algorithm. First, it can be shown using this property that, at each step of the algorithm, the clusters on the stack {{mvar|S}} form a valid chain of nearest neighbors, because whenever a nearest neighbor becomes invalidated it is immediately removed from the stack.<ref name="murtagh-tcj"/>
Second, and even more importantly, it follows from this property that, if two clusters {{mvar|C}} and {{mvar|D}} both belong to the greedy hierarchical clustering, and are mutual nearest neighbors at any point in time, then they will be merged by the greedy clustering, for they must remain mutual nearest neighbors until they are merged. It follows that each mutual nearest neighbor pair found by the nearest neighbor chain algorithm is also a pair of clusters found by the greedy algorithm, and therefore that the nearest neighbor chain algorithm computes exactly the same clustering (although in a different order) as the greedy algorithm.<ref name="murtagh-tcj"/>
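The inequality can be checked numerically. The following sketch (an illustration, not from the cited sources) verifies it for the single-linkage and complete-linkage dissimilarities on random one-dimensional point sets; for these two dissimilarities the inequality in fact holds for every three disjoint clusters, not only for mutual nearest neighbors.

<syntaxhighlight lang="python">
import random


def single_linkage(a, b):
    return min(abs(p - q) for p in a for q in b)


def complete_linkage(a, b):
    return max(abs(p - q) for p in a for q in b)


random.seed(1)
for d in (single_linkage, complete_linkage):
    for _ in range(1000):
        pts = [random.random() for _ in range(9)]
        a, b, c = frozenset(pts[0:3]), frozenset(pts[3:6]), frozenset(pts[6:9])
        # Reducibility: d(A ∪ B, C) >= min(d(A, C), d(B, C)).
        assert d(a | b, c) >= min(d(a, c), d(b, c))
</syntaxhighlight>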
==Application to specific clustering distances==
===Ward's method===
[[Ward's method]] is an agglomerative clustering method in which the dissimilarity between two clusters {{mvar|A}} and {{mvar|B}} is measured by the amount by which merging the two clusters into a single larger cluster would increase the average squared distance of a point to its cluster [[centroid]].<ref name="mirkin">{{citation
 | last = Mirkin | first = Boris
 | publisher = Springer
 | title = Core Concepts in Data Analysis: Summarization, Correlation and Visualization
 | year = 2011}}.</ref> Alternatively, this distance can be seen as the difference in [[k-means clustering|k-means cost]] between the new cluster and the two old clusters.
Ward's distance is also reducible, as can be seen more easily from a different formula for calculating the distance from a merged cluster to the other clusters:<ref name="lance-williams">{{citation
| last1 = Lance | first1 = G. N.
| last2 = Williams | first2 = W. T. | author2-link = W. T. Williams
 | journal = The Computer Journal
 | pages = 373–380
| title = A general theory of classificatory sorting strategies. I. Hierarchical systems
| volume = 9
| year = 1967
}}.</ref>
:<math>d(A\cup B,C) = \frac{n_A+n_C}{n_A+n_B+n_C} d(A,C) + \frac{n_B+n_C}{n_A+n_B+n_C} d(B,C) - \frac{n_C}{n_A+n_B+n_C} d(A,B).</math>
Distance update formulas such as this one are called formulas "of Lance–Williams type" after the work of {{harvtxt|Lance|Williams|1967}}.
If <math>d(A,B)</math> is the smallest of the three distances on the right hand side (as would necessarily be true if <math>A</math> and <math>B</math> are mutual nearest-neighbors) then the negative contribution from its term is cancelled by the <math>n_C</math> coefficient of one of the two other terms, leaving a positive value added to the weighted average of the other two distances. Therefore, the combined distance is always at least as large as the minimum of <math>d(A,C)</math> and <math>d(B,C)</math>, meeting the definition of reducibility.
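As an illustration (not taken from the cited sources), the formula gives a constant-time update rule; the function and argument names below are arbitrary.

<syntaxhighlight lang="python">
def ward_update(d_ac, d_bc, d_ab, n_a, n_b, n_c):
    """Lance–Williams update for Ward's method: the distance from the merged
    cluster A ∪ B to a third cluster C, from the three pairwise distances
    and the numbers of points n_a, n_b, n_c in the clusters."""
    n = n_a + n_b + n_c
    return ((n_a + n_c) * d_ac + (n_b + n_c) * d_bc - n_c * d_ab) / n
</syntaxhighlight>

When <code>d_ab</code> is the smallest of the three input distances, the returned value is at least <code>min(d_ac, d_bc)</code>, matching the reducibility argument above.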
===Complete linkage and average distance===
[[Complete-linkage clustering|Complete-linkage]] or furthest-neighbor clustering is a form of agglomerative clustering that defines the dissimilarity between clusters to be the maximum distance between any two points drawn from the two clusters. Similarly, average-distance clustering uses the average pairwise distance between points of the two clusters as the dissimilarity. Like Ward's distance, these two forms of clustering obey the reducibility property.
Unlike Ward's method, these two forms of clustering do not have a constant-time method for computing distances between pairs of clusters. Instead it is possible to maintain an array of distances between all pairs of clusters, using the Lance–Williams formula to update the array as pairs of clusters are merged, in time and space {{math|''O''(''n''<sup>2</sup>)}}. The nearest-neighbor chain algorithm may be used in conjunction with this array of distances to find the same clustering as the greedy algorithm for these cases, in total time and space {{math|''O''(''n''<sup>2</sup>)}}.
The same {{math|''O''(''n''<sup>2</sup>)}} time and space bounds can also be achieved in a different way, by a technique that overlays a [[quadtree]]-based priority queue data structure on top of the distance matrix and uses it to perform the standard greedy clustering algorithm.
This quadtree method is more general, as it works even for clustering methods that are not reducible.<ref name="e-jea"/> However, the nearest-neighbor chain algorithm matches its time and space bounds while using simpler data structures.<ref name="gm07">{{citation
| last1 = Gronau | first1 = Ilan
| last2 = Moran | first2 = Shlomo | author2-link = Shlomo Moran
 | journal = Information Processing Letters
 | title = Optimal implementations of UPGMA and other common clustering algorithms
 | year = 2007}}.</ref>
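The following sketch (illustrative only; the function and variable names are not from the cited sources) maintains such an array of inter-cluster dissimilarities as a dictionary keyed by unordered pairs of clusters, updating it in constant time per remaining cluster when two clusters are merged, using the complete-linkage and average-distance forms of the Lance–Williams update.

<syntaxhighlight lang="python">
def merge_clusters(dist, sizes, a, b, method="complete"):
    """Replace clusters a and b by their union in the dissimilarity table.

    dist maps frozenset({x, y}) pairs of clusters to dissimilarities, and
    sizes maps each cluster to its number of points."""
    merged = a | b
    for c in list(sizes):
        if c in (a, b):
            continue
        d_ac = dist.pop(frozenset((a, c)))
        d_bc = dist.pop(frozenset((b, c)))
        if method == "complete":   # complete linkage (furthest neighbor)
            d_new = max(d_ac, d_bc)
        else:                      # unweighted average distance (UPGMA)
            d_new = (sizes[a] * d_ac + sizes[b] * d_bc) / (sizes[a] + sizes[b])
        dist[frozenset((merged, c))] = d_new
    dist.pop(frozenset((a, b)))
    sizes[merged] = sizes.pop(a) + sizes.pop(b)
    return merged
</syntaxhighlight>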
===Single linkage===
In [[single-linkage clustering]], the distance between two clusters is the minimum distance between any two of their points, so merging two clusters can never increase the distance from the merged cluster to a third cluster; this distance is therefore reducible and the nearest-neighbor chain algorithm applies to it. As with complete linkage and average distance, the difficulty of calculating cluster distances causes the nearest-neighbor chain algorithm to take time and space {{math|''O''(''n''<sup>2</sup>)}} to compute the single-linkage clustering.
However, the single-linkage clustering can be found more efficiently by an alternative algorithm that computes the [[minimum spanning tree]] of the input distances using [[Prim's algorithm]], and then sorts the minimum spanning tree edges and uses this sorted order to determine the sequence of cluster merges.<ref>{{citation
| last1 = Gower | first1 = J. C.
| last2 = Ross | first2 = G. J. S.
 | journal = Journal of the Royal Statistical Society, Series C (Applied Statistics)
 | title = Minimum spanning trees and single linkage cluster analysis
 | year = 1969}}.</ref>
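The following sketch (illustrative, not code from the cited source) follows this outline: Prim's algorithm without a priority queue finds each minimum spanning tree edge by a sequential scan, and sorting the resulting edges gives the order in which single-linkage merges the clusters.

<syntaxhighlight lang="python">
def single_linkage_via_mst(points, dist):
    """Single-linkage merge order from a minimum spanning tree (Prim's algorithm)."""
    n = len(points)
    in_tree = [False] * n
    best = [float("inf")] * n       # cheapest known connection to the growing tree
    best_edge = [None] * n
    in_tree[0] = True
    for j in range(1, n):
        best[j] = dist(points[0], points[j])
        best_edge[j] = (0, j)
    edges = []
    for _ in range(n - 1):
        # Sequential scan (no priority queue) for the closest remaining vertex.
        v = min((j for j in range(n) if not in_tree[j]), key=lambda j: best[j])
        in_tree[v] = True
        edges.append((best[v], best_edge[v]))
        for j in range(n):
            if not in_tree[j]:
                d = dist(points[v], points[j])
                if d < best[j]:
                    best[j] = d
                    best_edge[j] = (v, j)
    # Sorting the tree edges by length gives the single-linkage merge order.
    edges.sort()
    return edges
</syntaxhighlight>

The sketch uses only linear working space beyond its input, in contrast to the quadratic space needed to maintain all inter-cluster distances.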
===Centroid distance===
Another distance measure commonly used in agglomerative clustering is the distance between the centroids of pairs of clusters, also known as the weighted group method.<ref name="mirkin"/><ref name="lance-williams"/> It can be calculated easily in constant time per distance calculation. However, it is not reducible: for instance, if the input consists of the three vertices of an equilateral triangle with unit side length, then merging two of the vertices (which are mutual nearest neighbors) produces a cluster whose centroid lies at distance {{math|{{radic|3}}/2 ≈ 0.866}} from the remaining vertex, closer than the unit distances between the original clusters, violating the inequality above. As a consequence, the nearest-neighbor chain algorithm is not guaranteed to find the same clustering as the greedy algorithm for this distance.
A different algorithm, by {{harvtxt|Day|Edelsbrunner|1984}}, can be used instead to compute the greedy clustering for this distance measure.<ref name="day-edels"/>

===Distances sensitive to merge order===
The above presentation explicitly disallowed distances sensitive to merge order; that is, it assumed that the dissimilarity between two clusters does not depend on the order in which earlier merges were performed. For distances that do depend on the merge order, the nearest-neighbor chain algorithm cannot in general be guaranteed to produce the same hierarchy as the greedy algorithm.<ref>{{citation
 | last=Müllner
| first=Daniel
 | arxiv=1109.2378
| title=Modern hierarchical, agglomerative clustering algorithms
| volume=1109
| year=2011
| bibcode=2011arXiv1109.2378M
}}.</ref>
==History==
The nearest-neighbor chain algorithm was developed and implemented in 1982 by [[Jean-Paul Benzécri]] and J. Juan, building on earlier clustering methods that used mutual nearest neighbor pairs but did not follow longer chains of nearest neighbors.<ref>{{citation
| last = Benzécri | first = J.-P. | authorlink = Jean-Paul Benzécri
| issue = 2
| journal = Les Cahiers de l'Analyse des Données
 | title = Construction d'une classification ascendante hiérarchique par la recherche en chaîne des voisins réciproques
 | year = 1982}}.</ref>

==References==
{{reflist|colwidth=30em}}
[[Category:Cluster analysis algorithms]]