Hierarchical clustering

Divisive clustering with an exhaustive search is <math>\mathcal{O}(2^n)</math>, but it is common to use faster heuristics to choose splits, such as [[k-means clustering|''k''-means]].
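For instance, a divisive procedure can repeatedly bisect the largest remaining cluster with 2-means rather than searching over all possible splits. A minimal sketch, assuming [[scikit-learn]]'s <code>KMeans</code> is available (the helper <code>divisive_kmeans</code> and the data are purely illustrative):

<syntaxhighlight lang="python">
# Divisive clustering via a k-means heuristic: instead of the O(2^n)
# exhaustive search over splits, repeatedly bisect the largest cluster.
import numpy as np
from sklearn.cluster import KMeans

def divisive_kmeans(X, n_clusters=3, random_state=0):
    clusters = [np.arange(len(X))]   # start with one cluster containing every point
    while len(clusters) < n_clusters:
        largest = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(largest)
        labels = KMeans(n_clusters=2, n_init=10,
                        random_state=random_state).fit_predict(X[idx])
        clusters.append(idx[labels == 0])   # replace the chosen cluster ...
        clusters.append(idx[labels == 1])   # ... with its two halves
    return clusters

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, size=(30, 2)) for loc in (0.0, 5.0, 10.0)])
for c in divisive_kmeans(X, n_clusters=3):
    print(len(c), "points")
</syntaxhighlight>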
 
== Cluster Linkage ==
In order to decide which clusters should be combined (for agglomerative clustering), or where a cluster should be split (for divisive clustering), a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering this is achieved by combining two components: an appropriate [[distance]] metric ''d'' (such as the Euclidean distance) between ''single'' observations of the data set, and a '''linkage criterion''', which specifies the dissimilarity of ''sets'' of observations as a function of the pairwise distances of observations in the sets. The choice of metric and linkage can have a major impact on the result of the clustering: the point-level metric determines which objects are most [[similarity measure|similar]], whereas the linkage criterion influences the shape of the clusters. For example, complete linkage tends to produce more spherical clusters than single linkage.
 
Together, the metric and the linkage criterion determine both the granularity and the geometry of the final clusters: some linkage methods emphasize compactness, while others favor connectivity and may produce elongated or irregular clusters.
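For example, with the Euclidean distance as the point-level metric, different linkage criteria reduce the same matrix of pairwise point distances in different ways. A minimal sketch, assuming [[SciPy]]'s <code>cdist</code> is available and using two illustrative clusters:

<syntaxhighlight lang="python">
# How single, complete, and average linkage reduce the same matrix of
# pairwise Euclidean distances between two clusters A and B.
# (Illustrative data; any metric supported by cdist could be used.)
import numpy as np
from scipy.spatial.distance import cdist

A = np.array([[0.0, 0.0], [0.0, 1.0]])   # cluster A: two observations
B = np.array([[3.0, 0.0], [4.0, 1.0]])   # cluster B: two observations

D = cdist(A, B)                       # all pairwise distances d(a, b)
print("single linkage  :", D.min())   # distance of the closest pair
print("complete linkage:", D.max())   # distance of the farthest pair
print("average linkage :", D.mean())  # mean of all pairwise distances
</syntaxhighlight>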
 
=== Common Linkage Criteria ===
# '''Single linkage''' (nearest neighbor) defines the distance between two clusters as the shortest distance between any pair of points, one from each cluster. This method is computationally efficient and capable of detecting non-convex cluster shapes. However, it is highly sensitive to noise and can lead to a "chaining effect," where distant points are merged due to a connected sequence of close neighbors (see the sketch after this list for a comparison of the criteria on the same data).
# '''Complete linkage''' (farthest neighbor) uses the maximum distance between any pair of observations across two clusters. This approach tends to produce more compact, spherical clusters and is less prone to chaining. However, it can be overly sensitive to outliers and may split larger clusters prematurely if internal variance is high.
# '''Average linkage''' (also known as UPGMA—Unweighted Pair Group Method with Arithmetic Mean) calculates the mean of all pairwise distances between points in two clusters. It strikes a balance between the tightness of complete linkage and the flexibility of single linkage. This method typically yields clusters of moderate size and variance, making it suitable when cluster homogeneity is expected.
# '''Ward’s method''' takes a different approach by focusing on minimizing the total within-cluster variance at each step. It merges the pair of clusters whose union results in the smallest increase in the sum of squared distances to the cluster centroids. This often leads to compact, evenly-sized clusters and is particularly effective for datasets with continuous variables.
# '''Centroid linkage''' defines cluster distance based on the Euclidean distance between their centroids (mean vectors). While intuitive, it may produce '''inversions'''—situations where the merged cluster appears closer to another cluster than the original components were—potentially distorting the hierarchy. This makes centroid linkage less robust in some contexts, particularly with non-convex clusters.
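
A rough illustration of how these criteria behave on the same data, assuming [[SciPy]] and [[Matplotlib]] are available (the data set is illustrative):

<syntaxhighlight lang="python">
# Build and plot hierarchies for several linkage criteria on one data set,
# so the resulting dendrograms and flat clusterings can be compared.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(c, 0.4, size=(20, 2))
               for c in ((0.0, 0.0), (3.0, 0.0), (1.5, 3.0))])

fig, axes = plt.subplots(1, 4, figsize=(16, 3))
for ax, method in zip(axes, ["single", "complete", "average", "ward"]):
    Z = linkage(X, method=method)                     # merge history, one row per merge
    dendrogram(Z, ax=ax, no_labels=True)
    labels = fcluster(Z, t=3, criterion="maxclust")   # cut each tree into 3 flat clusters
    print(method, "cluster sizes:", np.bincount(labels)[1:])
    ax.set_title(method + " linkage")
plt.tight_layout()
plt.show()
</syntaxhighlight>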
 
Each linkage method has its advantages and trade-offs. The optimal choice depends on the characteristics of the dataset and the objectives of the clustering. For example, Ward's method is preferred when variance minimization is crucial, while single linkage might be selected for detecting complex, non-globular shapes. Visualization tools such as [[dendrogram]]s can be particularly helpful in comparing the results of different linkage criteria and understanding their effect on the clustering structure, as in the sketch above.

Some commonly used linkage criteria between two sets of observations ''A'' and ''B'' and a distance ''d'' are:<ref>{{cite web | title=The CLUSTER Procedure: Clustering Methods | url=https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/statug_cluster_sect012.htm | work=SAS/STAT 9.2 Users Guide | publisher= [[SAS Institute]] | access-date=2009-04-26}}</ref><ref>{{cite journal |last1=Székely |first1=G. J. |last2=Rizzo |first2=M. L. |year=2005 |title=Hierarchical clustering via Joint Between-Within Distances: Extending Ward's Minimum Variance Method |journal=Journal of Classification |volume=22 |issue=2 |pages=151–183 |doi=10.1007/s00357-005-0012-9 |s2cid=206960007 }}</ref>
{| class="wikitable"
! Names
! Formula