''Punera'' and ''Ghosh'' extended the idea of hard clustering ensembles to the soft clustering scenario. Each instance in a soft ensemble is represented by a concatenation of ''r'' posterior membership probability distributions obtained from the constituent clustering algorithms. We can define a distance measure between two instances using the [[Kullback–Leibler divergence|Kullback–Leibler (KL) divergence]], which calculates the "distance" between two probability distributions.<ref>Kunal Punera, Joydeep Ghosh. [https://web.archive.org/web/20081201150950/http://www.ideal.ece.utexas.edu/papers/2007/punera07softconsensus.pdf Consensus Based Ensembles of Soft Clusterings]</ref>
 
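A minimal sketch of this idea is shown below (Python). The helper names <code>kl_divergence</code> and <code>soft_ensemble_distance</code> are hypothetical, and the symmetrised KL divergence and epsilon smoothing are assumptions for illustration rather than the exact formulation of Punera and Ghosh: each instance is a list of ''r'' posterior membership distributions, and the distance between two instances aggregates the divergences of the corresponding distributions.

<syntaxhighlight lang="python">
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete probability distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def soft_ensemble_distance(x, y):
    """Distance between two instances in a soft ensemble.

    x and y are each a list of r posterior membership distributions,
    one per constituent clustering. A symmetrised KL divergence is
    summed over the r distributions (an illustrative choice).
    """
    return sum(0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
               for p, q in zip(x, y))

# Two instances, each described by r = 2 clusterings with 3 clusters each.
x = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]
y = [[0.1, 0.8, 0.1], [0.2, 0.7, 0.1]]
print(soft_ensemble_distance(x, y))
</syntaxhighlight>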
#'''sCSPA''': extends CSPA by calculating a similarity matrix. Each object is represented as a point in a space whose dimensions correspond to the clusters of the constituent clusterings, with each coordinate giving the probability of the object belonging to that cluster. This technique first transforms the objects into this label space and then interprets the [[dot product]] between the vectors representing the objects as their similarity.
#'''sMCLA''': extends MCLA by accepting soft clusterings as input. The operation of sMCLA can be divided into the following steps:
#* Construct Soft Meta-Graph of Clusters