{{lowercase title}}
{{Data Visualization}}
'''t-distributed stochastic neighbor embedding''' ('''t-SNE''') is a [[statistical]] method for visualizing high-dimensional data by giving each datapoint a ___location in a two or three-dimensional map. It is based on Stochastic Neighbor Embedding, originally developed by [[Geoffrey Hinton]] and Sam Roweis.<ref name="SNE">{{cite conference |last1=Hinton |first1=Geoffrey |last2=Roweis |first2=Sam |title=Stochastic Neighbor Embedding |conference=Advances in Neural Information Processing Systems |year=2002}}</ref>
The t-SNE algorithm comprises two main stages. First, t-SNE constructs a [[probability distribution]] over pairs of high-dimensional objects in such a way that similar objects are assigned a higher probability while dissimilar points are assigned a lower probability. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the [[Kullback–Leibler divergence]] (KL divergence) between the two distributions with respect to the locations of the points in the map. While the original algorithm uses the [[Euclidean distance]] between objects as the base of its similarity metric, this can be changed as appropriate. A [[Riemannian metric|Riemannian]] variant is [[Uniform manifold approximation and projection|UMAP]].
t-SNE has been used for visualization in a wide range of applications, including [[genomics]], [[computer security]] research,<ref>{{cite journal|last=Gashi|first=I.|author2=Stankovic, V. |author3=Leita, C. |author4=Thonnard, O. |title=An Experimental Study of Diversity with Off-the-shelf AntiVirus Engines|journal=Proceedings of the IEEE International Symposium on Network Computing and Applications|year=2009|pages=4–11}}</ref> [[natural language processing]], [[music analysis]],<ref>{{cite journal|last=Hamel|first=P.|author2=Eck, D. |title=Learning Features from Music Audio with Deep Belief Networks|journal=Proceedings of the International Society for Music Information Retrieval Conference|year=2010|pages=339–344}}</ref> [[cancer research]],<ref>{{cite journal|last=Jamieson|first=A.R.|author2=Giger, M.L. |author3=Drukker, K. |author4=Lui, H. |author5=Yuan, Y. |author6=Bhooshan, N. |title=Exploring Nonlinear Feature Space Dimension Reduction and Data Representation in Breast CADx with Laplacian Eigenmaps and t-SNE|journal=Medical Physics |issue=1|year=2010|pages=339–351|doi=10.1118/1.3267037|pmid=20175497|volume=37|pmc=2807447}}</ref> [[bioinformatics]],<ref>{{cite journal|last=Wallach|first=I.|author2=Liliean, R. 
|title=The Protein-Small-Molecule Database, A Non-Redundant Structural Resource for the Analysis of Protein-Ligand Binding|journal=Bioinformatics |year=2009|pages=615–620|doi=10.1093/bioinformatics/btp035|volume=25|issue=5|pmid=19153135|doi-access=free}}</ref> geological ___domain interpretation,<ref>{{Cite journal|date=2019-04-01|title=A comparison of t-SNE, SOM and SPADE for identifying material type domains in geological data|url=https://www.sciencedirect.com/science/article/pii/S0098300418306010|journal=Computers & Geosciences|language=en|volume=125|pages=78–89|doi=10.1016/j.cageo.2019.01.011|issn=0098-3004|last1=Balamurali|first1=Mehala|last2=Silversides|first2=Katherine L.|last3=Melkumyan|first3=Arman|bibcode=2019CG....125...78B |s2cid=67926902|url-access=subscription}}</ref><ref>{{Cite book|last1=Balamurali|first1=Mehala|last2=Melkumyan|first2=Arman|date=2016|editor-last=Hirose|editor-first=Akira|editor2-last=Ozawa|editor2-first=Seiichi|editor3-last=Doya|editor3-first=Kenji|editor4-last=Ikeda|editor4-first=Kazushi|editor5-last=Lee|editor5-first=Minho|editor6-last=Liu|editor6-first=Derong|chapter=t-SNE Based Visualisation and Clustering of Geological Domain|chapter-url=https://link.springer.com/chapter/10.1007/978-3-319-46681-1_67|title=Neural Information Processing|series=Lecture Notes in Computer Science|volume=9950|language=en|___location=Cham|publisher=Springer International Publishing|pages=565–572|doi=10.1007/978-3-319-46681-1_67|isbn=978-3-319-46681-1}}</ref><ref>{{Cite journal|last1=Leung|first1=Raymond|last2=Balamurali|first2=Mehala|last3=Melkumyan|first3=Arman|date=2021-01-01|title=Sample Truncation Strategies for Outlier Removal in Geochemical Data: The MCD Robust Distance Approach Versus t-SNE Ensemble Clustering|url=https://doi.org/10.1007/s11004-019-09839-z|journal=Mathematical Geosciences|language=en|volume=53|issue=1|pages=105–130|doi=10.1007/s11004-019-09839-z|bibcode=2021MatGe..53..105L 
|s2cid=208329378|issn=1874-8953|url-access=subscription}}</ref> and biomedical signal processing.<ref>{{Cite book|last1=Birjandtalab|first1=J.|last2=Pouyan|first2=M. B.|last3=Nourani|first3=M.|title=2016 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI) |chapter=Nonlinear dimension reduction for EEG-based epileptic seizure detection |date=2016-02-01|pages=595–598|doi=10.1109/BHI.2016.7455968|isbn=978-1-5090-2455-1|s2cid=8074617}}</ref>
For a data set with ''n'' elements, t-SNE runs in {{math|O(''n''<sup>2</sup>)}} time and requires {{math|O(''n''<sup>2</sup>)}} space.<ref>{{cite arXiv|title=Approximated and User Steerable tSNE for Progressive Visual Analytics|last=Pezzotti|first=Nicola|date=2015 |class=cs.CV |eprint=1512.01655 }}</ref>
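The quadratic time and space cost arises because exact t-SNE materializes all pairwise quantities. A minimal NumPy sketch (illustrative, not from the original papers) of the ''n''&nbsp;×&nbsp;''n'' squared-distance matrix that underlies this cost:

```python
import numpy as np

n, dim = 500, 10
rng = np.random.default_rng(1)
X = rng.normal(size=(n, dim))  # n points in dim dimensions

# Pairwise squared Euclidean distances via the expansion
# ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 * x_i . x_j
sq = (X ** 2).sum(axis=1)
D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T

print(D.shape)                 # (500, 500): quadratic in n
print(D.nbytes == n * n * 8)   # True: n^2 float64 entries in memory
```

Approximate variants (e.g. Barnes–Hut t-SNE) avoid forming this full matrix, which is what makes larger data sets tractable.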
Also note that <math>p_{ii} = 0 </math> and <math>\sum_{i, j} p_{ij} = 1</math>.
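These two properties can be checked numerically. The following NumPy sketch (illustrative only; it fixes all bandwidths <math>\sigma_i = 1</math> rather than tuning them per point) builds the conditional similarities and the symmetrized <math>p_{ij}</math>:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
n = X.shape[0]
sigma = np.ones(n)  # fixed bandwidths for illustration (normally tuned per point)

sq = (X ** 2).sum(axis=1)
D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # squared Euclidean distances

# Conditional similarities p_{j|i}: Gaussian kernel, zero diagonal, row-normalized
P_cond = np.exp(-D / (2.0 * sigma[:, None] ** 2))
np.fill_diagonal(P_cond, 0.0)
P_cond /= P_cond.sum(axis=1, keepdims=True)

# Symmetrized joint probabilities p_ij = (p_{j|i} + p_{i|j}) / (2n)
P = (P_cond + P_cond.T) / (2.0 * n)

print(np.allclose(np.diag(P), 0.0))  # True: p_ii = 0
print(np.isclose(P.sum(), 1.0))      # True: sum over all i, j of p_ij = 1
```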
The bandwidth of the [[Gaussian kernel]]s <math>\sigma_i</math> is set so that the [[Entropy (information theory)|entropy]] of the conditional distribution equals a predefined value, found using the [[bisection method]]. As a result, the bandwidth adapts to the [[density]] of the data: smaller values of <math>\sigma_i</math> are used in denser regions of the data space. The entropy increases with the [[perplexity]] of this distribution <math>P_i</math>; the relationship is given by
: <math>Perp(P_i) = 2^{H(P_i)}</math>
where <math>H(P_i)</math> is the Shannon entropy <math>H(P_i) = -\sum_j p_{j \mid i} \log_2 p_{j \mid i}.</math>
The perplexity is a hand-chosen parameter of t-SNE, and as the authors state, "perplexity can be interpreted as a smooth measure of the effective number of neighbors. The performance of SNE is fairly robust to changes in the perplexity, and typical values are between 5 and 50."<ref name=MaatenHinton/>
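The bisection search for <math>\sigma_i</math> can be sketched as follows (an illustrative NumPy implementation, not the authors' code; the bracketing interval, tolerance, and iteration limit are arbitrary choices). It exploits the fact that perplexity increases monotonically with the bandwidth:

```python
import numpy as np

def sigma_for_perplexity(dists_sq, target_perp, tol=1e-6, max_iter=100):
    """Bisect on sigma so that Perp(P_i) = 2^H(P_i) matches target_perp.

    dists_sq: squared distances from point i to every other point.
    """
    lo, hi = 1e-10, 1e10
    for _ in range(max_iter):
        sigma = 0.5 * (lo + hi)
        p = np.exp(-dists_sq / (2.0 * sigma ** 2))
        p /= p.sum()
        h = -(p * np.log2(p + 1e-12)).sum()   # Shannon entropy in bits
        perp = 2.0 ** h
        if abs(perp - target_perp) < tol:
            break
        if perp > target_perp:
            hi = sigma   # distribution too flat: shrink the bandwidth
        else:
            lo = sigma   # distribution too peaked: widen the bandwidth
    return sigma

rng = np.random.default_rng(2)
d2 = rng.uniform(0.1, 4.0, size=99)   # squared distances to 99 neighbors
s = sigma_for_perplexity(d2, 30.0)    # bandwidth giving ~30 effective neighbors
```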
Since the Gaussian kernel uses the [[Euclidean distance]] <math>\lVert x_i-x_j \rVert</math>, it is affected by the [[curse of dimensionality]]: in high-dimensional data, where distances lose the ability to discriminate, the <math>p_{ij}</math> become too similar (asymptotically, they would converge to a constant). It has been proposed to alleviate this by adjusting the distances with a power transform based on the [[intrinsic dimension]] of each point.<ref>{{Cite conference|last1=Schubert|first1=Erich|last2=Gertz|first2=Michael|date=2017-10-04|title=Intrinsic t-Stochastic Neighbor Embedding for Visualization and Outlier Detection|conference=SISAP 2017 – 10th International Conference on Similarity Search and Applications|pages=188–203|doi=10.1007/978-3-319-68474-1_13}}</ref>
t-SNE aims to learn a <math>d</math>-dimensional map <math>\mathbf{y}_1, \dots, \mathbf{y}_N</math> (with <math>\mathbf{y}_i \in \mathbb{R}^d</math> and <math>d</math> typically chosen as 2 or 3) that reflects the similarities <math>p_{ij}</math> as well as possible. To this end, it measures similarities <math>q_{ij}</math> between two points in the map <math>\mathbf{y}_i</math> and <math>\mathbf{y}_j</math>, using a very similar approach.
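For the standard t-SNE choice of a Student ''t''-distribution with one degree of freedom in the map, the <math>q_{ij}</math> and the KL divergence objective can be sketched as follows (illustrative NumPy; the target matrix <math>P</math> here is a uniform placeholder rather than affinities computed from real data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
Y = rng.normal(scale=1e-2, size=(n, 2))   # low-dimensional map points

sq = (Y ** 2).sum(axis=1)
D = sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T  # squared map distances

# Heavy-tailed Student-t kernel (1 + ||y_i - y_j||^2)^-1,
# zero diagonal, normalized over all pairs
num = 1.0 / (1.0 + D)
np.fill_diagonal(num, 0.0)
Q = num / num.sum()

# KL(P || Q) for a placeholder target P (uniform over off-diagonal pairs)
P = np.full((n, n), 1.0 / (n * (n - 1)))
np.fill_diagonal(P, 0.0)
kl = np.sum(P[P > 0] * np.log(P[P > 0] / Q[P > 0]))
print(kl)  # the objective t-SNE minimizes with respect to the y_i
```

In the full algorithm the map points <math>\mathbf{y}_i</math> are moved by gradient descent to reduce this divergence; the heavy tails of the kernel let moderately distant points repel each other, which mitigates the crowding problem.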