{{Short description|Paradigm in machine learning}}
{{Machine learning|Paradigms}}
'''Unsupervised learning''' is a framework in [[machine learning]] where, in contrast to [[supervised learning]], algorithms learn patterns exclusively from unlabeled data.<ref name="WeiWu">{{Cite web |last=Wu |first=Wei |title=Unsupervised Learning |url=https://na.uni-tuebingen.de/ex/ml_seminar_ss2022/Unsupervised_Learning%20Final.pdf |access-date=26 April 2024 |archive-date=14 April 2024 |archive-url=https://web.archive.org/web/20240414213810/https://na.uni-tuebingen.de/ex/ml_seminar_ss2022/Unsupervised_Learning%20Final.pdf |url-status=live }}</ref> Other frameworks in the spectrum of supervisions include [[Weak supervision|weak- or semi-supervision]], where a small portion of the data is tagged, and [[Self-supervised learning|self-supervision]]. Some researchers consider self-supervised learning a form of unsupervised learning.<ref>{{Cite journal |last1=Liu |first1=Xiao |last2=Zhang |first2=Fanjin |last3=Hou |first3=Zhenyu |last4=Mian |first4=Li |last5=Wang |first5=Zhaoyu |last6=Zhang |first6=Jing |last7=Tang |first7=Jie |date=2021 |title=Self-supervised Learning: Generative or Contrastive |url=https://ieeexplore.ieee.org/document/9462394 |journal=IEEE Transactions on Knowledge and Data Engineering |pages=1 |doi=10.1109/TKDE.2021.3090866 |issn=1041-4347|arxiv=2006.08218 }}</ref>
Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive [[text corpus]] obtained by [[Web crawler|web crawling]], with only minor filtering (such as [[Common Crawl]]). This compares favorably to supervised learning, where the dataset (such as [[ImageNet|ImageNet1000]]) is typically constructed manually, which is much more expensive.
== Tasks vs. methods ==
Some algorithms were designed specifically for unsupervised learning, such as [[Cluster analysis|clustering algorithms]] like [[K-means clustering|k-means]], [[dimensionality reduction]] techniques like [[Principal component analysis|principal component analysis (PCA)]], [[Boltzmann machine|Boltzmann machine learning]], and [[autoencoder]]s. After the rise of deep learning, most large-scale unsupervised learning has been done by training general-purpose neural network architectures by [[gradient descent]], adapted to unsupervised learning through an appropriate training procedure.
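The following is a minimal sketch of one such purpose-built method, k-means clustering, written in Python with NumPy. The data, the number of clusters, and the helper name <code>k_means</code> are illustrative assumptions rather than part of any standard implementation.

<syntaxhighlight lang="python">
import numpy as np

def k_means(X, k, n_iters=100, seed=0):
    """Cluster unlabeled points X (n_samples, n_features) into k groups."""
    rng = np.random.default_rng(seed)
    # Initialise centroids by picking k distinct points from the data.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point joins its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Illustrative unlabeled data: two blobs in 2-D, no labels provided.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels, centroids = k_means(X, k=2)
</syntaxhighlight>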
Neural network tasks are often categorized as discriminative (recognition) or generative (imagination). Often, but not always, discriminative tasks use supervised methods and generative tasks use unsupervised ones (see [[Venn diagram]]); however, the separation is very hazy. For example, object recognition favors supervised learning, but unsupervised learning can also cluster objects into groups. Furthermore, as progress marches on, some tasks employ both methods, and some tasks swing from one to the other. For example, image recognition started off as heavily supervised, but became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of [[Dilution_(neural_networks)|dropout]], [[Rectifier_(neural_networks)|ReLU]], and [[Learning_rate|adaptive learning rates]].
Sometimes a trained model can be used as-is, but more often it is modified for downstream applications. For example, the generative pretraining method trains a model to generate a textual dataset, before finetuning it for other applications, such as text classification.<ref name="gpt1paper">{{cite web |last1=Radford |first1=Alec |last2=Narasimhan |first2=Karthik |last3=Salimans |first3=Tim |last4=Sutskever |first4=Ilya |date=11 June 2018 |title=Improving Language Understanding by Generative Pre-Training |url=https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf |url-status=live |archive-url=https://web.archive.org/web/20210126024542/https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf |archive-date=26 January 2021 |access-date=23 January 2021 |publisher=[[OpenAI]] |page=12}}</ref><ref>{{Cite journal |last1=Li |first1=Zhuohan |last2=Wallace |first2=Eric |last3=Shen |first3=Sheng |last4=Lin |first4=Kevin |last5=Keutzer |first5=Kurt |last6=Klein |first6=Dan |last7=Gonzalez |first7=Joey |date=2020-11-21 |title=Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers |url=https://proceedings.mlr.press/v119/li20m.html |journal=Proceedings of the 37th International Conference on Machine Learning |language=en |publisher=PMLR |pages=5958–5968}}</ref> As another example, autoencoders are trained to produce [[Feature learning|good features]], which can then be used as a module for other models, such as in a [[latent diffusion model]].
A typical generative task is as follows. At each step, a datapoint is sampled from the dataset, part of the data is removed, and the model must infer the removed part. This is particularly clear for [[Autoencoder|denoising autoencoders]] and [[BERT (language model)|BERT]].
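The sketch below illustrates this kind of objective on a toy token sequence: a random subset of tokens is removed (masked), and the removed tokens are kept as the targets the model would have to infer. The mask rate, the <code>[MASK]</code> placeholder, and the function name are illustrative assumptions, not details of BERT's actual implementation.

<syntaxhighlight lang="python">
import random

def mask_tokens(tokens, mask_rate=0.15, mask_symbol="[MASK]", seed=0):
    """Remove a random subset of tokens; inferring them is the model's task."""
    rng = random.Random(seed)
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            corrupted.append(mask_symbol)   # this part of the datapoint is removed
            targets[i] = tok                # ...and becomes the reconstruction target
        else:
            corrupted.append(tok)
    return corrupted, targets

tokens = "the cat sat on the mat".split()
corrupted, targets = mask_tokens(tokens, mask_rate=0.3)
# 'corrupted' is the partially blanked input; 'targets' maps each masked
# position to the token the model should predict there.
</syntaxhighlight>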
== Neural network architectures ==
=== Training ===
During the learning phase, an unsupervised network tries to mimic the data it is given and uses the error in its mimicked output to correct itself (i.e., to correct its weights and biases). Sometimes the error is expressed as a low probability that the erroneous output occurs, or it might be expressed as an unstable high-energy state in the network.
In contrast to supervised methods' dominant use of [[backpropagation]], unsupervised learning also employs other methods, including the Hopfield learning rule, Boltzmann learning, contrastive divergence, the [[wake-sleep algorithm]], [[variational inference]], [[maximum likelihood]], [[Maximum a posteriori estimation|maximum a posteriori]] estimation, and [[Gibbs sampling]].
=== Energy ===
An energy function is a macroscopic measure of a network's activation state. In Boltzmann machines, it plays the role of the cost function. This analogy with physics is inspired by Ludwig Boltzmann's analysis of a gas' macroscopic energy from the microscopic probabilities of particle motion <math>p \propto e^{-E/kT}</math>, where k is the Boltzmann constant and T is temperature. In the [[Restricted Boltzmann machine|RBM]] network the relation is <math>p = e^{-E} / Z</math>, where <math>p</math> and <math>E</math> vary over every possible activation pattern and <math>Z = \sum_{\text{all patterns}} e^{-E(\text{pattern})}</math>.
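A small numerical sketch of these quantities, using the Hopfield/Boltzmann energy <math>E = -\tfrac{1}{2}\sum_{i,j} w_{ij} s_i s_j + \sum_i \theta_i s_i</math> from the comparison table below; the weights and thresholds are arbitrary illustrative values, and the temperature factor is absorbed into the weights.

<syntaxhighlight lang="python">
import itertools
import numpy as np

def energy(s, W, theta):
    """Hopfield/Boltzmann energy: E = -1/2 * sum_ij w_ij s_i s_j + sum_i theta_i s_i."""
    return -0.5 * s @ W @ s + theta @ s

# Illustrative 3-neuron network: symmetric weights, no self-connections.
W = np.array([[ 0.0,  1.0, -0.5],
              [ 1.0,  0.0,  0.3],
              [-0.5,  0.3,  0.0]])
theta = np.array([0.1, -0.2, 0.0])

# Enumerate every binary activation pattern and its Boltzmann probability
# p(s) = exp(-E(s)) / Z; low-energy states are the most probable ones.
states = [np.array(s) for s in itertools.product([0, 1], repeat=3)]
energies = np.array([energy(s, W, theta) for s in states])
Z = np.exp(-energies).sum()        # partition function
probs = np.exp(-energies) / Z
</syntaxhighlight>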
=== Networks ===
|| [[File:Boltzmannexamplev1.png |thumb|Network is separated into 2 layers (hidden vs. visible), but still using symmetric 2-way weights. Following Boltzmann's thermodynamics, individual probabilities give rise to macroscopic energies.]]
|| [[File:Restricted Boltzmann machine.svg|thumb|Restricted Boltzmann Machine. This is a Boltzmann machine where lateral connections within a layer are prohibited to make analysis tractable.]]
|| [[File:Stacked-boltzmann.png|thumb|A stack of restricted Boltzmann machines. The RBMs are trained greedily, one at a time, with the hidden layer of each serving as the visible layer of the next.]]
|}
|-
|| [[File:Helmholtz Machine.png |thumb|Instead of the bidirectional symmetric connection of the stacked Boltzmann machines, we have separate one-way connections to form a loop. It does both generation and discrimination.]]
|| [[File:Autoencoder_schema.png |thumb|A feed-forward network that aims to find a good middle-layer representation of its input world. This network is deterministic, so it is not as robust as its successor, the VAE.]]
|| [[File:VAE blocks.png |thumb|Applies Variational Inference to the Autoencoder. The middle layer is a set of means & variances for Gaussian distributions. The stochastic nature allows for more robust imagination than the deterministic autoencoder.]]
|}
{| class="wikitable"
|-
| 1974 || Ising magnetic model proposed by {{ill|William A. Little (physicist)|lt=WA Little|de|William A. Little}} for cognition
|-
| 1980 || [[Kunihiko Fukushima]] introduces the [[neocognitron]], which is later called a [[convolutional neural network]]. It is mostly used in supervised learning, but deserves a mention here.
|-
| 1982 || Ising variant Hopfield net described as [[Content-addressable memory|content-addressable memories]] and classifiers by [[John Hopfield]].
|-
| 1983 || Ising variant Boltzmann machine with probabilistic neurons described by [[Geoffrey Hinton|Hinton]] & [[Terry Sejnowski|Sejnowski]] following Sherrington & Kirkpatrick's 1975 work.
| 1986 || [[Paul Smolensky]] publishes Harmony Theory, which is an RBM with practically the same Boltzmann energy function. Smolensky did not give a practical training scheme; Hinton did in the mid-2000s.
|-
| 1995 || [[Jürgen Schmidhuber|Schmidhuber]] introduces the [[Long short-term memory|LSTM]] neuron for languages.
|-
| 1995 || Dayan & Hinton introduce the Helmholtz machine.
|-
| 2013 || Kingma, Rezende, & co. introduce variational autoencoders as a Bayesian graphical probability network, with neural nets as components.
=== Specific Networks ===
Here, we highlight some characteristics of select networks. The details of each are given in the comparison table below.
{{glossary}}
{{term |1=[[Helmholtz machine]]}}
{{defn |1=These are early inspirations for the variational autoencoders.}}
{{term |1=[[Variational autoencoder]]}}
| '''Neuron''' || deterministic binary state. Activation = { 0 (or -1) if x is negative, 1 otherwise } || stochastic binary Hopfield neuron || ← same. (extended to real-valued in mid 2000s) || ← same || ← same || <!--AE--> language: LSTM. vision: local receptive fields. usually real-valued ReLU activation. || middle layer neurons encode means & variances for Gaussians. In run mode (inference), the outputs of the middle layer are sampled values from the Gaussians.
|-
| '''Connections''' || 1-layer with symmetric weights. No self-connections. || 2-layers. 1-hidden & 1-visible. symmetric weights. || ← same. <br>no lateral connections within a layer. || top layer is undirected, symmetric. other layers are 2-way, asymmetric. || 3-layers: asymmetric weights. 2 networks combined into 1. || <!--AE--> 3-layers. The input is considered a layer even though it has no inbound weights. recurrent layers for NLP. feedforward convolutions for vision. input & output have the same neuron counts. || 3-layers: input, encoder, distribution sampler decoder. the sampler is not considered a layer
|-
| '''Inference & energy''' || Energy is given by Gibbs probability measure :<math>E = -\frac12\sum_{i,j}{w_{ij}{s_i}{s_j}}+\sum_i{\theta_i}{s_i}</math> || ← same || ← same || <!-- --> || minimize KL divergence || inference is only feed-forward. previous UL networks ran forwards AND backwards || minimize error = reconstruction error - KLD
|}
The classical example of unsupervised learning in the study of neural networks is [[Donald Hebb]]'s principle, that is, neurons that fire together wire together.<ref name="Buhmann" /> In [[Hebbian learning]], the connection is reinforced irrespective of an error, but is exclusively a function of the coincidence of action potentials between the two neurons.<ref name="Comesana" /> A similar version that modifies synaptic weights takes into account the time between the action potentials ([[spike-timing-dependent plasticity]], or STDP). Hebbian learning has been hypothesized to underlie a range of cognitive functions, such as [[pattern recognition]] and experiential learning.
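A minimal sketch of a single Hebbian weight update, using rate-based activities in place of spike coincidences; the particular vectors and the learning rate are arbitrary illustrative values.

<syntaxhighlight lang="python">
import numpy as np

# Pre- and post-synaptic activities for one presentation (illustrative values).
x = np.array([1.0, 0.0, 1.0])      # presynaptic neurons
y = np.array([0.0, 1.0])           # postsynaptic neurons
W = np.zeros((2, 3))               # connection weights w_ij, from x_j to y_i

# Hebbian rule: reinforce w_ij in proportion to coincident activity,
# delta_w_ij = lr * y_i * x_j, with no error signal involved.
lr = 0.1
W += lr * np.outer(y, x)
</syntaxhighlight>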
== Probabilistic methods ==
Two of the main methods used in unsupervised learning are [[principal component analysis]] and [[cluster analysis]].
A central application of unsupervised learning is in the field of [[density estimation]] in [[statistics]],<ref name="JordanBishop2004" /> though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It can be contrasted with supervised learning by saying that whereas supervised learning intends to infer a [[conditional probability distribution]] conditioned on the label of input data, unsupervised learning intends to infer an [[a priori probability]] distribution.
Some of the most common algorithms used in unsupervised learning include: (1) clustering, (2) anomaly detection, and (3) approaches for learning latent variable models. Each approach uses several methods, as follows (a short combined sketch is given after the list):
* [[Data clustering|Clustering]] methods include: [[hierarchical clustering]],<ref name="Hastie" /> [[k-means]],<ref name="tds-kmeans" /> [[mixture models]], [[model-based clustering]], [[DBSCAN]], and [[OPTICS algorithm]]
* [[Anomaly detection]] methods include: [[Local Outlier Factor]] and [[Isolation Forest]]
* Approaches for learning [[latent variable model]]s such as the [[Expectation–maximization algorithm]] (EM), [[Method of moments (statistics)|method of moments]], and [[blind signal separation]] techniques ([[principal component analysis]], [[independent component analysis]], [[non-negative matrix factorization]], [[singular value decomposition]])
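The sketch below shows how several of these pieces can fit together in practice: a two-component Gaussian mixture (a latent variable model fitted by expectation–maximization) is fitted to unlabeled data, its components provide a clustering, and its estimated density is used to flag anomalies. It assumes the scikit-learn library is available; the data and the anomaly threshold are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.mixture import GaussianMixture

# Unlabeled 2-D data drawn from two clusters, plus a few scattered outliers.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0, 1, (100, 2)),
    rng.normal(6, 1, (100, 2)),
    rng.uniform(-10, 16, (5, 2)),
])

# Latent variable model: a Gaussian mixture fitted by expectation-maximization.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Density estimation: log p(x) under the fitted mixture.
log_density = gmm.score_samples(X)

# Clustering: most likely mixture component for each point.
labels = gmm.predict(X)

# Anomaly detection: flag the lowest-density points (threshold is illustrative).
threshold = np.percentile(log_density, 2)
anomalies = X[log_density < threshold]
</syntaxhighlight>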
=== Method of moments ===
* [[Automated machine learning]]
* [[Cluster analysis]]
* [[Model-based clustering]]
* [[Anomaly detection]]
* [[Expectation–maximization algorithm]]
{{Reflist|
refs=
<ref name="tds-ul" >{{Cite web|url=https://towardsdatascience.com/unsupervised-machine-learning-clustering-analysis-d40f2b34ae7e|title=Unsupervised Machine Learning: Clustering Analysis|last=Roman|first=Victor|date=2019-04-21|website=Medium|access-date=2019-10-01|archive-date=2020-08-21|archive-url=https://web.archive.org/web/20200821132257/https://towardsdatascience.com/unsupervised-machine-learning-clustering-analysis-d40f2b34ae7e|url-status=live}}</ref>
<ref name="JordanBishop2004">{{cite book |first1=Michael I. |last1=Jordan |first2=Christopher M. |last2=Bishop |chapter=7. Intelligent Systems §Neural Networks |editor-first=Allen B. |editor-last=Tucker |title=Computer Science Handbook |url=https://www.taylorfrancis.com/books/mono/10.1201/9780203494455/computer-science-handbook-allen-tucker |edition=2nd |publisher=Chapman & Hall/CRC Press |year=2004 |doi=10.1201/9780203494455 |isbn=1-58488-360-X |access-date=2022-11-03 |archive-date=2022-11-03 |archive-url=https://web.archive.org/web/20221103234201/https://www.taylorfrancis.com/books/mono/10.1201/9780203494455/computer-science-handbook-allen-tucker |url-status=live }}</ref>
<ref name="Hastie" >{{harvnb|Hastie|Tibshirani|Friedman|2009|pp=485–586}}</ref>
<ref name="tds-kmeans" >{{Cite web|url=https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1|title=Understanding K-means Clustering in Machine Learning|last=Garbade|first=Dr Michael J.|date=2018-09-12|website=Medium|language=en|access-date=2019-10-31|archive-date=2019-05-28|archive-url=https://web.archive.org/web/20190528183913/https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1|url-status=live}}</ref>
<ref name="TensorLVMs" >{{cite journal |last1=Anandkumar |first1=Animashree |last2=Ge |first2=Rong |last3=Hsu |first3=Daniel |last4=Kakade |first4=Sham |first5=
<ref name="Buhmann" >{{Cite book|last1=Buhmann|first1=J.|last2=Kuhnel|first2=H.|title= [Proceedings 1992] IJCNN International Joint Conference on Neural Networks|volume=4|pages=796–801|publisher=IEEE|doi=10.1109/ijcnn.1992.227220|isbn=0780305590|chapter=Unsupervised and supervised data clustering with competitive neural networks|year=1992|s2cid=62651220}}</ref>
<ref name="Comesana" >{{Cite journal|last1=Comesaña-Campos|first1=Alberto|last2=Bouza-Rodríguez|first2=José Benito|date=June 2016|title=An application of Hebbian learning in the design process decision-making|journal=Journal of Intelligent Manufacturing|volume=27|issue=3|pages=487–506|doi=10.1007/s10845-014-0881-z|s2cid=207171436|issn=0956-5515
<ref name="Carpenter" >{{cite journal|author1=Carpenter, G.A.
<ref name="Hinton2010" >{{cite book |
<ref name="HintonMlss2009" >{{cite web
}}
== Further reading ==
{{refbegin}}
* {{cite book |editor1=Bousquet, O. |editor3=Raetsch, G. |editor2=von Luxburg, U. |editor2-link=Ulrike von Luxburg |title=Advanced Lectures on Machine Learning |url=https://archive.org/details/springer_10.1007-b100712 |publisher=Springer |year=2004 |isbn=978-3540231226 }}
* {{cite book |author1=Duda, Richard O. |author2-link=Peter E. Hart |author2=Hart, Peter E. |author3=Stork, David G. |year=2001 |chapter=Unsupervised Learning and Clustering |title=Pattern classification |edition=2nd |publisher=Wiley |isbn=0-471-05669-3|author1-link=Richard O. Duda |title-link=Pattern classification }}
*{{cite book |first1=Trevor |last1=Hastie |authorlink1=Trevor Hastie |first2=Robert |last2=Tibshirani |authorlink2=Robert Tibshirani |first3=Jerome |last3=Friedman |chapter=Unsupervised Learning |chapter-url=https://link.springer.com/chapter/10.1007/978-0-387-84858-7_14 |title=The Elements of Statistical Learning: Data mining, Inference, and Prediction |year=2009 |publisher=Springer |isbn=978-0-387-84857-0 |pages=485–586 |doi=10.1007/978-0-387-84858-7_14 |access-date=2022-11-03 |archive-date=2022-11-03 |archive-url=https://web.archive.org/web/20221103234204/https://link.springer.com/chapter/10.1007/978-0-387-84858-7_14 |url-status=live }}
* {{cite book |editor1-last=Hinton |editor1-first=Geoffrey |editor-link=Geoffrey Hinton |editor2-last=Sejnowski |editor2-first=Terrence J. |editor2-link=Terrence J. Sejnowski |year=1999 |title=Unsupervised Learning: Foundations of Neural Computation |publisher=[[MIT Press]] |isbn=0-262-58168-X}}
{{refend}}