{{Short description|A paradigm in machine learning}}
'''Unsupervised learning''' is a framework in [[machine learning]] where, in contrast to [[supervised learning]], algorithms learn patterns exclusively from unlabeled data.<ref name="WeiWu">{{Cite web |last=Wu |first=Wei |title=Unsupervised Learning |url=https://na.uni-tuebingen.de/ex/ml_seminar_ss2022/Unsupervised_Learning%20Final.pdf |access-date=26 April 2024 |archive-date=14 April 2024 |archive-url=https://web.archive.org/web/20240414213810/https://na.uni-tuebingen.de/ex/ml_seminar_ss2022/Unsupervised_Learning%20Final.pdf |url-status=live }}</ref> Other frameworks in the spectrum of supervision include [[Weak_supervision|weak or semi-supervision]], where a small portion of the data is tagged; [[Self-supervised_learning|self-supervision]]; and [[reinforcement learning]], where the machine is given only a numerical performance score as guidance.<ref>{{Cite web |last=Ghahramani |first=Zoubin |title=Unsupervised learning |url=https://mlg.eng.cam.ac.uk/pub/pdf/Gha03a.pdf |access-date=26 April 2024 |archive-date=12 November 2023 |archive-url=https://web.archive.org/web/20231112093614/https://mlg.eng.cam.ac.uk/pub/pdf/Gha03a.pdf |url-status=live }}</ref> Some researchers consider self-supervised learning a form of unsupervised learning.<ref>{{Cite journal |last=Liu |first=Xiao |last2=Zhang |first2=Fanjin |last3=Hou |first3=Zhenyu |last4=Mian |first4=Li |last5=Wang |first5=Zhaoyu |last6=Zhang |first6=Jing |last7=Tang |first7=Jie |date=2021 |title=Self-supervised Learning: Generative or Contrastive |url=https://ieeexplore.ieee.org/document/9462394/ |journal=IEEE Transactions on Knowledge and Data Engineering |pages=1–1 |doi=10.1109/TKDE.2021.3090866 |issn=1041-4347}}</ref>
 
Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive [[text corpus]] obtained by [[Web crawler|web crawling]], with only minor filtering (such as [[Common Crawl]]). This compares favorably to supervised learning, where the dataset (such as [[ImageNet|ImageNet1000]]) is typically constructed manually, which is much more expensive.
 
Some algorithms were designed specifically for unsupervised learning, such as [[Cluster analysis|clustering algorithms]] like [[K-means clustering|k-means]], [[dimensionality reduction]] techniques like [[Principal component analysis|principal component analysis (PCA)]], [[Boltzmann machine|Boltzmann machine learning]], and [[Autoencoder|autoencoders]]. After the rise of deep learning, most large-scale unsupervised learning has been done by training general-purpose neural network architectures by [[gradient descent]], adapted to unsupervised learning through an appropriate training procedure.
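
As a minimal illustration (assuming the [[scikit-learn]] library and synthetic data, neither of which is specified by the sources above), two of these classical methods, k-means and PCA, can be applied directly to an unlabeled matrix of samples:

<syntaxhighlight lang="python">
# Sketch: k-means clustering and PCA on unlabeled data.
# Assumes NumPy and scikit-learn; the data here is random and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))  # 300 unlabeled samples with 10 features each

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # cluster assignments
X_2d = PCA(n_components=2).fit_transform(X)  # projection onto the top two principal components

print(labels[:10], X_2d.shape)
</syntaxhighlight>
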
Sometimes a trained model can be used as-is, but more often it is modified for downstream applications. For example, the generative pretraining method trains a model to generate a textual dataset before finetuning it for other applications, such as text classification. As another example, autoencoders are trained to produce [[Feature learning|good features]], which can then be used as a module for other models, such as in a [[Diffusion model|latent diffusion model]].
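
A minimal sketch of this reuse pattern (assuming [[PyTorch]]; the layer sizes and random "dataset" are invented for the example) pretrains an autoencoder on unlabeled inputs and then reuses its encoder inside a downstream classifier:

<syntaxhighlight lang="python">
# Sketch: unsupervised pretraining of an autoencoder, then reuse of its encoder.
# Assumes PyTorch; sizes and the random data are illustrative only.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
decoder = nn.Linear(64, 784)
autoencoder = nn.Sequential(encoder, decoder)

unlabeled = torch.rand(256, 784)  # stand-in for a cheaply harvested unlabeled dataset
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
for _ in range(100):  # unsupervised phase: learn to reconstruct the inputs
    loss = nn.functional.mse_loss(autoencoder(unlabeled), unlabeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Downstream phase: the pretrained encoder becomes a module of another model.
for p in encoder.parameters():
    p.requires_grad_(False)  # optionally freeze the learned features
classifier = nn.Sequential(encoder, nn.Linear(64, 10))  # trained/finetuned on labeled data
</syntaxhighlight>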
 
== Tasks vs. methods ==
[[File:Task-guidance.png|thumb|left|300px|Tendency for a task to employ supervised vs. unsupervised methods. Task names straddling circle boundaries is intentional. It shows that the classical division of imaginative tasks (left) employing unsupervised methods is blurred in today's learning schemes.]]
Tasks are often categorized as [[Discriminative model|discriminative]] (recognition) or [[Generative model|generative]] (imagination). Often but not always, discriminative tasks use supervised methods and generative tasks use unsupervised (see [[Venn diagram]]); however, the separation is very hazy. For example, object recognition favors supervised learning but unsupervised learning can also cluster objects into groups. Furthermore, as progress marches onward some tasks employ both methods, and some tasks swing from one to another. For example, image recognition started off as heavily supervised, but became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of [[Dilution_(neural_networks)|dropout]], [[Rectifier_(neural_networks)|ReLU]], and [[Learning_rate|adaptive learning rates]].
 
== Neural network architectures ==
{{Machine learning|Paradigms}}
 
=== Training ===
During the learning phase, an unsupervised network tries to mimic the data it is given and uses the error in its mimicked output to correct itself (i.e., to correct its weights and biases). Sometimes the error is expressed as a low probability that the erroneous output occurs, or it may be expressed as an unstable, high-energy state in the network.
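
For example, in a [[Hopfield network]] the error can be read from an energy function: stored patterns sit at low energy, while corrupted states sit at higher energy. The following sketch (NumPy only; the four-unit network and the patterns are invented for illustration) computes this energy directly:

<syntaxhighlight lang="python">
# Sketch: the Hopfield energy E(s) = -1/2 * s^T W s is low for stored patterns
# and higher for corrupted ("erroneous") states. Values are illustrative only.
import numpy as np

patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]], dtype=float)
W = patterns.T @ patterns / patterns.shape[1]  # Hebbian/Hopfield learning rule
np.fill_diagonal(W, 0.0)  # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

print(energy(patterns[0]))                     # low energy: a stored pattern
print(energy(np.array([1., -1., -1., -1.])))   # higher energy: a corrupted state
</syntaxhighlight>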
 
In contrast to supervised methods' dominant use of [[backpropagation]], unsupervised learning also employs other methods, including the Hopfield learning rule, the Boltzmann learning rule, [[Contrastive Divergence]], [[Wake-sleep algorithm|Wake Sleep]], [[Variational Inference]], [[Maximum Likelihood]], [[Maximum A Posteriori]], [[Gibbs Sampling]], and backpropagating reconstruction errors or hidden state reparameterizations. See the table below for more details.
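
As one illustration of such a non-backpropagation rule (a sketch only, with invented sizes and data, and omitting bias terms), a [[restricted Boltzmann machine]] can be updated with a single step of contrastive divergence:

<syntaxhighlight lang="python">
# Sketch: one contrastive-divergence (CD-1) update of a restricted Boltzmann machine.
# Biases are omitted for brevity; sizes and data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
v0 = rng.integers(0, 2, size=(8, n_visible)).astype(float)  # mini-batch of binary data

h0 = sigmoid(v0 @ W)                                  # positive phase: data-driven hidden units
h_sample = (rng.random(h0.shape) < h0).astype(float)  # sample hidden states
v1 = sigmoid(h_sample @ W.T)                          # negative phase: one Gibbs "reconstruction"
h1 = sigmoid(v1 @ W)

W += lr * (v0.T @ h0 - v1.T @ h1) / v0.shape[0]       # raise data statistics, lower model statistics
</syntaxhighlight>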
 
=== Energy ===
=== History ===
 
{| class="wikitable"
|-
! Year !! Event
|-
| 1969 || [[Perceptrons (book)|Perceptrons]] by Minsky & Papert shows that a [[perceptron]] without hidden layers fails on [[XOR]]
|-
| 1970s || (approximate dates) First [[AI winter]]
|-
| 1974 || Ising magnetic model proposed by {{ill|William A. Little (physicist)|lt=WA Little|de|William A. Little}} for cognition
|-
| 1995 || Dayan & Hinton introduce the [[Helmholtz machine]]
|-
| 1995-2005 || (approximate dates) Second [[AI winter]]
|-
| 2013 || Kingma, Rezende, & co. introduce [[Variational autoencoder|variational autoencoders]], a Bayesian graphical probability network with neural nets as components