{{Short description|A paradigm in machine learning}}
{{Machine learning|Paradigms}}
'''Unsupervised learning''' is a framework in [[machine learning]] where, in contrast to [[supervised learning]], algorithms learn patterns exclusively from unlabeled data.
Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive [[text corpus]] obtained by [[Web crawler|web crawling]], with only minor filtering (such as [[Common Crawl]]). This compares favorably to supervised learning, where the dataset (such as [[ImageNet|ImageNet1000]]) is typically constructed manually, which is much more expensive.
Some algorithms were designed specifically for unsupervised learning, such as [[Cluster analysis|clustering algorithms]] like [[K-means clustering|k-means]], [[dimensionality reduction]] techniques like [[Principal component analysis|principal component analysis (PCA)]], [[Boltzmann machine|Boltzmann machine learning]], and [[Autoencoder|autoencoders]]. After the rise of deep learning, most large-scale unsupervised learning has been done by training general-purpose neural network architectures by [[gradient descent]], adapted to unsupervised learning by designing an appropriate training procedure.
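The following is a minimal sketch of two of these classical methods applied to unlabeled data; the choice of scikit-learn, the cluster count, and the randomly generated dataset are illustrative assumptions, not prescribed by any source here.

<syntaxhighlight lang="python">
# Two classical unsupervised methods on unlabeled synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # 200 unlabeled samples, 10 features each

# k-means: partition the samples into 3 clusters (cluster count assumed).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# PCA: project the same data onto its 2 leading principal components.
X_2d = PCA(n_components=2).fit_transform(X)

print(labels[:10])   # one cluster index per sample, learned without labels
print(X_2d.shape)    # (200, 2)
</syntaxhighlight>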
Sometimes a trained model can be used as-is, but more often such models are modified for downstream applications. For example, the generative pretraining method trains a model to generate a textual dataset, before finetuning it for other applications, such as text classification. As another example, autoencoders are trained to produce [[Feature learning|good features]], which can then be used as a module for other models, such as in a [[Diffusion model|latent diffusion model]].
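As a minimal sketch of the second example, the following trains a tiny autoencoder on unlabeled data and then reuses its encoder as a feature module; the PyTorch library, layer sizes, and random data are illustrative assumptions.

<syntaxhighlight lang="python">
# Unsupervised pretraining of an autoencoder, then reuse of its encoder.
import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())  # data -> features
decoder = nn.Linear(64, 784)                            # features -> data
autoencoder = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
x = torch.rand(32, 784)              # a batch of unlabeled samples

for _ in range(100):                 # training: minimize reconstruction error
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(x), x)
    loss.backward()
    opt.step()

# Downstream use: the trained encoder alone yields learned features.
features = encoder(x).detach()       # shape (32, 64)
</syntaxhighlight>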
== Neural network architectures ==
=== Tasks vs. methods ===
Neural network tasks are often categorized as discriminative (recognition) or generative (imagination). Often, but not always, discriminative tasks use supervised methods and generative tasks use unsupervised ones; however, the separation is very hazy. For example, object recognition favors supervised learning, but unsupervised learning can also cluster objects into groups. Furthermore, as progress marches on, some tasks employ both methods, and some tasks swing from one to the other. For example, image recognition started off as heavily supervised, became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of [[Dilution (neural networks)|dropout]], [[Rectifier (neural networks)|ReLU]], and [[Learning rate|adaptive learning rates]].
=== Training ===
During the learning phase, an unsupervised network tries to mimic the data it is given and uses the error in its mimicked output to correct itself (i.e., to correct its weights and biases). Sometimes the error is expressed as a low probability that the erroneous output occurs, or it might be expressed as an unstable high-energy state in the network.
In contrast to supervised methods' dominant use of [[backpropagation]], unsupervised learning also employs other methods, including Hopfield learning, Boltzmann learning, and contrastive divergence.
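As a minimal sketch of one such non-backpropagation method, the following implements contrastive-divergence (CD-1) learning for a tiny [[restricted Boltzmann machine]] in plain NumPy; the layer sizes, learning rate, and random binary data are illustrative assumptions, and bias terms are omitted for brevity.

<syntaxhighlight lang="python">
# CD-1 learning for a tiny restricted Boltzmann machine (biases omitted).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v0 = rng.integers(0, 2, size=(100, n_visible)).astype(float)  # unlabeled binary data

for _ in range(200):
    # Positive phase: hidden unit probabilities given the data.
    h0 = sigmoid(v0 @ W)
    # Negative phase: one Gibbs step produces the model's "fantasy" data.
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T)
    h1 = sigmoid(v1 @ W)
    # Update: lower the energy of real data, raise it for fantasies.
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
</syntaxhighlight>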
=== Energy ===
{| class="wikitable"
|-
| 1974 || Ising magnetic model proposed by {{ill|William A. Little (physicist)|lt=WA Little|de|William A. Little}} for cognition
|-
| 1995 || Dayan & Hinton introduced the [[Helmholtz machine]]
|-
| 2013 || Kingma, Rezende, & co. introduced [[Variational autoencoder|variational autoencoders]], a Bayesian graphical probability network with neural nets as components
|}