Unsupervised learning

Some algorithms were designed specifically for unsupervised learning, such as [[Cluster analysis|clustering algorithms]] like [[K-means clustering|k-means]], [[dimensionality reduction]] techniques like [[Principal component analysis|principal component analysis (PCA)]], [[Boltzmann machine|Boltzmann machine learning]], and [[Autoencoder|autoencoders]]. After the rise of deep learning, most large-scale unsupervised learning has been done by training general-purpose neural network architectures by [[gradient descent]], adapted to perform unsupervised learning by designing an appropriate training procedure.
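A minimal sketch of two of these classical methods, k-means clustering and PCA, applied with scikit-learn to synthetic data (the data, cluster count, and component count here are illustrative assumptions, not part of any particular method's definition):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Illustrative synthetic data: 300 unlabeled points in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))

# k-means: partition the points into 3 clusters; no labels are used.
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# PCA: project the same data onto its 2 leading principal components.
X_reduced = PCA(n_components=2).fit_transform(X)

print(cluster_ids[:10])   # cluster assignment of the first 10 points
print(X_reduced.shape)    # (300, 2)
</syntaxhighlight>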
 
Sometimes a trained model can be used as-is, but more often such models are modified for downstream applications. For example, the generative pretraining method first trains a model to generate a textual dataset, and then fine-tunes it for other applications, such as text classification.<ref name="gpt1paper">{{cite web |last1=Radford |first1=Alec |last2=Narasimhan |first2=Karthik |last3=Salimans |first3=Tim |last4=Sutskever |first4=Ilya |date=11 June 2018 |title=Improving Language Understanding by Generative Pre-Training |url=https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf |url-status=live |archive-url=https://web.archive.org/web/20210126024542/https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf |archive-date=26 January 2021 |access-date=23 January 2021 |publisher=[[OpenAI]] |page=12}}</ref><ref>{{Cite journal |last1=Li |first1=Zhuohan |last2=Wallace |first2=Eric |last3=Shen |first3=Sheng |last4=Lin |first4=Kevin |last5=Keutzer |first5=Kurt |last6=Klein |first6=Dan |last7=Gonzalez |first7=Joey |date=2020-11-21 |title=Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers |url=https://proceedings.mlr.press/v119/li20m.html |journal=Proceedings of the 37th International Conference on Machine Learning |language=en |publisher=PMLR |pages=5958–5968}}</ref> As another example, autoencoders can be trained to produce [[Feature learning|good features]], which can then be used as a module in other models, such as in a [[Diffusion model|latent diffusion model]].
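A minimal PyTorch sketch of the autoencoder case: the network is first trained, without labels, to reconstruct its input, and the encoder half is then reused as a feature module for a downstream model (the layer sizes, training loop, and downstream classifier are illustrative assumptions, not a specific published architecture):

<syntaxhighlight lang="python">
import torch
from torch import nn

# Illustrative autoencoder: 784-dimensional inputs compressed to 32 features.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.rand(64, 784)  # stand-in for a batch of unlabeled data

# Unsupervised pretraining: reconstruct the input; no labels are involved.
for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    loss.backward()
    optimizer.step()

# Downstream reuse: the trained encoder becomes a feature module,
# e.g. feeding a small supervised classifier on top of its 32 features.
features = encoder(x).detach()
classifier = nn.Linear(32, 10)
logits = classifier(features)
</syntaxhighlight>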
 
== Tasks ==
[[File:Task-guidance.png|thumb|left|300px|Tendency for a task to employ supervised vs. unsupervised methods. Task names straddling circle boundaries is intentional: it shows that the classical division of imaginative tasks (left) employing unsupervised methods is blurred in today's learning schemes.]]Tasks are often categorized as [[Discriminative model|discriminative]] (recognition) or [[Generative model|generative]] (imagination). Often, but not always, discriminative tasks use supervised methods and generative tasks use unsupervised ones (see the [[Venn diagram]]); however, the separation is very hazy. For example, object recognition favors supervised learning, but unsupervised learning can also cluster objects into groups. Furthermore, as progress continues, some tasks employ both methods, and some tasks swing from one to the other. For example, image recognition started off as heavily supervised, became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of [[Dilution_(neural_networks)|dropout]], [[Rectifier_(neural_networks)|ReLU]], and [[Learning_rate|adaptive learning rates]].
 
A typical generative task is as follows. At each step, a datapoint is sampled from the dataset, part of it is removed, and the model must infer the removed part. This is particularly clear for [[Autoencoder|denoising autoencoders]] and [[BERT (language model)|BERT]].
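A sketch of one such fill-in-the-blank training step, in the style of masked-token prediction (the vocabulary size, masking rate, and tiny model are illustrative stand-ins, not the actual BERT configuration):

<syntaxhighlight lang="python">
import torch
from torch import nn

vocab_size, mask_id = 1000, 0  # illustrative vocabulary and mask-token id
model = nn.Sequential(nn.Embedding(vocab_size, 64), nn.Linear(64, vocab_size))

# A batch of 8 sequences of 16 tokens, sampled from the dataset.
tokens = torch.randint(1, vocab_size, (8, 16))

# Remove part of the data: mask 15% of the tokens at random.
mask = torch.rand(tokens.shape) < 0.15
corrupted = tokens.masked_fill(mask, mask_id)

# The model must infer the removed part: the loss compares predictions
# to the original tokens only at the masked positions.
logits = model(corrupted)  # shape (8, 16, vocab_size)
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
</syntaxhighlight>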
 
== Neural network architectures ==