{{Short description|Paradigm in machine learning that uses no classification labels}}
{{Machine learning|Paradigms}}
 
'''Unsupervised learning''' is a framework in [[machine learning]] where, in contrast to [[supervised learning]], algorithms learn patterns exclusively from unlabeled data.<ref name="WeiWu">{{Cite web |last=Wu |first=Wei |title=Unsupervised Learning |url=https://na.uni-tuebingen.de/ex/ml_seminar_ss2022/Unsupervised_Learning%20Final.pdf |access-date=26 April 2024 |archive-date=14 April 2024 |archive-url=https://web.archive.org/web/20240414213810/https://na.uni-tuebingen.de/ex/ml_seminar_ss2022/Unsupervised_Learning%20Final.pdf |url-status=live }}</ref> Other frameworks in the spectrum of supervisions include [[Weak supervision|weak- or semi-supervision]], where a small portion of the data is tagged, and [[Self-supervised learning|self-supervision]]. Some researchers consider self-supervised learning a form of unsupervised learning.<ref>{{Cite journal |last1=Liu |first1=Xiao |last2=Zhang |first2=Fanjin |last3=Hou |first3=Zhenyu |last4=Mian |first4=Li |last5=Wang |first5=Zhaoyu |last6=Zhang |first6=Jing |last7=Tang |first7=Jie |date=2021 |title=Self-supervised Learning: Generative or Contrastive |url=https://ieeexplore.ieee.org/document/9462394 |journal=IEEE Transactions on Knowledge and Data Engineering |pages=1 |doi=10.1109/TKDE.2021.3090866 |issn=1041-4347|arxiv=2006.08218 }}</ref>
== Neural networks ==
 
Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive [[text corpus]] obtained by [[Web crawler|web crawling]], with only minor filtering (such as [[Common Crawl]]). This compares favorably to supervised learning, where the dataset (such as the [[ImageNet|ImageNet1000]]) is typically constructed manually, which is much more expensive.
 
Some algorithms were designed specifically for unsupervised learning, such as [[Cluster analysis|clustering algorithms]] like [[K-means clustering|k-means]], [[dimensionality reduction]] techniques like [[Principal component analysis|principal component analysis (PCA)]], [[Boltzmann machine|Boltzmann machine learning]], and [[autoencoder]]s. After the rise of deep learning, most large-scale unsupervised learning has been done by training general-purpose neural network architectures by [[gradient descent]], adapted to unsupervised learning through an appropriate training procedure.
 
Sometimes a trained model can be used as-is, but more often it is modified for downstream applications. For example, the generative pretraining method trains a model to generate a textual dataset, before finetuning it for other applications, such as text classification.<ref name="gpt1paper">{{cite web |last1=Radford |first1=Alec |last2=Narasimhan |first2=Karthik |last3=Salimans |first3=Tim |last4=Sutskever |first4=Ilya |date=11 June 2018 |title=Improving Language Understanding by Generative Pre-Training |url=https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf |url-status=live |archive-url=https://web.archive.org/web/20210126024542/https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf |archive-date=26 January 2021 |access-date=23 January 2021 |publisher=[[OpenAI]] |page=12}}</ref><ref>{{Cite journal |last1=Li |first1=Zhuohan |last2=Wallace |first2=Eric |last3=Shen |first3=Sheng |last4=Lin |first4=Kevin |last5=Keutzer |first5=Kurt |last6=Klein |first6=Dan |last7=Gonzalez |first7=Joey |date=2020-11-21 |title=Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers |url=https://proceedings.mlr.press/v119/li20m.html |journal=Proceedings of the 37th International Conference on Machine Learning |language=en |publisher=PMLR |pages=5958–5968}}</ref> As another example, autoencoders are trained to produce [[Feature learning|good features]], which can then be used as a module for other models, such as in a [[latent diffusion model]].
 
=== Tasks vs. methods ===
[[File:Task-guidance.png|thumb|left|300px|Tendency for a task to employ supervised vs. unsupervised methods. The placement of task names straddling circle boundaries is intentional. It shows that the classical division of imaginative tasks (left) employing unsupervised methods is blurred in today's learning schemes.]]Tasks are often categorized as [[Discriminative model|discriminative]] (recognition) or [[Generative model|generative]] (imagination). Often but not always, discriminative tasks use supervised methods and generative tasks use unsupervised ones (see [[Venn diagram]]); however, the separation is very hazy. For example, object recognition favors supervised learning, but unsupervised learning can also cluster objects into groups. Furthermore, as progress marches onward, some tasks employ both methods, and some tasks swing from one to another. For example, image recognition started off as heavily supervised, but became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of [[Dilution (neural networks)|dropout]], [[Rectifier (neural networks)|ReLU]], and [[Learning rate|adaptive learning rates]].
 
A typical generative task is as follows. At each step, a datapoint is sampled from the dataset, part of the data is removed, and the model must infer the removed part. This is particularly clear for [[Autoencoder|denoising autoencoders]] and [[BERT (language model)|BERT]].
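
The masking step of such an objective can be sketched as follows. This is a minimal, hypothetical illustration; the token list, mask rate, and <code>[MASK]</code> symbol are assumptions made for the example, not taken from any particular model.

<syntaxhighlight lang="python">
import random

def mask_tokens(tokens, mask_rate=0.15, mask_symbol="[MASK]", seed=0):
    """Remove (mask) a random subset of tokens; the model's task is to infer them."""
    rng = random.Random(seed)
    corrupted = list(tokens)
    targets = {}                 # position -> original token the model must recover
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok
            corrupted[i] = mask_symbol
    return corrupted, targets

corrupted, targets = mask_tokens("the cat sat on the mat".split(), mask_rate=0.3)
print(corrupted)   # the sentence with some positions replaced by the mask symbol
print(targets)     # mapping from each masked position to the token to be predicted
</syntaxhighlight>

In a denoising autoencoder the removed part is typically noise added over the whole input rather than discrete masked tokens, but the principle of predicting the clean datapoint from a corrupted one is the same.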
 
 
=== Training ===
During the learning phase, an unsupervised network tries to mimic the data it is given and uses the error in its mimicked output to correct itself (i.e., to correct its weights and biases). Sometimes the error is expressed as a low probability that the erroneous output occurs, or it might be expressed as an unstable high-energy state in the network.
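
A minimal sketch of this idea is a one-layer linear [[autoencoder]] with tied weights, trained by gradient descent on its reconstruction error. The data, dimensions, and learning rate below are illustrative assumptions, not a description of any specific network in this article.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Unlabeled data that secretly lies near a 3-dimensional subspace of a 10-dimensional space.
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(500, 10))

W = rng.normal(scale=0.01, size=(10, 3))   # encoder weights; the decoder reuses W.T (tied weights)
lr = 0.01
for step in range(500):
    Z = X @ W                  # encode: hidden representation
    X_hat = Z @ W.T            # decode: the network's "mimicked" output
    error = X_hat - X          # the reconstruction error drives the correction of the weights
    grad = (X.T @ error @ W + error.T @ X @ W) / len(X)   # gradient of the squared error w.r.t. W (up to a constant factor)
    W -= lr * grad

# The error shrinks as the 3-dimensional code learns to capture the data.
print("mean squared reconstruction error:", np.mean((X @ W @ W.T - X) ** 2))
</syntaxhighlight>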
 
In contrast to supervised methods' dominant use of [[backpropagation]], unsupervised learning also employs other methods including: Hopfield learning rule, Boltzmann learning rule, [[Contrastive Divergence]], [[Wake-sleep algorithm|Wake Sleep]], [[Variational Inference]], [[Maximum Likelihood]], [[Maximum A Posteriori]], [[Gibbs Sampling]], and backpropagating reconstruction errors or hidden state reparameterizations. See the table below for more details.
 
=== Energy ===
An energy function is a macroscopic measure of a network's activation state. In Boltzmann machines, it plays the role of the cost function. This analogy with physics is inspired by Ludwig Boltzmann's analysis of a gas's macroscopic energy from the microscopic probabilities of particle motion <math>p \propto e^{-E/kT}</math>, where ''k'' is the Boltzmann constant and ''T'' is temperature. In the [[Restricted Boltzmann machine|RBM]] network the relation is <math> p = e^{-E} / Z </math>,<ref name="Hinton2010" /> where <math>p</math> and <math>E</math> vary over every possible activation pattern and <math>\textstyle{Z = \sum_{\scriptscriptstyle{\text{All Patterns}}} e^{-E(\text{pattern})}}</math>. To be more precise, <math>p(a) = e^{-E(a)} / Z</math>, where <math>a</math> is an activation pattern of all neurons (visible and hidden). Hence, some early neural networks bear the name Boltzmann machine. Paul Smolensky calls <math>-E\,</math> the ''Harmony''. A network seeks low energy, which corresponds to high Harmony.
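
For a small enough network this relation can be checked by brute force, enumerating every activation pattern. The sketch below assumes a toy RBM with 3 visible and 2 hidden binary units, random weights, and the standard RBM energy with visible and hidden biases; it is purely illustrative.

<syntaxhighlight lang="python">
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 3, 2
W = rng.normal(size=(n_visible, n_hidden))   # visible-hidden weights
a = rng.normal(size=n_visible)               # visible biases
b = rng.normal(size=n_hidden)                # hidden biases

def energy(v, h):
    """Standard RBM energy E(v, h) = -a.v - b.h - v.W.h."""
    return -(a @ v) - (b @ h) - (v @ W @ h)

# Enumerate all activation patterns (v, h) to compute Z and p(pattern) = exp(-E) / Z.
patterns = [(np.array(v), np.array(h))
            for v in itertools.product([0, 1], repeat=n_visible)
            for h in itertools.product([0, 1], repeat=n_hidden)]
energies = np.array([energy(v, h) for v, h in patterns])
Z = np.sum(np.exp(-energies))                # partition function: sum over all patterns
p = np.exp(-energies) / Z                    # Boltzmann probability of each pattern
print("probabilities sum to", p.sum())       # 1.0, as required
print("most probable pattern:", patterns[int(np.argmin(energies))])
</syntaxhighlight>

For realistic networks the sum over all patterns is intractable, which is why training methods such as contrastive divergence and Gibbs sampling, listed above, approximate it instead.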
 
=== Networks ===
{| class="wikitable"
|-
|| [[File:Boltzmannexamplev1.png |thumb|Network is separated into 2 layers (hidden vs. visible), but still using symmetric 2-way weights. Following Boltzmann's thermodynamics, individual probabilities give rise to macroscopic energies.]]
|| [[File:Restricted Boltzmann machine.svg|thumb|Restricted Boltzmann Machine. This is a Boltzmann machine where lateral connections within a layer are prohibited to make analysis tractable.]]
|| [[File:Stacked-boltzmann.png|thumb| This network has multiple RBMs to encode a hierarchy of hidden features. After a single RBM is trained, another blue hidden layer (see left RBM) is added, and the top 2 layers are trained as a red & blue RBM. Thus the middle layers of an RBM act as hidden or visible, depending on the training phase they are in.]]
|}
 
{| class="wikitable"
|-
|| [[File:Helmholtz Machine.png |thumb|Instead of the bidirectional symmetric connection of the stacked Boltzmann machines, we have separate one-way connections to form a loop. It does both generation and discrimination.]]
|| [[File:Autoencoder_schema.png |thumb|A feedforward network that aims to find a good middle-layer representation of its input world. This network is deterministic, so it is not as robust as its successor, the VAE.]]
|| [[File:VAE blocks.png |thumb|Applies Variational Inference to the Autoencoder. The middle layer is a set of means & variances for Gaussian distributions. The stochastic nature allows for more robust imagination than the deterministic autoencoder. ]]
|}
 
=== History ===
 
{| class="wikitable"
|-
| 1969 || [[Perceptrons (book)|Perceptrons]] by Minsky & Papert shows that a [[perceptron]] without hidden layers fails on [[XOR]]
|-
| 1970s || (approximate dates) First [[AI winter]]
|-
| 1974 || Ising magnetic model proposed by {{ill|William A. Little (physicist)|lt=WA Little|de|William A. Little}} for cognition
|-
| 1980 || [[Kunihiko Fukushima]] introduces the [[neocognitron]], which is later called a [[convolutional neural network]]. It is mostly used in supervised learning, but deserves a mention here.
|-
| 1982 || Ising variant Hopfield net described as [[Content-addressable memory|CAMs]] and classifiers by John Hopfield.
|-
| 1983 || Ising variant Boltzmann machine with probabilistic neurons described by [[Geoffrey Hinton|Hinton]] & [[Terry Sejnowski|Sejnowski]] following Sherrington & Kirkpatrick's 1975 work.
|-
| 1986 || [[Paul Smolensky]] publishes Harmony Theory, which is an RBM with practically the same Boltzmann energy function. Smolensky did not give a practical training scheme; Hinton did in the mid-2000s.
|-
| 1995 || Schmidhuber introduces the [[Long short-term memory|LSTM]] neuron for languages.
|-
| 1995 || Dayan & Hinton introduce the Helmholtz machine
|-
| 1995–2005 || (approximate dates) Second [[AI winter]]
|-
| 2013 || Kingma, Rezende, & co. introduce variational autoencoders as a Bayesian graphical probability network, with neural nets as components.
|}
=== Specific Networks ===
 
Here, we highlight some characteristics of select networks. The details of each are given in the comparison table below.
 
{{glossary}}
 
{{term |1=[[Helmholtz machine]]}}
{{defn |1=These are early inspirations for the variational autoencoders. It is two networks combined into one: forward weights operate recognition and backward weights implement imagination. It is perhaps the first network to do both. Helmholtz did not work in machine learning but he inspired the view of "statistical inference engine whose function is to infer probable causes of sensory input".<ref name="nc95">{{Cite journal|title = The Helmholtz machine.|journal = Neural Computation|date = 1995|pages = 889–904|volume = 7|issue = 5|first1 = Peter|last1 = Dayan|authorlink1=Peter Dayan|first2 = Geoffrey E.|last2 = Hinton|authorlink2=Geoffrey Hinton|first3 = Radford M.|last3 = Neal|authorlink3=Radford M. Neal|first4 = Richard S.|last4 = Zemel|authorlink4=Richard Zemel|doi = 10.1162/neco.1995.7.5.889|pmid = 7584891|s2cid = 1890561|hdl = 21.11116/0000-0002-D6D3-E|hdl-access = free}} {{closed access}}</ref> The stochastic binary neuron outputs a probability that its state is 0 or 1. The data input is normally not considered a layer, but in the Helmholtz machine generation mode, the data layer receives input from the middle layer and has separate weights for this purpose, so it is considered a layer. Hence this network has 3 layers.}}
 
{{term |1=[[Variational autoencoder]]}}
{{defn |1=These apply variational inference to the autoencoder: the middle layer becomes a set of means and variances for Gaussian distributions, and sampling from them makes generation more robust than in the deterministic autoencoder.}}
{{glossary end}}

{| class="wikitable"
! !! Hopfield !! Boltzmann !! RBM !! Stacked Boltzmann !! Helmholtz !! Autoencoder !! VAE
|-
| '''Neuron''' || deterministic binary state. Activation = { 0 (or -1) if x is negative, 1 otherwise } || stochastic binary Hopfield neuron || ← same. (extended to real-valued in the mid-2000s) || ← same || ← same || <!--AE--> language: LSTM. vision: local receptive fields. usually real-valued ReLU activation. || middle layer neurons encode means & variances for Gaussians. In run mode (inference), the output of the middle layer consists of values sampled from the Gaussians.
|-
| '''Connections''' || 1-layer with symmetric weights. No self-connections. || 2-layers. 1-hidden & 1-visible. Symmetric weights. || ← same. <br>No lateral connections within a layer. || top layer is undirected, symmetric. Other layers are 2-way, asymmetric. || 3-layers: asymmetric weights. 2 networks combined into 1. || <!--AE--> 3-layers. The input is considered a layer even though it has no inbound weights. Recurrent layers for NLP. Feedforward convolutions for vision. Input & output have the same neuron counts. || 3-layers: input, encoder, distribution sampler decoder. The sampler is not considered a layer.
|-
| '''Inference & energy''' || Energy is given by the Gibbs probability measure: <math>E = -\frac12\sum_{i,j}{w_{ij}{s_i}{s_j}}+\sum_i{\theta_i}{s_i}</math> || ← same || ← same || <!-- --> || minimize KL divergence || inference is only feed-forward. Previous UL networks ran forwards and backwards. || minimize error = reconstruction error - KLD
|}
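
The energy in the '''Inference & energy''' row above can be evaluated directly, and an asynchronous Hopfield update only ever lowers or preserves it. Below is a minimal sketch with random symmetric weights and no self-connections; it is an illustrative toy, not a trained associative memory.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                    # symmetric weights
np.fill_diagonal(W, 0.0)             # no self-connections
theta = rng.normal(size=n)           # thresholds
s = rng.integers(0, 2, size=n)       # random binary state in {0, 1}

def energy(s):
    """E = -1/2 * sum_ij w_ij s_i s_j + sum_i theta_i s_i, as in the table above."""
    return -0.5 * s @ W @ s + theta @ s

# Asynchronous updates: each flip keeps the energy the same or lowers it.
for sweep in range(5):
    for i in range(n):
        s[i] = 1 if W[i] @ s - theta[i] >= 0 else 0
    print(f"sweep {sweep}: energy = {energy(s):.3f}")
</syntaxhighlight>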
 
=== Hebbian Learning, ART, SOM ===
 
The classical example of unsupervised learning in the study of neural networks is [[Donald Hebb]]'s principle, that is, neurons that fire together wire together.<ref name="Buhmann" /> In [[Hebbian learning]], the connection is reinforced irrespective of an error, but is exclusively a function of the coincidence of action potentials between the two neurons.<ref name="Comesana" /> A similar version that modifies synaptic weights takes into account the time between the action potentials ([[spike-timing-dependent plasticity]] or STDP). Hebbian learning has been hypothesized to underlie a range of cognitive functions, such as [[pattern recognition]] and experiential learning.
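
In its simplest rate-based form, the rule increases a weight whenever the two neurons it connects are active at the same time, with no error term. The following is a minimal sketch; the learning rate and the random activity pattern are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 5, 3
W = np.zeros((n_post, n_pre))        # connection weights, initially zero
eta = 0.1                            # learning rate

for _ in range(100):
    x = rng.integers(0, 2, size=n_pre)    # presynaptic activity (fires or not)
    y = rng.integers(0, 2, size=n_post)   # postsynaptic activity
    # Hebb's rule: reinforce w_ij when the pre- and postsynaptic neurons fire together,
    # regardless of any error signal.
    W += eta * np.outer(y, x)

print(W)   # weights grow fastest where pre- and postsynaptic firing coincided most often
</syntaxhighlight>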
 
 
== Probabilistic methods ==
Two of the main methods used in unsupervised learning are [[principal component analysis]] and [[cluster analysis]]. Cluster analysis is used in unsupervised learning to group, or segment, datasets with shared attributes in order to extrapolate algorithmic relationships.<ref name="tds-ul" /> Cluster analysis is a branch of [[machine learning]] that groups data that has not been [[Labeled data|labelled]], classified or categorized. Instead of responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. This approach helps detect anomalous data points that do not fit into any group.
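
As a minimal illustration of clustering unlabeled data, the sketch below runs a bare-bones [[k-means]] on two synthetic blobs. The data and the choice of two clusters are assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Unlabeled data: two blobs, but the algorithm is never told which point belongs where.
X = np.vstack([rng.normal(0, 0.5, size=(50, 2)),
               rng.normal(3, 0.5, size=(50, 2))])

k = 2
centers = X[rng.choice(len(X), size=k, replace=False)]   # random initial centroids
for _ in range(10):
    # Assign each point to its nearest centroid ...
    labels = np.argmin(np.linalg.norm(X[:, None] - centers[None, :], axis=2), axis=1)
    # ... then move each centroid to the mean of its assigned points.
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

print(centers)   # typically ends up near the two blob centres, roughly (0, 0) and (3, 3)
</syntaxhighlight>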
 
A central application of unsupervised learning is in the field of [[density estimation]] in [[statistics]],<ref name="JordanBishop2004" /> though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It can be contrasted with supervised learning by saying that whereas supervised learning intends to infer a [[conditional probability distribution]] conditioned on the label of input data, unsupervised learning intends to infer an [[a priori probability]] distribution.
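
For example, if the unlabeled samples are assumed to come from a single Gaussian (an intentionally simple, illustrative case), estimating that a priori distribution reduces to computing the sample mean and variance:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1000)   # unlabeled 1-D samples

# Maximum-likelihood estimate of a Gaussian density from the data alone.
mu_hat = x.mean()
sigma_hat = x.std()                             # MLE uses the 1/N (biased) variance

def p_hat(t):
    """Estimated a priori density of the data."""
    return np.exp(-0.5 * ((t - mu_hat) / sigma_hat) ** 2) / (sigma_hat * np.sqrt(2 * np.pi))

print(mu_hat, sigma_hat)   # close to the true parameters 2.0 and 1.5
print(p_hat(2.0))          # density near the mode, about 1 / (1.5 * sqrt(2*pi))
</syntaxhighlight>
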
Some of the most common algorithms used in unsupervised learning include: (1) Clustering, (2) Anomaly detection, (3) Approaches for learning latent variable models. Each approach uses several methods as follows:
 
* [[Data clustering|Clustering]] methods include: [[hierarchical clustering]],<ref name="Hastie" /> [[k-means]],<ref name="tds-kmeans" /> [[mixture models]], [[model-based clustering]], [[DBSCAN]], and [[OPTICS algorithm]]
* [[Anomaly detection]] methods include: [[Local Outlier Factor]], and [[Isolation Forest]]
* Approaches for learning [[latent variable model]]s such as [[Expectation–maximization algorithm]] (EM), [[Method of moments (statistics)|Method of moments]], and [[Blind signal separation]] techniques ([[Principal component analysis]], [[Independent component analysis]], [[Non-negative matrix factorization]], [[Singular value decomposition]]); a minimal principal component analysis sketch via the singular value decomposition is given after this list
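
Principal component analysis can be sketched in a few lines via the singular value decomposition of the centred data matrix. The synthetic data and the choice of two components below are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Synthetic unlabeled data: 200 samples in 5 dimensions with most variance in 2 directions.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(200, 5))

Xc = X - X.mean(axis=0)                   # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2]                       # principal directions (largest singular values)
scores = Xc @ components.T                # low-dimensional latent representation

explained = (S ** 2) / (S ** 2).sum()
print(explained[:3])                      # the first two components carry almost all variance
</syntaxhighlight>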
 
=== Method of moments ===
== See also ==
* [[Automated machine learning]]
* [[Cluster analysis]]
* [[Model-based clustering]]
* [[Anomaly detection]]
* [[Expectation–maximization algorithm]]
== References ==
{{Reflist|
refs=
<ref name="tds-ul" >{{Cite web|url=https://towardsdatascience.com/unsupervised-machine-learning-clustering-analysis-d40f2b34ae7e|title=Unsupervised Machine Learning: Clustering Analysis|last=Roman|first=Victor|date=2019-04-21|website=Medium|access-date=2019-10-01|archive-date=2020-08-21|archive-url=https://web.archive.org/web/20200821132257/https://towardsdatascience.com/unsupervised-machine-learning-clustering-analysis-d40f2b34ae7e|url-status=live}}</ref>
<ref name="JordanBishop2004">{{cite book |first1=Michael I. |last1=Jordan |first2=Christopher M. |last2=Bishop |chapter=7. Intelligent Systems §Neural Networks |editor-first=Allen B. |editor-last=Tucker |title=Computer Science Handbook |url=https://www.taylorfrancis.com/books/mono/10.1201/9780203494455/computer-science-handbook-allen-tucker |edition=2nd |publisher=Chapman & Hall/CRC Press |year=2004 |doi=10.1201/9780203494455 |isbn=1-58488-360-X |access-date=2022-11-03 |archive-date=2022-11-03 |archive-url=https://web.archive.org/web/20221103234201/https://www.taylorfrancis.com/books/mono/10.1201/9780203494455/computer-science-handbook-allen-tucker |url-status=live }}</ref>
<ref name="Hastie" >{{harvnb|Hastie|Tibshirani|Friedman|2009|pp=485–586}}</ref>
<ref name="tds-kmeans" >{{Cite web|url=https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1|title=Understanding K-means Clustering in Machine Learning|last=Garbade|first=Dr Michael J.|date=2018-09-12|website=Medium|language=en|access-date=2019-10-31|archive-date=2019-05-28|archive-url=https://web.archive.org/web/20190528183913/https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1|url-status=live}}</ref>
<ref name="TensorLVMs" >{{cite journal |last1=Anandkumar |first1=Animashree |last2=Ge |first2=Rong |last3=Hsu |first3=Daniel |last4=Kakade |first4=Sham |first5= Matus |last5=Telgarsky |date=2014 |title=Tensor Decompositions for Learning Latent Variable Models |url=http://www.jmlr.org/papers/volume15/anandkumar14b/anandkumar14b.pdf |journal=Journal of Machine Learning Research |volume=15 |pages=2773–2832 |bibcode=2012arXiv1210.7559A |arxiv=1210.7559 |access-date=2015-04-10 |archive-date=2015-03-20 |archive-url=https://web.archive.org/web/20150320201108/http://jmlr.org/papers/volume15/anandkumar14b/anandkumar14b.pdf |url-status=live }}</ref>
<ref name="Buhmann" >{{Cite book|last1=Buhmann|first1=J.|last2=Kuhnel|first2=H.|title= &#91;Proceedings 1992&#93; IJCNN International Joint Conference on Neural Networks|volume=4|pages=796–801|publisher=IEEE|doi=10.1109/ijcnn.1992.227220|isbn=0780305590|chapter=Unsupervised and supervised data clustering with competitive neural networks|year=1992|s2cid=62651220}}</ref>
<ref name="Comesana" >{{Cite journal|last1=Comesaña-Campos|first1=Alberto|last2=Bouza-Rodríguez|first2=José Benito|date=June 2016|title=An application of Hebbian learning in the design process decision-making|journal=Journal of Intelligent Manufacturing|volume=27|issue=3|pages=487–506|doi=10.1007/s10845-014-0881-z|s2cid=207171436|issn=0956-5515|url=https://www.semanticscholar.org/paper/4059b77be03fea077350c106e6e9aa9fce23e8c7}}</ref>
<ref name="Carpenter" >{{cite journal|author1=Carpenter, G.A. |author2=Grossberg, S. |name-list-style=amp |year=1988|title=The ART of adaptive pattern recognition by a self-organizing neural network|journal= Computer|volume=21|issue=3 |pages=77–88|url=http://www.cns.bu.edu/Profiles/Grossberg/CarGro1988Computer.pdf|doi=10.1109/2.33|s2cid=14625094 |access-date=2013-09-16|archive-date=2018-05-16|archive-url=https://web.archive.org/web/20180516131553/http://www.cns.bu.edu/Profiles/Grossberg/CarGro1988Computer.pdf|url-status=dead}}</ref>
<ref name="Hinton2010" >{{cite book | last = Hinton |first=G. | date=2012 |chapter = A Practical Guide to Training Restricted Boltzmann Machines |chapter-url=http://www.cs.utoronto.ca/~hinton/absps/guideTR.pdf |publisher=Springer |title=Neural Networks: Tricks of the Trade |series=Lecture Notes in Computer Science |volume=7700 |pages=599–619 |doi=10.1007/978-3-642-35289-8_32 |isbn=978-3-642-35289-8 |access-date=2022-11-03 |archive-date=2022-09-03 |archive-url=https://web.archive.org/web/20220903215809/http://www.cs.utoronto.ca/~hinton/absps/guideTR.pdf |url-status=live }}</ref>
<ref name="HintonMlss2009" >{{cite web |people=Hinton, Geoffrey |date=September 2009 |title=Deep Belief Nets |type=video |url=https://videolectures.net/mlss09uk_hinton_dbn |access-date=2022-03-27 |archive-date=2022-03-08 |archive-url=https://web.archive.org/web/20220308022539/http://videolectures.net/mlss09uk_hinton_dbn/ |url-status=live }}</ref>
}}
 
== Further reading ==
{{refbegin}}
* {{cite book |editor1=Bousquet, O. |editor3=Raetsch, G. |editor2=von Luxburg, U. |editor2-link=Ulrike von Luxburg |title=Advanced Lectures on Machine Learning |url=https://archive.org/details/springer_10.1007-b100712 |publisher=Springer |year=2004 |isbn=978-3540231226 }}
* {{cite book |author1=Duda, Richard O. |author2-link=Peter E. Hart |author2=Hart, Peter E. |author3=Stork, David G. |year=2001 |chapter=Unsupervised Learning and Clustering |title=Pattern classification |edition=2nd |publisher=Wiley |isbn=0-471-05669-3|author1-link=Richard O. Duda |title-link=Pattern classification }}
*{{cite book |first1=Trevor |last1=Hastie |authorlink1=Trevor Hastie |first2=Robert |last2=Tibshirani |authorlink2=Robert Tibshirani |first3=Jerome |last3=Friedman |chapter=Unsupervised Learning |chapter-url=https://link.springer.com/chapter/10.1007/978-0-387-84858-7_14 |title=The Elements of Statistical Learning: Data mining, Inference, and Prediction |year=2009 |publisher=Springer |isbn=978-0-387-84857-0 |pages=485–586 |doi=10.1007/978-0-387-84858-7_14 |access-date=2022-11-03 |archive-date=2022-11-03 |archive-url=https://web.archive.org/web/20221103234204/https://link.springer.com/chapter/10.1007/978-0-387-84858-7_14 |url-status=live }}
* {{cite book |editor1-last=Hinton |editor1-first=Geoffrey |editor-link=Geoffrey Hinton |editor2-last=Sejnowski |editor2-first=Terrence J. |editor2-link=Terrence J. Sejnowski |year=1999 |title=Unsupervised Learning: Foundations of Neural Computation |publisher=[[MIT Press]] |isbn=0-262-58168-X}}
{{refend}}