Unsupervised learning

|| [[File:Boltzmannexamplev1.png |thumb|The network is separated into two layers (hidden vs. visible) but still uses symmetric two-way weights. Following Boltzmann's thermodynamics, individual probabilities give rise to macroscopic energies.]]
|| [[File:Restricted Boltzmann machine.svg|thumb|Restricted Boltzmann Machine. This is a Boltzmann machine where lateral connections within a layer are prohibited to make analysis tractable (a minimal training sketch follows this table).]]
|| [[File:Stacked-boltzmann.png|thumb|This network has multiple RBMs to encode a hierarchy of hidden features. After a single RBM is trained, another blue hidden layer (see left RBM) is added, and the top two layers are trained as a red & blue RBM. Thus a middle layer of the stack acts as hidden or visible, depending on the training phase it is in.]]
|}
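As a rough illustration of how a single RBM in such a stack might be trained, the sketch below runs one-step contrastive divergence (CD-1) on a binary RBM; the layer sizes, learning rate, and toy data vector are illustrative assumptions rather than anything fixed by this article.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes and data; these are assumptions made for the sketch.
n_visible, n_hidden = 6, 3
W   = rng.normal(0.0, 0.01, (n_visible, n_hidden))  # symmetric two-way weights
b_v = np.zeros(n_visible)                           # visible biases
b_h = np.zeros(n_hidden)                            # hidden biases
v0  = np.array([1., 0., 1., 1., 0., 0.])            # one toy binary input
lr  = 0.1

for step in range(100):
    # Upward pass: hidden probabilities, then a binary sample.
    ph0 = sigmoid(v0 @ W + b_h)
    h0  = (rng.random(n_hidden) < ph0).astype(float)
    # Downward pass: reconstruct the visible units, then the hidden units again.
    pv1 = sigmoid(h0 @ W.T + b_v)
    v1  = (rng.random(n_visible) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    # CD-1 update: positive statistics minus negative (reconstruction) statistics.
    W   += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b_v += lr * (v0 - v1)
    b_h += lr * (ph0 - ph1)
</syntaxhighlight>

Stacking then amounts to freezing this RBM, treating its hidden samples as "visible" data for the next RBM, and repeating the same update.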
 
|-
|| [[File:Helmholtz Machine.png |thumb|Instead of the bidirectional symmetric connections of the stacked Boltzmann machines, we have separate one-way connections that form a loop. It does both generation and discrimination.]]
|| [[File:Autoencoder_schema.png |thumb|A feed-forward network that aims to find a good middle-layer representation of its input world. This network is deterministic, so it is not as robust as its successor, the VAE.]]
|| [[File:VAE blocks.png |thumb|Applies variational inference to the autoencoder. The middle layer is a set of means & variances for Gaussian distributions. The stochastic nature allows for more robust imagination than the deterministic autoencoder (a sketch contrasting the two follows this table).]]
|}
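To make the captions' contrast concrete, the sketch below compares a deterministic middle layer with a stochastic one sampled via the reparameterization trick; the sizes and randomly drawn weights are placeholders assumed for illustration only.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

d_in, d_mid = 8, 2                       # illustrative sizes (assumption)
x = rng.random(d_in)                     # a stand-in input vector

# Deterministic autoencoder: the middle layer is a single code vector,
# so the same input always maps to the same code.
W_enc = rng.normal(0.0, 0.1, (d_in, d_mid))
z_det = np.tanh(x @ W_enc)

# Variational autoencoder: the middle layer is a set of means and
# (log-)variances defining Gaussian distributions over the code.
W_mu     = rng.normal(0.0, 0.1, (d_in, d_mid))
W_logvar = rng.normal(0.0, 0.1, (d_in, d_mid))
mu     = x @ W_mu
logvar = x @ W_logvar
eps    = rng.standard_normal(d_mid)        # fresh Gaussian noise per sample
z_stoch = mu + np.exp(0.5 * logvar) * eps  # reparameterization trick

# z_stoch differs from draw to draw even for the same x, which is the
# stochasticity the caption credits for the VAE's more robust imagination.
</syntaxhighlight>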
 
{{term |1=[[Helmholtz machine]]}}
{{defn |1=These are early inspirations for the variational autoencoders. It is two networks combined into one: forward weights perform recognition and backward weights implement imagination. It is perhaps the first network to do both. Helmholtz did not work in machine learning, but he inspired the view of a "statistical inference engine whose function is to infer probable causes of sensory input".<ref name="nc95">{{Cite journal|title = The Helmholtz machine.|journal = Neural Computation|date = 1995|pages = 889–904|volume = 7|issue = 5|first1 = Peter|last1 = Dayan|authorlink1=Peter Dayan|first2 = Geoffrey E.|last2 = Hinton|authorlink2=Geoffrey Hinton|first3 = Radford M.|last3 = Neal|authorlink3=Radford M. Neal|first4 = Richard S.|last4 = Zemel|authorlink4=Richard Zemel|doi = 10.1162/neco.1995.7.5.889|pmid = 7584891|s2cid = 1890561|hdl = 21.11116/0000-0002-D6D3-E|hdl-access = free}} {{closed access}}</ref> The stochastic binary neuron outputs a probability that its state is 0 or 1. The data input is normally not considered a layer, but in the Helmholtz machine's generation mode, the data layer receives input from the middle layer and has separate weights for this purpose, so it is considered a layer. Hence this network has three layers.}}
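A minimal sketch of the generation (top-down) pass just described, with stochastic binary neurons and the data layer given its own generative weights, might look as follows; the layer sizes and random weights are assumptions made for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stochastic_binary(p):
    # A stochastic binary neuron: outputs 1 with probability p, else 0.
    return (rng.random(p.shape) < p).astype(float)

# Three layers (top -> middle -> data) joined by separate one-way
# generative weights; all sizes here are illustrative assumptions.
n_top, n_mid, n_data = 2, 4, 6
G_top_mid  = rng.normal(0.0, 0.5, (n_top, n_mid))
G_mid_data = rng.normal(0.0, 0.5, (n_mid, n_data))  # data layer's own weights

# Generation ("imagination"): sample downward from the top layer.
top  = stochastic_binary(np.full(n_top, 0.5))
mid  = stochastic_binary(sigmoid(top @ G_top_mid))
data = stochastic_binary(sigmoid(mid @ G_mid_data))

# Recognition would instead run separate forward (upward) weights,
# closing the loop formed by the machine's two one-way networks.
</syntaxhighlight>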
 
{{term |1=[[Variational autoencoder]]}}