Capsule neural network: Difference between revisions

{{TOC limit|3}}
 
== History ==
In 2000, [[Geoffrey Hinton]] et al. described an imaging system that combined segmentation and recognition into a single inference process using [[Parse tree|parse trees]]. These so-called credibility networks described the joint distribution over the latent variables and over the possible parse trees. That system proved useful on the [[MNIST database|MNIST]] handwritten digit database.<ref name=":0" />
 
In Hinton's original idea, one minicolumn would represent and detect one multidimensional entity.<ref>{{Citation|last=Meher Vamsi|title=Geoffrey Hinton Capsule theory|date=2017-11-15|url=https://www.youtube.com/watch?v=6S1_WqE55UQ|accessdate=2017-12-06}}</ref><ref group="note" name=":0" />
 
== Transformations ==
An [[Invariant (mathematics)|invariant]] is an object property that does not change as a result of some transformation. For example, the area of a circle does not change if the circle is shifted to the left.
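As a small worked illustration (the notation here is assumed for exposition, not taken from a cited source): a circle with centre <math>(x_0, y_0)</math> and radius <math>r</math> has area <math>A = \pi r^2</math>. Translating the circle so its centre becomes <math>(x_0 + d, y_0)</math> leaves <math>A</math> unchanged, because <math>A</math> does not depend on the centre coordinates at all; the area is therefore invariant under translation.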
 
 
==== Procedure squash ====
Because the lengths of the output vectors represent probabilities, they must lie between zero and one; this is achieved by applying a squashing function, given as pseudocode below.<ref name=":1"/>
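A minimal [[NumPy]] sketch of such a squashing non-linearity, assuming the commonly used formulation <math>\mathbf{v} = \frac{\|\mathbf{s}\|^2}{1+\|\mathbf{s}\|^2}\,\frac{\mathbf{s}}{\|\mathbf{s}\|}</math>; the function name and the small <code>eps</code> guard against division by zero are choices of this sketch, not part of the cited reference:

<syntaxhighlight lang="python">
import numpy as np

def squash(s, eps=1e-8):
    # Squash vector s so that its length lies strictly between 0 and 1
    # while its direction is preserved: short vectors shrink toward zero,
    # long vectors approach (but never reach) unit length.
    norm_sq = np.sum(s ** 2)
    norm = np.sqrt(norm_sq) + eps      # eps guards against division by zero
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

# A long vector ends up with length just below 1 ...
print(np.linalg.norm(squash(np.array([3.0, 4.0]))))   # ~0.96
# ... while a short vector is squashed almost to zero.
print(np.linalg.norm(squash(np.array([0.1, 0.0]))))   # ~0.01
</syntaxhighlight>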
 
:<math>\begin{array}{lcl}
1: \mathbf{procedure}~ \mathrm{squash} ( \mathbf{a} ) \\
2: \quad \triangleright \mbox{argument vector} \\
3: \quad \triangleright \mbox{return vector} \\