====Memory-Augmented Neural Networks====
A Memory-Augmented [[Neural Network]] (MANN) is claimed to encode new information quickly and thus to adapt to new tasks after only a few examples.<ref name="paper2">{{cite
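A central mechanism in a MANN is content-based addressing of an external memory: a read is a similarity-weighted sum over memory slots, which lets the network store and retrieve new examples without updating its weights. The following numpy sketch is illustrative only (the function name and shapes are hypothetical, not from the cited paper):

```python
import numpy as np

def read_memory(memory, key):
    """Content-based read (illustrative): softmax over the cosine
    similarity between the key and each memory slot, then a
    weighted sum of the slots."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key))
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax read weights
    return weights @ memory                      # blend of memory rows
```

Because retrieval depends only on the memory contents, a single new example written to memory is immediately usable, which is what allows rapid adaptation.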
====Meta Networks====
====Convolutional Siamese Neural Network====
A [[Siamese neural network]] is composed of twin networks whose outputs are trained jointly; a function on top of the outputs learns the relationship between pairs of input samples. The two networks are identical, sharing the same weights and network parameters.<ref name="paper4">{{cite
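The defining property is weight sharing: both inputs pass through the same embedding, and similarity is measured between the two resulting vectors. A minimal numpy sketch, with a single hypothetical linear-plus-tanh layer standing in for each "twin":

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # ONE weight matrix, shared by both twins

def embed(x):
    # Both inputs pass through the same weights (weight sharing)
    return np.tanh(x @ W)

def pair_distance(x1, x2):
    # Euclidean distance between the twin embeddings; training would
    # pull similar pairs together and push dissimilar pairs apart
    return np.linalg.norm(embed(x1) - embed(x2))
```

Because the weights are shared, the distance is symmetric and identical inputs map to identical embeddings.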
====Matching Networks====
Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.<ref name="paper5">{{cite
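The mapping can be viewed as attention over the support set: the query's predicted label distribution is a similarity-weighted mixture of the support labels, so no gradient-based fine-tuning is needed for new classes. An illustrative numpy sketch (function name and the choice of cosine-similarity softmax attention are assumptions for this example):

```python
import numpy as np

def matching_predict(support, onehot_labels, query):
    """Predict a label distribution for the query as an
    attention-weighted sum over the support-set labels."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(s, query) for s in support])
    attn = np.exp(sims) / np.exp(sims).sum()  # softmax attention weights
    return attn @ onehot_labels               # mixture of support labels
```

Adding a new class only requires adding its labelled examples to the support set.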
====Relation Network====
The Relation Network (RN) is trained end-to-end from scratch. During meta-learning, it learns a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.<ref name="paper6">{{cite
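Rather than using a fixed metric, an RN embeds the query and a support example with a shared embedding module, concatenates the two embeddings, and feeds them to a learned relation module that outputs a similarity score. A toy numpy sketch (all weight shapes and names are hypothetical, and the learned modules are reduced to single linear maps for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
W_embed = rng.normal(size=(3, 2))  # shared embedding module (toy linear map)
W_rel = rng.normal(size=(4,))      # relation module (toy linear map)

def relation_score(query, support_example):
    # Embed both inputs with the same embedding module
    f_q, f_s = query @ W_embed, support_example @ W_embed
    # Concatenate the embeddings and score the pair with the
    # relation module; sigmoid keeps the score in (0, 1)
    combined = np.concatenate([f_q, f_s])
    return 1.0 / (1.0 + np.exp(-(combined @ W_rel)))
```

In training, the score is regressed toward 1 for matching pairs and 0 for non-matching pairs, so the metric itself is learned end-to-end.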
====Prototypical Networks====
Prototypical Networks learn a [[metric space]] in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve satisfactory results.<ref name="paper7">{{cite
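At inference time the idea reduces to nearest-prototype classification: each class prototype is the mean of that class's embedded support examples, and a query is assigned to the class whose prototype is closest. A numpy sketch of this classification rule, assuming the embeddings have already been computed (the function name is illustrative):

```python
import numpy as np

def prototypical_classify(support, labels, query):
    """Classify a query embedding by its nearest class prototype.

    support: (n, d) array of support-set embeddings
    labels:  length-n array of integer class labels
    query:   (d,) embedding to classify
    """
    classes = np.unique(labels)
    # Prototype = mean embedding of each class's support examples
    prototypes = np.stack([support[labels == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from the query to each prototype
    dists = ((prototypes - query) ** 2).sum(axis=1)
    return classes[np.argmin(dists)]
```

The simple mean-plus-nearest-distance rule is the "simpler inductive bias" referred to above.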
===Optimization-Based===