| year = 1986
| language = en
| chapter-url =
}}</ref> This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one ___domain, but not on the next. This places strong restrictions on the use of [[machine learning]] or [[data mining]] techniques, since the relationship between the learning problem (often some kind of [[database]]) and the effectiveness of different learning algorithms is not yet well understood.
==Common approaches==
There are three common approaches:<ref name="paper1">{{cite
# using (cyclic) networks with external or internal memory (model-based)
# learning effective distance metrics (metrics-based)
# explicitly optimizing model parameters for fast learning (optimization-based)

===Model-Based===
====Memory-Augmented Neural Networks====
A Memory-Augmented [[Neural Network]], or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples.<ref name="paper2">{{cite
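The following minimal sketch illustrates the content-addressed read and write that let such a model store new information in one shot; the slot count, the blending write rule, and all names are illustrative assumptions, since the actual MANN drives its memory with a trained controller and least-recently-used write addressing:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# External memory: N slots of dimension D (sizes here are illustrative).
N, D = 8, 4
memory = rng.normal(size=(N, D))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def address(key, mem):
    """Content-based addressing: softmax over cosine similarities."""
    sims = mem @ key / (np.linalg.norm(mem, axis=1) * np.linalg.norm(key) + 1e-8)
    return softmax(sims)

def read(key, mem):
    """Read a convex combination of the slots matching the key."""
    return address(key, mem) @ mem

def write(key, value, mem, lr=0.5):
    """Blend the new value into the slots the key addresses (assumed rule)."""
    w = address(key, mem)
    return mem + lr * np.outer(w, value - read(key, mem))

key, value = rng.normal(size=D), rng.normal(size=D)
memory = write(key, value, memory)           # store quickly, in one shot
print("retrieved:", np.round(read(key, memory), 2))
</syntaxhighlight>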
====Meta Networks====
Meta Networks (MetaNet) learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization.<ref name="paper3">{{cite
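A toy sketch of the fast-parameterization idea is given below; the fixed matrix <code>M</code> stands in for MetaNet's learned meta-network, and the linear-regression task and all names are assumptions made for illustration, not the architecture of the cited paper:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
d = 3

# Slow weights are shared across tasks; a meta-learner maps a task's
# loss gradient to task-specific fast weights (toy linear regression).
W_slow = rng.normal(size=d)
M = 0.1 * np.eye(d)   # stand-in for MetaNet's learned meta-network

def loss_grad(W, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2 * X.T @ (X @ W - y) / len(y)

# A new task: a few support examples reveal the task's direction.
X = rng.normal(size=(5, d))
y = X @ np.array([1.0, -2.0, 0.5])

# Fast parameterization: derive fast weights from the support gradient,
# then predict with slow and fast weights combined.
g = loss_grad(W_slow, X, y)
W_fast = -M @ g            # illustrative stand-in for the fast weights
W_task = W_slow + W_fast   # combined parameters for this task
print("support loss before:", np.mean((X @ W_slow - y) ** 2))
print("support loss after: ", np.mean((X @ W_task - y) ** 2))
</syntaxhighlight>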
===Metric-Based===
====Convolutional Siamese Neural Network====
A [[Siamese neural network]] is composed of two twin networks whose outputs are jointly trained, with a function on top that learns the relationship between pairs of input data samples. The two networks are identical, sharing the same weights and network parameters.<ref name="paper4">{{cite
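A minimal sketch of the shared-weight wiring and verification head follows; the untrained random weights, the sizes, and the unweighted L1 distance are illustrative assumptions (the actual model learns the embedding and a weighted distance):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

# One shared embedding network: weight sharing makes it "Siamese".
W = rng.normal(size=(16, 8))   # illustrative sizes

def embed(x):
    """The twin sub-network; both inputs pass through the same weights W."""
    return np.tanh(W @ x)

def similarity(x1, x2):
    """Verification head: sigmoid of the summed L1 distance (learned in practice)."""
    d = np.abs(embed(x1) - embed(x2))       # component-wise L1 distance
    return 1.0 / (1.0 + np.exp(d.sum()))    # higher = more similar

a, b = rng.normal(size=8), rng.normal(size=8)
print("score(a, a):", similarity(a, a))     # identical pair: maximal score (0.5 here)
print("score(a, b):", similarity(a, b))     # distinct pair: lower score
</syntaxhighlight>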
====Matching Networks====
Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.<ref name="paper5">{{cite
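The prediction rule can be sketched as attention over the support set; the toy episode below applies it in raw input space, whereas the actual model first applies trained (full-context) embeddings:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict(query, support_x, support_y, n_classes):
    """Label a query as an attention-weighted sum of support labels."""
    att = softmax(np.array([cosine(query, x) for x in support_x]))
    onehot = np.eye(n_classes)[support_y]
    return att @ onehot   # probability distribution over classes

# Toy 3-way, 2-shot episode; embeddings omitted for brevity.
support_x = rng.normal(size=(6, 5))
support_y = np.array([0, 0, 1, 1, 2, 2])
query = support_x[2] + 0.05 * rng.normal(size=5)   # near a class-1 example
print("class probabilities:", np.round(predict(query, support_x, support_y, 3), 2))
</syntaxhighlight>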
====Relation Network====
The Relation Network (RN) is trained end-to-end from scratch. During meta-learning, it learns a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.<ref name="paper6">{{cite
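A minimal sketch of the wiring follows; the random linear modules stand in for the trained embedding and relation networks, and all sizes and names are illustrative assumptions:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

# Illustrative embedding and relation modules (both are trained networks
# in the actual Relation Network; random weights here just show the wiring).
W_embed = rng.normal(size=(6, 4))
W_rel = rng.normal(size=(1, 12))

def embed(x):
    return np.tanh(W_embed @ x)

def relation_score(x_query, x_support):
    """Concatenate the two embeddings and score the pair with a learned module."""
    pair = np.concatenate([embed(x_query), embed(x_support)])
    return 1.0 / (1.0 + np.exp(-(W_rel @ pair)[0]))   # score in (0, 1)

query = rng.normal(size=4)
support = [rng.normal(size=4) for _ in range(3)]
scores = [relation_score(query, s) for s in support]
print("predicted class:", int(np.argmax(scores)))   # highest relation score wins
</syntaxhighlight>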
====Prototypical Networks====
Prototypical Networks learn a [[metric space]] in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime and achieve satisfactory results.<ref name="paper7">{{cite
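The classification rule itself is simple enough to sketch directly; the toy episode below works in an already-embedded space, whereas the actual model learns the embedding under which nearest-prototype classification succeeds:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)

def prototypes(support_x, support_y, n_classes):
    """Each class prototype is the mean of its embedded support points."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query, protos):
    """Assign the query to the class with the nearest prototype."""
    dists = np.linalg.norm(protos - query, axis=1)   # Euclidean distances
    return int(np.argmin(dists)), dists

# Toy 2-way, 3-shot episode with well-separated classes (illustrative data).
support_x = np.vstack([rng.normal(0.0, 0.3, size=(3, 2)),
                       rng.normal(2.0, 0.3, size=(3, 2))])
support_y = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support_x, support_y, 2)
label, dists = classify(np.array([1.8, 2.1]), protos)
print("predicted class:", label, "distances:", np.round(dists, 2))
</syntaxhighlight>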
===Optimization-Based===
====Reptile====
Reptile is a remarkably simple meta-learning optimization algorithm, given that both of its components rely on [[meta-optimization]] through gradient descent and both are model-agnostic. It works by repeatedly sampling a task, training on it for a few gradient-descent steps, and then moving the initialization toward the parameters learned on that task.<ref name="paper10">{{cite
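A minimal sketch of this update rule on toy tasks follows; the quadratic task family and all names are illustrative assumptions:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)

def inner_train(theta, target, k=5, lr=0.1):
    """Run k steps of SGD on one task: minimize ||theta - target||^2."""
    phi = theta.copy()
    for _ in range(k):
        phi -= lr * 2 * (phi - target)   # gradient of the task loss
    return phi

# Tasks are illustrative: each asks the model to reach a point near a
# shared center, so a good initialization is the center itself.
center = np.array([3.0, -1.0])
theta = np.zeros(2)                       # meta-learned initialization
epsilon = 0.1                             # outer (meta) step size

for _ in range(200):
    target = center + rng.normal(0, 0.5, size=2)   # sample a task
    phi = inner_train(theta, target)               # adapt to the task
    theta += epsilon * (phi - theta)               # Reptile update
print("learned initialization:", np.round(theta, 2))  # approaches center
</syntaxhighlight>
The interpolation <code>theta += epsilon * (phi - theta)</code> is the entire meta-update, which is why Reptile avoids the second-order gradients that MAML requires.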
==Examples==