{{short description|Subfield of machine learning}}
{{See also|Ensemble learning}}
{{machine learning|Paradigms}}
'''Meta-learning''' is a subfield of [[machine learning]] where automatic learning algorithms are applied to metadata about machine learning experiments. The main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, and hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself; this is also known as ''learning to learn''.
Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its [[inductive bias]].<ref name="utgoff1986">{{Cite book
| author = P. E. Utgoff
|editor1=R. Michalski
|editor2=J. Carbonell
|editor3=T. Mitchell
| title = Machine Learning: An Artificial Intelligence Approach
| pages = 163–190
| year = 1986
| publisher = Morgan Kaufmann
| isbn = 978-0-934613-00-2
| language = en
| chapter-url = https://books.google.com/books?id=f9RylgKpHZsC&q=utgoff&pg=PA107
}}</ref> This means that a learning algorithm will only learn well if its bias matches the learning problem. An algorithm may perform very well in one ___domain but poorly in another. This places strong restrictions on the use of [[machine learning]] or [[data mining]] techniques, since the relationship between the learning problem (often some kind of [[database]]) and the effectiveness of different learning algorithms is not yet understood.
By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta-learning approaches bear a strong resemblance to the critique of [[metaheuristic]]s, a possibly related problem.
== Definition ==
A proposed definition<ref>{{Cite journal |last1=Lemke |first1=Christiane |last2=Budka |first2=Marcin |last3=Gabrys |first3=Bogdan |year=2015 |title=Metalearning: a survey of trends and technologies |journal=Artificial Intelligence Review |volume=44 |issue=1 |pages=117–130}}</ref> for a meta-learning system combines three requirements:
* The system must include a learning subsystem.
* Experience is gained by exploiting meta knowledge extracted
** in a previous learning episode on a single dataset, or
** from different domains.
* Learning bias must be chosen dynamically.
''Bias'' refers to the assumptions that influence the choice of explanatory hypotheses<ref>{{Cite book|title=Metalearning: Applications to Data Mining|doi=10.1007/978-3-540-73263-1|series = Cognitive Technologies|year = 2009|isbn = 978-3-540-73262-4|last1 = Brazdil|first1 = Pavel|last2=Giraud-Carrier|first2=Christophe|last3=Soares|first3=Carlos|last4=Vilalta|first4=Ricardo|language=en}}</ref> and not the notion of bias represented in the [[bias-variance dilemma]]. Meta-learning is concerned with two aspects of learning bias:
* Declarative bias specifies the representation of the space of hypotheses, and affects the size of the search space (e.g., represent hypotheses using linear functions only).
* Procedural bias imposes constraints on the ordering of the inductive hypotheses (e.g., prefer smaller hypotheses).
==Common approaches==
There are three common approaches:
# using (cyclic) networks with external or internal memory (model-based)
# learning effective distance metrics (metrics-based)
# explicitly optimizing model parameters for fast learning (optimization-based).
===Model-Based===
Model-based meta-learning models update their parameters rapidly with a few training steps, which can be achieved by their internal architecture or controlled by another meta-learner model.<ref name="paper1"/>
====Memory-Augmented Neural Networks====
A Memory-Augmented Neural Network (MANN) is claimed to be able to encode new information quickly, and thus to adapt to new tasks after only a few examples.
====Meta Networks====
Meta Networks (MetaNet) learn meta-level knowledge across tasks and shift their inductive biases via fast parameterization for rapid generalization.<ref name="paper3"/>
===Metric-Based===
The core idea in metric-based meta-learning is similar to [[K-nearest neighbor algorithm|nearest neighbors]] algorithms, in which weights are generated by a kernel function. It aims to learn a metric or distance function over objects. The notion of a good metric is problem-dependent: it should represent the relationship between inputs in the task space and facilitate problem solving.<ref name="paper1" />
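The kernel-weighted nearest-neighbour idea can be sketched in a few lines of Python. This is only an illustration: the data, the Gaussian kernel, and the `gamma` bandwidth are assumptions, and the embedding that a real metric-based meta-learner would learn is replaced here by the identity.

```python
import numpy as np

def kernel_weighted_classify(query, support_x, support_y, n_classes, gamma=1.0):
    """Classify a query by kernel-weighted votes over a labelled support set.

    A Gaussian kernel over distances plays the role of the learned metric;
    the embedding network is omitted (identity) for illustration.
    """
    # Squared Euclidean distances from the query to each support example
    d2 = np.sum((support_x - query) ** 2, axis=1)
    w = np.exp(-gamma * d2)                  # kernel weights
    scores = np.zeros(n_classes)
    for weight, label in zip(w, support_y):
        scores[label] += weight              # accumulate weighted votes
    return int(np.argmax(scores))

# Tiny 2-class support set: class 0 near the origin, class 1 near (5, 5)
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.9, 5.1]])
support_y = np.array([0, 0, 1, 1])
print(kernel_weighted_classify(np.array([0.1, 0.0]), support_x, support_y, 2))  # 0
print(kernel_weighted_classify(np.array([5.0, 4.8]), support_x, support_y, 2))  # 1
```

In an actual metric-based method, both the support set and the query would first be mapped through a learned embedding network, and the kernel or distance function itself may be learned.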
====Convolutional Siamese Neural Networks====
A [[Siamese neural network]] is composed of two twin networks that share weights and parameters and are trained jointly to learn a similarity function over pairs of input examples.
====Matching Networks====
Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.<ref name="paper5"/>
====Relation Network====
The Relation Network (RN) is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.<ref name="paper6"/>
====Prototypical Networks====
Prototypical Networks learn a [[metric space]] in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve satisfactory results.<ref name="paper7"/>
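The prototype classification rule can be sketched as follows (a minimal illustration, assuming an identity embedding and an invented 2-way, 2-shot episode):

```python
import numpy as np

def class_prototypes(support_x, support_y, n_classes):
    """Prototype = mean (embedded) support point of each class."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def proto_classify(query, protos):
    """Label = class of the nearest prototype (squared Euclidean distance)."""
    return int(np.argmin(np.sum((protos - query) ** 2, axis=1)))

# 2-way, 2-shot episode; the learned embedding is omitted (identity) here
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.9, 5.1]])
support_y = np.array([0, 0, 1, 1])
protos = class_prototypes(support_x, support_y, 2)

print(proto_classify(np.array([0.3, 0.2]), protos))  # 0
print(proto_classify(np.array([4.0, 5.0]), protos))  # 1
```

In the actual method, the mean is taken over embeddings produced by a trained network rather than over raw inputs, and training episodes are sampled to mimic the few-shot test conditions.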
===Optimization-Based===
Optimization-based meta-learning algorithms adjust the optimization procedure itself so that the model can learn well from only a few examples.
====LSTM Meta-Learner====
An [[LSTM]]-based meta-learner learns the exact [[optimization algorithm]] used to train another learner [[Artificial neural network|neural network]] [[classification rule|classifier]] in the few-shot regime.
====Temporal Discreteness====
''Model-Agnostic Meta-Learning'' (MAML) is a fairly general optimization algorithm, compatible with any model that learns through gradient descent.<ref name="maml" />
====Reptile====
Reptile is a remarkably simple meta-learning optimization algorithm, given that both of its components rely on [[meta-optimization]] through gradient descent and both are model-agnostic.<ref name="paper10"/>
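The two components can be illustrated on a toy one-dimensional problem. This is a sketch under strong assumptions: quadratic per-task losses with optima at ''a'', and illustrative learning rates and step counts.

```python
def task_grad(w, a):
    """Gradient of the per-task loss L_a(w) = 0.5 * (w - a)**2 (optimum at a)."""
    return w - a

def reptile(w, tasks, meta_steps=500, inner_steps=5, inner_lr=0.1, meta_lr=0.1):
    for i in range(meta_steps):
        a = tasks[i % len(tasks)]            # cycle through the tasks
        w_task = w
        for _ in range(inner_steps):         # inner loop: plain SGD on the task
            w_task -= inner_lr * task_grad(w_task, a)
        w += meta_lr * (w_task - w)          # Reptile step: move toward adapted weights
    return w

w_meta = reptile(0.0, tasks=[1.0, 3.0])
# w_meta settles near 2.0, the centre of the task optima
```

Each meta-step runs a few SGD steps on one task, then moves the meta-parameters a fraction of the way toward the task-adapted parameters; over many tasks this drives the initialization toward a point from which every task is quickly learnable.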
==Examples==
Some approaches which have been viewed as instances of meta-learning:
* [[Recurrent neural networks]] (RNNs) are universal computers. In 1993, [[Jürgen Schmidhuber]] showed how "self-referential" RNNs can in principle learn by [[backpropagation]] to run their own weight change algorithm, which may be quite different from backpropagation.<ref name="sch1993">{{cite journal | last1 = Schmidhuber | first1 = Jürgen | year = 1993| title = A self-referential weight matrix}}</ref>
* In the 1990s, Meta [[Reinforcement Learning]] or Meta RL was achieved in Schmidhuber's research group through self-modifying policies written in a universal programming language that contains special instructions for changing the policy itself. There is a single lifelong trial. The goal of the RL agent is to maximize reward. It learns to accelerate reward intake by continually improving its own learning algorithm which is part of the "self-referential" policy.<ref name="sch1994">{{cite journal | last1 = Schmidhuber | first1 = Jürgen | year = 1994| title = On learning how to learn learning strategies | journal = Technical Report FKI-198-94, Tech. Univ. Munich}}</ref><ref name="sch1997">{{cite journal | last1 = Schmidhuber | first1 = Jürgen | last2 = Zhao | first2 = J. | last3 = Wiering | first3 = M. | year = 1997| title = Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement | journal = Machine Learning | volume = 28 | pages = 105–130 | doi=10.1023/a:1007383707642}}</ref>
* An extreme type of Meta [[Reinforcement Learning]] is embodied by the [[Gödel machine]], a theoretical construct which can inspect and modify any part of its own software which also contains a general [[Automated theorem proving|theorem prover]]. It can achieve [[recursive self-improvement]] in a provably optimal way.<ref name="goedelmachine">{{cite journal | last1 = Schmidhuber | first1 = Jürgen | year = 2006| title = Gödel machines: Fully Self-Referential Optimal Universal Self-Improvers | journal = In B. Goertzel & C. Pennachin, Eds.: Artificial General Intelligence | pages = 199–226}}</ref><ref name="scholarpedia" />
* ''Model-Agnostic Meta-Learning'' (MAML) was introduced in 2017 by [[Chelsea Finn]] et al.<ref name="maml" /> Given a sequence of tasks, the parameters of a given model are trained such that few iterations of gradient descent with few training data from a new task will lead to good generalization performance on that task. MAML "trains the model to be easy to fine-tune."<ref name="maml" /> MAML was successfully applied to few-shot image classification benchmarks and to policy gradient-based reinforcement learning.<ref name="maml">{{cite arxiv | last1 = Finn | first1 = Chelsea | last2 = Abbeel | first2 = Pieter | last3 = Levine | first3 = Sergey |year = 2017| title = Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks | eprint=1703.03400|class=cs.LG }}</ref>
* ''Variational Bayes-Adaptive Deep RL'' (VariBAD) was introduced in 2019.<ref>{{Cite journal |last1=Zintgraf |first1=Luisa |last2=Schulze |first2=Sebastian |last3=Lu |first3=Cong |last4=Feng |first4=Leo |last5=Igl |first5=Maximilian |last6=Shiarlis |first6=Kyriacos |last7=Gal |first7=Yarin |last8=Hofmann |first8=Katja |last9=Whiteson |first9=Shimon |date=2021 |title=VariBAD: Variational Bayes-Adaptive Deep RL via Meta-Learning |url=http://jmlr.org/papers/v22/21-0657.html |journal=Journal of Machine Learning Research |volume=22 |issue=289 |pages=1–39 |issn=1533-7928}}</ref> While MAML is optimization-based, VariBAD is a model-based method for meta reinforcement learning, and leverages a [[variational autoencoder]] to capture the task information in an internal memory, thus conditioning its decision making on the task.
* When addressing a set of tasks, most meta learning approaches optimize the average score across all tasks. Hence, certain tasks may be sacrificed in favor of the average score, which is often unacceptable in real-world applications. By contrast, ''Robust Meta Reinforcement Learning'' (RoML) focuses on improving low-score tasks, increasing robustness to the selection of task.<ref>{{Cite journal |last1=Greenberg |first1=Ido |last2=Mannor |first2=Shie |last3=Chechik |first3=Gal |last4=Meirom |first4=Eli |date=2023-12-15 |title=Train Hard, Fight Easy: Robust Meta Reinforcement Learning |url=https://proceedings.neurips.cc/paper_files/paper/2023/hash/d74e6bfe9ce029526e69db14d2c281ec-Abstract-Conference.html |journal=Advances in Neural Information Processing Systems |language=en |volume=36 |pages=68276–68299}}</ref> RoML works as a meta-algorithm, as it can be applied on top of other meta learning algorithms (such as MAML and VariBAD) to increase their robustness. It is applicable to both supervised meta learning and meta [[reinforcement learning]].
* ''Discovering [[meta-knowledge]]'' works by inducing knowledge (e.g. rules) that expresses how each learning method will perform on different learning problems. The metadata is formed by characteristics of the data (general, statistical, information-theoretic, ...) in the learning problem, and characteristics of the learning algorithm (type, parameter settings, performance measures, ...). Another learning algorithm then learns how the data characteristics relate to the algorithm characteristics. Given a new learning problem, the data characteristics are measured, and the performance of different learning algorithms is predicted. Hence, one can predict the algorithms best suited for the new problem.
* ''Stacked generalisation'' works by combining multiple (different) learning algorithms. The metadata is formed by the predictions of those different algorithms. Another learning algorithm learns from this metadata to predict which combinations of algorithms give generally good results. Given a new learning problem, the predictions of the selected set of algorithms are combined (e.g. by (weighted) voting) to provide the final prediction. Since each algorithm is deemed to work on a subset of problems, a combination is hoped to be more flexible and able to make good predictions.
* ''[[Inductive transfer]]'' studies how the learning process can be improved over time. Metadata consists of knowledge about previous learning episodes and is used to efficiently develop an effective hypothesis for a new task. A related approach is called [[learning to learn]], in which the goal is to use acquired knowledge from one ___domain to help learning in other domains.
* Other approaches using metadata to improve automatic learning are [[learning classifier system]]s, [[case-based reasoning]] and [[constraint satisfaction]].
* Some initial, theoretical work has been undertaken to use ''[[Applied Behavioral Analysis]]'' as a foundation for agent-mediated meta-learning about the performance of human learners, and to adjust the instructional course of an artificial agent.<ref name="Begoli, PRS-ABA, ABA Ontology">{{cite thesis |last=Begoli |first=Edmon |date=May 2014 |title=Procedural-Reasoning Architecture for Applied Behavior Analysis-based Instructions |publisher=University of Tennessee |___location=Knoxville, Tennessee}}</ref>
* [[AutoML]], such as Google Brain's "AI building AI" project, which according to Google briefly exceeded existing [[ImageNet]] benchmarks in 2017.<ref>{{cite news|title=Robots Are Now 'Creating New Robots,' Tech Reporter Says|url=https://www.npr.org/2018/03/15/593863645/robots-are-now-creating-new-robots-tech-reporter-says|work=[[NPR]]|date=March 15, 2018}}</ref>
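As a toy illustration of the optimization-based examples above, MAML's inner and outer loops can be sketched in one dimension. The quadratic task family and all constants are assumptions chosen for tractability; for this loss the second-order term of the outer gradient reduces to the closed-form factor ''1 − alpha''.

```python
def maml(w, tasks, meta_steps=200, alpha=0.4, beta=0.1):
    """Toy 1-D MAML: per-task loss L_a(w) = 0.5 * (w - a)**2, optimum at a."""
    for _ in range(meta_steps):
        meta_grad = 0.0
        for a in tasks:
            w_adapt = w - alpha * (w - a)      # inner step: one SGD update on the task
            # Outer gradient d/dw of L_a(w_adapt); for this quadratic loss the
            # chain rule contributes the second-order factor (1 - alpha)
            meta_grad += (1 - alpha) * (w_adapt - a)
        w -= beta * meta_grad / len(tasks)     # outer (meta) step on the initialization
    return w

w_init = maml(10.0, tasks=[1.0, 3.0])
# w_init converges to ~2.0, an initialization from which one inner
# gradient step moves most of the way toward either task optimum
```

Starting far away at ''w'' = 10, the meta-iteration converges to an initialization near the centre of the task optima, which is exactly the "easy to fine-tune" property the method optimizes for.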
== External links ==
* [http://www.scholarpedia.org/article/Metalearning Metalearning] article in [[Scholarpedia]]
* {{cite journal|last1=Vilalta|first1=Ricardo|last2=Drissi|first2=Youssef|year=2002|title=A perspective view and survey of meta-learning|journal=Artificial Intelligence Review|volume=18|issue=2|pages=77–95}}
* Video courses about Meta-Learning with step-by-step explanation of [https://www.youtube.com/watch?v=IkDw22a8BDE MAML], [https://www.youtube.com/watch?v=rHGPfl0pvLY Prototypical Networks], and [https://www.youtube.com/watch?v=j8qDaVfrO_c Relation Networks].
{{DEFAULTSORT:Meta-learning (computer science)}}
[[Category:Machine learning]]