Meta-learning (computer science): Difference between revisions

====Memory-Augmented Neural Networks====
A Memory-Augmented [[Neural Network]], or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples.<ref name="paper2">{{cite web|url=http://proceedings.mlr.press/v48/santoro16.pdf|first1=Adam|last1=Santoro|first2=Sergey|last2=Bartunov|first3=Daan|last3=Wierstra|first4=Timothy|last4=Lillicrap|title=Meta-Learning with Memory-Augmented Neural Networks|publisher=Google DeepMind|access-date=29 October 2019|language=en}}</ref>
 
====Meta Networks====
 
====Convolutional Siamese Neural Network====
A [[Siamese neural network]] is composed of two twin networks whose outputs are jointly trained; a function defined on top of the pair of outputs learns the relationship between pairs of input data samples. The two networks are identical, sharing the same weights and network parameters.<ref name="paper4">{{cite web|url=http://www.cs.toronto.edu/~rsalakhu/papers/oneshot1.pdf|first1=Gregory|last1=Koch|first2=Richard|last2=Zemel|first3=Ruslan|last3=Salakhutdinov|year=2015|title=Siamese Neural Networks for One-shot Image Recognition|publisher=Department of Computer Science, University of Toronto|___location=Toronto, Ontario, Canada|language=en}}</ref>
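The weight sharing above can be illustrated with a minimal sketch: a single (here randomly initialised, hypothetical) embedding function is applied to both inputs of a pair, and a similarity score is derived from the distance between the two embeddings. In a trained Siamese network these weights would be learned from labelled pairs; the function and parameter names below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # one weight matrix shared by both "twins"

def embed(x):
    """Shared embedding: the same weights process both inputs of a pair."""
    return np.tanh(x @ W)

def pair_score(x1, x2):
    """Similarity from the L1 distance between the twin embeddings,
    squashed into (0, 1]; same-class pairs should score higher."""
    d = np.abs(embed(x1) - embed(x2)).sum()
    return 1.0 / (1.0 + d)

a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.9, 0.1, 0.0, 0.0])   # near-duplicate of a
c = np.array([0.0, 0.0, 1.0, 1.0])   # dissimilar input
print(pair_score(a, b) > pair_score(a, c))  # → True
```

Because the two branches share every parameter, a pair and its mirror image (x2, x1) receive the same score, which is the property that makes the architecture suitable for one-shot verification.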
 
====Matching Networks====
Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.<ref name="paper5">{{cite web|url=http://papers.nips.cc/paper/6385-matching-networks-for-one-shot-learning.pdf|last1=Vinyals|first1=O.|last2=Blundell|first2=C.|last3=Lillicrap|first3=T.|last4=Kavukcuoglu|first4=K.|last5=Wierstra|first5=D.|year=2016|title=Matching networks for one shot learning|publisher=Google DeepMind|access-date=3 November 2019|language=en}}</ref>
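The core of this mapping can be sketched as attention over the support set: the query's predicted label distribution is a similarity-weighted sum of the one-hot support labels. This is a simplified illustration in numpy (cosine-similarity attention over fixed embeddings; the full model also learns the embedding functions), with all names chosen for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def match(query, support, support_labels, n_classes):
    """Label distribution for the query: attention weights from cosine
    similarity to each support embedding, applied to one-hot labels."""
    sims = support @ query / (
        np.linalg.norm(support, axis=1) * np.linalg.norm(query))
    attn = softmax(sims)
    one_hot = np.eye(n_classes)[support_labels]
    return attn @ one_hot

# Toy 2-way, 2-shot support set of 2-D embeddings
support = np.array([[1.0, 0.0], [0.9, 0.1],
                    [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
probs = match(np.array([1.0, 0.1]), support, labels, n_classes=2)
print(int(np.argmax(probs)))  # → 0
```

Because prediction is just attention over whatever support set is supplied, new classes can be handled at test time without any gradient updates.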
 
====Relation Network====
The Relation Network (RN) is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.<ref name="paper6">{{cite web|url=http://openaccess.thecvf.com/content_cvpr_2018/papers_backup/Sung_Learning_to_Compare_CVPR_2018_paper.pdf|last1=Sung|first1=F.|last2=Yang|first2=Y.|last3=Zhang|first3=L.|last4=Xiang|first4=T.|last5=Torr|first5=P. H. S.|last6=Hospedales|first6=T. M.|year=2018|title=Learning to compare: relation network for few-shot learning|language=en}}</ref>
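The "learned distance metric" can be sketched as a small relation module: query and support feature vectors are concatenated and passed through an MLP that outputs a relation score. The weights below are random placeholders purely to show the data flow; in an actual Relation Network they are trained end-to-end over many few-shot episodes, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical, untrained relation-module weights (learned in practice)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 1))

def relation_score(f_query, f_support):
    """Concatenate the two feature vectors, then apply a small MLP
    (ReLU hidden layer, sigmoid output) to get a score in (0, 1)."""
    h = np.maximum(np.concatenate([f_query, f_support]) @ W1, 0.0)
    return float(1.0 / (1.0 + np.exp(-(h @ W2))))

q = np.array([0.5, -0.2, 0.1, 0.0])   # query features
s = np.array([0.4, -0.1, 0.2, 0.1])   # support features
score = relation_score(q, s)
print(0.0 < score < 1.0)  # → True
```

Unlike a fixed metric such as Euclidean distance, this comparator is itself a neural network, which is what makes the metric "deep" and learnable.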
 
====Prototypical Networks====
Prototypical Networks learn a [[metric space]] in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve satisfactory results.<ref name="paper7">{{cite web|url=http://papers.nips.cc/paper/6996-prototypical-networks-for-few-shot-learning.pdf|last1=Snell|first1=J.|last2=Swersky|first2=K.|last3=Zemel|first3=R. S.|year=2017|title=Prototypical networks for few-shot learning|language=en}}</ref>
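The classification rule described above can be sketched directly: each class prototype is the mean of its support embeddings, and a query is assigned to the nearest prototype. This minimal numpy sketch assumes the embeddings are already computed (the learned embedding network is omitted), and the function names are chosen for the example.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, n_classes):
    """One prototype per class: the mean of that class's support embeddings."""
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify(query_embedding, protos):
    """Assign the query to the class of the nearest prototype (Euclidean)."""
    dists = np.linalg.norm(protos - query_embedding, axis=1)
    return int(np.argmin(dists))

# Toy 2-way, 2-shot episode with 2-D embeddings
support = np.array([[0.0, 0.0], [0.2, 0.0],   # class 0
                    [1.0, 1.0], [1.2, 1.0]])  # class 1
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_classes=2)
print(classify(np.array([0.1, 0.1]), protos))  # → 0
print(classify(np.array([1.1, 0.9]), protos))  # → 1
```

The simple inductive bias mentioned in the text is visible here: the only free choice at test time is the distance to a per-class mean, with no per-class parameters to fit.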
 
===Optimization-Based===