Multi-task learning
 
==Methods==
The key challenge in multi-task learning is how to combine learning signals from multiple tasks into a single model. This depends strongly on how well the different tasks agree with one another, or whether they contradict each other. There are several ways to address this challenge, outlined in the subsections below.
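
A simple and widely used baseline, against which these methods can be compared, is hard parameter sharing: one trunk network is shared by all tasks, each task gets its own small head, and the model is trained on a weighted sum of the per-task losses. The sketch below illustrates this in PyTorch; the layer sizes, task dimensions and loss weights are illustrative assumptions, not values from any particular system.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    """Hard parameter sharing: one shared trunk feeds one head per task."""
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x):
        z = self.trunk(x)            # representation shared by all tasks
        return [head(z) for head in self.heads]

# Two hypothetical classification tasks (10 and 5 classes) over the same inputs.
model = SharedTrunkMTL(in_dim=32, hidden_dim=64, task_out_dims=[10, 5])
criterion = nn.CrossEntropyLoss()
task_weights = [1.0, 0.5]            # fixed weights: a tunable design choice

x = torch.randn(16, 32)              # a dummy batch
ys = [torch.randint(0, 10, (16,)), torch.randint(0, 5, (16,))]

outputs = model(x)
# One scalar loss sums the per-task losses; gradients from all tasks,
# whether they agree or conflict, simply accumulate in the shared trunk.
loss = sum(w * criterion(out, y)
           for w, out, y in zip(task_weights, outputs, ys))
loss.backward()
</syntaxhighlight>

With fixed weights, tasks that disagree pull the shared trunk in conflicting directions; the methods below are, in effect, different ways of controlling that interaction.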
 
===Task grouping and overlap===
===Transfer of knowledge===
Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large-scale machine learning projects such as the deep [[convolutional neural network]] [[GoogLeNet]],<ref>{{Cite book|arxiv = 1409.4842 |doi = 10.1109/CVPR.2015.7298594 |isbn = 978-1-4673-6964-0|chapter = Going deeper with convolutions |title = 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |pages = 1–9 |year = 2015 |last1 = Szegedy |first1 = Christian |last2 = Liu |first2 = Wei |last3 = Jia |first3 = Yangqing |last4 = Sermanet |first4 = Pierre |last5 = Reed |first5 = Scott |last6 = Anguelov |first6 = Dragomir |last7 = Erhan |first7 = Dumitru |last8 = Vanhoucke |first8 = Vincent |last9 = Rabinovich |first9 = Andrew |s2cid = 206592484 }}</ref> an image-based object classifier, can develop robust representations that are useful to other algorithms learning related tasks. For example, the pre-trained model can be used as a feature extractor that performs pre-processing for another learning algorithm. Alternatively, the pre-trained model can be used to initialize a model with a similar architecture, which is then fine-tuned to learn a different classification task.<ref>{{Cite web|url = https://www.mit.edu/~9.520/fall15/slides/class24/deep_learning_overview.pdf|title = Deep Learning Overview|last = Roig|first = Gemma|access-date = 2019-08-26|archive-date = 2016-03-06|archive-url = https://web.archive.org/web/20160306020712/http://www.mit.edu/~9.520/fall15/slides/class24/deep_learning_overview.pdf|url-status = dead}}</ref>
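
Both uses follow a common recipe. The sketch below, assuming PyTorch with a recent torchvision (0.13 or later) and a hypothetical two-class target task, shows the pre-trained network used first as a frozen feature extractor and then as an initialization for fine-tuning; it illustrates the general pattern rather than the procedure of any specific paper.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
from torchvision import models

# Load GoogLeNet with weights pre-trained on ImageNet.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)

# Use 1: frozen feature extractor. Replacing the classifier with an
# identity map makes the network output 1024-dimensional representations,
# which can serve as pre-processed inputs for another learning algorithm.
model.fc = nn.Identity()
for p in model.parameters():
    p.requires_grad = False
model.eval()
with torch.no_grad():
    features = model(torch.randn(4, 3, 224, 224))   # shape: (4, 1024)

# Use 2: fine-tuning. Keep the pre-trained weights as the initialization,
# swap in a fresh head for the new task (two classes here, as an example),
# and continue training the whole network at a small learning rate.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
</syntaxhighlight>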
 
=== Multiple non-stationary tasks ===
Traditionally, multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed ''group online adaptive learning'' (GOAL).<ref>{{Cite journal|last1 = Zweig|first1 = A.|last2 = Chechik|first2 = G.|title = Group online adaptive learning|journal = Machine Learning|date = August 2017|doi = 10.1007/s10994-017-5661-5|url = http://rdcu.be/uFSv}}</ref> Sharing information can be particularly useful when learners operate in continuously changing environments, because a learner can benefit from the previous experience of another learner to adapt quickly to its own new environment. Such group-adaptive learning has numerous applications, from predicting [[Financial modeling|financial time series]], through content recommendation systems, to visual understanding for adaptive autonomous agents.
 
=== Multi-task optimization ===
 
==== Known task structure ====
 
===== Task structure representations =====