Multi-task learning
In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features that distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, whereas this is not the case for Russian speakers. Yet there is a definite commonality in this classification task across users; for example, one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance.<ref name=":0">{{Cite web|url = http://www.cs.cornell.edu/~kilian/research/multitasklearning/multitasklearning.html|title = Multi-task Learning|last = Weinberger|first = Kilian}}</ref> Further examples of settings for MTL include [[multiclass classification]] and [[multi-label classification]].<ref name=":1">{{Cite arXiv|eprint = 1504.03101|title = Convex Learning of Multiple Tasks and their Structure|last = Ciliberto|first = C.|date = 2015 |class = cs.LG}}</ref>
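A minimal sketch of such a jointly trained spam model, assuming [[PyTorch]] is available (the encoder size, per-user heads, and data below are illustrative placeholders, not part of the cited work):

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Illustrative multi-task spam classifier: a shared text encoder captures
# commonalities across users (e.g. money-transfer vocabulary), while each
# user has a small task-specific head capturing personal notions of spam.
class SharedSpamModel(nn.Module):
    def __init__(self, vocab_size: int, num_users: int, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(num_users))

    def forward(self, bow: torch.Tensor, user: int) -> torch.Tensor:
        return self.heads[user](self.shared(bow))  # spam logit for this user

# Joint training: every user's loss updates the shared encoder, so the
# users' solutions inform each other through the shared representation.
model = SharedSpamModel(vocab_size=1000, num_users=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for user in range(3):
    x = torch.rand(8, 1000)                  # bag-of-words features (dummy data)
    y = torch.randint(0, 2, (8, 1)).float()  # spam / not-spam labels (dummy data)
    loss_fn(model(x, user), y).backward()    # gradients accumulate across users
optimizer.step()
</syntaxhighlight>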
 
Multi-task learning works because [[Regularization (mathematics)|regularization]] induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents [[overfitting]] by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is if the tasks share significant commonalities and are generally slightly undersampled.<ref name=":bmdl">Hajiramezanali, E., Dadaneh, S. Z., Karbalayghareh, A., Zhou, Z. & Qian, X. Bayesian multi-___domain learning for cancer subtype discovery from next-generation sequencing count data. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada. {{ArXiv|1810.09433}}</ref><ref name=":0" /> However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks.<ref name=":bmdl"/><ref name=":3">Romera-Paredes, B., Argyriou, A., Bianchi-Berthouze, N. & Pontil, M. (2012) Exploiting Unrelated Tasks in Multi-Task Learning. http://jmlr.csail.mit.edu/proceedings/papers/v22/romera12/romera12.pdf</ref>
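As a schematic illustration of this regularization view (the notation here is not taken from the cited references), a joint objective over <math>T</math> tasks with parameter vectors <math>w_1,\dots,w_T</math> is often written as

:<math> \min_{w_1,\dots,w_T} \; \sum_{t=1}^{T} \sum_{i=1}^{n_t} L\!\left(y_{ti}, \langle w_t, x_{ti} \rangle\right) \;+\; \lambda\, \Omega(w_1,\dots,w_T), </math>

where the coupling penalty <math>\Omega</math>, for instance <math>\textstyle\sum_{t} \lVert w_t - \bar{w} \rVert^2</math> with <math>\bar{w}</math> the mean of the task parameters, shrinks the tasks toward one another rather than penalizing each task's complexity in isolation.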
 
==Methods==
 
===Task grouping and overlap===
Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a [[linear combination]] of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with [[Sparse array|sparsity]], overlap of nonzero coefficients across tasks indicates commonality. A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases.<ref>Kumar, A. & Daumé III, H. (2012) Learning Task Grouping and Overlap in Multi-Task Learning. http://icml.cc/2012/papers/690.pdf</ref> Task relatedness can be imposed a priori or learned from the data.<ref name=":1"/><ref>Jawanpuria, P. & Saketha Nath, J. (2012) A Convex Feature Learning Formulation for Latent Task Structure Discovery. http://icml.cc/2012/papers/90.pdf</ref> Hierarchical task relatedness can also be exploited implicitly, without assuming a priori knowledge or learning relations explicitly.<ref name=":bmdl"/><ref>Zweig, A. & Weinshall, D. Hierarchical Regularization Cascade for Joint Learning. Proceedings of the 30th International Conference on Machine Learning (ICML), Atlanta GA, June 2013. http://www.cs.huji.ac.il/~daphna/papers/Zweig_ICML2013.pdf</ref> For example, explicitly learning the relevance of samples across tasks can help ensure the effectiveness of joint learning across multiple domains.<ref name=":bmdl"/>
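As a sketch of this basis model (the notation is illustrative; see the cited Kumar & Daumé paper for the precise formulation), each task's parameter vector can be written as

:<math> w_t = L s_t, \qquad t = 1, \dots, T, </math>

where the columns of <math>L \in \mathbb{R}^{d \times k}</math> are shared latent basis tasks and <math>s_t \in \mathbb{R}^{k}</math> is a sparse vector of combination coefficients. Tasks whose coefficient vectors have overlapping supports share basis elements and therefore form (possibly overlapping) groups, while tasks with disjoint supports use disjoint parts of the basis.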
 
===Exploiting unrelated tasks===
One can attempt to learn a group of principal tasks together with a group of auxiliary tasks that are unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about task relatedness can lead to sparser and more informative representations for each task grouping, essentially by screening out idiosyncrasies of the data distribution. Novel methods have been proposed that build on a prior multitask methodology by favoring a shared low-dimensional representation within each task grouping. The programmer can impose a penalty on tasks from different groups which encourages the two representations to be [[orthogonal]]. Experiments on synthetic and real data have indicated that incorporating unrelated tasks can result in significant improvements over standard multi-task learning methods.<ref name=":3"/>
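One schematic way to write such a penalty (not the exact formulation of the cited paper): if the columns of <math>U</math> and <math>V</math> span the low-dimensional representations learned for the principal and the auxiliary task groups respectively, then adding a term such as

:<math> \rho \, \lVert U^{\mathsf{T}} V \rVert_F^2 </math>

to the joint objective drives the two subspaces toward orthogonality, so that the auxiliary tasks absorb idiosyncrasies of the shared input data rather than contaminating the representation used by the principal tasks.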
 
=== Transfer of knowledge ===
Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large-scale machine learning projects such as the deep [[convolutional neural network]] [[GoogLeNet]],<ref>{{Cite book|arxiv = 1409.4842 |doi = 10.1109/CVPR.2015.7298594 |isbn = 978-1-4673-6964-0|chapter = Going deeper with convolutions |title = 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |pages = 1–9 |year = 2015 |last1 = Szegedy |first1 = Christian |last2 = Liu |first2 = Wei |last3 = Jia |first3 = Yangqing |last4 = Sermanet |first4 = Pierre |last5 = Reed |first5 = Scott |last6 = Anguelov |first6 = Dragomir |last7 = Erhan |first7 = Dumitru |last8 = Vanhoucke |first8 = Vincent |last9 = Rabinovich |first9 = Andrew |s2cid = 206592484 }}</ref> an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks. For example, the pre-trained model can be used as a feature extractor to perform pre-processing for another learning algorithm. Alternatively, the pre-trained model can be used to initialize a model with a similar architecture, which is then fine-tuned to learn a different classification task.<ref>{{Cite web|url = https://www.mit.edu/~9.520/fall15/slides/class24/deep_learning_overview.pdf|title = Deep Learning Overview|last = Roig|first = Gemma}}</ref>
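A minimal sketch of both usages, assuming the torchvision implementation of GoogLeNet is available (the new task with 10 classes is a placeholder for illustration):

<syntaxhighlight lang="python">
import torch.nn as nn
import torchvision.models as models

# 1) Feature extraction: freeze the pre-trained network and use its
#    penultimate activations as pre-processed inputs for another learner.
extractor = models.googlenet(weights="DEFAULT")   # ImageNet pre-trained weights
extractor.fc = nn.Identity()                      # drop the 1000-class head
for p in extractor.parameters():
    p.requires_grad = False                       # keep the weights fixed

# 2) Fine-tuning: initialise a model of the same architecture with the
#    pre-trained weights, swap in a new head, then train on the new task.
finetuned = models.googlenet(weights="DEFAULT")
finetuned.fc = nn.Linear(finetuned.fc.in_features, 10)  # e.g. 10 new classes
</syntaxhighlight>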
 
=== Group online adaptive learning ===
Traditionally, multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed group online adaptive learning (GOAL).<ref>Zweig, A. & Chechik, G. Group online adaptive learning. Machine Learning, DOI 10.1007/s10994-017-5661-5, August 2017. http://rdcu.be/uFSv</ref> Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from the previous experience of another learner to adapt quickly to its new environment. Such group-adaptive learning has numerous applications, from predicting financial time series, through content recommendation systems, to visual understanding for adaptive autonomous agents.
 
== Mathematics ==