{{short description|Solving multiple machine learning tasks at the same time}}
'''Multi-task learning''' (MTL) is a subfield of [[machine learning]] in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately.<ref>Baxter, J. (2000). "A model of inductive bias learning". ''Journal of Artificial Intelligence Research'' 12:149–198. [http://www-2.cs.cmu.edu/afs/cs/project/jair/pub/volume12/baxter00a.pdf On-line paper]</ref><ref>[[Sebastian Thrun|Thrun, S.]] (1996). Is learning the n-th thing any easier than learning the first? In Advances in Neural Information Processing Systems 8, pp. 640–646. MIT Press. [http://citeseer.ist.psu.edu/thrun96is.html Paper at Citeseer]</ref><ref name=":2">{{Cite journal |last1=Sener |first1=Ozan |last2=Koltun |first2=Vladlen |year=2018 |title=Multi-Task Learning as Multi-Objective Optimization |journal=Advances in Neural Information Processing Systems 31 (NeurIPS 2018) |url=https://proceedings.neurips.cc/paper/2018/hash/432aca3a1e345e339f35a30c8f65edce-Abstract.html}}</ref>

Inherently, multi-task learning is a [[multi-objective optimization]] problem, since it involves trade-offs between the different tasks.<ref name=":2" />
Early versions of MTL were called "hints".<ref>Suddarth, S., Kergosien, Y. (1990). Rule-injection hints as a means of improving network performance and learning time. EURASIP Workshop. Neural Networks pp. 120-129. Lecture Notes in Computer Science. Springer.</ref><ref>{{cite journal | last1 = Abu-Mostafa | first1 = Y. S. | year = 1990 | title = Learning from hints in neural networks | journal = Journal of Complexity | volume = 6 | issue = 2| pages = 192–198 | doi=10.1016/0885-064x(90)90006-y| doi-access = free }}</ref>
==Methods==
The key challenge in multi-task learning is how to combine the learning signals from multiple tasks into a single model. This depends strongly on the degree to which the different tasks agree with or contradict each other, as illustrated in the sketch below. There are several ways to address this challenge, described in the following subsections.
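A common baseline for combining the tasks' learning signals is hard parameter sharing: a single shared representation feeds one small head per task, and a weighted sum of per-task losses is minimized. The following is a minimal, illustrative NumPy sketch of this setup; it is not taken from the cited literature, and all data, dimensions and task weights are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two regression tasks observed on the same 5-dimensional inputs.
X = rng.normal(size=(100, 5))
Y = {"task_a": X @ rng.normal(size=5) + 0.1 * rng.normal(size=100),
     "task_b": X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)}

W_shared = 0.1 * rng.normal(size=(5, 3))           # shared representation (hard parameter sharing)
heads = {t: 0.1 * rng.normal(size=3) for t in Y}   # one task-specific head per task
task_weights = {"task_a": 1.0, "task_b": 1.0}      # how strongly each task's signal counts

lr = 0.01
for _ in range(500):
    H = X @ W_shared                                # features shared by all tasks
    grad_shared = np.zeros_like(W_shared)
    for t, y in Y.items():
        err = H @ heads[t] - y                      # per-task residual
        # Gradient contribution of this task's squared-error loss, scaled by its weight.
        grad_head = task_weights[t] * (H.T @ err) / len(y)
        grad_shared += task_weights[t] * (X.T @ np.outer(err, heads[t])) / len(y)
        heads[t] -= lr * grad_head
    W_shared -= lr * grad_shared                    # single model updated by the combined signal
</syntaxhighlight>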
===Task grouping and overlap===
Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a [[linear combination]] of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with [[Sparse array|sparsity]], overlap of nonzero coefficients across tasks indicates commonality. A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases.<ref>Kumar, A., & Daume III, H., (2012) Learning Task Grouping and Overlap in Multi-Task Learning. http://icml.cc/2012/papers/690.pdf</ref> Task relatedness can be imposed a priori or learned from the data.<ref name=":1"/><ref>Jawanpuria, P., & Saketha Nath, J., (2012) A Convex Feature Learning Formulation for Latent Task Structure Discovery. http://icml.cc/2012/papers/90.pdf</ref> Hierarchical task relatedness can also be exploited implicitly without assuming a priori knowledge or learning relations explicitly.<ref name=":bmdl">Hajiramezanali, E. & Dadaneh, S. Z. & Karbalayghareh, A. & Zhou, Z. & Qian, X. Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada. {{ArXiv|1810.09433}}</ref><ref>Zweig, A. & Weinshall, D. Hierarchical Regularization Cascade for Joint Learning. Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.</ref>
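The sketch below gives a toy illustration of this basis-and-coefficients view (it is not the algorithm of the cited papers): each task's parameter vector is a sparse linear combination of the columns of a shared basis, and tasks whose sets of nonzero coefficients overlap share basis elements and therefore fall into the same group. All dimensions and values are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
d, k, T = 10, 4, 6          # feature dimension, number of basis elements, number of tasks

L = rng.normal(size=(d, k))                 # shared basis of parameter vectors
S = np.zeros((k, T))                        # sparse combination coefficients
S[[0, 1], :3] = rng.normal(size=(2, 3))     # tasks 0-2 built from basis elements 0 and 1
S[[2, 3], 3:] = rng.normal(size=(2, 3))     # tasks 3-5 built from basis elements 2 and 3

W = L @ S                                   # column t is the parameter vector of task t

# Tasks whose coefficient supports overlap share basis elements, i.e. form a group.
support = [set(np.flatnonzero(S[:, t])) for t in range(T)]
related = [(a, b) for a in range(T) for b in range(a + 1, T)
           if support[a] & support[b]]
print(related)   # pairs within {0, 1, 2} and within {3, 4, 5} only
</syntaxhighlight>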
===Exploiting unrelated tasks===
=== Multi-task optimization ===
'''Multi-task optimization''' focuses on solving multiple self-contained optimization tasks simultaneously.<ref name=TO>{{cite journal | doi=10.1109/TETCI.2017.2769104 | title=Insights on Transfer Optimization: Because Experience is the Best Teacher | year=2018 | last1=Gupta | first1=Abhishek | last2=Ong | first2=Yew-Soon | last3=Feng | first3=Liang | journal=IEEE Transactions on Emerging Topics in Computational Intelligence | volume=2 | pages=51–64 | hdl=10356/147980 | s2cid=11510470 | hdl-access=free }}</ref><ref name=mfo>{{cite journal | doi=10.1109/TEVC.2015.2458037 | title=Multifactorial Evolution: Toward Evolutionary Multitasking | year=2016 | last1=Gupta | first1=Abhishek | last2=Ong | first2=Yew-Soon | last3=Feng | first3=Liang | journal=IEEE Transactions on Evolutionary Computation | volume=20 | issue=3 | pages=343–357 | hdl=10356/148174 | s2cid=13767012 | hdl-access=free }}</ref> The paradigm has been inspired by the well-established concepts of [[transfer learning]]<ref>{{cite journal | doi=10.1109/TKDE.2009.191 | title=A Survey on Transfer Learning | year=2010 | last1=Pan | first1=Sinno Jialin | last2=Yang | first2=Qiang | journal=IEEE Transactions on Knowledge and Data Engineering | volume=22 | issue=10 | pages=1345–1359 | s2cid=740063 }}</ref> and multi-task learning.
The key motivation behind multi-task optimization is that if optimization tasks are related to each other in terms of their optimal solutions or the general characteristics of their function landscapes,<ref>{{cite journal | doi=10.1016/j.engappai.2017.05.008 | title=Coevolutionary multitasking for concurrent global optimization: With case studies in complex engineering design | year=2017 | last1=Cheng | first1=Mei-Ying | last2=Gupta | first2=Abhishek | last3=Ong | first3=Yew-Soon | last4=Ni | first4=Zhi-Wei | journal=Engineering Applications of Artificial Intelligence | volume=64 | pages=13–24 | s2cid=13767210 | doi-access=free }}</ref> then progress made while searching one task can be transferred to substantially accelerate the search on the others.
There is a direct relationship between multitask optimization and [[multi-objective optimization]].<ref>J. -Y. Li, Z. -H. Zhan, Y. Li and J. Zhang, "Multiple Tasks for Multiple Objectives: A New Multiobjective Optimization Method via Multitask Optimization," in IEEE Transactions on Evolutionary Computation, {{doi|10.1109/TEVC.2023.3294307}}</ref>
In some cases, the simultaneous training of seemingly related tasks may hinder performance compared to single-task models.<ref>{{Cite journal |last1=Standley |first1=Trevor |last2=Zamir |first2=Amir R. |last3=Chen |first3=Dawn |last4=Guibas |first4=Leonidas |last5=Malik |first5=Jitendra |last6=Savarese |first6=Silvio |date=2020-07-13 |title=Which Tasks Should Be Learned Together in Multi-task Learning? |url=https://proceedings.mlr.press/v119/standley20a.html |journal=International Conference on Machine Learning}}</ref>
There are several common approaches for multi-task optimization: [[Bayesian optimization]], [[evolutionary computation]], and approaches based on [[Game theory]].<ref name=TO/>
==== Evolutionary multi-tasking ====
'''Evolutionary multi-tasking''' has been explored as a means of exploiting the [[implicit parallelism]] of population-based search algorithms to make simultaneous progress on multiple distinct optimization tasks. By mapping all tasks to a unified search space, the evolving population of candidate solutions can harness the hidden relationships between them through continuous genetic transfer. This is induced when solutions associated with different tasks cross over.<ref name=mfo/><ref name=cognitive>Ong, Y. S., & Gupta, A. (2016). [http://www.cil.ntu.edu.sg/mfo/downloads/MultitaskOptimization_manuscript.pdf Evolutionary multitasking: a computer science view of cognitive multitasking]. Cognitive Computation, 8(2), 125–142.</ref> Recently, modes of knowledge transfer that are different from direct solution [[Crossover (genetic algorithm)|crossover]] have been explored.<ref>{{cite journal | doi=10.1109/TCYB.2018.2845361 | title=Evolutionary Multitasking via Explicit Autoencoding | year=2019 | last1=Feng | first1=Liang | last2=Zhou | first2=Lei | last3=Zhong | first3=Jinghui | last4=Gupta | first4=Abhishek | last5=Ong | first5=Yew-Soon | last6=Tan | first6=Kay-Chen | last7=Qin | first7=A. K. | journal=IEEE Transactions on Cybernetics | volume=49 | issue=9 | pages=3457–3470 | pmid=29994415 | s2cid=51613697 }}</ref>
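The following is a deliberately simplified, illustrative sketch of this idea (it is not the multifactorial evolutionary algorithm of the cited works): a single population evolves in a unified search space, each individual is assigned to one task, and uniform crossover between parents assigned to different tasks transfers genetic material across tasks. The objective functions and parameter values are toy assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

# Two toy minimization tasks decoded from a common unified space [0, 1]^d.
# Their optima are deliberately close, so cross-task transfer is helpful.
def task_a(x): return np.sum((x - 0.30) ** 2)
def task_b(x): return np.sum((x - 0.35) ** 2)
tasks = [task_a, task_b]

d, pop_size, gens = 10, 40, 100
pop = rng.random((pop_size, d))                    # one population in the unified space
skill = rng.integers(len(tasks), size=pop_size)    # task assigned to each individual

for _ in range(gens):
    fitness = np.array([tasks[skill[i]](pop[i]) for i in range(pop_size)])
    children, child_skill = [], []
    for _ in range(pop_size):
        a, b = rng.integers(pop_size, size=2)      # parents may belong to different tasks
        mask = rng.random(d) < 0.5
        child = np.where(mask, pop[a], pop[b])     # uniform crossover = genetic transfer
        child = np.clip(child + 0.02 * rng.normal(size=d), 0.0, 1.0)   # mutation
        children.append(child)
        child_skill.append(skill[a] if rng.random() < 0.5 else skill[b])
    child_fit = np.array([tasks[s](c) for c, s in zip(children, child_skill)])
    # Keep the best individuals overall; both toy objectives are on the same scale,
    # so a single global ranking suffices here (a real multifactorial EA instead
    # ranks individuals within each task).
    merged = np.vstack([pop, np.array(children)])
    merged_skill = np.concatenate([skill, child_skill])
    merged_fit = np.concatenate([fitness, child_fit])
    keep = np.argsort(merged_fit)[:pop_size]
    pop, skill = merged[keep], merged_skill[keep]
</syntaxhighlight>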
==== Game-theoretic optimization ====
Algorithms for multi-task optimization span a wide array of real-world applications. Recent studies highlight the potential for speed-ups in the optimization of engineering design parameters by conducting related designs jointly in a multi-task manner.<ref name=cognitive/> In [[machine learning]], the transfer of optimized features across related data sets can enhance the efficiency of the training process as well as improve the generalization capability of learned models.<ref>Chandra, R., Gupta, A., Ong, Y. S., & Goh, C. K. (2016, October). [http://www.cil.ntu.edu.sg/mfo/downloads/cvmultask.pdf Evolutionary multi-task learning for modular training of feedforward neural networks]. In International Conference on Neural Information Processing (pp. 37-46). Springer, Cham.</ref><ref>Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). [http://papers.nips.cc/paper/5347-how-transferable-are-features-in-deep-n%E2%80%A6 How transferable are features in deep neural networks?] In Advances in neural information processing systems (pp. 3320-3328).</ref> In addition, the concept of multi-tasking has led to advances in automatic [[hyperparameter optimization]] of machine learning models and [[ensemble learning]].<ref>{{cite book | doi=10.1109/CEC.2016.7748363 | chapter=Learning ensemble of decision trees through multifactorial genetic programming | title=2016 IEEE Congress on Evolutionary Computation (CEC) | year=2016 | last1=Wen | first1=Yu-Wei | last2=Ting | first2=Chuan-Kang | pages=5293–5300 | isbn=978-1-5090-0623-6 | s2cid=2617811 }}</ref><ref>{{cite book | doi=10.1145/3205455.3205638 | chapter=Evolutionary feature subspaces generation for ensemble classification | title=Proceedings of the Genetic and Evolutionary Computation Conference | year=2018 | last1=Zhang | first1=Boyu | last2=Qin | first2=A. K. | last3=Sellis | first3=Timos | pages=577–584 | isbn=978-1-4503-5618-3 | s2cid=49564862 }}</ref>
Applications have also been reported in cloud computing,<ref>{{cite book | doi=10.1007/978-3-319-94472-2_10 | chapter=An Evolutionary Multitasking Algorithm for Cloud Computing Service Composition | title=Services – SERVICES 2018 | series=Lecture Notes in Computer Science | year=2018 | last1=Bao | first1=Liang | last2=Qi | first2=Yutao | last3=Shen | first3=Mengqing | last4=Bu | first4=Xiaoxuan | last5=Yu | first5=Jusheng | last6=Li | first6=Qian | last7=Chen | first7=Ping | volume=10975 | pages=130–144 | isbn=978-3-319-94471-5 }}</ref> with future developments geared towards cloud-based on-demand optimization services that can cater to multiple customers simultaneously.<ref name=mfo/><ref>Tang, J., Chen, Y., Deng, Z., Xiang, Y., & Joy, C. P. (2018). [https://www.ijcai.org/proceedings/2018/0538.pdf A Group-based Approach to Improve Multifactorial Evolutionary Algorithm]. In IJCAI (pp. 3870-3876).</ref> Recent work has additionally shown applications in chemistry.<ref>{{citation |mode=cs1 |doi=10.26434/chemrxiv.13250216.v2 |title=Multi-task Bayesian Optimization of Chemical Reactions |work=chemRxiv |date=2021 |last1=Felton |first1=Kobi |last2=Wigh |first2=Daniel |last3=Lapkin |first3=Alexei|doi-access=free }}</ref> In addition, some recent works have applied multi-task optimization algorithms in industrial manufacturing.
== Mathematics ==
==Software package==
A Matlab package called Multi-Task Learning via StructurAl Regularization (MALSAR) <ref>Zhou, J., Chen, J. and Ye, J. MALSAR: Multi-tAsk Learning via StructurAl Regularization. Arizona State University, 2012. http://www.public.asu.edu/~jye02/Software/MALSAR. [http://www.public.asu.edu/~jye02/Software/MALSAR/Manual.pdf On-line manual]</ref> implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning,<ref>Evgeniou, T., & Pontil, M. (2004). [https://web.archive.org/web/20171212193041/https://pdfs.semanticscholar.org/1ea1/91c70559d21be93a4d128f95943e80e1b4ff.pdf Regularized multi–task learning]. Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 109–117).</ref><ref>{{cite journal | last1 = Evgeniou | first1 = T. | last2 = Micchelli | first2 = C. | last3 = Pontil | first3 = M. | year = 2005 | title = Learning multiple tasks with kernel methods | url = http://jmlr.org/papers/volume6/evgeniou05a/evgeniou05a.pdf | journal = Journal of Machine Learning Research | volume = 6 | page = 615 }}</ref> Multi-Task Learning with Joint Feature Selection,<ref>{{cite journal | last1 = Argyriou | first1 = A. | last2 = Evgeniou | first2 = T. | last3 = Pontil | first3 = M. | year = 2008a | title = Convex multi-task feature learning | journal = Machine Learning | volume = 73 | issue = 3| pages = 243–272 | doi=10.1007/s10994-007-5040-8| doi-access = free }}</ref> Robust Multi-Task Feature Learning,<ref>Chen, J., Zhou, J., & Ye, J. (2011). [https://www.academia.edu/download/44101186/Integrating_low-rank_and_group-sparse_st20160325-15067-1mftmbg.pdf Integrating low-rank and group-sparse structures for robust multi-task learning]{{dead link|date=July 2022|bot=medic}}{{cbignore|bot=medic}}. Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining.</ref> Trace-Norm Regularized Multi-Task Learning,<ref>Ji, S., & Ye, J. (2009). [http://www.machinelearning.org/archive/icml2009/papers/151.pdf An accelerated gradient method for trace norm minimization]. Proceedings of the 26th Annual International Conference on Machine Learning (pp. 457–464).</ref> Alternating Structural Optimization,<ref>{{cite journal | last1 = Ando | first1 = R. | last2 = Zhang | first2 = T. | year = 2005 | title = A framework for learning predictive structures from multiple tasks and unlabeled data | url = http://www.jmlr.org/papers/volume6/ando05a/ando05a.pdf | journal = The Journal of Machine Learning Research | volume = 6 | pages = 1817–1853 }}</ref><ref>Chen, J., Tang, L., Liu, J., & Ye, J. (2009). [http://leitang.net/papers/ICML09_CASO.pdf A convex formulation for learning shared structures from multiple tasks]. Proceedings of the 26th Annual International Conference on Machine Learning (pp. 137–144).</ref> Incoherent Low-Rank and Sparse Learning,<ref>Chen, J., Liu, J., & Ye, J. (2010). [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3783291/ Learning incoherent sparse and low-rank patterns from multiple tasks]. Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 1179–1188).</ref> Robust Low-Rank Multi-Task Learning, Clustered Multi-Task Learning,<ref>Jacob, L., Bach, F., & Vert, J. (2008). [https://hal-ensmp.archives-ouvertes.fr/docs/00/32/05/73/PDF/cmultitask.pdf Clustered multi-task learning: A convex formulation]. Advances in Neural Information Processing Systems, 2008</ref><ref>Zhou, J., Chen, J., & Ye, J. (2011). 
[http://papers.nips.cc/paper/4292-clustered-multi-task-learning-via-alternating-structure-optimization.pdf Clustered multi-task learning via alternating structure optimization]. Advances in Neural Information Processing Systems.</ref> Multi-Task Learning with Graph Structures.
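As an illustration of the first of these formulations, mean-regularized multi-task learning penalizes each task's weight vector for deviating from the mean weight vector over all tasks. The following minimal NumPy sketch implements that objective with plain gradient steps; it does not use the MALSAR toolbox, and all data and parameter values are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

# Toy data: three related regression tasks whose true weight vectors differ only slightly.
w_true = rng.normal(size=5)
tasks = []
for _ in range(3):
    X = rng.normal(size=(30, 5))
    y = X @ (w_true + 0.1 * rng.normal(size=5)) + 0.05 * rng.normal(size=30)
    tasks.append((X, y))

lam, lr = 1.0, 0.01
W = np.zeros((len(tasks), 5))                  # one weight vector per task

for _ in range(2000):
    w_bar = W.mean(axis=0)                     # mean weight vector across all tasks
    for t, (X, y) in enumerate(tasks):
        err = X @ W[t] - y
        # Squared-error gradient plus a penalty pulling this task toward the task mean;
        # the mean is treated as fixed within the step, a common simplification.
        grad = 2.0 * X.T @ err / len(y) + 2.0 * lam * (W[t] - w_bar)
        W[t] -= lr * grad
</syntaxhighlight>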
== Literature ==