{{short description|Solving multiple machine learning tasks at the same time}}
'''Multi-task learning''' (MTL) is a subfield of [[machine learning]] in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately.<ref>Baxter, J. (2000). A model of inductive bias learning. ''Journal of Artificial Intelligence Research'' 12:149–198, [http://www-2.cs.cmu.edu/afs/cs/project/jair/pub/volume12/baxter00a.pdf On-line paper]</ref><ref>[[Sebastian Thrun|Thrun, S.]] (1996). Is learning the n-th thing any easier than learning the first? In Advances in Neural Information Processing Systems 8, pp. 640–646. MIT Press. [http://citeseer.ist.psu.edu/thrun96is.html Paper at Citeseer]</ref>
Inherently, multi-task learning is a [[multi-objective optimization]] problem having [[trade-off]]s between different tasks.<ref>Sener, O., & Koltun, V. (2018). Multi-Task Learning as Multi-Objective Optimization. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018). https://proceedings.neurips.cc/paper/2018/hash/432aca3a1e345e339f35a30c8f65edce-Abstract.html</ref>
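The simplest way to see this trade-off is to scalarize the multi-objective problem into a single weighted loss over a shared representation. The following minimal sketch (all data, shapes, weights, and the choice of a weighted-sum objective are illustrative, not taken from any particular cited method) trains two linear regression heads on top of one shared linear feature map by gradient descent on <code>alpha·L1 + (1−alpha)·L2</code>:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two regression tasks over the same 5-dimensional inputs.
X = rng.normal(size=(100, 5))
y1 = X @ rng.normal(size=5)              # task 1 targets
y2 = X @ rng.normal(size=5)              # task 2 targets

W_shared = rng.normal(size=(5, 3)) * 0.1  # shared representation (trunk)
w1 = np.zeros(3)                          # task-specific head 1
w2 = np.zeros(3)                          # task-specific head 2

alpha, lr = 0.5, 0.01                     # trade-off weight and step size
for _ in range(200):
    H = X @ W_shared                      # shared features used by both tasks
    e1 = H @ w1 - y1
    e2 = H @ w2 - y2
    # Gradients of the scalarized objective alpha*L1 + (1-alpha)*L2
    g_w1 = 2 * alpha * H.T @ e1 / len(X)
    g_w2 = 2 * (1 - alpha) * H.T @ e2 / len(X)
    g_shared = 2 * (alpha * X.T @ np.outer(e1, w1) +
                    (1 - alpha) * X.T @ np.outer(e2, w2)) / len(X)
    w1 -= lr * g_w1
    w2 -= lr * g_w2
    W_shared -= lr * g_shared             # both tasks pull on the shared trunk

H = X @ W_shared
l1 = np.mean((H @ w1 - y1) ** 2)
l2 = np.mean((H @ w2 - y2) ** 2)
```

Changing <code>alpha</code> shifts the solution along the trade-off curve: weight on one task's loss is bought at the expense of the other, which is what makes the problem genuinely multi-objective rather than a single optimization.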
==== Evolutionary multi-tasking ====
'''Evolutionary multi-tasking''' has been explored as a means of exploiting the [[implicit parallelism]] of population-based search algorithms to simultaneously progress multiple distinct optimization tasks. By mapping all tasks to a unified search space, the evolving population of candidate solutions can harness the hidden relationships between them through continuous genetic transfer. This is induced when solutions associated with different tasks cross over.<ref name=mfo/><ref name=cognitive>Ong, Y. S., & Gupta, A. (2016). [http://www.cil.ntu.edu.sg/mfo/downloads/MultitaskOptimization_manuscript.pdf Evolutionary multitasking: a computer science view of cognitive multitasking]. Cognitive Computation, 8(2), 125–142.</ref> Recently, modes of knowledge transfer that are different from direct solution [[Crossover (genetic algorithm)|crossover]] have been explored.<ref>{{cite journal | doi=10.1109/TCYB.2018.2845361 | title=Evolutionary Multitasking via Explicit Autoencoding | year=2019 | last1=Feng | first1=Liang | last2=Zhou | first2=Lei | last3=Zhong | first3=Jinghui | last4=Gupta | first4=Abhishek | last5=Ong | first5=Yew-Soon | last6=Tan | first6=Kay-Chen | last7=Qin | first7=A. K. | journal=IEEE Transactions on Cybernetics | volume=49 | issue=9 | pages=3457–3470 | pmid=29994415 | s2cid=51613697 }}</ref><ref>{{Cite journal |last1=Jiang |first1=Yi |last2=Zhan |first2=Zhi-Hui |last3=Tan |first3=Kay Chen |last4=Zhang |first4=Jun |date=January 2024 |title=Block-Level Knowledge Transfer for Evolutionary Multitask Optimization }}</ref>
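The mechanism described above can be sketched in a few lines: individuals from different tasks live in one unified search space, each carries a "skill factor" naming the task it is evaluated on, and uniform crossover between parents with different skill factors transfers genetic material across tasks. The two toy objectives, population sizes, and operators below are illustrative stand-ins, not the exact algorithm of any cited work:

```python
import random

random.seed(1)

# Two related toy minimization tasks over a unified space [0, 1]^D.
D = 6
def task1(x):                        # sphere function, optimum at 0.5
    return sum((xi - 0.5) ** 2 for xi in x)
def task2(x):                        # shifted sphere, optimum at 0.7
    return sum((xi - 0.7) ** 2 for xi in x)
TASKS = [task1, task2]

POP, GENS = 40, 60
# Each individual is (genome, skill_factor): the task it is evaluated on.
pop = [([random.random() for _ in range(D)], i % 2) for i in range(POP)]

def crossover(a, b):
    # Uniform crossover in the unified space; parents with different
    # skill factors induce implicit genetic transfer between tasks.
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

for _ in range(GENS):
    offspring = []
    for _ in range(POP):
        (xa, ta), (xb, tb) = random.sample(pop, 2)
        child = crossover(xa, xb)
        child = [min(1.0, max(0.0, xi + random.gauss(0, 0.02))) for xi in child]
        offspring.append((child, random.choice([ta, tb])))  # inherit a parent's task
    merged = pop + offspring
    # Elitist selection within each task: keep the best half per skill factor.
    pop = []
    for t in (0, 1):
        cand = [ind for ind in merged if ind[1] == t]
        cand.sort(key=lambda ind: TASKS[t](ind[0]))
        pop.extend(cand[:POP // 2])

best1 = min(TASKS[0](x) for x, t in pop if t == 0)
best2 = min(TASKS[1](x) for x, t in pop if t == 1)
```

Because the two optima lie close together in the unified space, a coordinate inherited from a good solution of one task tends to be useful for the other, which is the hidden relationship that evolutionary multi-tasking exploits; for unrelated tasks such transfer can instead be neutral or harmful.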
==== Game-theoretic optimization ====
Algorithms for multi-task optimization span a wide array of real-world applications. Recent studies highlight the potential for speed-ups in the optimization of engineering design parameters by conducting related designs jointly in a multi-task manner.<ref name=cognitive/> In [[machine learning]], the transfer of optimized features across related data sets can enhance the efficiency of the training process as well as improve the generalization capability of learned models.<ref>Chandra, R., Gupta, A., Ong, Y. S., & Goh, C. K. (2016, October). [http://www.cil.ntu.edu.sg/mfo/downloads/cvmultask.pdf Evolutionary multi-task learning for modular training of feedforward neural networks]. In International Conference on Neural Information Processing (pp. 37-46). Springer, Cham.</ref><ref>Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). [http://papers.nips.cc/paper/5347-how-transferable-are-features-in-deep-n%E2%80%A6 How transferable are features in deep neural networks?] In Advances in neural information processing systems (pp. 3320-3328).</ref> In addition, the concept of multi-tasking has led to advances in automatic [[hyperparameter optimization]] of machine learning models and [[ensemble learning]].<ref>{{cite book | doi=10.1109/CEC.2016.7748363 | chapter=Learning ensemble of decision trees through multifactorial genetic programming | title=2016 IEEE Congress on Evolutionary Computation (CEC) | year=2016 | last1=Wen | first1=Yu-Wei | last2=Ting | first2=Chuan-Kang | pages=5293–5300 | isbn=978-1-5090-0623-6 | s2cid=2617811 }}</ref><ref>{{cite book | doi=10.1145/3205455.3205638 | chapter=Evolutionary feature subspaces generation for ensemble classification | title=Proceedings of the Genetic and Evolutionary Computation Conference | year=2018 | last1=Zhang | first1=Boyu | last2=Qin | first2=A. K. | last3=Sellis | first3=Timos | pages=577–584 | isbn=978-1-4503-5618-3 | s2cid=49564862 }}</ref>
Applications have also been reported in cloud computing,<ref>{{cite book | doi=10.1007/978-3-319-94472-2_10 | chapter=An Evolutionary Multitasking Algorithm for Cloud Computing Service Composition | title=Services – SERVICES 2018 | series=Lecture Notes in Computer Science | year=2018 | last1=Bao | first1=Liang | last2=Qi | first2=Yutao | last3=Shen | first3=Mengqing | last4=Bu | first4=Xiaoxuan | last5=Yu | first5=Jusheng | last6=Li | first6=Qian | last7=Chen | first7=Ping | volume=10975 | pages=130–144 | isbn=978-3-319-94471-5 }}</ref> with future developments geared towards cloud-based on-demand optimization services that can cater to multiple customers simultaneously.<ref name=mfo/><ref>Tang, J., Chen, Y., Deng, Z., Xiang, Y., & Joy, C. P. (2018). [https://www.ijcai.org/proceedings/2018/0538.pdf A Group-based Approach to Improve Multifactorial Evolutionary Algorithm]. In IJCAI (pp. 3870–3876).</ref> Recent work has additionally shown applications in chemistry.<ref>{{citation |mode=cs1 |doi=10.26434/chemrxiv.13250216.v2 |title=Multi-task Bayesian Optimization of Chemical Reactions |work=chemRxiv |date=2021 |last1=Felton |first1=Kobi |last2=Wigh |first2=Daniel |last3=Lapkin |first3=Alexei|doi-access=free }}</ref> In addition, some recent works have applied multi-task optimization algorithms in industrial manufacturing.<ref>{{Cite journal |last1=Jiang |first1=Yi |last2=Zhan |first2=Zhi-Hui |last3=Tan |first3=Kay Chen |last4=Zhang |first4=Jun |date=October 2023 |title=A Bi-Objective Knowledge Transfer Framework for Evolutionary Many-Task Optimization |journal=IEEE Transactions on Evolutionary Computation |volume=27 |issue=5 |pages=1514–1528 |doi=10.1109/TEVC.2022.3210783 |issn=1089-778X|doi-access=free }}</ref><ref>{{Cite journal |last1=Jiang |first1=Yi |last2=Zhan |first2=Zhi-Hui |last3=Tan |first3=Kay Chen |last4=Kwong |first4=Sam |last5=Zhang |first5=Jun |date=2024 |title=Knowledge Structure Preserving-Based Evolutionary Many-Task Optimization |journal=IEEE Transactions on Evolutionary Computation |volume=29 |issue=2 |pages=287–301 |doi=10.1109/TEVC.2024.3355781 |issn=1089-778X|doi-access=free }}</ref>
== Mathematics ==