Multi-task learning

=== Group online adaptive learning ===
Traditionally, multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed group online adaptive learning (GOAL).<ref>Zweig, A. & Chechik, G. Group online adaptive learning. Machine Learning, DOI 10.1007/s10994-017-5661-5, August 2017. http://rdcu.be/uFSv</ref> Sharing information can be particularly useful when learners operate in continuously changing environments, because a learner can draw on the previous experience of another learner to adapt quickly to its new environment. Such group-adaptive learning has numerous applications, from predicting financial time series, through content recommendation systems, to visual understanding for adaptive autonomous agents.
 
=== Correlated multi-task learning and feature selection ===
In multi-task learning, different tasks can be correlated. The response variables from the different tasks may mix discrete and continuous variables. Each predictor object may be measured differently depending on the task, i.e., depending on the experiment. The goal is to select which predictor objects affect any of the task responses, where the number of such informative predictor objects, or features, is allowed to grow to infinity with the sample size. Each way of measuring a predictor object, i.e., each task, has its own marginal likelihood. To avoid directly specifying the joint distribution of the correlated multi-task responses, a pseudolikelihood combining the marginal likelihoods can be used for estimation and inference. Regularized estimation based on group penalization, using the group LASSO, group SCAD, or group adaptive LASSO, can then be performed to select the predictors that are important across multiple tasks.<ref>{{Cite journal|last=Gao|first=Xin|last2=Carroll|first2=Raymond J.|date=2017-05-09|title=Data integration with high dimensionality|url=http://dx.doi.org/10.1093/biomet/asx023|journal=Biometrika|volume=104|issue=2|pages=251–272|doi=10.1093/biomet/asx023|issn=0006-3444}}</ref>
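The key mechanism of group penalization is that each predictor's coefficients across all tasks form one group, which is kept or discarded jointly. A minimal Python sketch (not the cited authors' implementation) of the group-LASSO proximal step illustrates this: rows of the coefficient matrix whose joint norm is too small are zeroed across every task at once.

```python
import numpy as np

def group_soft_threshold(B, lam):
    """Group-LASSO proximal step. B is a (p, K) coefficient matrix:
    row j collects predictor j's coefficients across the K tasks.
    Rows whose Euclidean norm falls below lam are zeroed jointly,
    removing that predictor from every task simultaneously."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)   # per-predictor group norms
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return B * scale

# Toy example: 3 predictors, 2 tasks; the uniformly weak predictor (row 1)
# is dropped from both tasks by a single thresholding step.
B = np.array([[ 2.0,  1.5 ],
              [ 0.1, -0.05],
              [-1.0,  2.0 ]])
B_sel = group_soft_threshold(B, lam=0.5)
selected = np.flatnonzero(np.linalg.norm(B_sel, axis=1) > 0)  # → predictors 0 and 2
```

In a full estimator this step alternates with gradient updates on the (pseudo)likelihood; the SCAD and adaptive-LASSO variants change only the thresholding rule.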
 
== Mathematics ==
* Clustered Multi-Task Learning<ref>Jacob, L., Bach, F., & Vert, J. (2008). [https://hal-ensmp.archives-ouvertes.fr/docs/00/32/05/73/PDF/cmultitask.pdf Clustered multi-task learning: A convex formulation]. Advances in Neural Information Processing Systems, 2008</ref><ref>Zhou, J., Chen, J., & Ye, J. (2011). [http://papers.nips.cc/paper/4292-clustered-multi-task-learning-via-alternating-structure-optimization.pdf Clustered multi-task learning via alternating structure optimization]. Advances in Neural Information Processing Systems.</ref>
* Multi-Task Learning with Graph Structures
The FusionLearn R package for correlated multi-task learning<ref>{{Cite journal|last=Gao|first=Xin|last2=Zhong|first2=Yuan|date=2019-03-27|title=FusionLearn: a biomarker selection algorithm on cross-platform data|url=http://dx.doi.org/10.1093/bioinformatics/btz223|journal=Bioinformatics|volume=35|issue=21|pages=4465–4468|doi=10.1093/bioinformatics/btz223|issn=1367-4803}}</ref><ref>{{Citation|last=Gao|first=Xin|title=FusionLearn: Fusion Learning|date=2019-03-09|url=https://cran.r-project.org/package=FusionLearn|access-date=2020-12-07|last2=Zhong|first2=Yuan|last3=Carroll|first3=Raymond J.}}</ref> implements the following procedures:
 
* Pseudolikelihood-based estimation
* Group regularization via the group LASSO, group SCAD, or group adaptive LASSO
* Pseudolikelihood-based Bayesian information criterion for correlated multi-task feature selection
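A pseudolikelihood-based BIC replaces the joint log-likelihood in the usual BIC with the sum of per-task marginal log-likelihoods, so no joint distribution over tasks is needed. The following Python sketch shows the criterion's form; it is an illustration under that definition, not the FusionLearn package's actual interface, and the candidate log-likelihood values are made up for the example.

```python
import numpy as np

def pseudo_bic(marginal_loglikes, df, n):
    """BIC-type criterion on a pseudolikelihood: sum the per-task
    marginal log-likelihoods (a composite likelihood), then penalize
    by df, the number of nonzero coefficients, scaled by log(n)."""
    return -2.0 * sum(marginal_loglikes) + df * np.log(n)

# Compare two candidate fits on n = 100 samples across 2 tasks
# (log-likelihood values are illustrative, not from real data):
n = 100
crit_sparse = pseudo_bic([-120.0, -95.0], df=4,  n=n)  # fewer selected predictors
crit_dense  = pseudo_bic([-118.0, -94.0], df=12, n=n)  # slightly better fit, more predictors
best = "sparse" if crit_sparse < crit_dense else "dense"
```

The model (or tuning-parameter value) minimizing the criterion is chosen; here the small gain in fit does not justify the extra coefficients.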
 
==See also==