TY - JOUR
T1 - Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning
AU - Yousefi, Niloofar
AU - Lei, Yunwen
AU - Kloft, Marius
AU - Mollaghasemi, Mansooreh
AU - Anagnostopoulos, Georgios C.
N1 - Funding Information:
NY acknowledges financial support from the National Science Foundation (NSF) grants No. 1161228 and No. 1200566. MK acknowledges support from the German Research Foundation (DFG) award KL 2698/2-1 and from the Federal Ministry of Science and Education (BMBF) award 031B0187B. YL acknowledges support from the Science and Technology Innovation Committee Foundation of Shenzhen (Grant No. ZDSYS201703031748284). Finally, GCA acknowledges partial support from the NSF under Grant No. 1560345. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
Publisher Copyright:
© 2018 Niloofar Yousefi, Yunwen Lei, Marius Kloft, Mansooreh Mollaghasemi and Georgios C. Anagnostopoulos
PY - 2018/8
Y1 - 2018/8
N2 - We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), with which we establish sharp excess risk bounds for MTL in terms of the Local Rademacher Complexity (LRC). We also give a new bound on the LRC for any norm-regularized hypothesis class, which applies not only to MTL, but also to the standard Single-Task Learning (STL) setting. By combining both results, one can easily derive fast-rate bounds on the excess risk for many prominent MTL methods, including, as we demonstrate, Schatten-norm, group-norm, and graph-regularized MTL. The derived bounds reflect a relationship akin to a conservation law of asymptotic convergence rates. When compared to the rates obtained via a traditional, global Rademacher analysis, this very relationship allows for trading off slower rates with respect to the number of tasks for faster rates with respect to the number of available samples per task.
AB - We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), with which we establish sharp excess risk bounds for MTL in terms of the Local Rademacher Complexity (LRC). We also give a new bound on the LRC for any norm-regularized hypothesis class, which applies not only to MTL, but also to the standard Single-Task Learning (STL) setting. By combining both results, one can easily derive fast-rate bounds on the excess risk for many prominent MTL methods, including, as we demonstrate, Schatten-norm, group-norm, and graph-regularized MTL. The derived bounds reflect a relationship akin to a conservation law of asymptotic convergence rates. When compared to the rates obtained via a traditional, global Rademacher analysis, this very relationship allows for trading off slower rates with respect to the number of tasks for faster rates with respect to the number of available samples per task.
KW - Excess Risk Bounds
KW - Local Rademacher Complexity
KW - Multi-task Learning
UR - http://www.scopus.com/inward/record.url?scp=85053376688&partnerID=8YFLogxK
M3 - Journal article
AN - SCOPUS:85053376688
SN - 1532-4435
VL - 19
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
ER -