TY - JOUR
T1 - Broad Multitask Learning System With Group Sparse Regularization
AU - Huang, Jintao
AU - Chen, Chuangquan
AU - Vong, Chi-Man
AU - Cheung, Yiu-ming
N1 - This work was supported in part by the National Natural Science Foundation of China under Grant 62201402; in part by the Shenzhen Science and Technology Innovation Committee under Grant SGDX20220530111001006; in part by the NSFC/Research Grants Council (RGC) Joint Research Scheme under Grant N_HKBU214/21; in part by the General Research Fund of RGC under Grant 12201321, Grant 12202622, and Grant 1220323; in part by the RGC Senior Research Fellow Scheme under Grant SRFS2324-2S02; in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515011978; and in part by the Hong Kong and Macau Joint Research and Development Fund of Wuyi University under Grant 2021WGALH19.
PY - 2024/7/1
Y1 - 2024/7/1
N2 - The broad learning system (BLS), with its lightweight structure, incremental extensibility, and strong generalization capability, has been applied successfully across many domains. Despite these advantages, BLS struggles in multitask learning (MTL) scenarios: its limited ability to unravel multiple complex tasks simultaneously means that existing BLS models cannot adequately capture and leverage the essential information shared across tasks, which reduces their effectiveness and efficiency. To address these limitations, we propose an innovative MTL framework explicitly designed for BLS, named the broad multitask learning system with related task-wise group sparse regularization (BMtLS-RG). This framework combines a task-related BLS learning mechanism with a group sparse optimization strategy, significantly boosting the ability of BLS to generalize in MTL environments. The task-related learning component harnesses task correlations to enable shared learning and efficient parameter optimization, while the group sparse optimization strategy minimizes the effects of irrelevant or noisy data, enhancing the robustness and stability of BLS in complex learning scenarios. To address the varied requirements of MTL problems, we present two additional variants of BMtLS-RG: BMtLS-RG with shared parameters of the feature-mapped nodes (BMtLS-RGf), which integrates a shared feature-mapping layer, and BMtLS-RGf with shared enhancement nodes (BMtLS-RGfe), which further adds an enhancement-node layer atop the shared feature-mapping structure. These variants provide solutions tailored to the diverse landscape of MTL problems. We compared BMtLS-RG with state-of-the-art (SOTA) MTL and BLS algorithms in comprehensive experiments on multiple practical MTL and UCI datasets. BMtLS-RG outperformed the SOTA methods in 97.81% of classification tasks and achieved the best performance in 96.00% of regression tasks, demonstrating its superior accuracy and robustness. Furthermore, BMtLS-RG exhibited satisfactory training efficiency, running 8.04–42.85 times faster than existing MTL algorithms.
AB - The broad learning system (BLS), with its lightweight structure, incremental extensibility, and strong generalization capability, has been applied successfully across many domains. Despite these advantages, BLS struggles in multitask learning (MTL) scenarios: its limited ability to unravel multiple complex tasks simultaneously means that existing BLS models cannot adequately capture and leverage the essential information shared across tasks, which reduces their effectiveness and efficiency. To address these limitations, we propose an innovative MTL framework explicitly designed for BLS, named the broad multitask learning system with related task-wise group sparse regularization (BMtLS-RG). This framework combines a task-related BLS learning mechanism with a group sparse optimization strategy, significantly boosting the ability of BLS to generalize in MTL environments. The task-related learning component harnesses task correlations to enable shared learning and efficient parameter optimization, while the group sparse optimization strategy minimizes the effects of irrelevant or noisy data, enhancing the robustness and stability of BLS in complex learning scenarios. To address the varied requirements of MTL problems, we present two additional variants of BMtLS-RG: BMtLS-RG with shared parameters of the feature-mapped nodes (BMtLS-RGf), which integrates a shared feature-mapping layer, and BMtLS-RGf with shared enhancement nodes (BMtLS-RGfe), which further adds an enhancement-node layer atop the shared feature-mapping structure. These variants provide solutions tailored to the diverse landscape of MTL problems. We compared BMtLS-RG with state-of-the-art (SOTA) MTL and BLS algorithms in comprehensive experiments on multiple practical MTL and UCI datasets. BMtLS-RG outperformed the SOTA methods in 97.81% of classification tasks and achieved the best performance in 96.00% of regression tasks, demonstrating its superior accuracy and robustness. Furthermore, BMtLS-RG exhibited satisfactory training efficiency, running 8.04–42.85 times faster than existing MTL algorithms.
KW - Broad learning system (BLS)
KW - group sparse regularization
KW - multitask learning (MTL)
KW - task relation
U2 - 10.1109/TNNLS.2024.3416191
DO - 10.1109/TNNLS.2024.3416191
M3 - Journal article
SN - 2162-237X
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
ER -