TY - JOUR
T1 - Multi-label feature selection via asymmetric redundancy and variable precision dependency
AU - Qian, Wenbin
AU - Lu, Xiwen
AU - Dai, Shiming
AU - Huang, Jintao
N1 - Publisher Copyright:
© 2025 Elsevier B.V.
Funding Information:
This work is supported by the National Natural Science Foundation of China (No. 62366019 and No. 61966016), the Jiangxi Provincial Natural Science Foundation, China (No. 20242BAB23014), and the National Key Research and Development Program of China (No. 2024YFF1307305).
PY - 2025/12
Y1 - 2025/12
N2 - Multi-label feature selection is an effective data preprocessing technique that can significantly mitigate the challenges posed by high-dimensional features in multi-label learning. However, the exploration of feature-label correlations has often been strictly limited to inclusion relationships, ignoring the fusion of local and global label information. Moreover, most previous work has assumed that redundancy between features is fully symmetric, overlooking the valuable insights that asymmetric redundancy provides for the design of feature selection. To address these issues, this paper proposes a novel multi-label feature selection method based on asymmetric redundancy and variable precision dependency. Specifically, it constructs a conditional probability model to reflect local label semantics and incorporates this model into the construction of the variable precision dependency through a fusion indicator. Subsequently, the optimistic and pessimistic information overlap between features is analyzed, allowing variable precision granularity to capture asymmetric redundancy between features. Building upon this, an information fusion method is proposed to quantify the pessimistic asymmetric redundancy between features by inducing knowledge granularity in the feature space. Finally, a comprehensive evaluation metric, Maximum Correlation-Maximum Discrimination-Minimum Redundancy (MCDR), is proposed to evaluate the significance of features. Experimental results on fifteen multi-label benchmark datasets indicate that the proposed method outperforms seven other state-of-the-art methods.
AB - Multi-label feature selection is an effective data preprocessing technique that can significantly mitigate the challenges posed by high-dimensional features in multi-label learning. However, the exploration of feature-label correlations has often been strictly limited to inclusion relationships, ignoring the fusion of local and global label information. Moreover, most previous work has assumed that redundancy between features is fully symmetric, overlooking the valuable insights that asymmetric redundancy provides for the design of feature selection. To address these issues, this paper proposes a novel multi-label feature selection method based on asymmetric redundancy and variable precision dependency. Specifically, it constructs a conditional probability model to reflect local label semantics and incorporates this model into the construction of the variable precision dependency through a fusion indicator. Subsequently, the optimistic and pessimistic information overlap between features is analyzed, allowing variable precision granularity to capture asymmetric redundancy between features. Building upon this, an information fusion method is proposed to quantify the pessimistic asymmetric redundancy between features by inducing knowledge granularity in the feature space. Finally, a comprehensive evaluation metric, Maximum Correlation-Maximum Discrimination-Minimum Redundancy (MCDR), is proposed to evaluate the significance of features. Experimental results on fifteen multi-label benchmark datasets indicate that the proposed method outperforms seven other state-of-the-art methods.
KW - Feature selection
KW - Granular computing
KW - Multi-label learning
KW - Asymmetric redundancy
KW - Variable precision dependency
UR - http://www.scopus.com/inward/record.url?scp=105016317712&partnerID=8YFLogxK
U2 - 10.1016/j.asoc.2025.113852
DO - 10.1016/j.asoc.2025.113852
M3 - Journal article
AN - SCOPUS:105016317712
SN - 1568-4946
VL - 185, Part A
JO - Applied Soft Computing
JF - Applied Soft Computing
M1 - 113852
ER -