TY - JOUR
T1 - PEFed
T2 - Enhancing privacy and efficiency in federated learning via removable perturbation and decentralized encryption
AU - Guan, Menghong
AU - Bao, Haiyong
AU - Wang, Jing
AU - Xing, Lu
AU - Dai, Hong-Ning
N1 - Publisher Copyright:
© 2025 Elsevier B.V.
PY - 2025/4/10
Y1 - 2025/4/10
N2 - Federated Learning (FL) is a distributed machine-learning paradigm that combines models trained on clients' local data without sharing the sensitive data itself. While various privacy-preserving FL methods exist, they often struggle to balance accuracy and efficiency, and they may not effectively defend against inference attacks from both clients and the server. To address these challenges, we present PEFed, a framework that enhances privacy and efficiency in federated learning via removable perturbation and decentralized encryption. PEFed achieves privacy-preserving, accurate, widely applicable, and efficient FL with secure aggregation. Specifically, we propose an improved removable perturbation scheme that is widely applicable to gradient preservation: by incorporating removable perturbation vectors into the global gradient, we preserve gradient privacy while maintaining high accuracy. Additionally, we integrate decentralized multi-client functional encryption (DMCFE) into secure aggregation for model updates. To enhance efficiency, we carefully design parallelized algorithms that leverage single instruction, multiple data (SIMD) within DMCFE. Our framework also supports a wide range of loss functions, making it highly versatile across applications. We evaluated PEFed through extensive experiments on four diverse datasets covering classification and regression tasks, comparing it with five state-of-the-art FL approaches. Our findings demonstrate that PEFed achieves superior accuracy and efficiency while preserving the privacy of sensitive data. For instance, it reaches a classification accuracy of 94.8% on medical datasets with the ResNet32 model. Moreover, PEFed reduces computational costs by 67% compared with secure multi-party computation (SMC)-based methods in classification tasks. In addition, it ensures robust privacy preservation in regression tasks by limiting the success rate of data reconstruction attacks to 8.2%.
AB - Federated Learning (FL) is a distributed machine-learning paradigm that combines models trained on clients' local data without sharing the sensitive data itself. While various privacy-preserving FL methods exist, they often struggle to balance accuracy and efficiency, and they may not effectively defend against inference attacks from both clients and the server. To address these challenges, we present PEFed, a framework that enhances privacy and efficiency in federated learning via removable perturbation and decentralized encryption. PEFed achieves privacy-preserving, accurate, widely applicable, and efficient FL with secure aggregation. Specifically, we propose an improved removable perturbation scheme that is widely applicable to gradient preservation: by incorporating removable perturbation vectors into the global gradient, we preserve gradient privacy while maintaining high accuracy. Additionally, we integrate decentralized multi-client functional encryption (DMCFE) into secure aggregation for model updates. To enhance efficiency, we carefully design parallelized algorithms that leverage single instruction, multiple data (SIMD) within DMCFE. Our framework also supports a wide range of loss functions, making it highly versatile across applications. We evaluated PEFed through extensive experiments on four diverse datasets covering classification and regression tasks, comparing it with five state-of-the-art FL approaches. Our findings demonstrate that PEFed achieves superior accuracy and efficiency while preserving the privacy of sensitive data. For instance, it reaches a classification accuracy of 94.8% on medical datasets with the ResNet32 model. Moreover, PEFed reduces computational costs by 67% compared with secure multi-party computation (SMC)-based methods in classification tasks. In addition, it ensures robust privacy preservation in regression tasks by limiting the success rate of data reconstruction attacks to 8.2%.
KW - Federated learning
KW - Functional encryption
KW - Privacy-preservation
UR - http://www.scopus.com/inward/record.url?scp=105002578290&partnerID=8YFLogxK
U2 - 10.1016/j.inffus.2025.103187
DO - 10.1016/j.inffus.2025.103187
M3 - Journal article
SN - 1566-2535
VL - 122
JO - Information Fusion
JF - Information Fusion
M1 - 103187
ER -