TY - JOUR
T1 - SAMFL: Secure Aggregation Mechanism for Federated Learning with Byzantine-robustness by functional encryption
T2 - Journal of Systems Architecture
AU - Guan, Menghong
AU - Bao, Haiyong
AU - Li, Zhiqiang
AU - Pan, Hao
AU - Huang, Cheng
AU - Dai, Hong Ning
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 62072404; in part by the Natural Science Foundation of Shanghai Municipality under Grant 23ZR1417700.
Publisher Copyright:
© 2024 Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
PY - 2024/12
Y1 - 2024/12
AB - Federated learning (FL) enables collaborative model training without sharing private data, thereby potentially meeting the growing demand for data privacy protection. Despite this potential, FL faces challenges in achieving privacy preservation and Byzantine robustness when handling sensitive data. To address these challenges, we present a novel Secure Aggregation Mechanism for Federated Learning with Byzantine-Robustness by Functional Encryption (SAMFL). We design a new dual-decryption multi-input functional encryption (DD-MIFE) scheme, which enables efficient computation of cosine similarities and aggregation of encrypted gradients from a single ciphertext. The scheme supports dual decryption, producing distinct results under different keys, while maintaining high efficiency. We further propose TF-Init, which integrates DD-MIFE with Truth Discovery (TD) to eliminate the reliance on a root dataset. Additionally, we devise a secure cosine similarity calculation aggregation protocol (SC2AP) based on DD-MIFE, ensuring privacy-preserving and Byzantine-robust secure aggregation for FL. To improve FL efficiency, we employ single instruction multiple data (SIMD) techniques to parallelize encryption and decryption. To preserve accuracy, we incorporate differential privacy (DP) with selective clipping of model layers within the FL framework. Finally, we integrate TF-Init, SC2AP, SIMD, and DP to construct SAMFL. Extensive experiments demonstrate that SAMFL successfully defends against both inference attacks and poisoning attacks, while improving efficiency and accuracy compared to existing methods. SAMFL thus provides a comprehensive, integrated FL solution offering efficiency, accuracy, privacy preservation, and robustness.
KW - Byzantine-robustness
KW - Federated learning
KW - Functional encryption
KW - Privacy-preservation
UR - http://www.scopus.com/inward/record.url?scp=85209095130&partnerID=8YFLogxK
U2 - 10.1016/j.sysarc.2024.103304
DO - 10.1016/j.sysarc.2024.103304
M3 - Journal article
AN - SCOPUS:85209095130
SN - 1383-7621
VL - 157
JO - Journal of Systems Architecture
JF - Journal of Systems Architecture
M1 - 103304
ER -
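
Appended after the record: a minimal plaintext sketch of the cosine-similarity-based Byzantine-robust aggregation idea that the abstract names. The reference gradient, the clipping of negative scores, and the weighting rule below are illustrative assumptions, not the paper's DD-MIFE/SC2AP protocol, which computes these quantities over encrypted gradients without revealing individual updates.

# Illustrative sketch only: plaintext cosine-similarity filtering of client
# gradients against a trusted reference direction (assumed available here;
# SAMFL's TF-Init removes the need for a root dataset).
import numpy as np

def robust_aggregate(client_grads, reference_grad):
    """Aggregate client gradients, down-weighting updates whose cosine
    similarity to the reference direction is non-positive."""
    ref_norm = np.linalg.norm(reference_grad)
    scores, normalized = [], []
    for g in client_grads:
        g_norm = np.linalg.norm(g)
        cos_sim = float(np.dot(g, reference_grad) / (g_norm * ref_norm + 1e-12))
        scores.append(max(cos_sim, 0.0))                      # clip negative similarities
        normalized.append(g * (ref_norm / (g_norm + 1e-12)))  # rescale to reference norm
    total = sum(scores)
    if total == 0.0:
        return reference_grad                                 # fall back if every client is rejected
    return sum(s * g for s, g in zip(scores, normalized)) / total

# Example: two honest clients and one sign-flipping (Byzantine) client.
ref = np.array([1.0, 1.0, 1.0])
grads = [np.array([0.9, 1.1, 1.0]),
         np.array([1.2, 0.8, 1.0]),
         np.array([-5.0, -5.0, -5.0])]                        # poisoned update
print(robust_aggregate(grads, ref))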