TY - JOUR
T1 - UEFL: Universal and Efficient Privacy-Preserving Federated Learning
AU - Li, Zhiqiang
AU - Bao, Haiyong
AU - Pan, Hao
AU - Guan, Menghong
AU - Huang, Cheng
AU - Dai, Hong-Ning
N1 - This work was supported in part by the National Natural Science Foundation of China under Grant 62072404 and in part by the Shanghai Natural Science Foundation under Grant 23ZR1417700.
Publisher copyright: © 2025 IEEE.
PY - 2025/1/6
Y1 - 2025/1/6
N2 - Federated Learning (FL) is a distributed machine learning framework that enables model training across multiple clients without requiring access to their local data. However, FL poses risks: curious clients may mount inference attacks (e.g., membership-inference attacks and model-inversion attacks) to extract sensitive information from other participants. Existing solutions typically fail to strike a good balance between performance and privacy, or are applicable only to specific FL scenarios. To address these challenges, we propose a universal and efficient privacy-preserving FL framework based on matrix theory. Specifically, we design the Improved Extended Hill Cryptosystem (IEHC), which efficiently encrypts model parameters while supporting secure evaluation of the ReLU function. To accommodate different training tasks, we design the Secure Loss Function Computation (SLFC) protocol, which computes the derivatives of various loss functions while preserving the data privacy of both client and server. We instantiate SLFC for three classic loss functions: MSE, Cross Entropy, and L1. Extensive experimental results demonstrate that our approach robustly defends against various inference attacks. Furthermore, model-training experiments conducted across various FL scenarios indicate that our method offers significant advantages on most metrics.
AB - Federated Learning (FL) is a distributed machine learning framework that enables model training across multiple clients without requiring access to their local data. However, FL poses risks: curious clients may mount inference attacks (e.g., membership-inference attacks and model-inversion attacks) to extract sensitive information from other participants. Existing solutions typically fail to strike a good balance between performance and privacy, or are applicable only to specific FL scenarios. To address these challenges, we propose a universal and efficient privacy-preserving FL framework based on matrix theory. Specifically, we design the Improved Extended Hill Cryptosystem (IEHC), which efficiently encrypts model parameters while supporting secure evaluation of the ReLU function. To accommodate different training tasks, we design the Secure Loss Function Computation (SLFC) protocol, which computes the derivatives of various loss functions while preserving the data privacy of both client and server. We instantiate SLFC for three classic loss functions: MSE, Cross Entropy, and L1. Extensive experimental results demonstrate that our approach robustly defends against various inference attacks. Furthermore, model-training experiments conducted across various FL scenarios indicate that our method offers significant advantages on most metrics.
KW - Federated Learning
KW - Inference Attacks
KW - Matrix Theory
KW - Privacy-Preservation
UR - http://www.scopus.com/inward/record.url?scp=85214511877&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2025.3525731
DO - 10.1109/JIOT.2025.3525731
M3 - Journal article
AN - SCOPUS:85214511877
SN - 2327-4662
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
ER -