A Privacy-Enhanced Method for Privacy-Preserving and Verifiable Federated Learning

Xiaofen Wang, Tao Chen*, Hong Ning Dai, Peng Long, Haomiao Yang, Zehui Xiong, Willy Susilo

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Peer-review

Abstract

Federated learning allows clients to share model gradients instead of privacy-sensitive data, which solves the issue of data silos but leads to data privacy leakage, because model gradients reveal characteristics of the training data. Privacy-preserving federated learning based on homomorphic encryption (HE-based PPFL) can properly address participants' data privacy leakage, but it faces new challenges. Existing single-key HE-based PPFL schemes allow clients to obtain others' model gradients because the key is shared, while multi-key HE-based PPFL schemes provide incomplete privacy protection for models and incur high communication overhead due to the requirement of collaborative decryption. Moreover, existing PPFL schemes either assume the server is always honest or rely on verification methods that are unreliable and expensive. To tackle these emerging challenges in HE-based PPFL, we propose an enhanced privacy-preserving and verifiable federated learning scheme. Specifically, we first construct a novel multi-key homomorphic encryption algorithm that achieves single-key decryption instead of the collaborative decryption required by traditional multi-key HE-based PPFL. Meanwhile, we design a blockchain-based public verification method for the global model by applying a vector homomorphic hash, which resolves the unreliability and high cost of existing global-model verification methods. Formal security analysis shows that the proposed scheme provides complete privacy protection and guarantees the integrity of the global model. Extensive experiments demonstrate that the proposed scheme maintains high accuracy (≈95%) compared with existing differential-privacy-based PPFL schemes (≤90%), incurs no decryption-share overhead (0 MB) in contrast to existing HE-based PPFL schemes, and achieves efficient verification compared with existing model verification methods based on bilinear maps.
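To illustrate the idea behind verifying an aggregated global model with a vector homomorphic hash, the following is a minimal sketch, not the paper's construction: it assumes a textbook linearly homomorphic hash H(v) = ∏ᵢ gᵢ^{vᵢ} mod p, so that H(v₁ + v₂) = H(v₁)·H(v₂). All parameter names (P, G, SCALE), the fixed-point quantization step, and the toy dimensions are illustrative assumptions, not values from the paper.

```python
# Sketch only: linearly homomorphic vector hash for public verification of
# an aggregated model. Parameters and setup are illustrative assumptions.
import secrets

P = 2**127 - 1      # toy prime modulus; a real deployment uses a vetted group
DIM = 4             # toy gradient dimension
SCALE = 10**4       # fixed-point scale so real-valued gradients become integers

# Public per-coordinate bases shared by all parties (assumed trusted setup).
G = [secrets.randbelow(P - 2) + 2 for _ in range(DIM)]

def quantize(vec):
    """Map real-valued gradients to integers so exponents are well defined."""
    return [int(round(x * SCALE)) for x in vec]

def vhash(vec_int):
    """H(v) = prod_i g_i^{v_i} mod p (negative exponents need Python 3.8+)."""
    h = 1
    for g_i, v_i in zip(G, vec_int):
        h = (h * pow(g_i, v_i, P)) % P
    return h

# Each client hashes its quantized gradient and publishes the digest
# (e.g., on a blockchain) before sending the encrypted gradient to the server.
grads = [[0.1, -0.2, 0.05, 0.3], [0.0, 0.4, -0.1, 0.2], [0.3, -0.1, 0.2, 0.1]]
digests = [vhash(quantize(g)) for g in grads]

# The server publishes the aggregated (summed) update; anyone can verify that
# its hash equals the product of the clients' digests, i.e., the aggregate
# really is the sum of the committed gradients.
agg = [sum(col) for col in zip(*(quantize(g) for g in grads))]
expected = 1
for d in digests:
    expected = (expected * d) % P
assert vhash(agg) == expected
```

The check exploits only the additive homomorphism of the hash, so verification needs no secret key and can be performed publicly from the on-chain digests.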

Original language: English
Pages (from-to): 1-15
Number of pages: 15
Journal: IEEE Internet of Things Journal
DOIs
Publication status: E-pub ahead of print - 16 Apr 2025

User-Defined Keywords

  • Federated Learning
  • Privacy-Preserving
  • Integrity Verification
  • Multi-Key Homomorphic Encryption
