Abstract
Preserving differential privacy when training empirical risk minimization models has been extensively studied in centralized and sample-wise distributed settings. This paper considers a nearly unexplored setting in which the features are partitioned among different parties under privacy restrictions. Motivated by the nearly optimal utility guarantee achieved by the centralized private Frank-Wolfe algorithm (Talwar, Thakurta, and Zhang 2015), we develop a distributed variant with guaranteed privacy, utility, and uplink communication complexity. To obtain these guarantees, we provide a considerably more general convergence analysis of block-coordinate Frank-Wolfe under arbitrary sampling, which greatly extends known convergence results that apply only to two specific block sampling distributions. We also design an active feature sharing scheme based on a private Johnson-Lindenstrauss transform, which is the key to updating local partial gradients in a differentially private and communication-efficient manner.
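The abstract names two concrete building blocks, which the following sketches illustrate. First, a block-coordinate Frank-Wolfe step whose linear minimization oracle is privatized by report-noisy-max, in the spirit of the centralized private Frank-Wolfe of Talwar, Thakurta, and Zhang (2015). This is a minimal sketch only: it assumes a product of per-block L1-ball constraints, absorbs the gradient sensitivity constant into `eps_step`, and uses illustrative names throughout; the paper's arbitrary-sampling analysis and noise calibration are more refined.

```python
import numpy as np

def private_block_frank_wolfe(grad_fn, d, blocks, probs, T, eps_step, rng=None):
    """Sketch: differentially private block-coordinate Frank-Wolfe over a
    product of per-block L1 balls. Each round samples one block from an
    arbitrary distribution `probs` and privatizes that block's linear
    minimization oracle with report-noisy-max (Laplace noise)."""
    rng = np.random.default_rng(rng)
    w = np.zeros(d)
    for t in range(T):
        b = blocks[rng.choice(len(blocks), p=probs)]  # arbitrary block sampling
        g = grad_fn(w)[b]                             # partial gradient on block b
        # Vertices of the block's L1 ball are +/- e_j; score each by <vertex, -g>.
        scores = np.concatenate([-g, g])
        noisy = scores + rng.laplace(scale=1.0 / eps_step, size=scores.size)
        k = int(np.argmax(noisy))                     # noisy LMO winner
        pos, sign = (k, 1.0) if k < len(b) else (k - len(b), -1.0)
        s_b = np.zeros(len(b))
        s_b[pos] = sign
        gamma = 2.0 / (t + 2.0)                       # standard Frank-Wolfe step
        w[b] = (1.0 - gamma) * w[b] + gamma * s_b     # update only block b
    return w
```

In the paper's distributed setting, each block would correspond to one party's feature partition, so `grad_fn` above stands in for the partial gradients each party maintains locally. For those partial gradients to stay current, parties must exchange information about their features; the abstract's active feature sharing scheme does this through a private Johnson-Lindenstrauss transform. A hypothetical sketch, with the noise scale `sigma` taken as given rather than calibrated to the privacy budget as in the paper:

```python
def private_jl_share(X_block, k, sigma, rng=None):
    """Sketch: project an n x d_local feature block to k dimensions with a
    random Gaussian JL matrix, then add Gaussian noise before sharing, so
    other parties can update partial gradients from a compact, noisy view."""
    rng = np.random.default_rng(rng)
    n, d_local = X_block.shape
    Phi = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d_local, k))  # JL projection
    return X_block @ Phi + rng.normal(0.0, sigma, size=(n, k))  # Gaussian noise
```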
Original language | English |
---|---|
Title of host publication | 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 |
Publisher | AAAI Press |
Pages | 125-133 |
Number of pages | 9 |
ISBN (Electronic) | 9781577358008 |
Publication status | Published - 8 Feb 2018 |
Event | 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 - New Orleans, United States. Duration: 2 Feb 2018 → 7 Feb 2018 |
Publication series
Name | Proceedings of the AAAI Conference on Artificial Intelligence |
---|---|
Number | 1 |
Volume | 32 |
ISSN (Print) | 2159-5399 |
ISSN (Electronic) | 2374-3468 |
Conference
Conference | 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 |
---|---|
Country/Territory | United States |
City | New Orleans |
Period | 2/02/18 → 7/02/18 |
Internet address | https://ojs.aaai.org/index.php/AAAI/issue/view/301, https://aaai.org/papers/530-ws0496-aaaiw-18-17111/ |
Scopus Subject Areas
- Artificial Intelligence