Infinitely Divisible Noise in the Low Privacy Regime

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning. Cryptographic techniques such as secure aggregation are used to aggregate contributions, like a model update, from all users. A robust technique for making such aggregates differentially private is to exploit infinite divisibility of the Laplace distribution, namely, that a Laplace distribution can be expressed as a sum of i.i.d. noise shares from a Gamma distribution, one share added by each user. However, Laplace noise is known to have suboptimal error in the low privacy regime for ε-differential privacy, where ε > 1 is a large constant. In this paper we present the first infinitely divisible noise distribution for real-valued data that achieves ε-differential privacy and has expected error that decreases exponentially with ε.
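The Gamma-share construction of Laplace noise mentioned in the abstract can be sketched in a few lines. The snippet below is a minimal illustration and not code from the paper: the values of n_users, b, and trials are illustrative, and the check only compares the standard deviation of the aggregated noise against that of Laplace(0, b).

```python
import numpy as np

rng = np.random.default_rng(0)
n_users = 100      # number of users contributing noise shares (illustrative)
b = 1.0            # Laplace scale; for eps-DP, b = sensitivity / eps
trials = 50_000    # number of simulated aggregates for the sanity check

# Each user's share is the difference of two independent Gamma(1/n, b) draws.
# Summing the n i.i.d. shares yields a Laplace(0, b) sample, because the
# Gamma distribution is infinitely divisible and Exp(b) - Exp(b) ~ Laplace(0, b).
shares = (rng.gamma(1.0 / n_users, b, size=(trials, n_users))
          - rng.gamma(1.0 / n_users, b, size=(trials, n_users)))
totals = shares.sum(axis=1)

print("empirical std of aggregate noise:", totals.std())
print("Laplace(0, b) std:               ", np.sqrt(2) * b)
```

In a secure-aggregation setting, each user would add one such share to their (clipped) model update before aggregation, so that only the Laplace-perturbed sum is revealed.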
Original language: English
Title of host publication: Proceedings of The 33rd International Conference on Algorithmic Learning Theory
Publisher: PMLR
Publication date: 2022
Pages: 881-909
Publication status: Published - 2022
Event: 33rd International Conference on Algorithmic Learning Theory (ALT 2022), Paris, France
Duration: 29 Mar 2022 – 1 Apr 2022

Conference

Conference: 33rd International Conference on Algorithmic Learning Theory (ALT 2022)
Country: France
City: Paris
Period: 29/03/2022 – 01/04/2022
Series: Proceedings of Machine Learning Research
Volume: 167
ISSN: 2640-3498
