Infinitely Divisible Noise in the Low Privacy Regime

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Infinitely Divisible Noise in the Low Privacy Regime. / Pagh, Rasmus; Stausholm, Nina Mesing.

Proceedings of The 33rd International Conference on Algorithmic Learning Theory. PMLR, 2022. p. 881-909 (Proceedings of Machine Learning Research, Vol. 167).


Harvard

Pagh, R & Stausholm, NM 2022, Infinitely Divisible Noise in the Low Privacy Regime. in Proceedings of The 33rd International Conference on Algorithmic Learning Theory. PMLR, Proceedings of Machine Learning Research, vol. 167, pp. 881-909, 33rd International Conference on Algorithmic Learning Theory (ALT 2022), Paris, France, 29/03/2022. <https://proceedings.mlr.press/v167/pagh22a.html>

APA

Pagh, R., & Stausholm, N. M. (2022). Infinitely Divisible Noise in the Low Privacy Regime. In Proceedings of The 33rd International Conference on Algorithmic Learning Theory (pp. 881-909). PMLR. Proceedings of Machine Learning Research Vol. 167 https://proceedings.mlr.press/v167/pagh22a.html

Vancouver

Pagh R, Stausholm NM. Infinitely Divisible Noise in the Low Privacy Regime. In Proceedings of The 33rd International Conference on Algorithmic Learning Theory. PMLR. 2022. p. 881-909. (Proceedings of Machine Learning Research, Vol. 167).

Author

Pagh, Rasmus ; Stausholm, Nina Mesing. / Infinitely Divisible Noise in the Low Privacy Regime. Proceedings of The 33rd International Conference on Algorithmic Learning Theory. PMLR, 2022. pp. 881-909 (Proceedings of Machine Learning Research, Vol. 167).

Bibtex

@inproceedings{89255e5b1d9a43bf9d1803f0553a5853,
title = "Infinitely Divisible Noise in the Low Privacy Regime",
abstract = "Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning. Cryptographic techniques such as secure aggregation are used to aggregate contributions, like a model update, from all users. A robust technique for making such aggregates differentially private is to exploit \emph{infinite divisibility} of the Laplace distribution, namely, that a Laplace distribution can be expressed as a sum of i.i.d. noise shares from a Gamma distribution, one share added by each user. However, Laplace noise is known to have suboptimal error in the low privacy regime for ε-differential privacy, where ε>1 is a large constant. In this paper we present the first infinitely divisible noise distribution for real-valued data that achieves ε-differential privacy and has expected error that decreases exponentially with ε.",
author = "Rasmus Pagh and Stausholm, {Nina Mesing}",
year = "2022",
language = "English",
series = "Proceedings of Machine Learning Research",
pages = "881--909",
booktitle = "Proceedings of The 33rd International Conference on Algorithmic Learning Theory",
publisher = "PMLR",
note = "33rd International Conference on Algorithmic Learning Theory (ALT 2022) ; Conference date: 29-03-2022 Through 01-04-2022",

}

RIS

TY - GEN

T1 - Infinitely Divisible Noise in the Low Privacy Regime

AU - Pagh, Rasmus

AU - Stausholm, Nina Mesing

PY - 2022

Y1 - 2022

N2 - Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning. Cryptographic techniques such as secure aggregation are used to aggregate contributions, like a model update, from all users. A robust technique for making such aggregates differentially private is to exploit \emph{infinite divisibility} of the Laplace distribution, namely, that a Laplace distribution can be expressed as a sum of i.i.d. noise shares from a Gamma distribution, one share added by each user. However, Laplace noise is known to have suboptimal error in the low privacy regime for ε-differential privacy, where ε>1 is a large constant. In this paper we present the first infinitely divisible noise distribution for real-valued data that achieves ε-differential privacy and has expected error that decreases exponentially with ε.

AB - Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning. Cryptographic techniques such as secure aggregation are used to aggregate contributions, like a model update, from all users. A robust technique for making such aggregates differentially private is to exploit \emph{infinite divisibility} of the Laplace distribution, namely, that a Laplace distribution can be expressed as a sum of i.i.d. noise shares from a Gamma distribution, one share added by each user. However, Laplace noise is known to have suboptimal error in the low privacy regime for ε-differential privacy, where ε>1 is a large constant. In this paper we present the first infinitely divisible noise distribution for real-valued data that achieves ε-differential privacy and has expected error that decreases exponentially with ε.

M3 - Article in proceedings

T3 - Proceedings of Machine Learning Research

SP - 881

EP - 909

BT - Proceedings of The 33rd International Conference on Algorithmic Learning Theory

PB - PMLR

T2 - 33rd International Conference on Algorithmic Learning Theory (ALT 2022)

Y2 - 29 March 2022 through 1 April 2022

ER -
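
The abstract above describes the decomposition that motivates the paper: by infinite divisibility, Laplace noise with scale b can be produced as the sum of per-user noise shares, each share being the difference of two Gamma(1/n, b) draws, so that securely aggregating n users' shares yields exactly Laplace-distributed noise. The sketch below is only an illustration of that standard Laplace/Gamma decomposition, not of the new noise distribution introduced in the paper; the function name `laplace_shares` and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_shares(n_users, scale, size, rng):
    """Per-user noise shares whose sum over n_users users is Laplace(0, scale).

    Uses infinite divisibility of the Laplace distribution:
    a Gamma(1, b) (i.e. Exponential(b)) variable is the sum of n i.i.d.
    Gamma(1/n, b) variables, and the difference of two independent
    Exponential(b) variables is Laplace(0, b). Each user therefore adds
    the difference of two Gamma(1/n, b) draws as their noise share.
    """
    g1 = rng.gamma(shape=1.0 / n_users, scale=scale, size=(n_users, size))
    g2 = rng.gamma(shape=1.0 / n_users, scale=scale, size=(n_users, size))
    return g1 - g2  # one row of shares per user

# Sanity check: summed shares should match direct Laplace noise in distribution.
n_users, scale, m = 100, 1.0, 200_000
summed = laplace_shares(n_users, scale, m, rng).sum(axis=0)
direct = rng.laplace(loc=0.0, scale=scale, size=m)
print(np.mean(np.abs(summed)), np.mean(np.abs(direct)))  # both close to scale = 1.0
```

The mean absolute value of Laplace(0, b) noise is b, which is the check printed at the end; the paper's contribution is an infinitely divisible alternative whose expected error decreases exponentially with ε in the low privacy regime.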
