Publications
Advances and Open Problems in Federated Learning
TLDR
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
Extremal Mechanisms for Local Differential Privacy
TLDR
It is shown that, for all information-theoretic utility functions studied in this paper, maximizing utility is equivalent to solving a linear program; the outcome is the optimal staircase mechanism, which is universally optimal in the high- and low-privacy regimes.
Discrete Distribution Estimation under Local Privacy
TLDR
New mechanisms are presented, including hashed K-ary Randomized Response (KRR), that empirically meet or exceed the utility of existing mechanisms at all privacy levels; KRR and the existing RAPPOR mechanism are shown to be order-optimal in different privacy regimes.
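For context, the classic (unhashed) k-ary randomized response mechanism that KRR builds on can be sketched as follows; this is a minimal illustration, not the paper's code, and the function name and interface are assumptions:

```python
import math
import random

def k_rr(x, k, epsilon):
    """k-ary randomized response over the domain {0, ..., k-1}.

    Reports the true value x with probability e^eps / (e^eps + k - 1),
    and each of the other k-1 values with equal remaining probability.
    This satisfies epsilon-local differential privacy.
    """
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return x
    # Pick uniformly among the k-1 values other than x.
    other = random.randrange(k - 1)
    return other if other < x else other + 1
```

Higher epsilon makes the true value more likely to be reported; as epsilon grows, the mechanism approaches reporting x deterministically.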
The Composition Theorem for Differential Privacy
TLDR
This paper proves an upper bound on the overall privacy level and constructs a sequence of privatization mechanisms that achieves this bound, by introducing an operational interpretation of differential privacy and using a data-processing inequality.
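For reference, a hedged sketch of the two standard composition bounds that precede this paper's optimal bound: basic composition and the advanced-composition bound of Dwork, Rothblum, and Vadhan. The paper's exact optimal formula is not reproduced here; the function names are illustrative.

```python
import math

def basic_composition(eps, k):
    """Basic composition: k runs of an eps-DP mechanism are (k*eps)-DP."""
    return k * eps

def advanced_composition(eps, delta_prime, k):
    """Advanced composition (Dwork-Rothblum-Vadhan): k runs of an eps-DP
    mechanism are (eps', k*delta + delta')-DP with
    eps' = eps*sqrt(2k ln(1/delta')) + k*eps*(e^eps - 1)."""
    return eps * math.sqrt(2 * k * math.log(1 / delta_prime)) \
        + k * eps * (math.exp(eps) - 1)
```

For small eps and many compositions, the advanced bound grows like sqrt(k) rather than k, which is why it (and the optimal bound that tightens it further) matters in practice.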
Can You Really Backdoor Federated Learning?
TLDR
This paper conducts a comprehensive study of backdoor attacks and defenses for the EMNIST dataset, a real-life, user-partitioned, and non-iid dataset, and shows that norm clipping and "weak" differential privacy mitigate the attacks without hurting the overall performance.
Context-Aware Generative Adversarial Privacy
TLDR
This work introduces a novel context-aware privacy framework called GAP, which leverages recent advancements in generative adversarial networks to allow the data holder to learn privatization schemes from the dataset itself, and demonstrates that the framework can be easily applied in practice, even in the absence of dataset statistics.
Spy vs. Spy: Rumor Source Obfuscation
TLDR
A novel messaging protocol, called adaptive diffusion, is introduced; it is shown to spread messages fast and to achieve perfect obfuscation of the source when the underlying contact network is an infinite regular tree.
DP-CGAN: Differentially Private Synthetic Data and Label Generation
TLDR
A Differentially Private Conditional GAN (DP-CGAN) training framework is introduced, based on a new clipping and perturbation strategy that improves model performance while preserving the privacy of the training dataset.
Generative Models for Effective ML on Private, Decentralized Datasets
TLDR
This paper demonstrates that generative models - trained using federated methods and with formal differential privacy guarantees - can be used effectively to debug many commonly occurring data issues even when the data cannot be directly inspected.
Generative Adversarial Privacy
TLDR
This work presents a data-driven framework called generative adversarial privacy (GAP), which allows the data holder to learn the privatization mechanism directly from the data and provides privacy guarantees against strong information-theoretic adversaries.