Corpus ID: 202566017

Differentially Private Meta-Learning

@article{Li2020DifferentiallyPM,
  title={Differentially Private Meta-Learning},
  author={Jeffrey Li and Mikhail Khodak and Sebastian Caldas and Ameet S. Talwalkar},
  journal={ArXiv},
  year={2020},
  volume={abs/1909.05830}
}
Parameter-transfer is a well-known and versatile approach for meta-learning, with applications including few-shot learning, federated learning, and reinforcement learning. However, parameter-transfer algorithms often require sharing models that have been trained on the samples from specific tasks, thus leaving the task-owners susceptible to breaches of privacy. We conduct the first formal study of privacy in this setting and formalize the notion of task-global differential privacy as a… 
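
As a rough illustration of the kind of mechanism studied in this setting, the sketch below clips each task's contribution to the shared meta-parameters and adds Gaussian noise before the meta-update, so the released meta-parameters do not depend too strongly on any single task's data. This is a minimal stand-in, not the paper's algorithm: clip_norm, noise_multiplier, meta_lr, and the Reptile-style averaging are all illustrative choices.

  import numpy as np

  def dp_meta_update(meta_params, task_params_list, clip_norm=1.0,
                     noise_multiplier=1.0, meta_lr=0.1, rng=None):
      # Illustrative task-level DP meta-update: bound each task's influence on
      # the shared initialization, then add Gaussian noise scaled to that bound.
      rng = rng or np.random.default_rng()
      deltas = []
      for task_params in task_params_list:
          delta = task_params - meta_params                   # task's adapted parameters minus the shared ones
          scale = min(1.0, clip_norm / (np.linalg.norm(delta) + 1e-12))
          deltas.append(delta * scale)                        # clip per-task contribution
      avg = np.mean(deltas, axis=0)
      noise = rng.normal(0.0, noise_multiplier * clip_norm / len(deltas), size=avg.shape)
      return meta_params + meta_lr * (avg + noise)            # noisy meta-step

In an actual instantiation, the noise scale would be set by a privacy accountant for the desired (epsilon, delta) budget rather than fixed by hand.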

Citations

Scalable Differential Privacy with Sparse Network Finetuning

It is argued that minimizing the number of trainable parameters is key to improving the privacy-performance tradeoff of DP on complex visual recognition tasks; inspired by this argument, a novel transfer learning paradigm that finetunes a very sparse subnetwork with DP is proposed.

Transfer Learning In Differential Privacy's Hybrid-Model

A general scheme, Subsample-Test-Reweigh, is given for this transfer learning problem; it reduces any curator-model DP-learner to a hybrid-model learner in this setting by iteratively subsampling and reweighing the n examples held by the curator, based on a smooth variation of the Multiplicative-Weights algorithm.

Differentially Private Model Personalization

This work studies personalization of supervised learning with user-level differential privacy, providing algorithms that exploit popular non-private approaches in this domain, such as the Almost-No-Inner-Loop (ANIL) method, and giving strong user-level privacy guarantees for the general approach.

Federated Few-Shot Learning with Adversarial Learning

  • Chenyou Fan, Jianwei Huang
  • Computer Science
    2021 19th International Symposium on Modeling and Optimization in Mobile, Ad hoc, and Wireless Networks (WiOpt)
  • 2021
A federated few-shot learning (FedFSL) framework is proposed to learn a few-shot classification model that can classify unseen data classes with only a few labeled samples; the training is formulated in an adversarial fashion, optimizing the client models to produce a discriminative feature space that better represents unseen data samples.

Differentially Private Federated Learning: Algorithm, Analysis and Optimization

This chapter investigates a differential privacy mechanism in which, on the clients' side, artificial noise is added to parameters before uploading, and proposes a K-client random scheduling policy in which K clients are randomly selected from a total of N clients to participate in each communication round.
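
The two ingredients described above, client-side perturbation and K-out-of-N random scheduling, can be sketched as follows; local_train, sigma, and K are placeholders, and a real deployment would calibrate the noise to the parameters' sensitivity and the privacy budget.

  import numpy as np

  def communication_round(global_params, client_datasets, K, local_train,
                          sigma=0.1, rng=None):
      # Randomly schedule K of the N clients; each selected client trains
      # locally and perturbs its parameters with Gaussian noise before
      # uploading, so the server only ever sees noisy models, which it averages.
      rng = rng or np.random.default_rng()
      selected = rng.choice(len(client_datasets), size=K, replace=False)
      uploads = []
      for i in selected:
          local_params = local_train(global_params, client_datasets[i])
          uploads.append(local_params + rng.normal(0.0, sigma, size=local_params.shape))
      return np.mean(uploads, axis=0)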

Private Multi-Task Learning: Formulation and Applications to Federated Learning

This work formalizes notions of client-level privacy for MTL via joint differential privacy (JDP), a relaxation of differential privacy for mechanism design and distributed optimization, and proposes an algorithm for mean-regularized MTL, an objective commonly used for applications in personalized federated learning, subject to JDP.
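
For reference, mean-regularized MTL is commonly written in the following form (an illustrative statement of the objective, not necessarily the exact formulation of the cited paper), where each client i keeps a personal model w_i that is pulled toward the shared mean with strength lambda:

  % Mean-regularized multi-task objective (illustrative form)
  \min_{w_1,\dots,w_m,\;\bar{w}} \;
    \sum_{i=1}^{m} \Big[\, L_i(w_i) \;+\; \frac{\lambda}{2}\,\lVert w_i - \bar{w} \rVert_2^2 \,\Big]

Under joint differential privacy, the guarantee is that the outputs released to all other clients reveal little about client i's data, while client i's own personalized model may still depend on it.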

Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning

Stochastic optimization algorithms are studied for a personalized federated learning setting involving local and global models subject to user-level (joint) differential privacy.

GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators

Gradient-sanitized Wasserstein Generative Adversarial Networks (GS-WGAN) are proposed, which allow releasing a sanitized form of the sensitive data with rigorous privacy guarantees; the approach distorts gradient information more precisely, thereby enabling the training of deeper models that generate more informative samples.
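
Reading the summary above, the core sanitization step amounts to bounding and perturbing the gradient signal that crosses the privacy barrier between discriminator and generator; the sketch below is a generic clip-and-noise stand-in (the function name and constants are illustrative, not GS-WGAN's actual code).

  import numpy as np

  def sanitize_gradient(grad, clip_bound=1.0, noise_multiplier=1.0, rng=None):
      # Bound the norm of the gradient flowing from the data-dependent
      # discriminator back to the generator, then add Gaussian noise, so the
      # gradient the generator learns from carries a privacy guarantee.
      rng = rng or np.random.default_rng()
      scale = min(1.0, clip_bound / (np.linalg.norm(grad) + 1e-12))
      return grad * scale + rng.normal(0.0, noise_multiplier * clip_bound, size=grad.shape)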

Federated Learning with Personalized Local Differential Privacy

This paper proposes an algorithm (PLU-FedOA) that optimizes deep neural networks in horizontal FL with personalized local differential privacy, addressing privacy-preservation issues in distributed multi-party federated modeling.

Federated f-Differential Privacy

A generic private federated learning framework, PriFedSync, is proposed that accommodates a large family of state-of-the-art FL algorithms and provably achieves federated f-differential privacy.
...

References

SHOWING 1-10 OF 39 REFERENCES

When Relaxations Go Bad: "Differentially-Private" Machine Learning

Current mechanisms for differential privacy for machine learning rarely offer acceptable utility-privacy tradeoffs: settings that provide limited accuracy loss provide little effective privacy, and settings that provide strong privacy result in useless models.

Provable Guarantees for Gradient-Based Meta-Learning

This paper develops a meta-algorithm bridging the gap between popular gradient-based meta-learning and classical regularization-based multi-task transfer methods, and is the first to simultaneously satisfy good sample efficiency guarantees in the convex setting and generalization bounds that improve with task-similarity.

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
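
A minimal first-order sketch of the idea (FOMAML-style, with loss_grad standing in for any differentiable model's gradient function; full MAML also differentiates through the inner step):

  import numpy as np

  def maml_step(theta, tasks, loss_grad, inner_lr=0.01, outer_lr=0.001):
      # tasks: iterable of (support_set, query_set) pairs.
      # Inner loop: adapt a copy of theta to each task's support set.
      # Outer loop: move theta so one-step adaptation does well on the query set.
      meta_grad = np.zeros_like(theta)
      for support, query in tasks:
          adapted = theta - inner_lr * loss_grad(theta, support)   # task-specific adaptation
          meta_grad += loss_grad(adapted, query)                   # first-order meta-gradient
      return theta - outer_lr * meta_grad / len(tasks)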

Federated Meta-Learning for Recommendation

Experimental results show that recommendation models trained by meta-learning algorithms in the proposed framework outperform the state-of-the-art in accuracy and scale.

Learning-to-Learn Stochastic Gradient Descent with Biased Regularization

A key feature of the results is that, when the number of tasks grows and their variance is relatively small, the learning-to-learn approach has a significant advantage over learning each task in isolation by Stochastic Gradient Descent without a bias term.
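
Concretely, the "bias term" refers to regularizing each task's SGD toward a meta-learned point w_0 rather than toward the origin; an illustrative within-task objective (notation assumed here, not copied from the paper) is:

  % Within-task objective with a learned bias w_0 (illustrative)
  \min_{w} \; \frac{1}{n}\sum_{j=1}^{n} \ell(w; z_j) \;+\; \frac{\lambda}{2}\,\lVert w - w_0 \rVert_2^2

Learning-to-learn then amounts to choosing w_0 (and possibly lambda) from previous tasks, which pays off when the tasks' optimal parameters cluster around a common point.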

Adaptive Gradient-Based Meta-Learning Methods

This approach enables the task-similarity to be learned adaptively, provides sharper transfer-risk bounds in the setting of statistical learning-to-learn, and leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure.

Protection Against Reconstruction and Its Applications in Private Federated Learning

In large-scale statistical learning, data collection and model fitting are moving increasingly toward peripheral devices (phones, watches, fitness trackers) and away from centralized data collection.

Communication-Efficient Learning of Deep Networks from Decentralized Data

This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
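
The averaging step itself is simple; the sketch below (NumPy, with client sampling, local epochs, and learning rates omitted) shows the example-weighted average that FedAvg applies each round.

  import numpy as np

  def federated_averaging(client_updates):
      # client_updates: list of (locally_trained_params, num_local_examples).
      # The server replaces the global model with the example-weighted average,
      # i.e. the "iterative model averaging" step repeated every round.
      total = sum(n for _, n in client_updates)
      return sum((n / total) * params for params, n in client_updates)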

Practical Secure Aggregation for Privacy-Preserving Machine Learning

This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
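
A toy version of the pairwise-masking idea behind secure aggregation is sketched below; pair_seed stands in for the key-agreement step of the real protocol, and handling of dropped-out clients, which is most of the actual protocol's complexity, is omitted.

  import numpy as np

  def mask_update(update, my_id, peer_ids, pair_seed):
      # Each pair of clients derives a shared pseudorandom mask; the lower-id
      # client adds it and the higher-id client subtracts it, so all masks
      # cancel in the server's sum and only the aggregate update is revealed.
      masked = np.array(update, dtype=float)
      for peer in peer_ids:
          rng = np.random.default_rng(pair_seed(min(my_id, peer), max(my_id, peer)))
          mask = rng.standard_normal(masked.shape)
          masked += mask if my_id < peer else -mask
      return masked

Summing the masked vectors over all participating clients then equals the sum of the raw updates, which is exactly what the server needs for federated averaging.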

Differentially Private Federated Learning: A Client Level Perspective

The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance, and empirical studies suggest that given a sufficiently large number of participating clients, this procedure can maintain client-level differential privacy at only a minor cost in model performance.