Corpus ID: 53250255

A generic framework for privacy preserving deep learning

Theo Ryffel, Andrew Trask, Morten Dahl, Bobby Wagner, Jason V. Mancuso, Daniel Rueckert, Jonathan Passerat-Palmbach
We detail a new framework for privacy preserving deep learning and discuss its assets. The framework puts a premium on ownership and secure processing of data and introduces a valuable representation based on chains of commands and tensors. This abstraction allows one to implement complex privacy preserving constructs such as Federated Learning, Secure Multiparty Computation, and Differential Privacy while still exposing a familiar deep learning API to the end-user. We report early results on… 
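The "chains of commands and tensors" abstraction can be illustrated with a minimal, self-contained sketch. The class and function names below (`VirtualWorker`, `PointerTensor`, `send`) are illustrative stand-ins, not the framework's actual API: a local pointer keeps only a reference, and every command is forwarded to the worker that holds the data.

```python
# Sketch of a "chain of commands and tensors" abstraction.
# VirtualWorker and PointerTensor are hypothetical stand-ins, not the real API.

class VirtualWorker:
    """Simulates a remote data owner holding tensors by id."""
    def __init__(self, worker_id):
        self.id = worker_id
        self._store = {}

    def receive(self, obj_id, values):
        self._store[obj_id] = values

    def execute(self, op, obj_id, other):
        # The command runs where the data lives; raw data never leaves.
        data = self._store[obj_id]
        if op == "add":
            return [a + b for a, b in zip(data, other)]
        raise NotImplementedError(op)


class PointerTensor:
    """Local handle in the chain; forwards commands to the remote worker."""
    def __init__(self, worker, obj_id):
        self.worker = worker
        self.obj_id = obj_id

    def add(self, other):
        return self.worker.execute("add", self.obj_id, other)


def send(values, worker, obj_id):
    """Ship raw values to a worker and keep only a pointer locally."""
    worker.receive(obj_id, values)
    return PointerTensor(worker, obj_id)


bob = VirtualWorker("bob")
ptr = send([1.0, 2.0, 3.0], bob, "x")  # data now lives with bob
result = ptr.add([10.0, 10.0, 10.0])   # the command travels, not the data
```

The end-user still writes familiar tensor operations (`ptr.add(...)`), while the chain routes execution to the data owner.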
Privacy-Preserving Deep Learning on Machine Learning as a Service—a Comprehensive Survey
A comprehensive survey of privacy-preserving techniques, from classical privacy-preserving methods to well-known deep learning techniques, intended as a single point of reference on PPDL and its applicability to MLaaS environments for both new and experienced researchers.
Private Dataset Generation Using Privacy Preserving Collaborative Learning
This work introduces FedCollabNN, a privacy-preserving framework for training machine learning models at the edge that is computationally efficient and robust against adversarial attacks.
SEALion: a Framework for Neural Network Inference on Encrypted Data
We present SEALion: an extensible framework for privacy-preserving machine learning with homomorphic encryption. It allows one to learn deep neural networks that can be seamlessly utilized for inference on encrypted data.
SoK: Privacy-Preserving Computation Techniques for Deep Learning
This work reviews how privacy-preserving computation techniques have been adapted to DL, to understand the gap between research proposals and practical applications, and highlights their relative advantages and disadvantages.
PFDLIS: Privacy-Preserving and Fair Deep Learning Inference Service under Publicly Verifiable Covert Security Setting
This work proposes a privacy-preserving and fair scheme for deep learning inference services, based on secure three-party computation and commitments under the publicly verifiable covert security setting, and demonstrates that the scheme offers the following desirable security properties: input data privacy, model privacy, and defamation freeness.
Privacy-Preserving Deep Learning Based on Multiparty Secure Computation: A Survey
This survey reviews state-of-the-art research in privacy-preserving DL based on multiparty secure computation with data encryption, and classifies the techniques with respect to linear and nonlinear computations, the two basic building blocks of DL.
SoK: Training Machine Learning Models over Multiple Sources with Privacy Preservation
This work defines the problem of training machine learning models over multiple data sources with privacy preservation (TMMPP) and compares recent TMMPP studies in terms of technical routes, supported parties, data partitioning, threat models, and supported machine learning models, showing their advantages and limitations.
Privacy-preserving Decentralized Federated Learning
This paper develops SecureD-FL, a privacy-preserving decentralized federated learning algorithm without the traditional centralized aggregation server, and introduces a communication pattern inspired by the combinatorial block design theory and establishes its theoretical privacy guarantee.
Security and Privacy Issues in Deep Learning
This paper reviews vulnerabilities and the defense methods developed for model security and data privacy, under the notion of secure and private AI (SPAI).
MP2ML: a mixed-protocol machine learning framework for private inference
MP2ML is a machine learning framework that integrates nGraph-HE and the secure two-party computation framework ABY to execute DL inference while maintaining the privacy of both the input data and the model weights. It is compatible with popular DL frameworks such as TensorFlow.


Deep Learning with Differential Privacy
This work develops new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy, and demonstrates that deep neural networks can be trained with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
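The core mechanism behind this line of work, per-example gradient clipping followed by calibrated Gaussian noise (DP-SGD), can be sketched in a few lines. The clip norm and noise multiplier below are illustrative values, not the paper's settings.

```python
import math
import random

def clip_gradient(grad, max_norm):
    """Scale a per-example gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_sgd_step(per_example_grads, max_norm=1.0, noise_multiplier=1.1, seed=0):
    """One DP-SGD aggregation: clip each gradient, sum, add Gaussian noise, average."""
    rng = random.Random(seed)
    clipped = [clip_gradient(g, max_norm) for g in per_example_grads]
    dim = len(clipped[0])
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_multiplier * max_norm  # noise scaled to the clipping bound
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    return [v / len(per_example_grads) for v in noisy]

grads = [[3.0, 4.0], [0.3, 0.4]]  # first gradient has norm 5 and gets clipped
step = dp_sgd_step(grads)
```

Clipping bounds each example's influence on the update, so the added noise can be calibrated to a fixed sensitivity regardless of the raw gradient magnitudes.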
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
Private Aggregation of Teacher Ensembles (PATE) combines, in a black-box fashion, multiple models trained on disjoint datasets, such as records from different subsets of users, and achieves state-of-the-art privacy/utility trade-offs on MNIST and SVHN.
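PATE's aggregation step can be sketched as a noisy vote among teachers. For simplicity this sketch uses Gaussian noise; the paper's mechanism adds Laplace noise to the vote counts, and the function name is illustrative.

```python
import random
from collections import Counter

def pate_aggregate(teacher_votes, noise_scale=1.0, seed=0):
    """Noisy-max aggregation: perturb each label's vote count, return the argmax.

    teacher_votes: one predicted label per teacher model.
    Gaussian noise is used here for simplicity (the original uses Laplace).
    """
    rng = random.Random(seed)
    counts = Counter(teacher_votes)
    noisy = {label: c + rng.gauss(0.0, noise_scale) for label, c in counts.items()}
    return max(noisy, key=noisy.get)

votes = ["cat"] * 50 + ["dog"] * 5  # strong teacher consensus survives the noise
label = pate_aggregate(votes, noise_scale=0.1)
```

Because each teacher saw a disjoint data partition, a single training record can shift at most one vote, which bounds the sensitivity of the counts that the noise must hide.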
SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud
SafetyNets develops and implements a specialized interactive proof protocol for verifiable execution of a class of deep neural networks, namely those that can be represented as arithmetic circuits, and demonstrates that the run-time costs of this framework are low for both the client and the server.
Multiparty Computation from Somewhat Homomorphic Encryption
We propose a general multiparty computation protocol secure against an active adversary corrupting up to $n-1$ of the $n$ players. The protocol may be used to securely compute arithmetic circuits.
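The arithmetic-circuit setting can be illustrated with plain additive secret sharing over a prime field. This is a simplified ingredient of such protocols, not the full construction (which also needs multiplication triples and MACs for active security); the modulus choice is illustrative.

```python
import random

P = 2**61 - 1  # a Mersenne prime as the field modulus (illustrative choice)

def share(secret, n_parties, rng):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [rng.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares; any subset smaller than n reveals nothing."""
    return sum(shares) % P

rng = random.Random(42)
a = share(123, 3, rng)
b = share(456, 3, rng)
# Addition gates are local: each party adds its own shares, no interaction.
c = [(x + y) % P for x, y in zip(a, b)]
total = reconstruct(c)  # 123 + 456 = 579
```

Linear gates of the circuit thus cost no communication; only multiplications require interaction, which is where the preprocessing of the full protocols comes in.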
Practical Covertly Secure MPC for Dishonest Majority - Or: Breaking the SPDZ Limits
This work constructs a covertly secure key generation protocol for obtaining a BGV public key and a shared associated secret key, together with both a covertly and an actively secure preprocessing phase, all of which compare favourably with previous work in terms of efficiency and provable security.
Pima Indian diabetes dataset. Obtained from UCI, 1990.