Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption

@inproceedings{Yonetani2017PrivacyPreservingVL,
  title={Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption},
  author={Ryo Yonetani and Vishnu Naresh Boddeti and Kris M. Kitani and Yoichi Sato},
  booktitle={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={2059-2069}
}
We propose a privacy-preserving framework for learning visual classifiers by leveraging distributed private image data. This framework is designed to aggregate multiple classifiers updated locally using private data and to ensure that no private information about the data is exposed during and after its learning procedure. We utilize a homomorphic cryptosystem that can aggregate the local classifiers while they are encrypted and thus kept secret. To overcome the high computational cost of… 
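The aggregation step described in the abstract can be sketched with textbook Paillier encryption, whose additive homomorphism lets an aggregator combine encrypted local updates by multiplying ciphertexts, without ever decrypting them. This is a minimal illustration of the general idea, not the paper's doubly permuted scheme, and it uses toy primes for readability; real deployments use 2048-bit primes.

```python
import math
import random

# Toy Paillier keypair (illustration only; real keys use ~2048-bit primes).
p, q = 293, 433
n = p * q                     # public modulus
n_sq = n * n
g = n + 1                     # standard generator choice
lam = math.lcm(p - 1, q - 1)  # private key
mu = pow(lam, -1, n)          # valid simplification when g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    u = pow(c, lam, n_sq)
    return ((u - 1) // n * mu) % n

# Three parties encrypt their local updates; the aggregator multiplies the
# ciphertexts, which adds the underlying plaintexts.
local_updates = [12, 7, 30]
aggregate = 1
for m in local_updates:
    aggregate = (aggregate * encrypt(m)) % n_sq

print(decrypt(aggregate))  # 49 = 12 + 7 + 30
```

Only the key holder can decrypt the aggregate, so individual parties' updates stay hidden from the aggregator; the paper's contribution is making this kind of aggregation efficient for high-dimensional classifiers.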

Citations

HERS: Homomorphically Encrypted Representation Search
Numerical results show that, for the first time, accurate and fast image search within the encrypted domain is feasible at scale (296 seconds; 46x speedup over state-of-the-art for face search against a background of 1 million).
Deep Poisoning Functions: Towards Robust Privacy-safe Image Data Sharing
This paper presents a new framework for privacy-preserving data sharing that is robust to adversarial attacks and overcomes known issues in previous approaches; its core is a Deep Poisoning Function (DPF), a module inserted into a deep network pre-trained to perform a specific vision task.
Multitask Identity-Aware Image Steganography via Minimax Optimization
This work proposes a framework, called Multitask Identity-Aware Image Steganography (MIAIS), to achieve direct recognition on container images without restoring secret images, and introduces a simple content loss to preserve the identity information and designs a minimax optimization to deal with the contradictory aspects.
Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study
This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart camera applications, by formulating a unique adversarial training framework.
DeepObfuscator: Obfuscating Intermediate Representations with Privacy-Preserving Adversarial Learning on Smartphones
An adversarial training framework, DeepObfuscator, which prevents the use of learned features for reconstructing raw images or inferring private attributes, and includes a learnable encoder designed to hide privacy-sensitive information from the features via the proposed adversarial training algorithm.
Password-conditioned Anonymization and Deanonymization with Face Identity Transformers
A novel face identity transformer is proposed which enables automated photo-realistic password-based anonymization as well as deanonymization of human faces appearing in visual data.
Learning to Anonymize Faces for Privacy Preserving Action Detection
A new principled approach for learning a video face anonymizer that performs a pixel-level modification to anonymize each person's face, with minimal effect on action detection performance.
Encrypted Image Feature Extraction by Privacy-Preserving MFS
Experimental results showed that multifractal features have good discriminative ability in the encrypted domain.
Homomorphic Encryption for Secure Computation on Big Data
It is argued that, with sufficient investment, HE will become a practical tool for secure processing of big data sets and is rapidly advancing to a point where it is efficient enough for practical use in limited settings.
Image Classification using non-linear Support Vector Machines on Encrypted Data
This paper shows how non-linear Support Vector Machines (SVMs) can be practically used for image classification on data encrypted with a Somewhat Homomorphic Encryption (SHE) scheme and enables SVMs with polynomial kernels.

References

Showing 10 of 85 references
Efficient Privacy-Preserving Face Recognition
A privacy-preserving face recognition scheme that substantially improves over previous work in communication and computation efficiency, with a much smaller online communication complexity.
A Differentially Private Stochastic Gradient Descent Algorithm for Multiparty Classification
A new differentially private algorithm for the multiparty setting that uses a stochastic gradient descent based procedure to directly optimize the overall multiparty objective rather than combining classifiers learned from optimizing local objectives.
ML Confidential: Machine Learning on Encrypted Data
A new class of machine learning algorithms in which the algorithm's predictions can be expressed as polynomials of bounded degree, and confidential algorithms for binary classification based on polynomial approximations to least-squares solutions obtained by a small number of gradient descent steps are proposed.
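The bounded-degree-polynomial idea above can be illustrated with a degree-3 Taylor approximation of the logistic function: because it uses only additions and multiplications, it could in principle be evaluated under a somewhat homomorphic scheme. This is a sketch of the general technique, not the paper's actual construction.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_poly(x):
    # Degree-3 Taylor expansion of the logistic function around 0:
    # sigma(x) ~ 1/2 + x/4 - x^3/48. Additions and multiplications only,
    # hence compatible with homomorphic evaluation on ciphertexts.
    return 0.5 + x / 4 - x ** 3 / 48

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(x, round(sigmoid(x), 4), round(sigmoid_poly(x), 4))
```

The approximation is only accurate near zero (error about 0.002 at |x| = 1 and growing quickly beyond that), which is one reason such schemes restrict inputs or degree.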
Preserving Multi-party Machine Learning with Homomorphic Encryption
A privacy-preserving multi-party machine learning approach based on homomorphic encryption, where the machine learning algorithm of choice is deep neural networks; the theoretical foundation for implementing deep neural networks over encrypted data is developed.
Privacy-Preserving Face Recognition
This paper proposes for the first time a strongly privacy-enhanced face recognition system, which allows to efficiently hide both the biometrics and the result from the server that performs the matching operation, by using techniques from secure multiparty computation.
Privacy Preserving Back-Propagation Neural Network Learning Made Practical with Cloud Computing
This paper solves the open problem of collaborative learning by utilizing the power of cloud computing, adopting and tailoring the BGN "doubly homomorphic" encryption algorithm for the multiparty setting to support flexible operations over ciphertexts.
Multiparty Differential Privacy via Aggregation of Locally Trained Classifiers
This paper proposes a privacy-preserving protocol for composing a differentially private aggregate classifier using classifiers trained locally by separate mutually untrusting parties and presents a proof of differential privacy of the perturbed aggregate classifiers and a bound on the excess risk introduced by the perturbation.
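The aggregate-then-perturb recipe behind such protocols can be sketched as averaging locally trained weight vectors and adding Laplace noise with scale sensitivity/epsilon. The parameter values and the sampling details below are illustrative assumptions, not the paper's protocol.

```python
import math
import random

def dp_aggregate(local_weights, sensitivity, epsilon):
    """Average locally trained weight vectors, then perturb with Laplace noise
    of scale sensitivity / epsilon (illustrative calibration)."""
    k = len(local_weights)
    dim = len(local_weights[0])
    avg = [sum(w[i] for w in local_weights) / k for i in range(dim)]
    scale = sensitivity / epsilon

    def laplace_sample():
        # Inverse-CDF sampling of Laplace(0, scale).
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    return [a + laplace_sample() for a in avg]

random.seed(0)  # deterministic noise for the demo
parties = [[0.9, -0.2], [1.1, 0.0], [1.0, -0.1]]
noisy = dp_aggregate(parties, sensitivity=0.1, epsilon=1.0)
print(noisy)
```

Smaller epsilon (stronger privacy) means larger noise, which is exactly the excess-risk tradeoff the bound in the paper quantifies.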
Differentially Private Empirical Risk Minimization
This work proposes a new method, objective perturbation, for privacy-preserving machine learning algorithm design, and shows both theoretically and empirically that it is superior to the previous state of the art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
Privacy-preserving logistic regression
This paper addresses the important tradeoff between privacy and learnability when designing algorithms for learning from private databases, providing a privacy-preserving regularized logistic regression algorithm based on a new privacy-preserving technique.
Cryptographic techniques for privacy-preserving data mining
This work describes the results of secure distributed computation using generic constructions that can be applied to any function that has an efficient representation as a circuit, and discusses their efficiency and relevance to privacy preserving computation of data mining algorithms.