Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study

@article{Wu2018TowardsPV,
  title={Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study},
  author={Zhenyu Wu and Zhangyang Wang and Zhaowen Wang and Hailin Jin},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.08379}
}
This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart camera applications, by formulating a unique adversarial training framework. The proposed framework explicitly learns a degradation transform for the original video inputs, in order to optimize the trade-off between target task performance and the associated privacy budgets on the degraded video. A notable challenge is that the privacy budget, often defined and measured in task-driven…
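The degradation-versus-privacy trade-off described above can be made concrete with a short alternating-optimization sketch. The following is a minimal PyTorch-style illustration of the generic idea, not the authors' code: the module names (degrade, task_net, privacy_net), the toy architectures, the 32×32 input size, and the weight alpha are all hypothetical placeholders.

    # Minimal sketch of the adversarial utility/privacy trade-off described above.
    # All names, architectures, and hyperparameters are hypothetical placeholders.
    import torch
    import torch.nn as nn

    degrade = nn.Conv2d(3, 3, kernel_size=3, padding=1)                   # learned degradation transform
    task_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))    # target-task head
    privacy_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # adversary: private attribute

    ce = nn.CrossEntropyLoss()
    alpha = 1.0  # utility/privacy trade-off weight (hypothetical)
    opt_deg = torch.optim.Adam(list(degrade.parameters()) + list(task_net.parameters()), lr=1e-4)
    opt_adv = torch.optim.Adam(privacy_net.parameters(), lr=1e-4)

    def train_step(frames, task_labels, private_labels):
        # 1) Adversary step: fit the privacy predictor on (detached) degraded frames.
        with torch.no_grad():
            deg = degrade(frames)
        adv_loss = ce(privacy_net(deg), private_labels)
        opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

        # 2) Degradation + task step: keep the task loss low while *raising*
        #    the adversary's loss on the degraded frames (note the minus sign).
        deg = degrade(frames)
        loss = ce(task_net(deg), task_labels) - alpha * ce(privacy_net(deg), private_labels)
        opt_deg.zero_grad(); loss.backward(); opt_deg.step()

    # Toy usage: a batch of 8 "frames" with 10 task classes and a binary private attribute.
    train_step(torch.randn(8, 3, 32, 32),
               torch.randint(0, 10, (8,)),
               torch.randint(0, 2, (8,)))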
Privacy-Preserving Deep Visual Recognition: An Adversarial Learning Framework and A New Dataset
A unique adversarial training framework is formulated that learns a degradation transform for the original video inputs, explicitly optimizing the trade-off between target-task performance and the associated privacy budget on the degraded video.
Adversarial Learning of Privacy-Preserving and Task-Oriented Representations
This work proposes an adversarial reconstruction learning framework that prevents the latent representations of deep networks from being decoded back into the original input data through model-inversion attacks.
Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images
It is shown that sensitive identification data can be hidden in the sanitized output images of such PP-GANs for later extraction, even allowing reconstruction of the entire input images while still satisfying privacy checks.
On the (Im)Practicality of Adversarial Perturbation for Image Privacy
This paper proposes two practical adversarial perturbation approaches, UEP and k-RTIO, which achieve more than 85% and 90% success against face recognition models, and evaluates them against state-of-the-art online and offline face recognition models (Clarifai.com and DeepFace, respectively).
Deep Poisoning Functions: Towards Robust Privacy-safe Image Data Sharing
This paper presents a new framework for privacy-preserving data sharing that is robust to adversarial attacks and overcomes known issues in previous approaches; its core component is a Deep Poisoning Function (DPF), a module inserted into a pre-trained deep network designed to perform a specific vision task.
AutoGAN-based Dimension Reduction for Privacy Preservation
This paper first introduces a theoretical tool for evaluating dimension-reduction-based privacy-preserving mechanisms, then proposes a non-linear dimension-reduction framework using state-of-the-art neural network structures for privacy preservation.
Privacy Adversarial Network
The privacy adversarial network (PAN) is a novel deep model with a new training algorithm that automatically learns representations from raw data, achieving better utility and better privacy at the same time.
Adversarial Privacy Preservation under Attribute Inference Attack
A novel theoretical framework for privacy preservation under attribute-inference attacks is developed, and an information-theoretic lower bound is proved that precisely characterizes the fundamental trade-off between utility and privacy.
DeepObfuscator: Adversarial Training Framework for Privacy-Preserving Image Classification
An adversarial training framework, DeepObfuscator, is proposed that prevents extracted features from being used to reconstruct raw images or to infer private attributes, while retaining the information needed for the intended cloud service (i.e., image classification).
Visual privacy-preserving level evaluation for multilayer compressed sensing model using contrast and salient structural features
An improved Gaussian random measurement matrix is adopted in the proposed multilayer compressed sensing (MCS) model to realize multilayer image CS and to strike a balance between visual privacy preservation and recognition tasks; the model shows better prediction effectiveness and performance than conventional methods.

References

Showing 1–10 of 71 references
Minimax Filter: Learning to Preserve Privacy from Inference Attacks
  • Jihun Hamm · J. Mach. Learn. Res. · 2017
Experiments with several real-world tasks show that the minimax filter can simultaneously achieve similar or better target-task accuracy and lower inference accuracy, often significantly lower than previous methods.
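Schematically, the minimax objective underlying such a filter (and much of the work above) can be written as follows. This is a generic form with illustrative symbols, not Hamm's exact formulation:

    % Generic utility-privacy minimax (schematic; all symbols illustrative).
    % g: learned filter, h: adversary inferring the private attribute,
    % lambda: trade-off weight. The filter keeps the task loss low while
    % making even the best adversary's privacy loss high.
    \min_{g}\; \Big[\, \mathcal{L}_{\mathrm{task}}(g) \;-\; \lambda \,\min_{h}\, \mathcal{L}_{\mathrm{priv}}(h \circ g) \,\Big]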
Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images
A model is trained to predict user-specific privacy risk and even outperforms the judgment of the users themselves, who often fail to follow their own privacy preferences on image data.
Adversarial Image Perturbation for Privacy Protection: A Game Theory Perspective
A general game-theoretical framework for the user–recogniser dynamics is introduced, and the optimal strategy for the users, which assures an upper bound on the recognition rate independent of the recogniser's countermeasure, is derived.
Protecting Visual Secrets Using Adversarial Nets
This work builds on existing adversarial learning techniques to design a perturbation mechanism that jointly optimizes privacy and utility objectives; it provides a feasibility study of the proposed mechanism along with ideas for developing a privacy framework based on adversarial perturbation.
Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption
The proposed privacy-preserving framework is designed to aggregate multiple classifiers updated locally using private data and to ensure that no private information about the data is exposed during or after the learning procedure; it uses a homomorphic cryptosystem that can aggregate the local classifiers while they remain encrypted and thus secret.
Privacy-Preserving Human Activity Recognition from Extreme Low Resolution
This paper introduces the paradigm of inverse super-resolution (ISR), the concept of learning an optimal set of image transformations to generate multiple low-resolution (LR) training videos from a single video, and experimentally confirms that the paradigm benefits activity recognition from extreme low-resolution videos.
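As a rough illustration of the ISR idea, the sketch below generates several extreme low-resolution variants of a single clip. The random crop/flip transforms and the 16×16 target size are hypothetical stand-ins for the learned transformation set in the paper.

    # Sketch: generate multiple low-resolution training clips from one video,
    # in the spirit of inverse super-resolution (ISR). The transform set here
    # (random flip + random crop before downsampling) is a hypothetical
    # stand-in for the learned transformations.
    import torch
    import torch.nn.functional as F

    def isr_variants(video, n=4, size=16):
        # video: (T, C, H, W) float tensor; returns n low-resolution variants.
        variants = []
        T, C, H, W = video.shape
        ch, cw = int(0.9 * H), int(0.9 * W)  # crop to 90% of the frame
        for _ in range(n):
            clip = video
            if torch.rand(1).item() < 0.5:   # random horizontal flip
                clip = torch.flip(clip, dims=[-1])
            h0 = torch.randint(0, H - ch + 1, (1,)).item()
            w0 = torch.randint(0, W - cw + 1, (1,)).item()
            clip = clip[..., h0:h0 + ch, w0:w0 + cw]
            # downsample to extreme low resolution (frames act as the batch dim)
            variants.append(F.interpolate(clip, size=(size, size),
                                          mode='bilinear', align_corners=False))
        return variants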
Personal privacy vs population privacy: learning to attack anonymization
It is demonstrated that even under differential privacy, such classifiers can be used to accurately infer "private" attributes in realistic data; moreover, the accuracy of inferring private attributes from differentially private data and from l-diverse data can be quite similar.
Learning to Succeed while Teaching to Fail: Privacy in Closed Machine Learning Systems
A system is designed that learns to succeed at the positive task while simultaneously failing at the negative one; it is illustrated with challenging cases in which the negative task being blocked is actually harder than the positive one.
PrivacyCam: a Privacy Preserving Camera Using uCLinux on the Blackfin DSP
It is demonstrated how the practical problem of "privacy invasion" can be successfully addressed with DSP hardware that is small in size and cost-optimized.