Corpus ID: 174800114

Adversarially Learned Representations for Information Obfuscation and Inference

@inproceedings{Bertrn2019AdversariallyLR,
  title={Adversarially Learned Representations for Information Obfuscation and Inference},
  author={Mart{\'i}n Bertr{\'a}n and Natalia Mart{\'i}nez and Afroditi Papadaki and Qiang Qiu and Miguel R. D. Rodrigues and Galen Reeves and Guillermo Sapiro},
  booktitle={ICML},
  year={2019}
}
Data collection and sharing are pervasive aspects of modern society. This process can either be voluntary, as in the case of a person taking a facial image to unlock his/her phone, or incidental, such as traffic cameras collecting video of pedestrians. An undesirable side effect of these processes is that shared data can carry information about attributes that users might consider sensitive, even when such information is of limited use for the task. It is therefore desirable for both data…
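The technique named in the title is adversarial representation learning: an encoder is trained so that a utility task remains predictable from its output while an adversary that tries to infer a sensitive attribute is degraded. Below is a minimal, illustrative PyTorch sketch of that generic objective; the module names, dimensions, and the trade-off weight `lambda_adv` are assumptions for illustration, not the paper's actual architecture or loss.

```python
# Minimal sketch of a generic adversarial information-obfuscation objective
# (illustrative only; not the exact architecture or loss of the paper).
import torch
import torch.nn as nn

class Encoder(nn.Module):                      # produces the shared representation z = E(x)
    def __init__(self, d_in=128, d_z=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_z))
    def forward(self, x):
        return self.net(x)

encoder = Encoder()
utility_head = nn.Linear(32, 10)               # predicts the desired task label from z
adversary_head = nn.Linear(32, 2)              # tries to infer the sensitive attribute from z

opt_enc = torch.optim.Adam(list(encoder.parameters()) + list(utility_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary_head.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lambda_adv = 1.0                               # utility/obfuscation trade-off weight (hypothetical)

def training_step(x, y_task, y_sensitive):
    # 1) adversary step: learn to infer the sensitive attribute from a frozen representation
    z = encoder(x).detach()
    loss_adv = ce(adversary_head(z), y_sensitive)
    opt_adv.zero_grad(); loss_adv.backward(); opt_adv.step()

    # 2) encoder/utility step: keep the task solvable while maximizing the adversary's loss
    z = encoder(x)
    loss_task = ce(utility_head(z), y_task)
    loss_obf = -ce(adversary_head(z), y_sensitive)
    loss = loss_task + lambda_adv * loss_obf
    opt_enc.zero_grad(); loss.backward(); opt_enc.step()
    return loss_task.item(), loss_adv.item()
```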
InfoScrub: Towards Attribute Privacy by Targeted Obfuscation
TLDR: This work proposes a novel image obfuscation framework based on an encoder-decoder architecture, introducing a discriminator that performs bi-directional translation across multiple unpaired domains simultaneously and predicts an image interpolation that maximizes uncertainty over a target set of attributes.
Privacy-Preserving Deep Visual Recognition: An Adversarial Learning Framework and A New Dataset
TLDR: A unique adversarial training framework is formulated that learns a degradation transform for the original video inputs, explicitly optimizing the trade-off between target-task performance and the associated privacy budgets on the degraded video.
Imparting Fairness to Pre-Trained Biased Representations
TLDR: This paper first studies the "linear" form of the adversarial representation learning problem, obtains an exact closed-form expression for its global optima through spectral learning, and extends this solution and analysis to non-linear functions through kernel representations.
Deep fair models for complex data: Graphs labeling and explainable face recognition
TLDR: This work measures fairness according to Demographic Parity, requiring the probability of the model decisions to be independent of the sensitive information, and investigates how to impose this constraint in the different layers of deep neural networks for complex data, with particular reference to deep networks for graph and face recognition.
Adversarial Representation Learning with Closed-Form Solvers
TLDR: The solution, dubbed OptNet-ARL, reduces to a stable one-shot optimization problem that can be solved reliably and efficiently, and can be easily generalized to the case of multiple target tasks and sensitive attributes.
On the Global Optima of Kernelized Adversarial Representation Learning
TLDR: Numerical experiments indicate that the proposed solution is ideal for "imparting" provable invariance to any biased pre-trained data representation, and that the global optima of the "kernel" form can provide a trade-off between utility and invariance comparable to iterative minimax optimization of existing deep-neural-network-based approaches, but with provable guarantees.
Balancing Biases and Preserving Privacy on Balanced Faces in the Wild
TLDR: Demographic biases in the SOTA CNNs used for face recognition are mitigated using a novel domain-adaptation learning scheme on the facial encodings extracted with SOTA deep nets, preserving identity information in the facial features while removing demographic knowledge from the lower-dimensional features.
NoPeek-Infer: Preventing face reconstruction attacks in distributed inference after on-premise training
For models trained on-premise but deployed in a distributed fashion across multiple entities, we demonstrate that minimizing distance correlation between sensitive data such as faces and intermediary …
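The title and summary indicate the mechanism is a distance-correlation penalty between raw sensitive inputs and intermediary activations. Below is an illustrative batch-wise estimator of (squared) distance correlation that could serve as such a penalty; it is a sketch under that assumption, not the paper's exact estimator or training recipe.

```python
# Sketch of a batch-wise (squared) distance-correlation penalty between raw inputs
# and intermediate activations. Illustrative only; not the paper's implementation.
import torch

def distance_correlation_sq(x, y, eps=1e-9):
    """x: (n, dx) raw/sensitive samples, y: (n, dy) intermediary activations."""
    a = torch.cdist(x, x)                                 # pairwise Euclidean distances
    b = torch.cdist(y, y)
    # double-center each distance matrix
    A = a - a.mean(0, keepdim=True) - a.mean(1, keepdim=True) + a.mean()
    B = b - b.mean(0, keepdim=True) - b.mean(1, keepdim=True) + b.mean()
    dcov_xy = (A * B).mean()                              # squared distance covariance
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    return dcov_xy / (torch.sqrt(dvar_x * dvar_y) + eps)  # squared distance correlation

# Used as an extra loss term, e.g.: total = task_loss + alpha * distance_correlation_sq(x, z)
```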
Preserving Privacy in Image-based Emotion Recognition through User Anonymization
TLDR: An adversarial learning problem, implemented with a multitask CNN that minimizes the emotion-classification loss while maximizing the user-identification loss, is formulated; the resulting image transformation produced by the convolutional layer is visually inspected, attesting to the efficacy of the proposed system in preserving emotion-specific information.
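One common way to implement such a "minimize task loss, maximize identification loss" multitask objective in a single CNN is a gradient-reversal layer; the sketch below assumes that mechanism (the paper may realize the adversarial objective differently), and the layer sizes and class counts are placeholders.

```python
# Illustrative gradient-reversal realization of "minimize emotion loss, maximize
# user-identification loss" for the shared features (assumption, not the paper's code).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        # flip the gradient flowing back into the shared features from the identity branch
        return -ctx.lam * grad_out, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

features = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
emotion_head = nn.Linear(128, 7)       # emotion classes (placeholder count)
identity_head = nn.Linear(128, 100)    # user identities (placeholder count)
ce = nn.CrossEntropyLoss()

def loss_fn(x, y_emotion, y_user):
    z = features(x)
    # descent on the emotion loss; the reversed gradient makes the shared features
    # ascend on the identification loss while the identity head itself still learns
    return ce(emotion_head(z), y_emotion) + ce(identity_head(grad_reverse(z)), y_user)
```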
Obfuscation via Information Density Estimation
TLDR: This paper proposes a framework to identify information-leaking features via information density estimation and introduces a novel estimator, named the trimmed information density estimator (TIDE), which is used to implement the mechanism on three real-world datasets.

References

SHOWING 1-10 OF 40 REFERENCES
Adversarial Image Perturbation for Privacy Protection: A Game Theory Perspective
TLDR: A general game-theoretical framework for the user-recogniser dynamics is introduced, and the optimal strategy for the users, which assures an upper bound on the recognition rate independent of the recogniser's countermeasure, is derived.
Privacy-preserving deep learning
  • R. Shokri, Vitaly Shmatikov
  • 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2015
TLDR: This paper presents a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets, and exploits the fact that the optimization algorithms used in modern deep learning, namely those based on stochastic gradient descent, can be parallelized and executed asynchronously.
Fader Networks: Manipulating Images by Sliding Attributes
TLDR: A new encoder-decoder architecture is introduced that is trained to reconstruct images by disentangling the salient information of the image and the values of attributes directly in the latent space; this results in much simpler training schemes and scales nicely to multiple attributes.
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
TLDR: VEEGAN is introduced, featuring a reconstructor network that reverses the action of the generator by mapping from data to noise; it resists mode collapse to a far greater extent than other recent GAN variants and produces more realistic samples.
Protecting Visual Secrets Using Adversarial Nets
TLDR: This work builds on existing adversarial learning work to design a perturbation mechanism that jointly optimizes privacy and utility objectives, and provides a feasibility study of the proposed mechanism along with ideas for developing a privacy framework based on the adversarial perturbation mechanism.
Privacy-preserving Machine Learning through Data Obfuscation
TLDR: This paper proposes a novel and generic methodology to preserve the privacy of training data in machine learning applications and shows that this approach can effectively defeat four existing types of machine-learning privacy attacks at negligible accuracy cost.
Learning Adversarially Fair and Transferable Representations
TLDR: This paper presents the first in-depth experimental demonstration of fair transfer learning and demonstrates empirically that the authors' learned representations admit fair predictions on new tasks while maintaining utility, an essential goal of fair representation learning.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
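For reference, the two-player minimax objective introduced in that framework is

$$\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))].$$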
Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images
TLDR: This work proposes the first sizable dataset of private images "in the wild", annotated with pixel- and instance-level labels across a broad range of privacy classes, and presents the first model for automatic redaction of diverse private information.
Adversarial Discriminative Domain Adaptation
TLDR: It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.