On the Global Optima of Kernelized Adversarial Representation Learning

@article{Sadeghi2019OnTG,
  title={On the Global Optima of Kernelized Adversarial Representation Learning},
  author={Bashir Sadeghi and Runyi Yu and Vishnu Naresh Boddeti},
  journal={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2019},
  pages={7970-7978}
}
Adversarial representation learning is a promising paradigm for obtaining data representations that are invariant to certain sensitive attributes while retaining the information necessary for predicting target attributes. Existing approaches solve this problem through iterative adversarial minimax optimization and lack theoretical guarantees. In this paper, we first study the "linear" form of this problem, i.e., the setting where all the players are linear functions. We show that the resulting…
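To make the spectral idea in the abstract concrete, here is a minimal numerical sketch (not the paper's exact formulation) of a linear adversarial representation learning problem solved in closed form by an eigendecomposition: directions of the input that predict the target are traded off against directions that predict the sensitive attribute, and the encoder is read off from the top eigenvectors. The least-squares relevance matrices, the whitening constraint, and the trade-off weight `lam` are illustrative assumptions.

```python
# Illustrative sketch only: a linear encoder obtained spectrally by trading off
# target-relevant directions against sensitive-attribute-relevant directions.
# The objective, constraint, and trade-off weight `lam` are assumptions for
# illustration, not the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 500, 10, 2                              # samples, input dim, embedding dim
X = rng.standard_normal((n, d))
Y = X @ rng.standard_normal((d, 1)) + 0.1 * rng.standard_normal((n, 1))  # target attribute
S = X @ rng.standard_normal((d, 1)) + 0.1 * rng.standard_normal((n, 1))  # sensitive attribute

def whiten(X, eps=1e-6):
    """Center X and return a transform W with W Sigma_x W^T = I."""
    Xc = X - X.mean(0)
    Sx = Xc.T @ Xc / len(Xc) + eps * np.eye(X.shape[1])
    return Xc, np.linalg.inv(np.linalg.cholesky(Sx))

def relevance(Xc, W, T):
    """Energy (in whitened coordinates) of the X-directions that predict T."""
    M = W @ (Xc.T @ (T - T.mean(0)) / len(Xc))
    return M @ M.T

Xc, W = whiten(X)
lam = 0.5                                         # utility-vs-invariance trade-off (assumed)
J = (1 - lam) * relevance(Xc, W, Y) - lam * relevance(Xc, W, S)
_, U = np.linalg.eigh(J)                          # spectral step: solution from eigenvectors
E = W.T @ U[:, -r:]                               # top-r directions give the linear encoder
Z = Xc @ E                                        # keeps Y-information, suppresses S-information
```

Increasing `lam` suppresses more of the sensitive-attribute information at the cost of target utility, which is the trade-off the adversarial formulations listed below negotiate iteratively.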
Imparting Fairness to Pre-Trained Biased Representations
TLDR
This paper first studies the "linear" form of the adversarial representation learning problem, obtains an exact closed-form expression for its global optima through spectral learning, and extends this solution and analysis to non-linear functions through a kernel representation.
Adversarial Representation Learning with Closed-Form Solvers
TLDR
The solution, dubbed OptNet-ARL, reduces to a stable one-shot optimization problem that can be solved reliably and efficiently, and can be easily generalized to the case of multiple target tasks and sensitive attributes.
Learning Unbiased Representations via Rényi Minimization
TLDR
This paper proposes an adversarial algorithm to learn unbiased representations via the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation coefficient, leveraging recent work on estimating this coefficient with learned deep neural network transformations to penalize the intrinsic bias in a multi-dimensional latent representation.
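The paper above estimates HGR with learned neural-network transformations; as a self-contained illustration of the coefficient being penalized (not of that estimator), the sketch below computes HGR exactly for two discrete variables using the classical characterization as the second-largest singular value of the normalized joint-distribution matrix Q[x, y] = P(x, y) / sqrt(P(x) P(y)). The toy data are assumptions.

```python
# Exact HGR maximal correlation for two discrete samples, via the SVD of the
# normalized joint-distribution matrix. Illustrates the quantity the paper
# penalizes; the neural estimator used in the paper is not reproduced here.
import numpy as np

def hgr_discrete(x, y):
    """Empirical HGR maximal correlation between two discrete samples."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    P = np.zeros((len(xs), len(ys)))
    np.add.at(P, (xi, yi), 1.0)                  # empirical joint counts
    P /= P.sum()
    px, py = P.sum(1), P.sum(0)
    Q = P / np.sqrt(np.outer(px, py) + 1e-12)    # Q[x, y] = P(x, y) / sqrt(P(x) P(y))
    s = np.linalg.svd(Q, compute_uv=False)
    return s[1] if len(s) > 1 else 0.0           # s[0] == 1 corresponds to constant functions

rng = np.random.default_rng(0)
x = rng.integers(0, 4, 10_000)
y = (x + rng.integers(0, 2, 10_000)) % 4         # y depends on x, so HGR is well above 0
print(hgr_discrete(x, y))
```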
On the Fundamental Trade-offs in Learning Invariant Representations
TLDR
This paper identifies and determines two fundamental trade-offs between utility and semantic dependence induced by the statistical dependencies between the data and its corresponding target and semantic attributes.
A Theoretical View of Adversarial Domain Generalization in the Hierarchical Model Setting
In many contexts, such as medical forecasting, domain generalization from studies in populous areas (where data are plentiful), to geographically remote populations (for which no training data exist)…
Bias-Resilient Neural Network
TLDR
A method based on adversarial training that learns discriminative features unbiased by, and invariant to, the confounder(s), by incorporating a new adversarial loss function that encourages a vanishing correlation between the bias and the learned features.
NoPeek-Infer: Preventing face reconstruction attacks in distributed inference after on-premise training
For models trained on-premise but deployed in a distributed fashion across multiple entities, we demonstrate that minimizing distance correlation between sensitive data such as faces and intermediary…
Training confounder-free deep learning models for medical applications
TLDR
This article introduces an end-to-end approach for deriving features invariant to confounding factors while accounting for intrinsic correlations between the confounder(s) and prediction outcome, exploiting concepts from traditional statistical methods and recent fair machine learning schemes.
NoPeek: Information leakage reduction to share activations in distributed deep learning
TLDR
This work demonstrates how minimizing distance correlation between raw data and intermediary representations reduces leakage of sensitive raw data patterns across client communications while maintaining model accuracy in distributed deep learning services.
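The two NoPeek entries above both penalize distance correlation between raw data and intermediary activations. As a grounded illustration of the statistic being minimized (not of the papers' training procedure), the sketch below computes the sample distance correlation of Székely et al.; the toy batches standing in for raw data and activations are assumptions.

```python
# Sample distance correlation (Székely et al.) between two row-matched batches.
# Illustrates the statistic the NoPeek papers minimize; the distributed training
# setup and loss weighting from those papers are not reproduced here.
import numpy as np

def distance_correlation(X, Y):
    """Distance correlation between samples X (n x p) and Y (n x q)."""
    def centered_dists(Z):
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)   # pairwise distances
        return D - D.mean(0, keepdims=True) - D.mean(1, keepdims=True) + D.mean()
    A, B = centered_dists(X), centered_dists(Y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 32))                   # stand-in for raw inputs (e.g., faces)
Z = np.tanh(X @ rng.standard_normal((32, 8)))        # stand-in for intermediary activations
print(distance_correlation(X, Z))                    # high value means the activations leak X
```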

References

SHOWING 1-10 OF 33 REFERENCES
Controllable Invariance through Adversarial Feature Learning
TLDR
This paper shows that the proposed framework induces an invariant representation, and leads to better generalization evidenced by the improved performance on three benchmark tasks.
Invariant Representations without Adversarial Training
TLDR
It is shown that adversarial training is unnecessary and sometimes counter-productive; this work casts invariant representation learning as a single information-theoretic objective that can be directly optimized.
Censoring Representations with an Adversary
TLDR
This work formulates the adversarial model as a minimax problem, optimizes that objective with a stochastic alternating min-max optimizer, demonstrates the ability to provide discriminant-free representations on standard test problems, and compares against previous state-of-the-art methods for fairness.
Mitigating Information Leakage in Image Representations: A Maximum Entropy Approach
  • P. Roy, Vishnu Naresh Boddeti • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
TLDR
Numerical experiments indicate that the proposed approach is able to learn image representations that exhibit high task performance while mitigating leakage of predefined sensitive information.
Adversarial Discriminative Domain Adaptation
TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
TLDR
An adversarial training procedure is used to remove information about the sensitive attribute from the latent representation learned by a neural network, and the data distribution empirically drives the adversary's notion of fairness.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
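For reference, the two-player game this paper sets up between the generative model G and a discriminative model D (cut off in the snippet above) is the minimax value function

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\!\left(1 - D(G(z))\right)\right]
```

which is roughly the template that the adversarial representation learning methods in this list adapt, with an encoder in place of the generator and an adversary predicting the sensitive attribute in place of the discriminator.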
Gradient descent GAN optimization is locally stable
TLDR
This paper analyzes the "gradient descent" form of GAN optimization, i.e., the natural setting where small gradient steps are taken simultaneously in both generator and discriminator parameters, and proposes an additional regularization term for gradient-descent GAN updates that guarantees local stability for both the WGAN and the traditional GAN.
Adversarially Learned Representations for Information Obfuscation and Inference
TLDR
This work takes an information-theoretic approach, implemented as an unconstrained adversarial game between deep neural networks in a principled, data-driven manner, which enables learning domain-preserving stochastic transformations that maintain performance for existing algorithms while minimizing leakage of sensitive information.
Domain-Adversarial Training of Neural Networks
TLDR
A new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions, which can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.