Corpus ID: 12519545

Learning to Pivot with Adversarial Networks

@inproceedings{Louppe2017LearningTP,
  title={Learning to Pivot with Adversarial Networks},
  author={Gilles Louppe and Michael Kagan and Kyle Cranmer},
  booktitle={NIPS},
  year={2017}
}
Several techniques for domain adaptation have been proposed to account for differences in the distribution of the data used for training and testing. The majority of this work focuses on a binary domain label. Similar problems occur in a scientific context where there may be a continuous family of plausible data generation processes associated with the presence of systematic uncertainties. Robust inference is possible if it is based on a pivot -- a quantity whose distribution does not depend on…
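As a rough illustration of the technique the abstract describes, here is a minimal adversarial-pivoting training loop in PyTorch. This is not the authors' code: the architectures, the penalty weight `lam`, and the alternating update schedule are all illustrative assumptions, and the nuisance parameter `z` is treated as a continuous regression target for the adversary.

```python
# Sketch of adversarial pivoting: a classifier f is penalized whenever an
# adversary r can recover the nuisance parameter z from f's output.
import torch
import torch.nn as nn

clf = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))  # f: features -> logit
adv = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))   # r: f(x) -> nuisance guess

opt_f = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_r = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce, mse, lam = nn.BCEWithLogitsLoss(), nn.MSELoss(), 10.0  # lam: illustrative trade-off

def training_step(x, y, z):
    # x: (N, 10) features; y: (N, 1) float labels; z: (N, 1) float nuisance values
    # 1) adversary step: r tries to recover z from f's (detached) output
    opt_r.zero_grad()
    loss_r = mse(adv(clf(x).detach()), z)
    loss_r.backward(); opt_r.step()
    # 2) classifier step: minimize L_f - lam * L_r, i.e. classify well while
    #    making the adversary's job as hard as possible
    opt_f.zero_grad()
    out = clf(x)
    loss_f = bce(out, y) - lam * mse(adv(out), z)
    loss_f.backward(); opt_f.step()
```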


Generative Adversarial Networks for Mitigating Biases in Machine Learning Systems
TLDR
Experimental results show that the proposed solution can efficiently mitigate different types of biases, while at the same time enhancing the prediction accuracy of the underlying machine learning model.
DisCo Fever: Robust Networks Through Distance Correlation
TLDR
A new method is presented, based on a novel application of “distance correlation” (DisCo), a measure quantifying non-linear correlations; it matches the performance of state-of-the-art adversarial decorrelation networks while being much simpler to train and having better convergence properties.
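To make the penalty concrete, here is a sketch of a batch-level distance-correlation term in PyTorch. It is a plain, biased estimator under illustrative assumptions (no bias correction, a fixed epsilon for numerical stability), not the paper's implementation:

```python
import torch

def distance_correlation(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Biased empirical distance correlation between 1-D samples a and b."""
    A = torch.cdist(a[:, None], a[:, None])  # pairwise distances |a_i - a_j|
    B = torch.cdist(b[:, None], b[:, None])
    # double-center each distance matrix
    A = A - A.mean(dim=0, keepdim=True) - A.mean(dim=1, keepdim=True) + A.mean()
    B = B - B.mean(dim=0, keepdim=True) - B.mean(dim=1, keepdim=True) + B.mean()
    dcov2 = (A * B).mean()                   # squared distance covariance
    dvar2_a, dvar2_b = (A * A).mean(), (B * B).mean()
    dcorr2 = dcov2 / (torch.sqrt(dvar2_a * dvar2_b) + 1e-12)
    return dcorr2.clamp(min=0.0).sqrt()
```

In a decorrelation setup one would add something like `kappa * distance_correlation(classifier_output, protected_variable)` to the classification loss, with `kappa` a hypothetical trade-off weight.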
Learning Unbiased Representations via Rényi Minimization
TLDR
This paper proposes an adversarial algorithm to learn unbiased representations via the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation coefficient, and leverages recent work on estimating this coefficient with learned deep neural network transformations to penalize the intrinsic bias in a multi-dimensional latent representation.
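For reference, the HGR coefficient that this line of work estimates is the supremum of the Pearson correlation over (measurable, finite-variance) transformations of the two variables, approximated in practice by neural networks f and g:

```latex
\mathrm{HGR}(U, V) \;=\; \sup_{f,\, g}\ \rho\bigl(f(U),\, g(V)\bigr)
```

HGR takes values in [0, 1] and vanishes exactly when U and V are independent, which is what makes it usable as a fairness or decorrelation penalty.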
Controllable Invariance through Adversarial Feature Learning
TLDR
This paper shows that the proposed framework induces an invariant representation and leads to better generalization, as evidenced by the improved performance on three benchmark tasks.
Invariant Representations from Adversarially Censored Autoencoders
TLDR
This paper combines conditional variational autoencoders (VAE) with adversarial censoring in order to learn invariant representations that are disentangled from nuisance/sensitive variations and shows this natural approach is theoretically well-founded with information-theoretic arguments.
Improving robustness of jet tagging algorithms with adversarial training
TLDR
This work examines the relationship between performance and vulnerability, presents an adversarial training strategy that mitigates the impact of such simulated attacks and improves classifier robustness, and shows that this method is a promising approach to reducing vulnerability to poor modeling.
Robust Jet Classifiers through Distance Correlation
TLDR
A new method is presented, based on a novel application of "distance correlation," a measure quantifying nonlinear correlations; it matches the performance of state-of-the-art adversarial decorrelation networks while being much simpler and more stable to train.
Adversarially-trained autoencoders for robust unsupervised new physics searches
TLDR
It is proposed to combine the autoencoder with an adversarial neural network to remove its sensitivity to the smearing of the final-state objects, and it is shown that one can achieve robust anomaly detection despite this smearing.
Fairness-Aware Neural Rényi Minimization for Continuous Features
TLDR
The objective in this paper is to ensure some independence level between the outputs of regression models and any given continuous sensitive variables, using the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation coefficient as a fairness metric.
ZK-GanDef: A GAN Based Zero Knowledge Adversarial Training Defense for Neural Networks
TLDR
A generative adversarial net (GAN) based zero-knowledge adversarial training defense, dubbed ZK-GanDef, is proposed; it is not only efficient in training but also adaptive to new adversarial examples, at the cost of a small degradation in test accuracy compared to full-knowledge approaches.
...

References

Showing 1-10 of 43 references
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
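The two-player game in that paper is usually written as the minimax value function below, with D and G as defined in the abstract above:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```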
Censoring Representations with an Adversary
TLDR
This work formulates the adversarial model as a minimax problem, optimizes that minimax objective using a stochastic gradient alternating min-max optimizer, demonstrates the ability to provide discrimination-free representations on standard test problems, and compares with previous state-of-the-art methods for fairness.
The Variational Fair Autoencoder
TLDR
This model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation, and is more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations.
Domain-Adversarial Neural Networks
TLDR
A new neural network learning algorithm suited to the context of domain adaptation, in which data at training and test time come from similar but different distributions; it performs better than either a standard neural network or an SVM.
Unsupervised Domain Adaptation by Backpropagation
TLDR
The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on Office datasets.
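The key mechanism behind this method is the gradient reversal layer (GRL): an identity map on the forward pass whose gradient is negated (and scaled) on the backward pass, so the feature extractor is pushed to confuse a domain classifier. A minimal PyTorch sketch, following the original formulation but not taken from the authors' code:

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)          # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # flip and scale the gradient flowing back into the feature extractor
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```

A hypothetical usage would be `domain_logits = domain_head(grad_reverse(features, lam))`, with `lam` ramped up from zero over the course of training as in the paper.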
Decorrelated jet substructure tagging using adversarial neural networks
TLDR
It is shown that in the presence of systematic uncertainties on the background rate, the adversarially trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having slightly worse signal-background separation power.
Unsupervised Domain Adaptation by Domain Invariant Projection
TLDR
This paper learns a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized and demonstrates the effectiveness of the approach on the task of visual object recognition.
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
Connecting the Dots with Landmarks: Discriminatively Learning Domain-Invariant Features for Unsupervised Domain Adaptation
TLDR
This paper automatically discovers the existence of landmarks and uses them to bridge the source to the target by constructing provably easier auxiliary domain adaptation tasks, and shows how this composition can be optimized discriminatively without requiring labels from the target domain.
Domain Adaptation via Transfer Component Analysis
TLDR
This work proposes a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation, with both unsupervised and semi-supervised feature extraction approaches that can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
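The domain distance that TCA-style methods reduce is the Maximum Mean Discrepancy (MMD) between source and target samples. Below is a sketch of the biased empirical estimator with an RBF kernel; the bandwidth `sigma` is an illustrative assumption, and TCA itself works with a kernel matrix in closed form rather than this direct estimate:

```python
import torch

def mmd2(xs: torch.Tensor, xt: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased empirical squared MMD between source xs (n, d) and target xt (m, d)."""
    def k(a, b):  # RBF kernel matrix
        return torch.exp(-torch.cdist(a, b).pow(2) / (2.0 * sigma ** 2))
    return k(xs, xs).mean() + k(xt, xt).mean() - 2.0 * k(xs, xt).mean()
```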
...