Corpus ID: 202577978

Wasserstein Diffusion Tikhonov Regularization

@article{Lin2019WassersteinDT,
  title={Wasserstein Diffusion Tikhonov Regularization},
  author={Alex Tong Lin and Yonatan Dukler and Wuchen Li and Guido Mont{\'u}far},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.06860}
}
We propose regularization strategies for learning discriminative models that are robust to in-class variations of the input data. We use the Wasserstein-2 geometry to capture semantically meaningful neighborhoods in the space of images, and define a corresponding input-dependent additive noise data augmentation model. Expanding and integrating the augmented loss yields an effective Tikhonov-type Wasserstein diffusion smoothness regularizer. This approach allows us to apply high levels of… 
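The expansion step the abstract compresses can be sketched as follows; the notation ($\varepsilon$, $\sigma$, $\Sigma(x)$, $\ell$) is illustrative rather than taken from the paper, a minimal reading of "expanding and integrating the augmented loss":

```latex
% Hedged sketch: second-order expansion of the noise-augmented loss, with an
% input-dependent covariance Sigma(x) built from the Wasserstein-2 geometry.
% The zero-mean linear term drops out under the expectation.
\mathbb{E}_{\varepsilon \sim \mathcal{N}(0,\,\sigma^2 \Sigma(x))}
  \big[\ell(f(x+\varepsilon),\, y)\big]
\;\approx\;
\ell(f(x),\, y)
\;+\; \frac{\sigma^2}{2}\,
  \operatorname{tr}\!\big(\Sigma(x)\, \nabla_x^2\, \ell(f(x),\, y)\big).
```

For a squared loss, the trace term reduces at leading order to a weighted gradient penalty of the form $\tfrac{\sigma^2}{2}\,\|\nabla_x f(x)\|^2_{\Sigma(x)}$, which is one way to read the Tikhonov-type regularizer the abstract alludes to.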

Citations

On the human-recognizability phenomenon of adversarially trained deep image classifiers
TLDR
This work demonstrates that state-of-the-art methods for adversarial training incorporate two terms – one that orients the decision boundary by minimizing the expected loss, and another that induces smoothness of the classifier’s decision surface by penalizing the local Lipschitz constant.
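A minimal sketch of the second term, assuming a PyTorch-style setup; `model`, `lam`, and the function name are placeholders rather than anything from the cited paper:

```python
# Hedged sketch: penalize the norm of the loss gradient w.r.t. the input as a
# proxy for the local Lipschitz constant of the classifier's loss surface.
import torch
import torch.nn.functional as F

def lipschitz_regularized_loss(model, x, y, lam=0.1):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # create_graph=True so the penalty itself can be backpropagated through.
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad_x.flatten(1).norm(dim=1).mean()
    return loss + lam * penalty
```

Minimizing `loss + lam * penalty` realizes exactly the two-term structure described above: the first term orients the decision boundary, the second smooths it.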
Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks
TLDR
This work proposes to capture correlations within gradients of the loss function with respect to the input images via a Gaussian Markov random field (GMRF), and shows that the covariance structure can be efficiently represented using the Fast Fourier Transform (FFT), along with low-rank updates to perform exact posterior estimation under this model.
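To make the FFT idea concrete, here is a hedged NumPy sketch of sampling a stationary Gaussian field by colouring white noise in the Fourier domain; the squared-exponential spectrum is an illustrative choice, not the covariance the paper fits:

```python
import numpy as np

def sample_stationary_field(h, w, length_scale=8.0, rng=None):
    """Sample one h-by-w draw from a stationary Gaussian field via FFT."""
    rng = np.random.default_rng() if rng is None else rng
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Illustrative squared-exponential power spectrum (not the fitted GMRF).
    spectrum = np.exp(-2.0 * (np.pi * length_scale) ** 2 * (fy**2 + fx**2))
    white = rng.standard_normal((h, w))
    # Colour the white noise by the square root of the spectrum.
    field = np.fft.ifft2(np.sqrt(spectrum) * np.fft.fft2(white)).real
    return field / field.std()
```

Because a stationary covariance is diagonalized by the FFT, sampling and likelihood evaluations cost O(hw log hw) rather than the cubic cost of a dense covariance matrix.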

References

Showing 1–10 of 41 references
Wasserstein of Wasserstein Loss for Learning Generative Models
TLDR
The Wasserstein distance serves as a loss function for unsupervised learning that depends on the choice of a ground metric on sample space; the new formulation is more robust to the natural variability of images and yields a more continuous discriminator in sample space.
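Schematically, and with notation that is mine rather than the paper's, the construction nests one optimal transport problem inside another:

```latex
% Outer Wasserstein loss between real and generated distributions mu, nu,
% whose ground metric is itself W_2 between images viewed as normalized
% densities on the pixel grid (schematic form).
W_{W_2}(\mu, \nu) \;=\; \inf_{\pi \in \Pi(\mu, \nu)}
  \int W_2(x, y)\, \mathrm{d}\pi(x, y).
```

The inner $W_2$ is what makes the loss insensitive to small spatial deformations of individual images.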
Wasserstein Proximal of GANs
TLDR
A new method for training generative adversarial networks is introduced that applies the Wasserstein-2 metric proximal on the generators; it defines a parametrization-invariant natural gradient by pulling back optimal transport structures from probability space to parameter space.
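In symbols, a proximal step of this kind can be written as follows; $F$, $\rho_\theta$, and $\lambda$ are generic names, and this is a schematic rather than the paper's exact scheme:

```latex
% One Wasserstein-2 proximal update of the generator parameters: descend the
% GAN objective F while staying close, in W_2, to the previous generated
% distribution rho_{theta^k}.
\theta^{k+1} \;=\; \operatorname*{arg\,min}_{\theta}\;
  F(\theta) \;+\; \frac{1}{2\lambda}\, W_2^2\big(\rho_\theta,\, \rho_{\theta^k}\big).
```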
Improved robustness to adversarial examples using Lipschitz regularization of the loss
TLDR
This work augments adversarial training (AT) with worst-case adversarial training (WCAT), which improves adversarial robustness by 11% over the current state-of-the-art result in the $\ell_2$ norm on CIFAR-10, and obtains verifiable average-case and worst-case robustness guarantees.
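The link between worst-case training and gradient penalties is a first-order expansion; this schematic form is standard, though the paper's exact formulation may differ:

```latex
% Linearizing the worst case over an epsilon-ball yields the loss plus
% epsilon times the gradient norm (dual norm of the perturbation norm).
\max_{\|\delta\|_2 \le \epsilon} \ell(x + \delta)
\;\approx\; \ell(x) \;+\; \epsilon\, \|\nabla_x \ell(x)\|_2 .
```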
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
TLDR
A new threat model for adversarial attacks based on the Wasserstein distance is proposed and shown to successfully attack image classification models; PGD-based adversarial training is demonstrated to improve adversarial accuracy under this threat model to 76%.
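For reference, the plain entropy-regularized Sinkhorn iteration at the core of the method looks as follows; the projection onto a Wasserstein ball adds steps not shown here, and all names are generic:

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Approximate OT plan between histograms a (n,) and b (m,), cost C (n, m)."""
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):      # alternating scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan diag(u) K diag(v)
```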
Wasserstein Distributional Robustness and Regularization in Statistical Learning
TLDR
A broad class of loss functions is identified for which the Wasserstein DRSO is asymptotically equivalent to a regularization problem with a gradient-norm penalty, suggesting a principled way to regularize high-dimensional, non-convex problems.
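A schematic statement of that equivalence, with illustrative notation and regularity conditions omitted:

```latex
% Robust risk over a Wasserstein ball of radius delta around the empirical
% measure behaves, as delta -> 0, like empirical risk plus a gradient-norm
% penalty (||.||_* is the dual of the transport cost norm).
\sup_{Q:\, W(Q, \widehat{P}_n) \le \delta} \mathbb{E}_Q[\ell]
\;\approx\; \mathbb{E}_{\widehat{P}_n}[\ell]
\;+\; \delta\, \mathbb{E}_{\widehat{P}_n}\big[ \|\nabla \ell\|_{*} \big].
```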
Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization
TLDR
This work improves the robustness of deep neural nets to adversarial attacks by using an interpolating function as the output activation, and combines this data-dependent activation with total variation minimization on adversarial images and training data augmentation.
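A minimal sketch of the total variation minimization component, using gradient descent on a smoothed TV term plus an L2 fidelity term; parameters are illustrative, and the data-dependent activation part of the defense is not shown:

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.2, n_iters=100, eps=1e-6):
    """Approximately minimize 0.5*||u - img||^2 + lam * TV(u)."""
    u = img.copy()
    for _ in range(n_iters):
        dx = np.diff(u, axis=1, append=u[:, -1:])
        dy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)   # smoothed gradient magnitude
        px, py = dx / mag, dy / mag
        # Discrete divergence of the normalized gradient field (TV subgradient).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * ((u - img) - lam * div)
    return u
```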
Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization
TLDR
This work suggests a theoretically inspired novel approach to improving a network's robustness using the Frobenius norm of its Jacobian, applied as post-processing after regular training has finished, and demonstrates empirically that it leads to enhanced robustness with minimal change in the original network's accuracy.
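The Frobenius norm of the Jacobian can be estimated without ever forming the Jacobian, via random vector-Jacobian products; a hedged sketch in the spirit of, but not necessarily identical to, the cited recipe:

```python
import torch

def jacobian_frobenius_penalty(model, x, n_proj=1):
    """Stochastic estimate of ||J||_F^2 using E_v ||v^T J||^2 = ||J||_F^2."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    total = 0.0
    for _ in range(n_proj):
        v = torch.randn_like(out)            # v ~ N(0, I) in output space
        (vjp,) = torch.autograd.grad(out, x, grad_outputs=v,
                                     create_graph=True, retain_graph=True)
        total = total + vjp.flatten(1).pow(2).sum(dim=1).mean()
    return total / n_proj
```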
Universal Adversarial Perturbations
TLDR
The surprising existence of universal perturbations reveals important geometric correlations in the high-dimensional decision boundaries of classifiers, and points to a potential security breach: single directions in input space that adversaries can exploit to break a classifier on most natural images.
Parseval Networks: Improving Robustness to Adversarial Examples
TLDR
It is shown that Parseval networks match the state-of-the-art in terms of accuracy on CIFAR-10/100 and Street View House Numbers while being more robust than their vanilla counterparts against adversarial examples.
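The orthonormality constraint can be maintained with a cheap retraction after each gradient step; this sketch follows the commonly cited update $W \leftarrow (1+\beta)W - \beta W W^\top W$, with `beta` a small illustrative coefficient:

```python
import torch

@torch.no_grad()
def parseval_retraction(weight, beta=1e-3):
    """One step pulling the (flattened) weight matrix toward W W^T = I."""
    w = weight.view(weight.size(0), -1)
    w.copy_((1 + beta) * w - beta * (w @ w.t() @ w))
    return weight
```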
Regularization via Mass Transportation
TLDR
This paper introduces new regularization techniques using ideas from distributionally robust optimization and gives new probabilistic interpretations to existing techniques, minimizing the worst-case expected loss over the ball of all distributions within a bounded transportation distance of the empirical distribution.
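For linear hypotheses and a Lipschitz loss, the worst case over the transportation ball is known to collapse to an explicit penalty; a schematic statement, with side conditions omitted and notation mine:

```latex
% Distributionally robust risk over a type-1 Wasserstein ball equals the
% empirical risk plus a dual-norm penalty on the weights (schematic; label
% uncertainty and regularity conditions omitted).
\sup_{Q:\, W_1(Q, \widehat{P}_n) \le \delta}
  \mathbb{E}_Q\big[ h(\langle w, x \rangle) \big]
\;=\;
\mathbb{E}_{\widehat{P}_n}\big[ h(\langle w, x \rangle) \big]
\;+\; \delta\, \mathrm{Lip}(h)\, \|w\|_{*}.
```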
...