Corpus ID: 246485553

Certifying Out-of-Domain Generalization for Blackbox Functions

@inproceedings{Weber2022CertifyingOG,
  title={Certifying Out-of-Domain Generalization for Blackbox Functions},
  author={Maurice Weber and Linyi Li and Boxin Wang and Zhikuan Zhao and Bo Li and Ce Zhang},
  booktitle={ICML},
  year={2022}
}
Certifying the robustness of model performance under bounded data distribution drifts has recently attracted intensive interest under the umbrella of distributional robustness. However, existing techniques either make strong assumptions on the model class and loss functions that can be certified, such as smoothness expressed via Lipschitz continuity of gradients, or require solving complex optimization problems. As a result, the wider application of these techniques is currently limited by… 
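Although the abstract does not spell out the formal objective, the certification problem it refers to can be stated in a generic distributional-robustness form: bound the worst-case risk over a ball of shifted distributions. The display below is an illustrative sketch of that form only; the particular distance d and the construction of the bound B are assumptions here, not a statement of this paper's method.

```latex
\[
  \sup_{Q \,:\, d(P,Q) \le \epsilon} \; \mathbb{E}_{x \sim Q}\!\left[\ell\bigl(f(x)\bigr)\right] \;\le\; B(\epsilon)
\]
% f: blackbox model, \ell: loss, P: training (reference) distribution,
% Q: any shifted distribution within drift budget \epsilon under distance d,
% B(\epsilon): the certified upper bound on out-of-domain risk.
```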

Citations

On Certifying and Improving Generalization to Unseen Domains

TLDR
This work demonstrates the effectiveness of a universal certification framework based on distributionally robust optimization (DRO) that enables a data-independent evaluation of a DG method, complementary to the empirical evaluations on benchmark datasets, and proposes a training algorithm that can be used with any DG method to provably improve its certified performance.

Certifying Some Distributional Fairness with Subpopulation Decomposition

TLDR
This paper formulates the certified fairness of an ML model trained on a given data distribution as an optimization problem based on the model's performance loss bound on a fairness-constrained distribution that lies within a bounded distributional distance of the training distribution.

SoK: Certified Robustness for Deep Neural Networks

TLDR
This paper provides a taxonomy of robustness verification and training approaches, along with an open-sourced unified platform to evaluate 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.

GeoECG: Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction

TLDR
This paper proposes a physiologically-inspired data augmentation method to improve performance and increase the robustness of heart disease detection based on ECG signals, and designs a ground metric that recognizes the difference between ECG signals based on physiologically determined features.

References

SHOWING 1-10 OF 55 REFERENCES

Distributional Robustness with IPMs and links to Regularization and GANs

TLDR
The results intimately link GANs to distributional robustness, extend previous results on DRO and contribute to the understanding of the link between regularization and robustness at large.

Certifiable Distributional Robustness with Principled Adversarial Training

TLDR
This work provides a training procedure that augments model parameter updates with worst-case perturbations of training data, obtained from a Lagrangian penalty formulation of perturbations of the underlying data distribution within a Wasserstein ball.
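As a concrete illustration of the Lagrangian-penalty idea summarized above, here is a minimal PyTorch-style sketch of the inner maximization that replaces training inputs with worst-case perturbations; the function name `wrm_inner_max` and the hyperparameters `gamma`, `steps`, and `lr` are illustrative assumptions, not the reference implementation.

```python
import torch

def wrm_inner_max(model, loss_fn, x, y, gamma=1.0, steps=15, lr=0.1):
    """Approximate the worst-case perturbation of a batch by gradient ascent on
    loss(model(z), y) - gamma * ||z - x||^2, a Lagrangian relaxation of the
    Wasserstein-ball constraint."""
    z = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        penalty = gamma * ((z - x) ** 2).flatten(1).sum(dim=1).mean()
        objective = loss_fn(model(z), y) - penalty
        grad, = torch.autograd.grad(objective, z)
        with torch.no_grad():
            z += lr * grad  # ascent step on the penalized objective
    return z.detach()

# The outer loop then takes ordinary gradient steps on loss_fn(model(z_adv), y),
# where z_adv = wrm_inner_max(model, loss_fn, x, y).
```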

Scaling provable adversarial defenses

TLDR
This paper presents a technique for extending these training procedures to much more general networks, with skip connections and general nonlinearities, and shows how to further reduce robust error through cascade models.

Lipschitz regularity of deep neural networks: analysis and efficient estimation

TLDR
This paper provides AutoLip, the first generic algorithm for upper bounding the Lipschitz constant of any automatically differentiable function, and proposes an improved algorithm named SeqLip that takes advantage of the linear computation graph to split the computation per pair of consecutive layers.
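For a plain feed-forward network with 1-Lipschitz activations, the simplest such upper bound is the product of the layers' spectral norms, which is the loose baseline that AutoLip and SeqLip improve on. The NumPy sketch below illustrates only that baseline; the `weights` argument and the example shapes are assumptions for the demo.

```python
import numpy as np

def product_of_spectral_norms(weights):
    """Upper-bound the Lipschitz constant of x -> W_L s(... s(W_1 x) ...) by the
    product of the layers' largest singular values, assuming the activations s
    are 1-Lipschitz (e.g. ReLU)."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # spectral norm = largest singular value
    return bound

# Illustrative usage with random layer weights:
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 32)), rng.standard_normal((10, 64))]
print(product_of_spectral_norms(layers))
```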

Certified Robustness to Adversarial Examples with Differential Privacy

TLDR
This paper presents the first certified defense that both scales to large networks and datasets and applies broadly to arbitrary model types, based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism.
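The mechanism summarized above certifies predictions by making them a noisy, differentially private function of the input. Below is a minimal sketch of the noise-and-average prediction step; the noise scale `sigma` and sample count `n_samples` are illustrative assumptions, and the actual calibration of the noise to a DP guarantee is what the paper works out.

```python
import torch

def smoothed_prediction(model, x, sigma=0.25, n_samples=100):
    """Average softmax outputs of the model over Gaussian-perturbed copies of x.
    Certification then argues that this expectation cannot change much under a
    small input perturbation, via a differential-privacy-style argument."""
    avg_probs = 0.0
    with torch.no_grad():
        for _ in range(n_samples):
            noisy_x = x + sigma * torch.randn_like(x)
            avg_probs = avg_probs + torch.softmax(model(noisy_x), dim=-1)
    return avg_probs / n_samples
```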

Generalised Lipschitz Regularisation Equals Distributional Robustness

TLDR
A very general equality result is given regarding the relationship between distributional robustness and regularisation, as defined with a transportation cost uncertainty set, to certify the robustness properties of a Lipschitz-regularised model with very mild assumptions.

Regularization via Mass Transportation

TLDR
This paper introduces new regularization techniques based on distributionally robust optimization and gives new probabilistic interpretations to existing techniques; both minimize the worst-case expected loss, where the worst case is taken over the ball of all distributions within a bounded transportation distance of the empirical distribution.

Parseval Networks: Improving Robustness to Adversarial Examples

TLDR
It is shown that Parseval networks match the state-of-the-art in terms of accuracy on CIFAR-10/100 and Street View House Numbers while being more robust than their vanilla counterpart against adversarial examples.
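The "Parseval" property referred to above is approximate orthonormality of each layer's weight matrix, which keeps per-layer Lipschitz constants close to 1. The sketch below shows a soft-penalty version of that constraint for illustration only; the original method enforces it with a retraction step after each update rather than a penalty, and the weight `beta` here is an assumed hyperparameter.

```python
import torch

def orthonormality_penalty(weight, beta=1e-3):
    """Penalize deviation of W W^T from the identity, pushing the rows of W towards
    an orthonormal system (a soft version of the Parseval constraint)."""
    W = weight.view(weight.shape[0], -1)  # flatten conv kernels into a matrix
    gram = W @ W.t()
    identity = torch.eye(gram.shape[0], device=W.device, dtype=W.dtype)
    return beta * ((gram - identity) ** 2).sum()

# Added to the task loss for each convolutional or linear layer during training.
```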

Evaluating Model Robustness and Stability to Dataset Shift

TLDR
A “debiased” estimator is derived which maintains √N-consistency even when machine learning methods with slower convergence rates are used to estimate the nuisance parameters; in experiments on a real medical risk prediction task, this estimator can be used to analyze stability and accounts for realistic shifts that could not previously be expressed.

In Search of Lost Domain Generalization

TLDR
This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.
...