Towards out of distribution generalization for problems in mechanics

@article{Yuan2022TowardsOO,
  title={Towards out of distribution generalization for problems in mechanics},
  author={Lingxiao Yuan and Harold S. Park and Emma Lejeune},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.14917}
}

References

Showing 1-10 of 119 references

Invariant Risk Minimization

This work introduces Invariant Risk Minimization (IRM), a learning paradigm for estimating invariant correlations across multiple training distributions, and shows how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.
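
As a rough illustration of how this principle is commonly instantiated, the widely used IRMv1 penalty measures the squared gradient norm of each environment's risk with respect to a fixed "dummy" classifier scale. The sketch below assumes a binary classification setup in PyTorch; the function and variable names are illustrative and not taken from the paper.

    import torch
    import torch.nn.functional as F

    def irmv1_penalty(logits, labels):
        # Illustrative IRMv1 penalty: gradient of the per-environment risk
        # with respect to a fixed dummy scale of 1.0, squared and summed.
        scale = torch.ones(1, requires_grad=True)
        risk = F.binary_cross_entropy_with_logits(logits * scale, labels)
        grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
        return (grad ** 2).sum()

    # Training objective (sketch): average per-environment risk plus a
    # weighted sum of per-environment penalties.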

Out-of-Distribution Generalization via Risk Extrapolation (REx)

This work introduces the principle of Risk Extrapolation (REx), shows conceptually how this principle enables extrapolation, and demonstrates the effectiveness and scalability of REx instantiations on various OOD generalization tasks.
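
The variance form of this principle (V-REx) is easy to state concretely: minimize the mean risk over training environments plus a penalty on the variance of the per-environment risks. A minimal PyTorch sketch, with illustrative names and an assumed penalty weight:

    import torch

    def vrex_objective(env_risks, beta=10.0):
        # env_risks: list of scalar risks, one per training environment.
        # Penalizing their variance pushes the model toward equal risk
        # across environments, which is what enables extrapolation.
        risks = torch.stack(env_risks)
        return risks.mean() + beta * risks.var()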

Understanding and Testing Generalization of Deep Networks on Out-of-Distribution Data

This study analyzes the problems with the standard in-distribution (ID) test, proposes novel OOD test paradigms to evaluate the generalization capacity of models to unseen data, and discusses how OOD test results can be used to find bugs in models and guide model debugging.

Understanding the Failure Modes of Out-of-Distribution Generalization

This work identifies the fundamental factors that cause models to fail in this way on easy-to-learn tasks where one would expect them to succeed, and uncovers two complementary failure modes.

Kernelized Heterogeneous Risk Minimization

This paper proposes Kernelized Heterogeneous Risk Minimization (KerHRM), which performs both latent heterogeneity exploration and invariant learning in kernel space, and then feeds the result back to the original neural network by prescribing an invariant gradient direction.

Out-of-Distribution Generalization with Maximal Invariant Predictor

Basic results from probability theory are used to prove the Maximal Invariant Predictor condition, a theoretical result that can be used to identify the OOD-optimal solution, and the superiority of the proposed IGA method over previous methods is demonstrated on both the original and an extended version of Colored MNIST.

Overparameterization Improves Robustness to Covariate Shift in High Dimensions

This work examines the exact high-dimensional asymptotics of random feature regression under covariate shift and presents a precise characterization of the limiting test error, bias, and variance in this setting, providing one of the first theoretical explanations for the ubiquitous empirical observation that overparameterization improves robustness to covariate shift.

Towards a Theoretical Framework of Out-of-Distribution Generalization

This work takes a first step towards rigorous and quantitative definitions of 1) what OOD is and 2) what it means for an OOD problem to be learnable, and introduces the new concept of an expansion function, which characterizes the extent to which variance is amplified in the test domains relative to the training domains, thereby giving a quantitative meaning to invariant features.

Robustness May Be at Odds with Accuracy

It is shown that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization, and it is argued that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers.

In Search of Lost Domain Generalization

This paper implements DomainBed, a testbed for domain generalization including seven multi-domain datasets, nine baseline algorithms, and three model selection criteria, and finds that, when carefully implemented, empirical risk minimization shows state-of-the-art performance across all datasets.
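
For contrast with the invariance-based objectives above, the empirical risk minimization baseline that DomainBed finds so competitive simply pools all training environments and minimizes the average loss, ignoring environment labels. A minimal sketch with illustrative names:

    import torch
    import torch.nn.functional as F

    def erm_objective(env_logits, env_labels):
        # ERM baseline: concatenate examples from every training environment
        # and minimize one average loss over the pooled data.
        logits = torch.cat(env_logits)
        labels = torch.cat(env_labels)
        return F.cross_entropy(logits, labels)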
...