Uncertainty Quantification of the 4th kind; optimal posterior accuracy-uncertainty tradeoff with the minimum enclosing ball

@article{Bajgiran2021UncertaintyQO,
  title={Uncertainty Quantification of the 4th kind; optimal posterior accuracy-uncertainty tradeoff with the minimum enclosing ball},
  author={Hamed Hamze Bajgiran and Paula Franch and Houman Owhadi and Clint Scovel and Mahdy Shirdel and Michael Stanley and Peyman Tavallali},
  journal={J. Comput. Phys.},
  year={2021},
  volume={471},
  pages={111608}
}


Uncertainty quantification for wide-bin unfolding: one-at-a-time strict bounds and prior-optimized confidence intervals

Unfolding is an ill-posed inverse problem in particle physics aiming to infer a true particle-level spectrum from smeared detector-level data. For computational and practical reasons, these spaces …

Aggregation of Pareto optimal models

Pareto efficiency is a concept commonly used in economics, statistics, and engineering. In the setting of statistical decision theory, a model is said to be Pareto efficient/optimal (or admissible) …

Learning dynamical systems from data: a simple cross-validation perspective

Variants of cross-validation (Kernel Flows and its variants based on Maximum Mean Discrepancy and Lyapunov exponents) are presented as simple approaches for learning the kernel used in these emulators.

Do ideas have shape? Idea registration as the continuous limit of artificial neural networks

  • H. Owhadi
  • Computer Science
    Physica D: Nonlinear Phenomena
  • 2022

References

Showing 1–10 of 47 references

Optimal Uncertainty Quantification

A general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems.

On the Brittleness of Bayesian Inference

It is reported that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they could be generically brittle when applied to continuous systems with finite information.

Brittleness of Bayesian inference and new Selberg formulas

The incorporation of priors in the Optimal Uncertainty Quantification (OUQ) framework \cite{OSSMO:2011} reveals brittleness in Bayesian inference; a model may share an arbitrarily large number of …

Brittleness of Bayesian Inference Under Finite Information in a Continuous World

It is observed that learning and robustness are antagonistic properties, and optimal lower and upper bounds are derived on posterior values obtained from Bayesian models that exactly capture an arbitrarily large number of finite-dimensional marginals of the data-generating distribution.

Qualitative Robustness in Bayesian Inference

The practical implementation of Bayesian inference requires numerical approximation when closed-form expressions are not available. What types of accuracy (convergence) of the numerical …

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results in the online convex optimization framework.

Minimax analysis of stochastic problems

It is shown that, under mild regularity conditions, such a min-max problem generates a probability distribution on the set of permissible distributions, with the min-max problem being equivalent to the expected value problem with respect to the corresponding weighted distribution.

Statistical Inference in Science

Unfortunately, for practical purposes such as an add-on to a spatial statistics class, the book is not very suitable. Here are some of its major drawbacks: First, the overall visual appearance is not …

Two Algorithms for the Minimum Enclosing Ball Problem

The second algorithm exhibits asymptotically linear convergence and terminates faster, with smaller core sets, than the first; the existence of a core set of size $O(1/\epsilon)$ is also established for a much wider class of input sets.
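The core-set idea referenced above can be illustrated with the classic Badoiu–Clarkson iteration for a $(1+\epsilon)$-approximate minimum enclosing ball: repeatedly nudge the center toward the current farthest point with a shrinking step. This is a minimal NumPy sketch of that generic scheme, not either of the two algorithms from the cited paper:

```python
import numpy as np

def minimum_enclosing_ball(points, eps=0.01):
    """(1+eps)-approximate minimum enclosing ball (Badoiu-Clarkson style).

    After O(1/eps^2) iterations the covering radius is within a
    (1+eps) factor of optimal; the farthest points visited form a
    small core set.
    """
    c = points[0].astype(float).copy()
    for i in range(1, int(np.ceil(1.0 / eps**2)) + 1):
        dists = np.linalg.norm(points - c, axis=1)
        p = points[np.argmax(dists)]         # current farthest point
        c += (p - c) / (i + 1)               # step size 1/(i+1)
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius

# Toy usage: corners of a square; exact MEB has center (1,1), radius sqrt(2)
pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
center, radius = minimum_enclosing_ball(pts, eps=0.01)
```

The shrinking step size $1/(i+1)$ is what forces convergence: early iterations move the center aggressively toward outliers, while later ones only refine it.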

Numerical specification of discrete least favorable prior distributions

A general algorithm for specifying such distributions is presented that exploits the statistical properties of minimax procedures; it is demonstrated by characterizing the procedure that simultaneously minimizes a Bayes risk and a maximum risk under different loss functions in a simple multi-objective decision problem.