Consistent Approximations in Composite Optimization

@article{Royset2022ConsistentAI,
  title={Consistent Approximations in Composite Optimization},
  author={Johannes O. Royset},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.05250}
}

Approximations of optimization problems arise in computational procedures and sensitivity analysis. The resulting effect on solutions can be significant, with even small approximations of components of a problem translating into large errors in the solutions. We specify conditions under which approximations are well behaved in the sense of minimizers, stationary points, and level-sets, and this leads to a framework of consistent approximations. The framework is developed for a broad class of…
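
As a rough illustration of this effect (not the paper's framework; the objective and smoothing below are invented for the example), the following Python sketch approximates a nonsmooth composite objective by a smooth one and tracks how the minimizer of the approximation drifts toward the true minimizer as the smoothing parameter vanishes:

import numpy as np
from scipy.optimize import minimize_scalar

# Nonsmooth composite problem: minimize |x - 1| + 0.5*x^2; its minimizer is x = 1.
# Approximation: replace |u| by the smooth surrogate sqrt(u^2 + eps^2).
def approx(x, eps):
    return np.sqrt((x - 1.0)**2 + eps**2) + 0.5 * x**2

for eps in [1.0, 0.1, 0.01, 0.001]:
    x_eps = minimize_scalar(lambda x: approx(x, eps)).x
    print(f"eps={eps:g}  minimizer of the smoothed problem: {x_eps:.4f}  (true minimizer: 1)")

Here the objective error is of order eps, while the solution error behaves like eps^(2/3): a small approximation of one component translates into a comparatively large error in the solution, which is the kind of behavior the framework is meant to control.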

Risk-Adaptive Approaches to Learning and Decision Making: A Survey

This survey covers the rapid development of risk measures over the last quarter century, recalls connections with utility theory and distributionally robust optimization, points to emerging application areas such as fair machine learning, and defines measures of reliability.
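
For orientation, a minimal numerical sketch of one workhorse from this literature, the superquantile (CVaR) risk measure, computed both as a tail average and via the Rockafellar–Uryasev minimization formula; the loss sample below is synthetic and the code is illustrative rather than taken from the survey:

import numpy as np

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)   # synthetic loss sample
alpha = 0.9

# Superquantile (CVaR): the average of the worst (1 - alpha) fraction of losses.
q = np.quantile(losses, alpha)
cvar_tail = losses[losses >= q].mean()

# Rockafellar-Uryasev formula: CVaR_alpha(X) = min over c of { c + E[(X - c)_+] / (1 - alpha) }.
cs = np.linspace(q - 1.0, q + 1.0, 401)
cvar_ru = min(c + np.maximum(losses - c, 0.0).mean() / (1.0 - alpha) for c in cs)

print(cvar_tail, cvar_ru)   # the two estimates agree up to sampling and grid error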

References

Showing 1-10 of 63 references

Stability and Error Analysis for Optimization and Generalized Equations

This work considers nonconvex optimization and generalized equations defined on metric spaces and develops bounds on solution errors using the truncated Hausdorff distance applied to graphs and epigraphs of the underlying set-valued mappings and functions.
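
As a loose numerical sketch of that distance (computed here on sampled point clouds, which only approximates the set-based definition; the functions, grid, and truncation radius are invented), one can compare the epigraphs of |x| and of a smooth approximation:

import numpy as np

def excess(A, B):
    # sup over points a in A of dist(a, B), for finite point clouds A and B
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def truncated_hausdorff(A, B, rho):
    # max of the excess of A restricted to the ball of radius rho over B, and vice versa
    Ar = A[np.linalg.norm(A, axis=1) <= rho]
    Br = B[np.linalg.norm(B, axis=1) <= rho]
    return max(excess(Ar, B), excess(Br, A))

def epi_points(f, xs, top=3.0, levels=15):
    # sample the epigraph {(x, a): a >= f(x)} on a grid
    return np.array([(x, a) for x in xs for a in np.linspace(f(x), top, levels)])

xs = np.linspace(-2.0, 2.0, 81)
E1 = epi_points(lambda x: abs(x), xs)
E2 = epi_points(lambda x: np.sqrt(x**2 + 0.01), xs)
print(truncated_hausdorff(E1, E2, rho=2.5))   # small, since the two epigraphs nearly coincide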

Uniform Graphical Convergence of Subgradients in Nonconvex Optimization and Learning

This work investigates the stochastic optimization problem of minimizing population risk, where the loss defining the risk is assumed to be weakly convex, and establishes dimension-dependent rates on subgradient estimation in full generality and dimension-independent rates when the loss is a generalized linear model.
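
A toy illustration of subgradient estimation from samples, in a convex special case rather than the weakly convex setting of the cited work (the distribution and evaluation point are invented):

import numpy as np
from math import erf, sqrt

# Population risk f(x) = E|x - xi| with xi ~ N(0, 1); its derivative is 2*Phi(x) - 1 = erf(x / sqrt(2)).
def population_subgradient(x):
    return erf(x / sqrt(2.0))

rng = np.random.default_rng(0)
x = 0.7
for n in [10, 1_000, 100_000]:
    xi = rng.normal(size=n)
    empirical = np.mean(np.sign(x - xi))   # subgradient of the empirical risk at x
    print(n, empirical, population_subgradient(x))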

Gradient Consistency for Integral-convolution Smoothing Functions

Chen and Mangasarian (Comput Optim Appl 5:97–138, 1996) developed smoothing approximations to the plus function built on integral-convolution with density functions. X. Chen (Math Program 134:71–99,
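
The construction can be reproduced numerically; the sketch below (standard facts about this smoothing, not a claim about the cited papers' exact results) convolves the plus function with the logistic density and compares the outcome with the closed-form softplus smoothing mu*log(1 + exp(t/mu)):

import numpy as np

plus = lambda t: np.maximum(t, 0.0)

# Closed-form smoothing associated with the logistic density.
softplus = lambda t, mu: mu * np.logaddexp(0.0, t / mu)

# Direct integral-convolution: phi(t, mu) = integral of (t - mu*s)_+ * d(s) ds.
s = np.linspace(-40.0, 40.0, 200_001)
dens = np.exp(-s) / (1.0 + np.exp(-s))**2     # logistic density d(s)
ds = s[1] - s[0]
conv = lambda t, mu: np.sum(plus(t - mu * s) * dens) * ds

t = 0.3
for mu in [1.0, 0.1, 0.01]:
    print(mu, conv(t, mu), softplus(t, mu), plus(t))

Both smoothings agree and tend to (t)_+ as mu -> 0, and their derivative 1/(1 + exp(-t/mu)) tends to a subgradient of the plus function, which is the gradient-consistency property at issue.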

Search-Trajectory Optimization: Part I, Formulation and Theory

A search-trajectory optimization problem, with multiple searchers looking for multiple targets in continuous time and space, is formulated as a parameter-distributed optimal control model; discretization schemes are constructed and shown to lead to consistent approximations in the sense of E. Polak.
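
As a much-simplified sketch of the discretization idea (a one-dimensional, invented optimal control problem with an Euler scheme, nothing like the cited search-trajectory model), the optimal values of the discretized problems stabilize as the grid is refined:

import numpy as np
from scipy.optimize import minimize

def discretized_value(N):
    # Euler discretization of: minimize the integral over [0, 1] of x(t)^2 + u(t)^2,
    # subject to x' = u and x(0) = 1.
    dt = 1.0 / N
    def cost(u):
        x, total = 1.0, 0.0
        for uk in u:
            total += (x**2 + uk**2) * dt
            x += uk * dt
        return total
    return minimize(cost, np.zeros(N), method="BFGS").fun

for N in [4, 16, 64]:
    print(N, discretized_value(N))
# The values approach tanh(1) ~ 0.762, the optimal value of the continuous-time problem.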

Rate of Convergence Analysis of Discretization and Smoothing Algorithms for Semiinfinite Minimax Problems

This work constructs optimal policies that achieve the best possible rate of convergence of discretization algorithms and finds that, under certain circumstances, the better rate is obtained by inexpensive gradient methods.
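
For context, a small sketch of the generic exponential (log-sum-exp) smoothing of a discretized max, whose error is bounded by log(n)/p; this is the standard device, not the cited paper's specific policy for choosing discretization levels and smoothing parameters:

import numpy as np

def smooth_max(vals, p):
    # log-sum-exp smoothing: max(vals) <= smooth_max(vals, p) <= max(vals) + log(n)/p
    vals = np.asarray(vals, dtype=float)
    m = vals.max()
    return m + np.log(np.exp(p * (vals - m)).sum()) / p

g = np.array([0.20, 0.50, 0.49, -1.00])   # inner-function values on a discretized grid
for p in [1.0, 10.0, 100.0]:
    print(p, smooth_max(g, p), g.max(), np.log(len(g)) / p)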

An Optimization Primer

  • J. Royset, R. Wets
  • Springer Series in Operations Research and Financial Engineering
  • 2021

Influence Functions in Deep Learning Are Fragile

This work suggests that influence functions in deep learning are, in general, fragile and calls for improved influence estimation methods to mitigate these issues in non-convex setups.
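
For reference, a minimal sketch of the influence-function approximation in a convex case where it is accurate, ordinary least squares on synthetic data (the cited paper's point is that the same recipe becomes unreliable in deep, non-convex models):

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

theta = np.linalg.lstsq(X, y, rcond=None)[0]     # fit on all n points
H = X.T @ X / n                                  # Hessian of (1/n) * sum of 0.5*(x_i @ theta - y_i)^2
resid = X @ theta - y

i = 7                                            # arbitrary point to remove
# Influence-function estimate of the parameter change from dropping point i:
# theta_without_i - theta ~ (1/n) * H^{-1} * gradient of the i-th loss at theta.
estimate = np.linalg.solve(H, X[i] * resid[i]) / n

# Compare with actually retraining without point i.
mask = np.arange(n) != i
theta_without_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
print(estimate)
print(theta_without_i - theta)   # close to the estimate in this convex setting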

Epi-Regularization of Risk Measures

This paper develops epi-regularization of risk measures, a variational regularization built on infimal convolution that produces smooth approximations with favorable differentiability and convergence properties for risk-averse optimization.

A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications

A full conjugacy and subdifferential calculus for convex convex-composite functions in finite-dimensional space is provided, based on infimal convolution and cone convexity, and its versatility is illustrated in optimization and matrix analysis.
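
A small numerical illustration of infimal convolution itself (standard material, not the paper's calculus): convolving the absolute value with a quadratic produces its Moreau envelope, the Huber function:

import numpy as np

# Infimal convolution: (f box g)(x) = inf over w of f(w) + g(x - w).
def inf_conv(f, g, x, grid):
    return min(f(w) + g(x - w) for w in grid)

lam = 0.5
f = lambda w: abs(w)
g = lambda u: u**2 / (2.0 * lam)

def huber(x, lam):
    # closed form of the Moreau envelope of |.| with parameter lam
    return x**2 / (2.0 * lam) if abs(x) <= lam else abs(x) - lam / 2.0

grid = np.linspace(-3.0, 3.0, 60_001)
for x in [-2.0, -0.3, 0.0, 0.7, 1.5]:
    print(x, inf_conv(f, g, x, grid), huber(x, lam))
# The brute-force infimal convolution matches the closed-form Huber values.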

Nondifferential and Variational Techniques in Optimization

...