# Consistent Approximations in Composite Optimization

@article{Royset2022ConsistentAI, title={Consistent Approximations in Composite Optimization}, author={Johannes O. Royset}, journal={ArXiv}, year={2022}, volume={abs/2201.05250} }

Approximations of optimization problems arise in computational procedures and sensitivity analysis. The resulting effect on solutions can be significant, with even small approximations of components of a problem translating into large errors in the solutions. We specify conditions under which approximations are well behaved in the sense of minimizers, stationary points, and level-sets, and this leads to a framework of consistent approximations. The framework is developed for a broad class of…
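As a minimal illustration (my own sketch, not an example from the paper) of how even a tiny perturbation of a problem can move its minimizer by a large amount, consider minimizing f_eps(x) = eps·x over [0, 1]: the functions f_eps converge uniformly to f_0 = 0 as eps → 0, yet the minimizer jumps between the two endpoints depending on the sign of eps.

```python
import numpy as np

def minimize_on_grid(f, grid):
    """Return the grid point with the smallest function value."""
    return grid[np.argmin(f(grid))]

grid = np.linspace(0.0, 1.0, 1001)

# f_eps(x) = eps * x on [0, 1]: |f_eps - f_0| <= |eps| uniformly,
# but the minimizer jumps from x = 1 (eps < 0) to x = 0 (eps > 0).
for eps in (-1e-3, 1e-3):
    x_star = minimize_on_grid(lambda x: eps * x, grid)
    print(f"eps = {eps:+.0e}  ->  minimizer = {x_star}")
```

This discontinuity of the solution mapping is exactly the pathology that notions such as epi-convergence and consistent approximations are designed to rule out or control.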

## One Citation

### Risk-Adaptive Approaches to Learning and Decision Making: A Survey

- Computer Science, ArXiv
- 2022

The rapid development of risk measures over the last quarter century is surveyed, which recalls connections with utility theory and distributionally robust optimization, points to emerging application areas such as fair machine learning, and defines measures of reliability.

## References

Showing 1–10 of 63 references

### Stability and Error Analysis for Optimization and Generalized Equations

- Mathematics, SIAM J. Optim.
- 2020

This work considers nonconvex optimization and generalized equations defined on metric spaces and develops bounds on solution errors using the truncated Hausdorff distance applied to graphs and epigraphs of the underlying set-valued mappings and functions.
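For finite point sets, the distances involved can be sketched directly (an illustrative simplification of my own; the cited work applies a truncated Hausdorff distance to graphs and epigraphs of set-valued mappings, not to raw point clouds). The truncation restricts attention to a ball of radius `rho` so the distance stays finite and meaningful for unbounded sets:

```python
import numpy as np

def hausdorff(A, B):
    """Two-sided Hausdorff distance between finite point sets
    A, B given as (n, d) and (m, d) arrays."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def truncated_hausdorff(A, B, rho):
    """Truncated variant (sketch): restrict each set to the ball of
    radius rho before measuring its one-sided excess over the other set.
    Assumes both restrictions are nonempty."""
    Ar = A[np.linalg.norm(A, axis=1) <= rho]
    Br = B[np.linalg.norm(B, axis=1) <= rho]
    d_A_to_B = np.linalg.norm(Ar[:, None, :] - B[None, :, :], axis=2).min(axis=1).max()
    d_B_to_A = np.linalg.norm(Br[:, None, :] - A[None, :, :], axis=2).min(axis=1).max()
    return max(d_A_to_B, d_B_to_A)
```

Applied to epigraphs of two functions, a small value of this distance certifies that minimizers and minimum values of one problem approximate those of the other, which is the mechanism behind the error bounds in this line of work.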

### Uniform Graphical Convergence of Subgradients in Nonconvex Optimization and Learning

- Mathematics, Computer Science, Math. Oper. Res.
- 2022

This work investigates the stochastic optimization problem of minimizing population risk, where the loss defining the risk is assumed to be weakly convex and establishes dimension-dependent rates on subgradient estimation in full generality and dimension-independent rates when the loss is a generalized linear model.

### Gradient Consistency for Integral-convolution Smoothing Functions

- Mathematics, Set-Valued and Variational Analysis
- 2013

Chen and Mangasarian (Comput Optim Appl 5:97–138, 1996) developed smoothing approximations to the plus function built on integral-convolution with density functions. X. Chen (Math Program 134:71–99,…
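A standard member of this family (a sketch of the well-known neural-network smoothing, not code from the cited papers) convolves the plus function max(0, x) with the logistic density, giving p(x, μ) = μ·log(1 + exp(x/μ)). As μ → 0, the smoothing converges uniformly to max(0, x) with error at most μ·log 2, and its derivative, the sigmoid, tracks the subgradient:

```python
import numpy as np

def plus_smooth(x, mu):
    """Smoothing of max(0, x) by integral-convolution with the logistic
    density: p(x, mu) = mu * log(1 + exp(x / mu)).
    np.logaddexp(0, x/mu) computes log(1 + exp(x/mu)) stably."""
    return mu * np.logaddexp(0.0, x / mu)

x = np.linspace(-2.0, 2.0, 5)
for mu in (1.0, 0.1, 0.01):
    err = np.max(np.abs(plus_smooth(x, mu) - np.maximum(0.0, x)))
    print(f"mu = {mu:5.2f}  max error = {err:.4f}")
    # maximum error mu * log(2) is attained at x = 0
```

Gradient consistency, the subject of the cited work, asks that the gradients of such smoothings converge (in the graphical sense) to the subdifferential of the original nonsmooth function, not merely that the function values converge.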

### Search-Trajectory Optimization: Part I, Formulation and Theory

- Computer Science, Journal of Optimization Theory and Applications
- 2015

A search-trajectory optimization problem, with multiple searchers looking for multiple targets in continuous time and space, is formulated as a parameter-distributed optimal control model; discretization schemes are constructed and proved to yield consistent approximations in the sense of E. Polak.

### Rate of Convergence Analysis of Discretization and Smoothing Algorithms for Semiinfinite Minimax Problems

- Computer Science, Journal of Optimization Theory and Applications
- 2012

This work constructs optimal policies that achieve the best possible rate of convergence for discretization algorithms and finds that, under certain circumstances, the better rate is obtained by inexpensive gradient methods.

### Influence Functions in Deep Learning Are Fragile

- Computer Science, ICLR
- 2021

It is suggested that, in general, influence functions in deep learning are fragile, calling for improved influence-estimation methods to mitigate these issues in non-convex setups.

### Epi-Regularization of Risk Measures

- Computer Science, Math. Oper. Res.
- 2020

This paper develops epi-regularization, an infimal-convolution-based regularization of risk measures that yields smooth approximations with variational convergence guarantees, motivated by risk-averse optimization problems.

### A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications

- Mathematics, Math. Oper. Res.
- 2021

A full conjugacy and subdifferential calculus for convex convex-composite functions in finite-dimensional space is provided, based on infimal convolution and cone convexity, and its versatility is illustrated through applications in optimization and matrix analysis.