Expert-Calibrated Learning for Online Optimization with Switching Costs

@article{Li2022ExpertCalibratedLF,
  title={Expert-Calibrated Learning for Online Optimization with Switching Costs},
  author={Peng Li and Jianyi Yang and Shaolei Ren},
  journal={Proceedings of the ACM on Measurement and Analysis of Computing Systems},
  year={2022},
  volume={6},
  pages={1--35}
}
We study online convex optimization with switching costs, a practically important but extremely challenging problem due to the lack of complete offline information. By tapping into the power of machine learning (ML)-based optimizers, ML-augmented online algorithms (also referred to as expert calibration in this paper) have been emerging as the state of the art, with provable worst-case performance guarantees. Nonetheless, by using the standard practice of training an ML model as a standalone…
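As a minimal illustration of the setting the abstract describes, the sketch below shows the per-round cost of OCO with switching costs (a hitting cost plus a movement penalty) and one simple way to calibrate untrusted ML advice against a robust expert's action by limiting how far the decision may deviate from the expert. The quadratic hitting cost, the helper names, and the trust radius r are illustrative assumptions; this is not the paper's training procedure, which instead trains the ML model with the downstream calibrated algorithm taken into account.

```python
import numpy as np

# Sketch of online convex optimization (OCO) with switching costs.
# Per round t, the learner pays a hitting cost f_t(x_t) plus a switching
# cost c * ||x_t - x_{t-1}||. All names below are illustrative assumptions,
# not the paper's algorithm.

def switching_cost(x, x_prev, c=1.0):
    return c * np.linalg.norm(x - x_prev)

def calibrated_decision(ml_advice, expert_action, r):
    """Follow ML advice, but stay within radius r of a robust expert's action.

    Small r keeps the expert's worst-case behavior; large r trusts the ML
    model. This projection-style rule only illustrates the trade-off the
    abstract refers to.
    """
    diff = ml_advice - expert_action
    dist = np.linalg.norm(diff)
    if dist <= r:
        return ml_advice
    return expert_action + (r / dist) * diff

# Example round with quadratic hitting cost f_t(x) = ||x - v_t||^2.
x_prev = np.zeros(2)
v_t = np.array([1.0, 0.5])            # minimizer of the hitting cost
ml_advice = np.array([1.2, 0.4])      # untrusted ML prediction
expert_action = np.array([0.6, 0.3])  # action of a robust online algorithm
x_t = calibrated_decision(ml_advice, expert_action, r=0.3)
total_cost = np.sum((x_t - v_t) ** 2) + switching_cost(x_t, x_prev)
```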

References

Showing 1-10 of 65 references

Online Optimization with Predictions and Non-convex Losses

This work provides two general sufficient conditions that specify a relationship between the hitting and movement costs, which guarantees that a new algorithm, Synchronized Fixed Horizon Control (SFHC), achieves a near-optimal competitive ratio with the help of predictions.

A Regression Approach to Learning-Augmented Online Algorithms

This paper shows that the key is to incorporate online optimization benchmarks in the design of the loss function for the regression problem, thereby diverging from the use of off-the-shelf regression tools with standard bounds on statistical error.

Using Predictions in Online Optimization: Looking Forward with an Eye on the Past

This paper introduces a new class of policies, Committed Horizon Control (CHC), that generalizes both RHC and AFHC and provides average-case analysis and concentration results for CHC policies, yielding the first analysis of RHC for OCO problems with noisy predictions.
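For context on the prediction-based policies mentioned above (RHC, AFHC, CHC), the sketch below shows a basic receding-horizon step: plan over a window of predicted hitting costs plus switching costs, then commit only the first action. The quadratic hitting costs, window length, and helper names are illustrative assumptions, not the exact formulation of the cited paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative Receding Horizon Control (RHC) step for OCO with switching
# costs, assuming quadratic hitting costs f_t(x) = ||x - v_t||^2 and a
# window of (possibly noisy) predictions of the targets v_t. CHC generalizes
# this by averaging over commitment levels; this sketch shows RHC only.

def rhc_step(x_prev, predicted_targets, c=1.0):
    """Plan over the prediction window, then commit only the first action."""
    w, d = predicted_targets.shape

    def horizon_cost(z):
        xs = z.reshape(w, d)
        cost, prev = 0.0, x_prev
        for tau in range(w):
            cost += np.sum((xs[tau] - predicted_targets[tau]) ** 2)  # hitting
            cost += c * np.linalg.norm(xs[tau] - prev)               # switching
            prev = xs[tau]
        return cost

    z0 = np.tile(x_prev, w)
    res = minimize(horizon_cost, z0, method="L-BFGS-B")
    return res.x.reshape(w, d)[0]  # commit the first planned action

# Example: plan with a 3-step prediction window in 2 dimensions.
x_prev = np.zeros(2)
preds = np.array([[1.0, 0.0], [1.0, 0.5], [0.8, 0.6]])
x_t = rhc_step(x_prev, preds)
```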

Online Optimization with Untrusted Predictions

A novel algorithm is proposed, Adaptive Online Switching (AOS), and it is proved that, for any desired δ > 0, it is (1 + 2δ)-competitive if predictions are perfect, while also maintaining a uniformly bounded competitive ratio of 2^{Õ(1/(αδ))} even when predictions are adversarial.

Revisiting Smoothed Online Learning

In this paper, we revisit the problem of smoothed online learning, in which the online learner suffers both a hitting cost and a switching cost, and target two performance metrics: competitive ratio…

Smoothed Online Convex Optimization in High Dimensions via Online Balanced Descent

OBD is the first algorithm to achieve a dimension-free competitive ratio, 3 + O(1/α), for locally polyhedral costs, where α measures the "steepness" of the costs.

Learning to Optimize: A Primer and A Benchmark

This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization: it sets up taxonomies, categorizes existing works and research directions, presents insights, and identifies open challenges.

A Modern Introduction to Online Learning

This monograph introduces the basic concepts of Online Learning through a modern view of Online Convex Optimization, and presents first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings.

Melding the Data-Decisions Pipeline: Decision-Focused Learning for Combinatorial Optimization

This work focuses on combinatorial optimization problems and introduces a general framework for decision-focused learning, where the machine learning model is trained directly in conjunction with the optimization algorithm to produce high-quality decisions, and shows that decision-focused learning often leads to improved optimization performance compared to traditional methods.

Wasserstein Distributionally Robust Optimization: Theory and Applications in Machine Learning

This tutorial argues that Wasserstein distributionally robust optimization has interesting ramifications for statistical learning and motivates new approaches for fundamental learning tasks such as classification, regression, maximum likelihood estimation or minimum mean square error estimation, among others.
...