# Risk regularization through bidirectional dispersion

@article{Holland2022RiskRT, title={Risk regularization through bidirectional dispersion}, author={Matthew J. Holland}, journal={ArXiv}, year={2022}, volume={abs/2203.14434} }

Many alternative notions of “risk” (e.g., CVaR, entropic risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to loss tails on the upside, and tend to ignore deviations on the downside. In this work, we study a complementary new risk class that penalizes loss deviations in a bidirectional manner, while having more flexibility in terms of tail sensitivity than is offered by classical mean-variance, without sacrificing computational or analytical…
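The contrast with one-sided risks can be illustrated with a toy dispersion penalty in the spirit of mean-variance (a minimal sketch only; `eta` and the penalty `rho` are illustrative knobs, not the paper's exact construction):

```python
import numpy as np

def mean_dispersion_risk(losses, eta, rho=np.abs):
    """Illustrative bidirectional risk: mean loss plus a dispersion
    penalty charging deviations on BOTH sides of the mean.
    eta and rho are hypothetical choices, not the paper's formulation."""
    losses = np.asarray(losses, dtype=float)
    return losses.mean() + eta * rho(losses - losses.mean()).mean()

tight = [1.0, 1.0, 1.0, 1.0]
spread = [0.0, 0.5, 1.5, 2.0]   # same mean, larger dispersion
print(mean_dispersion_risk(tight, eta=0.5))   # → 1.0
print(mean_dispersion_risk(spread, eta=0.5))  # → 1.375
```

Unlike CVaR, which reacts only to the upper tail, this kind of penalty charges downside deviations as well, which is the bidirectional behavior the abstract refers to.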

## References

Showing 1–10 of 53 references.

Tilted Empirical Risk Minimization

- Computer Science, ICLR, 2021

This work shows that it is possible to flexibly tune the impact of individual losses through a straightforward extension to ERM using a hyperparameter called the tilt, and demonstrates that TERM can be used for a multitude of applications, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance.
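The exponential tilt behind TERM has a simple closed form; a minimal sketch (the log-sum-exp shift is a standard numerical-stability trick, not specific to TERM):

```python
import numpy as np

def tilted_risk(losses, t):
    """Tilted empirical risk: (1/t) * log(mean(exp(t * losses))).
    t > 0 emphasizes the largest losses, t < 0 suppresses them,
    and t -> 0 recovers the ordinary average (ERM).
    A max-shift keeps the exponentials from overflowing."""
    losses = np.asarray(losses, dtype=float)
    if t == 0:
        return losses.mean()
    m = (t * losses).max()
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t

losses = [0.1, 0.2, 0.3, 5.0]         # one outlier
print(tilted_risk(losses, t=0.0))     # → 1.4 (plain average)
print(tilted_risk(losses, t=2.0))     # larger: tilts toward the outlier
print(tilted_risk(losses, t=-2.0))    # smaller: downweights the outlier
```

The tilt parameter `t` is the single hyperparameter the snippet above refers to: positive values stress worst-case losses, negative values mitigate outliers.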

Empirical risk minimization for heavy-tailed losses

- Mathematics, 2015

The purpose of this paper is to discuss empirical risk minimization when the losses are not necessarily bounded and may have a distribution with heavy tails. In such situations, usual empirical…

Learning with risk-averse feedback under potentially heavy tails

- Computer Science, AISTATS, 2021

A general-purpose estimator of CVaR for potentially heavy-tailed random variables is studied, which is easy to implement in practice, and requires nothing more than finite variance and a distribution function that does not change too fast or slow around just the quantile of interest.
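A sorted-order plug-in CVaR estimate can be sketched as follows (this is the textbook empirical estimator, not necessarily the estimator studied in the paper):

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Plug-in CVaR at level alpha: the average of the worst
    ceil((1 - alpha) * n) losses in the sample."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # worst first
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))
    return losses[:k].mean()

losses = [1.0, 2.0, 3.0, 4.0]
print(empirical_cvar(losses, alpha=0.75))  # → 4.0 (worst 25% of losses)
print(empirical_cvar(losses, alpha=0.0))   # → 2.5 (plain mean)
```

Under heavy tails this naive average of the worst losses can be unstable, which is exactly the regime the paper's estimator is designed for.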

Robust Empirical Optimization is Almost the Same As Mean-Variance Optimization

- Mathematics, Oper. Res. Lett., 2018

We formulate a distributionally robust optimization problem where the deviation of the alternative distribution is controlled by a ϕ-divergence penalty in the objective, and show that a…

Learning Bounds for Risk-sensitive Learning

- Computer Science, NeurIPS, 2020

This paper proposes to study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents (OCE): the general scheme can handle various known risks, e.g., the entropic risk, mean-variance, and conditional value-at-risk, as special cases.
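The OCE form min over θ of θ + E[u(L − θ)] can be checked numerically; with u(x) = max(x, 0)/(1 − α) it recovers CVaR at level α (a grid-search sketch for illustration, not the paper's machinery):

```python
import numpy as np

def oce(losses, u, thetas):
    """Optimized certainty equivalent: min over theta of
    theta + mean(u(L - theta)), via grid search (illustrative only)."""
    losses = np.asarray(losses, dtype=float)
    return min(th + u(losses - th).mean() for th in thetas)

alpha = 0.75
u_cvar = lambda x: np.maximum(x, 0.0) / (1 - alpha)  # this u recovers CVaR_alpha
losses = np.array([1.0, 2.0, 3.0, 4.0])
print(oce(losses, u_cvar, np.linspace(0.0, 5.0, 501)))  # ≈ 4.0 = CVaR at 75%
```

Swapping in other utility functions u yields the entropic risk and mean-variance cases the snippet mentions, which is what makes the OCE family a convenient umbrella for generalization analysis.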

Entropic Risk Measures: Coherence vs. Convexity, Model Ambiguity and Robust Large Deviations

- Mathematics, 2011

We study a coherent version of the entropic risk measure, both in the law-invariant case and in a situation of model ambiguity. In particular, we discuss its behavior under the pooling of independent…

Spectral risk-based learning using unbounded losses

- Computer Science, Mathematics, AISTATS, 2022

It can be argued that prioritizing average off-sample performance is a substantial value judgement that requires more serious consideration, both by stakeholders involved in the practical side of machine learning systems, and by the theoretician interested in providing learning algorithms with formal guarantees.

On Tilted Losses in Machine Learning: Theory and Applications

- Computer Science, ArXiv, 2021

This work studies a simple extension to ERM—tilted empirical risk minimization (TERM)—which uses exponential tilting to flexibly tune the impact of individual losses and finds that the framework can consistently outperform ERM and deliver competitive performance with state-of-the-art, problem-specific approaches.

Conditional Value-at-Risk for General Loss Distributions

- Economics, 2001

Fundamental properties of conditional value-at-risk, as a measure of risk with significant advantages over value-at-risk, are derived for loss distributions in finance that can involve discreteness…

Stochastic Gradient Methods for Distributionally Robust Optimization with f-divergences

- Computer Science, NIPS, 2016

This work develops efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance and solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent.