Risk regularization through bidirectional dispersion

@article{Holland2022RiskRT,
  title={Risk regularization through bidirectional dispersion},
  author={Matthew J. Holland},
  journal={arXiv preprint arXiv:2203.14434},
  year={2022}
}
Many alternative notions of “risk” (e.g., CVaR, entropic risk, DRO risk) have been proposed and studied, but these risks are all at least as sensitive as the mean to loss tails on the upside, and tend to ignore deviations on the downside. In this work, we study a complementary new risk class that penalizes loss deviations in a bidirectional manner, while having more flexibility in terms of tail sensitivity than is offered by classical mean-variance, without sacrificing computational or analytical… 
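To make the contrast concrete, the sketch below compares an upside-only tail risk (empirical CVaR, which ignores everything below the quantile of interest) and classical mean-variance (bidirectional but with fixed quadratic tail sensitivity) against a generic bidirectional dispersion penalty. The specific dispersion function, threshold, and scaling used in the paper are not reproduced here; the pseudo-Huber penalty, the names, and the parameter choices are illustrative assumptions only.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR at level alpha: average of the worst (1 - alpha)
    fraction of losses. Sensitive only to the upper tail; downside
    deviations are ignored entirely."""
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

def mean_variance(losses, lam=0.1):
    """Classical mean-variance objective: penalizes deviations on both
    sides of the mean, but always with quadratic tail sensitivity."""
    return losses.mean() + lam * losses.var()

def bidirectional_dispersion_risk(losses, theta=None, eta=0.1, sigma=1.0):
    """Hypothetical bidirectional dispersion penalty: mean loss plus a
    penalty on deviations from a threshold theta in BOTH directions,
    measured through a dispersion function rho whose tail growth is
    gentler than a quadratic. The paper's actual risk class may use a
    different rho and a different coupling of theta, eta, sigma."""
    if theta is None:
        theta = losses.mean()
    z = (losses - theta) / sigma
    # pseudo-Huber: quadratic near zero, linear in the tails (assumed rho)
    rho = np.sqrt(1.0 + z ** 2) - 1.0
    return losses.mean() + eta * sigma * rho.mean()

if __name__ == "__main__":
    rng = rng = np.random.default_rng(0)
    losses = rng.standard_t(df=3, size=10_000) + 1.0  # heavy-tailed losses
    print("mean:          ", losses.mean())
    print("CVaR(0.95):    ", cvar(losses))
    print("mean-variance: ", mean_variance(losses))
    print("bidirectional: ", bidirectional_dispersion_risk(losses))
```

Varying eta and sigma in this sketch changes how strongly and how far into the tails deviations are penalized, which is the kind of tail-sensitivity flexibility the abstract contrasts with fixed-curvature mean-variance.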
