Corpus ID: 220302148

Tilted Empirical Risk Minimization

@article{Li2021TiltedER,
  title={Tilted Empirical Risk Minimization},
  author={Tian Li and Ahmad Beirami and Maziar Sanjabi and Virginia Smith},
  journal={ArXiv},
  year={2021},
  volume={abs/2007.01162}
}
Empirical risk minimization (ERM) is typically designed to perform well on the average loss, which can result in estimators that are sensitive to outliers, generalize poorly, or treat subgroups unfairly. While many methods aim to address these problems individually, in this work, we explore them through a unified framework---tilted empirical risk minimization (TERM). In particular, we show that it is possible to flexibly tune the impact of individual losses through a straightforward extension…
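The extension in question is the paper's tilted loss: for per-sample losses f(x_i; θ) and a scalar tilt t, TERM minimizes (1/t) log((1/N) Σ_i exp(t · f(x_i; θ))). A minimal NumPy sketch of this objective (function and variable names are mine, not from the authors' code):

```python
import numpy as np

def tilted_loss(losses, t):
    """Tilted empirical risk: (1/t) * log(mean(exp(t * losses))).

    t > 0 magnifies the largest losses (useful for fairness across
    subgroups), t < 0 suppresses them (robustness to outliers), and
    t -> 0 recovers the ordinary average loss of ERM.
    """
    losses = np.asarray(losses, dtype=float)
    if t == 0.0:
        return losses.mean()  # ERM limit
    z = t * losses
    zmax = z.max()  # shift for a numerically stable log-mean-exp
    return (zmax + np.log(np.exp(z - zmax).mean())) / t

losses = [0.1, 0.2, 0.15, 5.0]       # one outlier loss
print(tilted_loss(losses, t=-2.0))   # downweights the outlier
print(tilted_loss(losses, t=0.0))    # plain average
print(tilted_loss(losses, t=+2.0))   # dominated by the worst loss
```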
Robust Risk Minimization for Statistical Learning From Corrupted Data
TLDR: A robust learning method that only requires specifying an upper bound on the fraction of corrupted data is developed; it minimizes a risk function defined by a non-parametric distribution with unknown probability weights.
Fairness-Aware Learning from Corrupted Data
TLDR: It is shown that an adversary can force any learner to return a biased classifier, with or without degrading accuracy, and that the strength of this bias increases for learning problems with underrepresented protected groups in the data.
Federated Multi-Task Learning for Competing Constraints
TLDR: This work develops a scalable solver for the objective and shows that multi-task learning can enable more accurate, robust, and fair models relative to state-of-the-art baselines across a suite of federated datasets.
Tilted Cross-Entropy (TCE): Promoting Fairness in Semantic Segmentation
TLDR: Through quantitative and qualitative performance analyses, it is demonstrated that the proposed Stochastic TCE for semantic segmentation can offer improved overall fairness by efficiently minimizing the performance disparity among the target classes of Cityscapes.
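TCE adapts TERM's exponential tilting from per-sample to per-class losses, so that poorly performing (often rare) classes dominate the objective. A hedged sketch under that reading (the paper's exact per-class aggregation and its stochastic mini-batch variant may differ):

```python
import numpy as np

def tilted_cross_entropy(per_class_ce, t=1.0):
    """Tilt-aggregate the mean cross-entropy of each semantic class.

    per_class_ce: array of per-class mean cross-entropy values.
    A positive tilt t emphasizes the worst-performing classes,
    shrinking the performance disparity across classes.
    """
    z = t * np.asarray(per_class_ce, dtype=float)
    zmax = z.max()  # stability shift for log-mean-exp
    return (zmax + np.log(np.exp(z - zmax).mean())) / t
```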
GIFAIR-FL: An Approach for Group and Individual Fairness in Federated Learning
TLDR: This paper proposes GIFAIR-FL, an approach that imposes group and individual fairness on federated learning by adding a regularization term, and shows improved fairness with superior or similar prediction accuracy.
Ditto: Fair and Robust Federated Learning Through Personalization
TLDR: This work identifies that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks, and proposes a simple, general framework, Ditto, that can inherently provide fairness and robustness benefits.
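Ditto's personalization objective makes this trade-off concrete. Restated here from memory (worth checking against the paper), each device $k$ fits a personal model $v_k$ regularized toward the global solution $w^*$:

$$\min_{v_k}\; h_k(v_k; w^*) \;=\; f_k(v_k) + \frac{\lambda}{2}\,\lVert v_k - w^*\rVert^2,$$

where $f_k$ is device $k$'s local empirical risk, $w^*$ minimizes the global federated objective, and $\lambda$ interpolates between purely local training ($\lambda = 0$) and the shared global model ($\lambda \to \infty$).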
A Field Guide to Federated Optimization
TLDR: This paper provides recommendations and guidelines on formulating, designing, evaluating, and analyzing federated optimization algorithms through concrete examples and practical implementation, with a focus on conducting effective simulations to infer real-world performance.
Age-Optimal Power Allocation in Industrial IoT: A Risk-Sensitive Federated Learning Approach
TLDR: This work characterizes extreme AoI (age-of-information) staleness using results from extreme value theory and proposes a distributed power allocation approach that weaves together principles of Lyapunov optimization and federated learning (FL).
DAIR: Data Augmented Invariant Regularization
TLDR: Data-augmented invariant regularization (DAIR) introduces a regularizer on data-augmented ERM (DA-ERM) that penalizes inconsistency between the losses on original and augmented samples; it is applied to multiple real-world learning problems involving domain shift, namely robust regression, visual question answering, robust deep neural network training, and task-oriented dialog modeling.
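A hedged sketch of the regularized objective as I read it (the penalty's exact functional form, a squared difference of square-rooted losses, is my recollection of DAIR and should be checked against the paper):

```python
import numpy as np

def dair_objective(loss_clean, loss_aug, lam=1.0):
    """DA-ERM risk plus an invariance penalty on loss inconsistency
    between each example and its augmented counterpart.

    loss_clean, loss_aug: per-example losses on original/augmented data.
    lam: regularization strength (lam = 0 recovers plain DA-ERM).
    """
    loss_clean = np.asarray(loss_clean, dtype=float)
    loss_aug = np.asarray(loss_aug, dtype=float)
    erm = 0.5 * (loss_clean.mean() + loss_aug.mean())
    reg = np.mean((np.sqrt(loss_aug) - np.sqrt(loss_clean)) ** 2)
    return erm + lam * reg
```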
Designing off-sample performance metrics
TLDR: This work considers an approach to building learning systems which treats the question of "how should we quantify good off-sample performance?" as a key design decision, using a simple and general formulation.

References

Showing 1-10 of 99 references.
Variance-based Regularization with Convex Objectives
TLDR: An approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error; the procedure is shown to come with certificates of optimality.
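The driving observation, stated schematically (the paper makes it precise for a χ²-ball of radius ρ around the empirical distribution $P_n$), is that the distributionally robust risk acts as a convex surrogate for the variance-penalized empirical risk:

$$\sup_{Q:\, D_{\chi^2}(Q\,\|\,P_n)\le \rho/n}\ \mathbb{E}_{Q}\big[\ell(\theta;Z)\big] \;\approx\; \mathbb{E}_{P_n}\big[\ell(\theta;Z)\big] \;+\; \sqrt{\frac{2\rho\,\operatorname{Var}_{P_n}\!\big(\ell(\theta;Z)\big)}{n}},$$

where the left-hand side is convex in θ whenever ℓ is, unlike the right-hand side.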
Empirical Risk Minimization under Fairness Constraints
TLDR: This work presents an approach based on empirical risk minimization which incorporates a fairness constraint into the learning problem, and derives both risk and fairness bounds that support the statistical consistency of the approach.
Fairness Without Demographics in Repeated Loss Minimization
TLDR: This paper develops an approach based on distributionally robust optimization (DRO), which minimizes the worst-case risk over all distributions close to the empirical distribution, and proves that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice.
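For the χ²-divergence ball used in this line of work, the worst-case risk admits a convenient dual form (restated from memory; the constant $C_\rho$ depends on the ball radius and should be checked against the paper):

$$\mathcal{R}_{\mathrm{DRO}}(\theta) \;=\; \min_{\eta\in\mathbb{R}} \Big\{\, C_\rho\,\Big(\mathbb{E}\big[(\ell(\theta;Z)-\eta)_+^{2}\big]\Big)^{1/2} + \eta \,\Big\},$$

so training only requires one extra scalar variable η alongside the model parameters.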
Consistent Robust Regression
TLDR: It is shown that CRR (consistent robust regression) not only offers consistent estimates but is empirically far superior to several other recently proposed algorithms for the robust regression problem, including extended Lasso and the TORRENT algorithm.
Conditional Value-at-Risk for General Loss Distributions
Fundamental properties of conditional value-at-risk, as a measure of risk with significant advantages over value-at-risk, are derived for loss distributions in finance that can involve discreteness.
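The property most used downstream is the variational (Rockafellar-Uryasev) form of CVaR at confidence level α:

$$\mathrm{CVaR}_{\alpha}(\ell) \;=\; \min_{\lambda\in\mathbb{R}} \Big\{\, \lambda + \frac{1}{1-\alpha}\,\mathbb{E}\big[(\ell-\lambda)_+\big] \,\Big\},$$

whose minimizing λ is the value-at-risk (the α-quantile of the loss distribution); this holds for general, including discrete, loss distributions.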
Empirical Bernstein Bounds and Sample-Variance Penalization
TLDR: Improved constants for data-dependent and variance-sensitive confidence bounds, called empirical Bernstein bounds, are given and extended to hold uniformly over classes of functions whose growth function is polynomial in the sample size n; sample-variance penalization is also considered.
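For orientation, the bound has the following flavor (this is the Maurer-Pontil form for i.i.d. $X_i \in [0,1]$ with sample variance $V_n$; exact constants should be checked against the paper): with probability at least $1-\delta$,

$$\mathbb{E}[X] \;\le\; \frac{1}{n}\sum_{i=1}^{n} X_i \;+\; \sqrt{\frac{2 V_n \ln(2/\delta)}{n}} \;+\; \frac{7\ln(2/\delta)}{3(n-1)},$$

which replaces the range-based term of Hoeffding's inequality with the typically much smaller sample variance.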
Can gradient clipping mitigate label noise?
TLDR: It is proved that for the common problem of label noise in classification, standard gradient clipping does not in general provide robustness; however, a simple variant of gradient clipping is provably robust and corresponds to suitably modifying the underlying loss function.
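The robust variant amounts to clipping (linearizing) the loss rather than the gradient. A hedged sketch of that idea for the log loss (the threshold parameterization is illustrative; the paper's precise partially-Huberised loss may differ in details):

```python
import numpy as np

def clipped_log_loss(p, tau=2.0):
    """Cross-entropy -log(p) on the predicted true-class probability p,
    linearized where its slope would exceed tau.

    For p >= 1/tau, the usual -log(p) applies; below that, we follow the
    tangent line at p = 1/tau, so per-example gradients are bounded and
    single mislabeled examples cannot dominate training.
    """
    p = np.asarray(p, dtype=float)
    tangent = -tau * p + np.log(tau) + 1.0  # tangent of -log(p) at p = 1/tau
    return np.where(p >= 1.0 / tau,
                    -np.log(np.maximum(p, 1e-12)),  # guard avoids log(0) warnings
                    tangent)
```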
Efficient Fair Principal Component Analysis
TLDR: An adaptive first-order algorithm is proposed to learn a subspace that preserves fairness while slightly compromising the reconstruction loss; it can be efficiently generalized to multiple group-sensitive features and effectively reduces unfair decisions in downstream tasks such as classification.
Optimization of conditional value-at-risk
A new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional Value-at-Risk (CVaR) rather than Value-at-Risk (VaR)…
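A minimal sketch of the Rockafellar-Uryasev objective on samples (names are mine; on an empirical distribution the inner minimization over λ is attained at the empirical α-quantile, so the estimate can be evaluated in closed form):

```python
import numpy as np

def cvar_from_samples(losses, alpha=0.95):
    """Estimate CVaR_alpha via min over lam of
    lam + E[(loss - lam)_+] / (1 - alpha)."""
    losses = np.asarray(losses, dtype=float)
    lam = np.quantile(losses, alpha)  # empirical VaR attains the minimum
    return lam + np.mean(np.maximum(losses - lam, 0.0)) / (1.0 - alpha)

rng = np.random.default_rng(0)
print(cvar_from_samples(rng.normal(size=100_000)))  # ~2.06 for a standard normal
```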
Relaxed Clipping: A Global Training Method for Robust Regression and Classification
TLDR: It is demonstrated that a relaxation of this form of "loss clipping" can be made globally solvable and applicable to any standard loss while guaranteeing robustness against outliers.