Corpus ID: 235825425

Implicit rate-constrained optimization of non-decomposable objectives

@article{Kumar2021ImplicitRO,
  title={Implicit rate-constrained optimization of non-decomposable objectives},
  author={Abhishek Kumar and Harikrishna Narasimhan and Andrew Cotter},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.10960}
}
We consider a popular family of constrained optimization problems arising in machine learning that involve optimizing a non-decomposable evaluation metric with a certain thresholded form, while constraining another metric of interest. Examples of such problems include optimizing the false negative rate at a fixed false positive rate, optimizing precision at a fixed recall, optimizing the area under the precision-recall or ROC curves, etc. Our key idea is to formulate a rate-constrained… 
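
As a rough illustration of the thresholded, rate-constrained setup described above (for example, minimizing the false negative rate subject to a false positive rate target), the sketch below uses sigmoid-relaxed rates and solves for the threshold that meets the target on a set of scores. The function names, the sigmoid relaxation, and the bisection solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_fpr(neg_scores, t, temp=1.0):
    # Sigmoid-relaxed false positive rate: fraction of negatives scoring above t.
    return float(np.mean(1.0 / (1.0 + np.exp(-(neg_scores - t) / temp))))

def soft_fnr(pos_scores, t, temp=1.0):
    # Sigmoid-relaxed false negative rate: fraction of positives scoring below t.
    return float(np.mean(1.0 / (1.0 + np.exp(-(t - pos_scores) / temp))))

def threshold_for_fpr(neg_scores, target_fpr, lo=-20.0, hi=20.0, iters=60):
    # soft_fpr is decreasing in t, so bisection finds the threshold at which
    # the relaxed false positive rate equals the target.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if soft_fpr(neg_scores, mid) > target_fpr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: evaluate the FNR objective at the threshold implied by FPR = 0.05.
pos_scores = np.random.randn(1000) + 1.0
neg_scores = np.random.randn(1000) - 1.0
t = threshold_for_fpr(neg_scores, target_fpr=0.05)
print(soft_fnr(pos_scores, t))
```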

Optimizing Two-way Partial AUC with an End-to-end Framework

TLDR
A generic framework for constructing surrogate optimization problems that supports efficient end-to-end training with deep learning; theoretical analyses show that 1) the surrogate objective upper-bounds the original problem under mild conditions, and 2) optimizing the surrogate problems leads to good generalization performance in terms of TPAUC with high probability.
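
For context, the two-way partial AUC (TPAUC) restricts the ROC area to the region where TPR is at least α and FPR is at most β. A common empirical estimator pairs the hardest examples on each side; the normalization is omitted here since conventions vary, so treat this as a sketch rather than the paper's exact objective:

```latex
% S^+_alpha: the (1 - alpha) * n_+ lowest-scoring positives (the "hard" positives);
% S^-_beta:  the beta * n_- highest-scoring negatives (the "hard" negatives).
\mathrm{TPAUC}(\alpha, \beta) \;\propto\;
  \sum_{i \in S^+_{\alpha}} \sum_{j \in S^-_{\beta}} \mathbb{1}\!\left[s_i > s_j\right]
```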

AUC Maximization in the Era of Big Data and AI: A Survey

TLDR
This paper aims to address the gap by reviewing the past two decades of literature on AUC maximization, giving a holistic view of the area and presenting detailed explanations and comparisons of different papers, from formulations to algorithms and theoretical guarantees.

Rank-based Decomposable Losses in Machine Learning: A Survey

TLDR
This survey provides a systematic and comprehensive review of rank-based decomposable losses in machine learning, and proposes a new taxonomy of loss functions organized along the perspectives of aggregate loss and individual loss.
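
To make the aggregate-versus-individual distinction concrete, here is a minimal sketch (illustrative, not taken from the survey): the individual loss scores one example, while a rank-based aggregate such as the average top-k loss combines the sorted per-example losses, recovering the average loss at k = n and the maximum loss at k = 1.

```python
import numpy as np

def average_top_k_loss(individual_losses, k):
    # Rank-based aggregate loss: average of the k largest per-example losses.
    # k = len(individual_losses) gives the usual average; k = 1 gives the max.
    losses = np.sort(np.asarray(individual_losses, dtype=float))
    return float(losses[-k:].mean())

print(average_top_k_loss([0.1, 0.9, 0.4, 0.2], k=2))  # mean of the two largest losses
```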

Training Over-parameterized Models with Non-decomposable Objectives

TLDR
This work points out that the standard approach of re-weighting the loss to incorporate label costs can produce unsatisfactory results when used to train over-parameterized models, and proposes new cost-sensitive losses that extend the classical idea of logit adjustment to handle more general cost matrices.
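
As a reminder of the classical idea being extended, logit adjustment shifts each class logit by the log of its prior before the softmax cross-entropy. The sketch below shows only this classical prior-based adjustment, not the paper's proposed cost-matrix losses.

```python
import numpy as np

def logit_adjusted_cross_entropy(logits, label, class_priors, tau=1.0):
    # Classical logit adjustment: add tau * log(prior) to each logit so that the
    # loss demands larger margins for rare classes.
    adjusted = logits + tau * np.log(class_priors)
    log_probs = adjusted - np.log(np.sum(np.exp(adjusted)))
    return float(-log_probs[label])

print(logit_adjusted_cross_entropy(np.array([2.0, 0.5]), label=1,
                                   class_priors=np.array([0.9, 0.1])))
```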

References

SHOWING 1-10 OF 55 REFERENCES

Optimizing Non-decomposable Performance Measures: A Tale of Two Classes

TLDR
It is shown that, for two large families of performance measures that can be expressed as functions of the true positive and negative rates, it is indeed possible to implement point stochastic updates.

Satisfying Real-world Goals with Dataset Constraints

TLDR
This paper proposes handling multiple goals on multiple datasets by training with dataset constraints, using the ramp penalty to accurately quantify costs, and presents an efficient algorithm to approximately optimize the resulting non-convex constrained optimization problem.
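
The ramp penalty mentioned above is a clipped hinge that stays in [0, 1], so rate constraints built from it quantify violations without the unbounded growth of an ordinary hinge. A minimal sketch with illustrative names, not the paper's code:

```python
import numpy as np

def ramp_upper(z):
    # Upper bound on the indicator 1[z > 0], clipped to [0, 1].
    return np.clip(z + 1.0, 0.0, 1.0)

def ramp_lower(z):
    # Lower bound on the indicator 1[z > 0], clipped to [0, 1].
    return np.clip(z, 0.0, 1.0)

def positive_rate_bounds(scores):
    # Sandwich the dataset's predicted positive rate between two ramp estimates,
    # which makes two-sided rate constraints tractable to penalize.
    s = np.asarray(scores, dtype=float)
    return float(ramp_lower(s).mean()), float(ramp_upper(s).mean())
```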

Constrained Classification and Ranking via Quantiles

TLDR
A novel framework for learning with constraints that can be expressed as a predicted positive rate (or negative rate) on a subset of the training data is proposed, yielding a surrogate loss function which avoids the complexity of constrained optimization.
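
The quantile connection is the key mechanic here: a constraint on the predicted positive rate over a dataset amounts to thresholding scores at a quantile. A small sketch with a hypothetical helper, not the paper's API:

```python
import numpy as np

def threshold_for_positive_rate(scores, target_rate):
    # To predict positive on (approximately) a target_rate fraction of examples,
    # threshold at the (1 - target_rate) empirical quantile of the scores.
    return float(np.quantile(np.asarray(scores, dtype=float), 1.0 - target_rate))

scores = np.random.randn(10_000)
t = threshold_for_positive_rate(scores, target_rate=0.1)
print(np.mean(scores > t))  # close to 0.1
```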

Approximate Heavily-Constrained Learning with Lagrange Multiplier Models

TLDR
This work proposes a “multiplier model” that maps each constraint's associated feature vector to the corresponding Lagrange multiplier, and proves optimality, approximate-feasibility, and generalization guarantees under assumptions on the flexibility of the multiplier model.
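
The sketch below illustrates the idea of replacing per-constraint multipliers with a shared model when there are too many constraints to track individually; the feature representation and the toy multiplier model are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def lagrangian(objective, violations, constraint_features, multiplier_model):
    # Instead of one free Lagrange multiplier per constraint, a multiplier model
    # maps each constraint's feature vector to a nonnegative multiplier.
    lambdas = np.array([multiplier_model(c) for c in constraint_features])
    return float(objective + np.dot(lambdas, violations))

# Toy multiplier model: a linear map followed by softplus to keep multipliers >= 0.
w = np.array([0.5, -0.2])
model = lambda c: float(np.log1p(np.exp(np.dot(w, c))))
print(lagrangian(objective=1.3,
                 violations=np.array([0.02, -0.01]),
                 constraint_features=[np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                 multiplier_model=model))
```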

Scalable Learning of Non-Decomposable Objectives

TLDR
A unified framework is presented that, using straightforward building block bounds, allows for highly scalable optimization of a wide range of ranking-based objectives and achieves substantial improvement in performance over the accuracy-objective baseline.

Learning with Complex Loss Functions and Constraints

TLDR
This work develops a general approach for solving constrained classification problems, where the loss and constraints are defined in terms of a general function of the confusion matrix, and reduces the constrained learning problem to a sequence of cost-sensitive learning tasks.
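
One way to read the reduction: when the objective and constraints are functions of the confusion matrix, fixing the Lagrange multipliers and linearizing yields an inner problem that is ordinary cost-sensitive classification. A hedged sketch of that structure, in notation that is mine rather than the paper's:

```latex
% Lagrangian over a classifier h and multipliers lambda >= 0, with psi and phi_k
% functions of the confusion matrix C[h]:
\mathcal{L}(h, \lambda) \;=\; \psi\bigl(C[h]\bigr) \;+\; \sum_{k} \lambda_k\, \phi_k\bigl(C[h]\bigr)
% At fixed lambda, (locally) linearizing in C[h] gives an inner problem of the form
% \min_h \langle G, C[h] \rangle, i.e., cost-sensitive classification with cost matrix G.
```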

Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification

TLDR
This paper considers linear-fractional metrics, a family of classification performance metrics that encompasses many standard ones such as the $F_\beta$-measure and the Jaccard index, and proposes methods to directly maximize performance under those metrics.
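
Concretely, linear-fractional means a ratio of two affine functions of the confusion-matrix entries; for example (standard identities, not specific to this paper):

```latex
F_\beta \;=\; \frac{(1+\beta^2)\,\mathrm{TP}}{(1+\beta^2)\,\mathrm{TP} + \beta^2\,\mathrm{FN} + \mathrm{FP}},
\qquad
\mathrm{Jaccard} \;=\; \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN} + \mathrm{FP}}.
```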

Adam: A Method for Stochastic Optimization

TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
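
For reference, the Adam update maintains exponential moving averages of the gradient and its elementwise square, applies bias corrections, and scales the step accordingly:

```latex
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2,
\qquad
\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^t},
\qquad
\theta_t = \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.
```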

AP-Perf: Incorporating Generic Performance Metrics in Differentiable Learning

TLDR
A marginal distribution technique is formulated to reduce the complexity of optimizing the adversarial prediction formulation over a vast range of non-decomposable metrics.

Online and Stochastic Gradient Methods for Non-decomposable Loss Functions

TLDR
This work shows that, for a large family of loss functions satisfying a certain uniform convergence property, its methods provably converge to the empirical risk minimizer for these losses, and establishes these results using novel proof techniques.
...