• Corpus ID: 231741229

Fast rates in structured prediction

@inproceedings{Cabannes2021FastRI,
  title={Fast rates in structured prediction},
  author={Vivien A. Cabannes and Alessandro Rudi and Francis R. Bach},
  booktitle={Conference on Learning Theory (COLT)},
  year={2021}
}
Discrete supervised learning problems such as classification are often tackled by introducing a continuous surrogate problem akin to regression. Bounding the original error, between estimate and solution, by the surrogate error endows discrete problems with convergence rates already shown for continuous instances. Yet, current approaches do not leverage the fact that discrete problems are essentially predicting a discrete output when continuous problems are predicting a continuous value. In… 
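
The surrogate mechanism sketched in the abstract can be summarized by a comparison (calibration) inequality; below is a minimal sketch in generic notation, with an unspecified constant, rather than the paper's exact statement:

% f_n: discrete predictor decoded from the surrogate estimate g_n of the regression target g*.
% A comparison inequality controls the original excess risk by the surrogate (regression) error:
\mathcal{E}(f_n) \;=\; \mathcal{R}(f_n) - \mathcal{R}(f^\star)
\;\le\; c \, \big\| g_n - g^\star \big\|_{L^2(\rho_{\mathcal{X}})},

so convergence rates proved for the continuous regression problem transfer to the discrete one.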

Towards Sharper Generalization Bounds for Structured Prediction

This paper investigates the generalization performance of structured prediction learning and obtains state-of-the-art generalization bounds from three different perspectives: Lipschitz continuity, smoothness, and the space capacity condition.

Disambiguation of Weak Supervision leading to Exponential Convergence rates

This paper focuses on partial labelling, an instance of weak supervision where, for a given input, the learner is given a set of potential targets, and proposes an empirical disambiguation algorithm to recover full supervision from weak supervision.

Prediction of concrete compressive strength with GGBFS and fly ash using multilayer perceptron algorithm, random forest regression and k-nearest neighbor regression

In this study, supervised learning and neural networks were applied to predict the compressive strength of concrete mixes with GGBFS and fly ash. Three models were compared: a multilayer perceptron (MLP) network, random forest regression, and k-nearest neighbor regression.
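
A minimal sketch of such a comparison with scikit-learn, using synthetic data in place of the actual mix-design features; the feature ranges, target formula and hyperparameters below are illustrative assumptions, not values from the study:

# Sketch: compare MLP, random forest and k-NN regressors on synthetic
# concrete-mix data (features and targets are made up for illustration).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical mix proportions: cement, GGBFS, fly ash, water (kg/m^3) and age (days).
X = rng.uniform([150, 0, 0, 140, 3], [400, 250, 200, 220, 90], size=(n, 5))
# Synthetic "compressive strength" with noise, standing in for measured values.
y = (0.12 * X[:, 0] + 0.05 * X[:, 1] + 0.04 * X[:, 2]
     - 0.10 * X[:, 3] + 8 * np.log(X[:, 4]) + rng.normal(0, 2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)),
    "Random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "k-NN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")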

Active Labeling: Streaming Stochastic Gradients

After formalizing the “active labeling” problem, which focuses on active learning with partial supervision, this paper provides a streaming technique that provably minimizes the ratio of generalization error over the number of samples.
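
For reference, here is a minimal sketch of a plain streaming stochastic-gradient loop on least squares; it illustrates only the streaming setting, not the paper's active-labeling query rule, which is not reproduced here:

# Sketch: plain streaming SGD for least squares; each sample is seen once.
# This only illustrates the streaming-gradient setting, not active labeling.
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_star = rng.normal(size=d)          # unknown ground-truth parameter
w = np.zeros(d)                      # running estimate

for t in range(1, 10_001):           # one pass over a stream of samples
    x = rng.normal(size=d)
    y = x @ w_star + 0.1 * rng.normal()
    grad = (x @ w - y) * x           # stochastic gradient of 0.5 * (x.w - y)^2
    w -= grad / np.sqrt(t)           # decreasing step size ~ 1/sqrt(t)

print("parameter error:", np.linalg.norm(w - w_star))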

A Case of Exponential Convergence Rates for SVM

A simple mechanism for obtaining fast convergence rates is presented and applied to SVM, showing that SVM can exhibit exponential convergence rates even without assuming the hard Tsybakov margin condition.
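
For reference, the hard margin condition mentioned here can be written as follows; this is the standard (Massart-type) formulation, not a statement taken from the paper:

% \eta(x) = P(Y = 1 | X = x) is the regression function in binary classification.
% Hard margin condition: the noise level stays bounded away from 1/2,
\exists\, h > 0 : \quad \big|\eta(X) - \tfrac{1}{2}\big| \;\ge\; h \quad \text{almost surely},
% under which excess classification risks of order e^{-c n} become possible.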

Towards Empirical Process Theory for Vector-Valued Functions: Metric Entropy of Smooth Function Classes

It is demonstrated how these entropy bounds can be used to show the uniform law of large numbers and asymptotic equicontinuity of the function classes; the bounds are also applied to statistical learning theory in which the output space is a Hilbert space.

Multiclass learning with margin: exponential rates with no bias-variance trade-off

For a wide variety of methods it is proved that the classification error under a hard-margin condition decreases exponentially fast without any bias-variance trade-off.

Robust Linear Predictions: Analyses of Uniform Concentration, Fast Rates and Model Misspecification

This study offers a unified robust framework covering a broad variety of linear prediction problems on a Hilbert space, coupled with a generic class of loss functions, and shows that the resulting convergence rate can be improved to achieve so-called "fast rates" under additional assumptions.

Machine classification for probe-based quantum thermometry

This work considers the problem of probe-based quantum thermometry and shows that machine classification, based on the k-nearest-neighbor algorithm, can provide reliable estimates over a broad range of scenarios, arguing that classification may become an experimentally relevant tool for thermometry in the quantum regime.
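
A minimal sketch of the classification step described here, with synthetic probe statistics standing in for real thermometry data; the features, temperature bins and number of neighbors are illustrative assumptions:

# Sketch: k-nearest-neighbor classification of a coarse "temperature bin"
# from synthetic probe measurement outcomes (illustrative data only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 1000
temperature = rng.uniform(0.1, 2.0, n)                   # hypothetical bath temperatures
# Hypothetical measurement statistics that depend (noisily) on temperature.
features = np.column_stack([
    np.tanh(1.0 / temperature) + 0.05 * rng.normal(size=n),
    np.exp(-1.0 / temperature) + 0.05 * rng.normal(size=n),
])
labels = np.digitize(temperature, bins=[0.5, 1.0, 1.5])  # 4 coarse temperature classes

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = KNeighborsClassifier(n_neighbors=10).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))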

References

SHOWING 1-10 OF 56 REFERENCES

Sobolev Norm Learning Rates for Regularized Least-Squares Algorithms

This paper combines the well-known integral operator techniques with an embedding property, which results in new finite-sample bounds with respect to the stronger norms in the special case of Sobolev reproducing kernel Hilbert spaces used as hypothesis spaces.
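
As a reminder of the object being analyzed, the regularized least-squares (kernel ridge regression) estimator over a reproducing kernel Hilbert space \mathcal{H} reads, in standard notation:

\hat f_\lambda \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}} \;\; \frac{1}{n} \sum_{i=1}^{n} \big(f(x_i) - y_i\big)^2 \;+\; \lambda\, \|f\|_{\mathcal{H}}^2,
% with rates then stated in norms interpolating between L^2 and the \mathcal{H} (e.g. Sobolev) norm.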

Structured Prediction with Partial Labelling through the Infimum Loss

This paper provides a unified framework, based on structured prediction and on the concept of infimum loss, to deal with partial labelling over a wide family of learning problems and loss functions; it leads naturally to explicit algorithms that can be easily implemented and for which statistical consistency and learning rates are proved.
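
In this setting each input comes with a set of candidate labels rather than a single target, and the infimum loss scores a prediction against the most favorable candidate; a sketch in generic notation (not the paper's exact statement):

% X: input, S \subseteq \mathcal{Y}: observed set of candidate labels, \ell: task loss.
\mathcal{R}_{\inf}(f) \;=\; \mathbb{E}_{(X,\,S)}\Big[\, \inf_{y \in S}\; \ell\big(f(X), y\big) \Big].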

A General Framework for Consistent Structured Prediction with Implicit Loss Embeddings

A large class of loss functions is identified and studied that implicitly defines a suitable geometry on the problem; this geometry is the key to developing an algorithmic framework amenable to a sharp statistical analysis and yielding efficient computations.
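
The embedding property and the resulting plug-in estimator can be sketched as follows, in the notation commonly used for this line of work (a summary, not the paper's exact definitions):

% The loss admits an implicit embedding via maps \psi, \varphi into a Hilbert space \mathcal{H}:
\ell(z, y) \;=\; \big\langle \psi(z), \varphi(y) \big\rangle_{\mathcal{H}},
\qquad
g^\star(x) \;=\; \mathbb{E}\big[\varphi(Y) \mid X = x\big],
\qquad
\hat f(x) \;=\; \operatorname*{arg\,min}_{z \in \mathcal{Z}} \big\langle \psi(z), \hat g(x) \big\rangle_{\mathcal{H}},
% so the surrogate step is a (vector-valued) regression and the decoding step is a discrete minimization.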

Sharp Analysis of Learning with Discrete Losses

A least-squares framework is studied to systematically design learning algorithms for discrete losses, with quantitative characterizations in terms of statistical and computational complexity, improving existing results by providing explicit dependence on the number of labels.

A Consistent Regularization Approach for Structured Prediction

This work characterizes a large class of loss functions that allows structured outputs to be naturally embedded in a linear space, and proves universal consistency and finite-sample bounds characterizing the generalization properties of the proposed methods.
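
A minimal sketch of the resulting two-step recipe (kernel ridge weights followed by loss-weighted decoding) on a toy problem; the kernel, loss, data and regularization below are illustrative assumptions, not the paper's experiments:

# Sketch of the two-step structured prediction estimator:
#   (1) kernel ridge weights  alpha(x) = (K + n*lam*I)^{-1} k_x
#   (2) decoding              f(x) = argmin_z  sum_i alpha_i(x) * loss(z, y_i)
import numpy as np

rng = np.random.default_rng(0)
n, lam = 200, 1e-2
X = rng.uniform(-1, 1, size=(n, 1))                          # toy inputs
Y = (X[:, 0] > 0).astype(int) + (X[:, 0] > 0.5).astype(int)  # labels in {0, 1, 2}
labels = np.array([0, 1, 2])

def gauss_kernel(A, B, sigma=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def loss(z, y):
    return float(z != y)                                     # toy discrete (0-1) loss

K = gauss_kernel(X, X)
W = np.linalg.solve(K + n * lam * np.eye(n), np.eye(n))      # (K + n*lam*I)^{-1}

def predict(x):
    alpha = W @ gauss_kernel(X, x[None, :])[:, 0]            # weights alpha_i(x)
    scores = [sum(a * loss(z, y) for a, y in zip(alpha, Y)) for z in labels]
    return int(labels[np.argmin(scores)])

print("prediction at x = 0.7:", predict(np.array([0.7])))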

Risk bounds for statistical learning

A general theorem providing upper bounds for the risk of an empirical risk minimizer (ERM) when the classification rules belong to some VC class under margin conditions is proposed, and the optimality of these bounds is discussed in a minimax sense.

Rates of Convergence for Nearest Neighbor Classification

This work analyzes the behavior of nearest neighbor classification in metric spaces and provides finite-sample, distribution-dependent rates of convergence under minimal assumptions, and finds that under the Tsybakov margin condition the convergence rate of nearest neighbors matches recently established lower bounds for nonparametric classification.

Sur une modification de l'inégalité de Tchebychev

  • Annals Science Institute Sav. Ukraine.
  • 1924

Remarks on Inequalities for Large Deviation Probabilities

...