• Corpus ID: 53988898

Cross-validation improved by aggregation: Agghoo

@article{Maillard2017CrossvalidationIB,
  title={Cross-validation improved by aggregation: Agghoo},
  author={Guillaume Maillard and Sylvain Arlot and Matthieu Lerasle},
  journal={arXiv: Statistics Theory},
  year={2017}
}
Cross-validation is widely used for selecting among a family of learning rules. This paper studies a related method, called aggregated hold-out (Agghoo), which mixes cross-validation with aggregation; Agghoo can also be related to bagging. According to numerical experiments, Agghoo can significantly improve on cross-validation's prediction error at the same computational cost, which makes it very promising as a general-purpose tool for prediction. We provide the first theoretical guarantees on…
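
To make the idea concrete, here is a minimal sketch of an aggregated hold-out in Python, assuming scikit-learn and a k-nearest-neighbors candidate family; the candidate grid, the number of splits, and the 20% hold-out size are illustrative choices, not the paper's setup.

    # Minimal sketch of aggregated hold-out: run the hold-out on several
    # independent random splits, then average the selected predictors.
    # Grid and split parameters are assumptions for illustration only.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.metrics import mean_squared_error

    def agghoo_predict(X, y, X_new, candidate_ks=(1, 3, 5, 9),
                       n_splits=5, seed=0):
        rng = np.random.RandomState(seed)
        preds = []
        for _ in range(n_splits):
            X_tr, X_ho, y_tr, y_ho = train_test_split(
                X, y, test_size=0.2, random_state=rng.randint(2**31 - 1))
            # Hold-out step: pick the candidate with the best hold-out risk.
            fits = [KNeighborsRegressor(n_neighbors=k).fit(X_tr, y_tr)
                    for k in candidate_ks]
            best = min(fits,
                       key=lambda m: mean_squared_error(y_ho, m.predict(X_ho)))
            preds.append(best.predict(X_new))
        # Aggregation step: average the hold-out-selected predictors.
        return np.mean(preds, axis=0)

With n_splits hold-out rounds, this costs roughly the same as n_splits-fold cross-validation, which is the comparison the abstract makes.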

Citations

X-Vectors: New Quantitative Biomarkers for Early Parkinson's Disease Detection From Speech

TLDR
The x-vectors technique provided better classification performance than MFCC-GMM for the text-independent tasks and seemed particularly well suited to the early detection of PD in women (7–15% improvement).

References

SHOWING 1-10 OF 30 REFERENCES

A K-fold averaging cross-validation procedure

TLDR
This work proposes a new K-fold CV procedure to select a candidate 'optimal' model from each hold-out fold and average the K candidate 'optimal' models to obtain the ultimate model.
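
One way to read that procedure, sketched below with a lasso path; treating "averaging the models" as averaging the K selected coefficient vectors is my reading, and the penalty grid is illustrative.

    # Hypothetical sketch of a K-fold averaging CV procedure: on each fold,
    # pick a candidate 'optimal' model on the held-out part, then average
    # the K selected coefficient vectors into a final model.
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.linear_model import Lasso
    from sklearn.metrics import mean_squared_error

    def acv_lasso(X, y, alphas=(0.01, 0.1, 1.0), n_folds=5):
        coefs, intercepts = [], []
        for tr, ho in KFold(n_splits=n_folds, shuffle=True,
                            random_state=0).split(X):
            fits = [Lasso(alpha=a).fit(X[tr], y[tr]) for a in alphas]
            best = min(fits,
                       key=lambda m: mean_squared_error(y[ho], m.predict(X[ho])))
            coefs.append(best.coef_)
            intercepts.append(best.intercept_)
        # The 'ultimate model' is the average of the K fold-wise selections.
        return np.mean(coefs, axis=0), np.mean(intercepts, axis=0)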

Choice of V for V-Fold Cross-Validation in Least-Squares Density Estimation

TLDR
A non-asymptotic oracle inequality is proved for V-fold cross-validation and its bias-corrected version (V-fold penalization), implying that V-fold penalization is asymptotically optimal in the nonparametric case.
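
For readers unfamiliar with the term, a non-asymptotic oracle inequality typically takes the following shape; the symbols here are generic placeholders, not the paper's exact constants.

    % Generic shape of a non-asymptotic oracle inequality; (1 + eps_n)
    % and the remainder r_n are placeholders, not the paper's quantities.
    \mathbb{E}\,\ell\bigl(s, \widehat{s}_{\widehat{m}}\bigr)
      \;\le\; (1 + \varepsilon_n)\,
      \inf_{m \in \mathcal{M}} \mathbb{E}\,\ell\bigl(s, \widehat{s}_m\bigr)
      \;+\; r_n

Asymptotic optimality, as claimed for V-fold penalization, corresponds to the leading constant (1 + ε_n) tending to 1 with a negligible remainder r_n.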

Bagging predictors

TLDR
Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
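
The bagging recipe is simple enough to state in a few lines; this Python sketch uses a regression tree as the base learner, with the resample count chosen arbitrarily for illustration.

    # Minimal sketch of bagging: fit the same base learner on bootstrap
    # resamples of the data and average the resulting predictions.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def bagged_predict(X, y, X_new, n_bootstrap=50, seed=0):
        rng = np.random.RandomState(seed)
        n = len(X)
        preds = []
        for _ in range(n_bootstrap):
            idx = rng.randint(n, size=n)  # resample n points with replacement
            tree = DecisionTreeRegressor(random_state=0).fit(X[idx], y[idx])
            preds.append(tree.predict(X_new))
        return np.mean(preds, axis=0)     # aggregate by averaging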

Optimal aggregation of classifiers in statistical learning

TLDR
The main result of the paper concerns optimal aggregation of classifiers: a classifier that automatically adapts both to the complexity and to the margin, and attains the optimal fast rates, up to a logarithmic factor.

Fast learning rates in statistical inference through aggregation

We develop minimax optimal risk bounds for the general learning task consisting in predicting as well as the best function in a reference set G up to the smallest possible additive term, called the convergence rate.

Classification and Regression by randomForest

TLDR
Random forests are proposed, adding an additional layer of randomness to bagging while remaining robust against overfitting; the randomForest package provides an R interface to the Fortran programs by Breiman and Cutler.
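
The paper's software is R's randomForest package; as a rough Python analogue (an assumption of equivalence for illustration, not the paper's interface), scikit-learn exposes the same two ingredients, bagging plus random feature subsetting at each split.

    # Illustrative Python analogue of a random forest; the paper itself
    # describes the R randomForest package, not scikit-learn.
    from sklearn.ensemble import RandomForestRegressor

    # max_features adds the extra layer of randomness beyond bagging:
    # each split considers only a random subset of the features.
    forest = RandomForestRegressor(n_estimators=500, max_features="sqrt")
    # Usage: forest.fit(X, y); forest.predict(X_new)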

A decision-theoretic generalization of on-line learning and an application to boosting

TLDR
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone–Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
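
The multiplicative weight-update rule itself is compact; in this sketch the learning rate eta is my generic placeholder, not the paper's tuned value.

    # Sketch of the multiplicative weight-update (Hedge-style) rule: each
    # expert's weight shrinks exponentially in the loss it incurs.
    import numpy as np

    def hedge(loss_matrix, eta=0.5):
        """loss_matrix[t, i] = loss of expert i in round t, values in [0, 1]."""
        n_rounds, n_experts = loss_matrix.shape
        w = np.ones(n_experts)
        learner_loss = 0.0
        for t in range(n_rounds):
            p = w / w.sum()                        # distribution played
            learner_loss += p @ loss_matrix[t]     # expected loss this round
            w = w * np.exp(-eta * loss_matrix[t])  # multiplicative update
        return w / w.sum(), learner_loss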

Adaptive Regression by Mixing

TLDR
Under mild conditions, it is shown that the squared L2 risk of the ARM estimator is bounded above by the risk of each candidate procedure plus a small penalty term of order 1/n, so that ARM automatically attains the optimal rate of convergence.

Classification and regression trees

W. Loh, WIREs Data Mining Knowl. Discov., 2011
TLDR
This article gives an introduction to the subject of classification and regression trees by reviewing some widely available algorithms and comparing their capabilities, strengths, and weaknesses in two examples.

Fast learning rates for plug-in classifiers

TLDR
This work constructs plug-in classifiers that can achieve not only fast, but also super-fast rates, that is, rates faster than n^{-1}, and establishes minimax lower bounds showing that the obtained rates cannot be improved.
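
A plug-in classifier in its simplest form: estimate the regression function eta(x) = P(Y = 1 | X = x) and threshold at 1/2. The kNN estimator below is one illustrative choice of plug-in; the paper's constructions achieving the fast rates are more refined.

    # Sketch of a plug-in classifier: plug a nonparametric estimate of
    # eta(x) = P(Y = 1 | X = x) into the Bayes rule 1{eta(x) >= 1/2}.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    def plug_in_classify(X, y, X_new, n_neighbors=10):
        eta_hat = KNeighborsRegressor(n_neighbors=n_neighbors).fit(X, y)
        return (eta_hat.predict(X_new) >= 0.5).astype(int)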