• Corpus ID: 3586027

# Linear regression through PAC-Bayesian truncation

@article{Audibert2011LinearRT,
  title={Linear regression through PAC-Bayesian truncation},
  author={Jean-Yves Audibert and Olivier Catoni},
  journal={arXiv: Statistics Theory},
  year={2011}
}
• Published 1 October 2010
• Mathematics, Computer Science
• arXiv: Statistics Theory
We consider the problem of predicting as well as the best linear combination of d given functions in least squares regression under L^\infty constraints on the linear combination. When the input distribution is known, there already exists an algorithm having an expected excess risk of order d/n, where n is the size of the training data. Without this strong assumption, standard results often contain a multiplicative log(n) factor, complex constants involving the conditioning of the Gram matrix…
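
For concreteness, the guarantee discussed in the abstract can be stated schematically as follows (a sketch only, with assumed notation; the constant and the exact conditions on the design are not specified here):

```latex
% Schematic form of the excess-risk guarantee described in the abstract
% (assumed notation: f_1,\dots,f_d are the given functions, \Theta the
% L^\infty-constrained coefficient set, n the sample size, C a constant).
\[
  R(\hat f) \;-\; \inf_{\theta \in \Theta} R\Big(\sum_{j=1}^{d} \theta_j f_j\Big)
  \;\le\; C\,\frac{d}{n},
  \qquad \text{where } R(f) = \mathbb{E}\big[(Y - f(X))^2\big].
\]
```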

### Robust linear least squares regression

• Mathematics, Computer Science
• 2011
A new estimator is provided, based on truncating differences of losses in a min-max framework, which satisfies a d/n risk bound both in expectation and in deviations; a notable feature is the absence of any exponential moment condition on the output distribution while still achieving exponential deviations.
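
Schematically, a min-max estimator built from truncated differences of losses can be written as below (a sketch only; the paper's specific truncation function and scaling constants are not reproduced here):

```latex
% Schematic min-max estimator built from truncated differences of squared losses
% (assumed notation: \psi is a bounded, non-decreasing truncation function;
% the paper's specific \psi and scaling constants are not reproduced here).
\[
  \hat\theta \;\in\; \arg\min_{\theta}\, \sup_{\theta'}\,
  \sum_{i=1}^{n} \psi\Big( \big(Y_i - f_{\theta}(X_i)\big)^2
                         - \big(Y_i - f_{\theta'}(X_i)\big)^2 \Big).
\]
```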

### Hard-Margin Active Linear Regression

• Computer Science, Mathematics
• ICML
• 2014
It is shown that active learning admits significantly better sample complexity bounds than its passive learning counterpart, and efficient algorithms attaining near-optimal bounds are given.

### PAC-Bayesian estimation and prediction in sparse additive models

• Computer Science
• Electronic Journal of Statistics
• 2013
A PAC-Bayesian strategy is investigated, delivering oracle inequalities in probability in high-dimensional additive models under a sparsity assumption, and its performance is assessed on simulated data.

### Dimension-free Bounds for Sums of Independent Matrices and Simple Tensors via the Variational Principle

This work considers deviation inequalities for sums of independent d-by-d random matrices, as well as rank-one random tensors, and presents bounds that do not depend explicitly on the dimension d but rather on the effective rank.

## References

Showing 1–10 of 30 references.

### Robust linear least squares regression

• Mathematics, Computer Science
• 2011
A new estimator is provided, based on truncating differences of losses in a min-max framework, which satisfies a d/n risk bound both in expectation and in deviations; a notable feature is the absence of any exponential moment condition on the output distribution while still achieving exponential deviations.

### Fast learning rates in statistical inference through aggregation

We develop minimax optimal risk bounds for the general learning task consisting in predicting as well as the best function in a reference set G up to the smallest possible additive term, called the…

### Pac-Bayesian Bounds for Sparse Regression Estimation with Exponential Weights

• Computer Science, Mathematics
• 2011
A sparsity oracle inequality in probability for the true excess risk of a version of the exponential weights estimator is presented, and an MCMC method is proposed to compute the estimator for reasonably large values of p.
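
The exponential-weights construction referred to above takes, in its generic form, the following shape (assumed notation; the specific sparsity prior and temperature used in the paper are not reproduced here):

```latex
% Generic exponentially weighted aggregate (Gibbs-type estimator), with assumed
% notation: r_n the empirical risk, \eta > 0 an inverse temperature, \pi a prior.
\[
  \hat f \;=\; \int f_\theta \, \hat\pi(d\theta),
  \qquad
  \hat\pi(d\theta) \;\propto\; \exp\big(-\eta\, r_n(f_\theta)\big)\, \pi(d\theta).
\]
```

The MCMC method mentioned in the summary serves to draw samples from this posterior-like measure when the dimension p is too large for exact integration.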

### PAC-BAYESIAN SUPERVISED CLASSIFICATION: The Thermodynamics of Statistical Learning

An alternative selection scheme based on relative bounds between estimators is described and studied, and a two-step localization technique that can handle the selection of a parametric model from a family of such models is presented.

### Aggregating Regression Procedures for a Better Performance

Methods have been proposed to linearly combine candidate regression procedures to improve estimation accuracy. Applications of these methods in many examples are very successful, pointing to the…

### Model selection for regression on a fixed design

This work considers a collection of finite-dimensional linear spaces and the least-squares estimator built on a model selected from this collection in a data-driven way, and deduces adaptivity properties that hold under mild moment conditions on the errors.

### Optimal Rates for the Regularized Least-Squares Algorithm

• Mathematics, Computer Science
• Found. Comput. Math.
• 2007
A complete minimax analysis of the problem is described, showing that the convergence rates obtained by regularized least-squares estimators are indeed optimal over a suitable class of priors defined by the considered kernel.

### PAC-Bayesian bounds for randomized empirical risk minimizers

The aim of this paper is to generalize the PAC-Bayesian theorems proved by Catoni in the classification setting to more general problems of statistical inference, and to bound the risk of very general estimation procedures.

### Challenging the empirical mean and empirical variance: a deviation study

We present new M-estimators of the mean and variance of real valued random variables, based on PAC-Bayes bounds. We analyze the non-asymptotic minimax properties of the deviations of those estimators…
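
The M-estimators of the mean referred to here are, schematically, of the following form (a sketch with assumed notation; the paper's specific bounds on the influence function are omitted):

```latex
% Schematic M-estimator of the mean: \hat\theta_\alpha solves the estimating
% equation below, where \psi is a bounded, non-decreasing influence function
% close to the identity near zero and \alpha > 0 is a scale parameter.
\[
  \sum_{i=1}^{n} \psi\big( \alpha\,(X_i - \hat\theta_\alpha) \big) \;=\; 0.
\]
```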