
We perform a finite sample analysis of the detection levels for sparse principal components of a high-dimensional covariance matrix. Our minimax optimal test is based on a sparse eigenvalue statistic. Alas, computing this test is known to be NP-complete in general, and we describe a computationally efficient alternative test using convex relaxations. Our…
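The sparse eigenvalue statistic mentioned in the abstract above can be illustrated by brute force: maximize the largest eigenvalue over all size-k principal submatrices of the empirical covariance matrix. The exhaustive search is exponential in k, consistent with the NP-completeness remark, so the sketch below (function name hypothetical) is only a reference implementation for small problems.

```python
import itertools

import numpy as np

def k_sparse_eigenvalue(sigma, k):
    """Largest eigenvalue over all k x k principal submatrices of sigma.

    Exhaustive search over supports: exponential in k, so this serves
    only as a reference implementation for small problems.
    """
    d = sigma.shape[0]
    best = float("-inf")
    for support in itertools.combinations(range(d), k):
        sub = sigma[np.ix_(support, support)]
        best = max(best, float(np.linalg.eigvalsh(sub)[-1]))
    return best

# Under the null (identity covariance) the statistic equals 1 for every k;
# a sparse spike in the covariance inflates it.
identity_stat = k_sparse_eigenvalue(np.eye(6), 2)
```

A test would reject the null when this statistic exceeds a threshold calibrated to the noise level; the convex relaxation in the paper replaces the combinatorial search with a tractable surrogate.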

In high-dimensional linear regression, the goal pursued here is to estimate an unknown regression function using linear combinations of a suitable set of covariates. One of the key assumptions for the success of any statistical procedure in this setup is to assume that the linear combination is sparse in some sense, for example, that it involves only few…
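One standard way to operationalize the sparsity assumption described above is an l1 penalty on the coefficients. The sketch below fits a lasso by proximal gradient descent (ISTA); the function name and step-size choice are illustrative, not taken from the paper.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Lasso via proximal gradient (ISTA).

    Minimizes (1 / 2n) * ||y - X b||^2 + lam * ||b||_1; the l1 penalty
    encodes the assumption that only a few coefficients are nonzero.
    """
    n, d = X.shape
    step = n / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz const. of the gradient
    b = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n          # gradient of the smooth part
        z = b - step * grad
        # Soft-thresholding: the proximal operator of the l1 penalty.
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return b

# Orthogonal toy design: the lasso solution simply soft-thresholds y,
# zeroing out the small coefficients.
b_hat = ista_lasso(np.eye(3), np.array([3.0, 0.1, 0.0]), lam=0.5)
```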

Given a collection of M different estimators or classifiers, we study the problem of model selection type aggregation, i.e., we construct a new estimator or classifier, called aggregate, which is nearly as good as the best among them with respect to a given risk criterion. We define our aggregate by a simple recursive procedure which solves an auxiliary…
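Model selection type aggregation is often illustrated with exponential weighting of the candidates' empirical risks; the sketch below shows that generic scheme under squared loss, not necessarily the recursive procedure the paper defines.

```python
import math

def exp_weight_aggregate(preds, y, temperature=1.0):
    """Weights for aggregating M predictors by exponential weighting.

    preds[m] is the list of predictions of model m on the sample and
    y the list of responses; the aggregate predicts the weighted
    average of the models' predictions.
    """
    losses = [sum((p - t) ** 2 for p, t in zip(pm, y)) for pm in preds]
    raw = [math.exp(-temperature * loss) for loss in losses]
    total = sum(raw)
    return [w / total for w in raw]

# Toy example: the first model fits the data, the second is far off,
# so the weights concentrate on the first model.
weights = exp_weight_aggregate([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]],
                               [1.0, 2.0, 3.0])
```

The oracle-inequality viewpoint in the abstract asks that the aggregate's risk be close to that of the best single model, which the concentrated weights achieve in this toy case.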

- Philippe Rigollet
- Journal of Machine Learning Research
- 2007

We consider semi-supervised classification when part of the available data is unlabeled. These unlabeled data can be useful for the classification problem when we make an assumption relating the behavior of the regression function to that of the marginal distribution. Seeger (2000) proposed the well-known cluster assumption as a reasonable one. We propose a…

In the context of density level set estimation, we study the convergence of general plug-in methods under two main assumptions on the density for a given level λ. More precisely, it is assumed that the density (i) is smooth in a neighborhood of λ and (ii) has γ-exponent at level λ. Condition (i) ensures that the density can be estimated at a standard…
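A plug-in method in this setting replaces the unknown density with an estimate and thresholds it at the level λ. The histogram-based sketch below (hypothetical names) is a deliberately simple instance; kernel estimators are the more common choice in practice.

```python
def plugin_level_set(samples, lam, bins=20, lo=0.0, hi=1.0):
    """Histogram plug-in estimate of the level set {x : f(x) > lam}.

    Estimates the density on [lo, hi) by a histogram and returns the
    indices of the bins whose estimated density exceeds lam.
    """
    n = len(samples)
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in samples:
        b = min(int((x - lo) / width), bins - 1)
        counts[b] += 1
    density = [c / (n * width) for c in counts]
    return [i for i, d in enumerate(density) if d > lam]

# Samples spread evenly over [0, 0.5): the estimated density is about 2
# there and 0 elsewhere, so the level set at lam = 1 is the first 10 bins.
level_set = plugin_level_set([0.5 * i / 200 for i in range(200)], 1.0)
```

The conditions in the abstract control how the error of the density estimate near λ propagates to the symmetric difference between this estimated set and the true level set.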

- Quentin Berthet, Philippe Rigollet
- COLT
- 2013

In the context of sparse principal component detection, we bring evidence towards the existence of a statistical price to pay for computational efficiency. We measure the performance of a test by the smallest signal strength that it can detect and we propose a computationally efficient method based on semidefinite programming. We also prove that the…

- Vianney Perchet, Philippe Rigollet
- ArXiv
- 2011

We consider a multi-armed bandit problem in a setting where each arm produces a noisy reward realization which depends on an observable random covariate. As opposed to the traditional static multi-armed bandit problem, this setting allows for dynamically changing rewards that better describe applications where side information is available. We adopt a…
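The covariate-dependent setting above can be sketched with a simple binned epsilon-greedy policy: discretize the covariate space and keep a separate empirical-mean estimate per (bin, arm). This is an illustrative baseline, not the policy analyzed in the paper.

```python
import random

def covariate_bandit(pull, n_rounds, n_arms, n_bins=5, eps=0.1, seed=0):
    """Epsilon-greedy bandit whose rewards depend on a covariate in [0, 1).

    pull(arm, x) returns the (possibly noisy) reward of playing `arm`
    at covariate `x`; a separate estimate is kept per (bin, arm).
    """
    rng = random.Random(seed)
    counts = [[0] * n_arms for _ in range(n_bins)]
    means = [[0.0] * n_arms for _ in range(n_bins)]
    total = 0.0
    for _ in range(n_rounds):
        x = rng.random()                        # observed covariate
        b = min(int(x * n_bins), n_bins - 1)
        if rng.random() < eps or min(counts[b]) == 0:
            arm = rng.randrange(n_arms)         # explore
        else:
            arm = max(range(n_arms), key=lambda j: means[b][j])
        r = pull(arm, x)
        counts[b][arm] += 1
        means[b][arm] += (r - means[b][arm]) / counts[b][arm]
        total += r
    return total

# Toy problem where the best arm switches with the covariate:
# arm 0 pays 1 when x < 0.5, arm 1 pays 1 when x >= 0.5.
reward = covariate_bandit(lambda a, x: float((a == 0) == (x < 0.5)),
                          n_rounds=2000, n_arms=2)
```

A static policy that commits to a single arm earns roughly half the reward here, which is why side information changes the problem.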

We study the problem of learning the best linear and convex combination of M estimators of a density with respect to the mean squared risk. We suggest aggregation procedures and we prove sharp oracle inequalities for their risks, i.e., oracle inequalities with leading constant 1. We also obtain lower bounds showing that these procedures attain optimal rates…

- Sébastien Bubeck, Vianney Perchet, Philippe Rigollet
- COLT
- 2013

We study the stochastic multi-armed bandit problem when one knows the value µ⋆ of an optimal arm, as well as a positive lower bound on the smallest positive gap ∆. We propose a new randomized policy that attains a regret uniformly bounded over time in this setting. We also prove several lower bounds, which show in particular that bounded regret is not…
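As a simplified illustration of how knowing µ⋆ and ∆ can yield bounded regret, the sketch below plays the active arms round-robin and permanently eliminates an arm once its empirical mean falls confidently below µ⋆ − ∆/2. Names, the confidence threshold, and the deterministic rule are hypothetical; the paper's policy is randomized and its guarantees are sharper.

```python
import random

def eliminate_with_known_gap(pull, n_arms, mu_star, delta, horizon, seed=0):
    """Play active arms round-robin; drop an arm once its empirical mean
    falls below mu_star - delta / 2 after a crude confidence threshold
    of pulls. pull(arm, rng) returns one reward observation.
    """
    rng = random.Random(seed)
    active = list(range(n_arms))
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    min_pulls = 8.0 / delta ** 2      # crude confidence threshold
    for t in range(horizon):
        arm = active[t % len(active)]
        r = pull(arm, rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
        total += r
        if (len(active) > 1 and counts[arm] >= min_pulls
                and means[arm] < mu_star - delta / 2):
            active.remove(arm)
    return total

# Two Bernoulli arms with means 0.9 (optimal) and 0.5, so delta = 0.4:
# the suboptimal arm is eliminated after a bounded number of pulls, and
# the regret stops growing from that point on.
def bernoulli(arm, rng):
    return float(rng.random() < (0.9 if arm == 0 else 0.5))

total = eliminate_with_known_gap(bernoulli, 2, mu_star=0.9, delta=0.4,
                                 horizon=1000)
```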