Prediction, learning, and games
This book discusses prediction with expert advice, efficient forecasters for large classes of experts, and randomized prediction under specific losses.
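As a concrete illustration of the first topic, here is a minimal sketch of the exponentially weighted average forecaster, one of the basic algorithms for prediction with expert advice; the squared loss and the fixed learning rate below are illustrative choices, not the only setting the book treats.

```python
import numpy as np

def exponentially_weighted_forecaster(expert_predictions, outcomes, eta=0.5):
    """Minimal exponentially weighted average forecaster.

    expert_predictions: array of shape (T, N), each expert's prediction
        in [0, 1] at every round.
    outcomes: array of shape (T,), realized outcomes in [0, 1].
    eta: learning rate (a common tuning is sqrt(8 * ln(N) / T)).
    """
    T, N = expert_predictions.shape
    cumulative_loss = np.zeros(N)
    forecasts = np.empty(T)
    for t in range(T):
        # Weight each expert by the exponential of its past cumulative loss.
        weights = np.exp(-eta * cumulative_loss)
        weights /= weights.sum()
        # Predict with the weighted average of the experts' advice.
        forecasts[t] = weights @ expert_predictions[t]
        # Update cumulative losses (squared loss, for illustration).
        cumulative_loss += (expert_predictions[t] - outcomes[t]) ** 2
    return forecasts
```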
A Probabilistic Theory of Pattern Recognition
The Bayes error and Vapnik-Chervonenkis theory are applied as a guide for empirical classifier selection, on the basis of explicit specification and enforcement of the maximum likelihood principle.
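For reference, the Bayes classifier and the Bayes error that anchor this theory are defined, in standard binary-classification notation, by

```latex
\eta(x) = \mathbb{P}\{Y = 1 \mid X = x\}, \qquad
g^*(x) = \mathbb{1}\{\eta(x) > 1/2\}, \qquad
L^* = \mathbb{P}\{g^*(X) \neq Y\} = \mathbb{E}\bigl[\min(\eta(X),\, 1 - \eta(X))\bigr].
```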
Concentration Inequalities - A Nonasymptotic Theory of Independence
Deep connections with isoperimetric problems are revealed, and special attention is paid to applications involving suprema of empirical processes.
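One representative result of the kind developed in the book is the bounded differences (McDiarmid) inequality: if $X_1, \dots, X_n$ are independent and changing the $i$-th argument of $f$ changes its value by at most $c_i$, then

```latex
\mathbb{P}\bigl\{ f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \ge t \bigr\}
\;\le\; \exp\!\left( -\frac{2t^2}{\sum_{i=1}^n c_i^2} \right).
```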
Combinatorial methods in density estimation
Combinatorial tools such as the Vapnik-Chervonenkis dimension and covering numbers are developed and applied to the analysis of nonparametric density estimates, including the kernel estimate.
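The kernel estimate referred to here is the standard one: given an i.i.d. sample $X_1, \dots, X_n$ in $\mathbb{R}^d$, a kernel $K$, and a bandwidth $h > 0$,

```latex
f_n(x) = \frac{1}{n h^d} \sum_{i=1}^{n} K\!\left( \frac{x - X_i}{h} \right).
```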
Benign overfitting in linear regression
- P. Bartlett, Philip M. Long, G. Lugosi, Alexander Tsigler
- Computer Science · Proceedings of the National Academy of Sciences
- 26 June 2019
A characterization of linear regression problems for which the minimum norm interpolating prediction rule has near-optimal prediction accuracy shows that overparameterization is essential for benign overfitting in this setting: the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size.
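A minimal sketch of the minimum norm interpolating rule in question, i.e. the ridgeless least-squares solution obtained from the pseudoinverse; the dimensions and data-generating model below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 500  # overparameterized: many more parameters than samples
X = rng.standard_normal((n, d))
y = X[:, 0] + 0.1 * rng.standard_normal(n)  # one informative direction plus noise

# Minimum-l2-norm interpolator: theta = pinv(X) @ y.
theta = np.linalg.pinv(X) @ y

# The rule fits the training data exactly (it interpolates) ...
assert np.allclose(X @ theta, y)
# ... yet may still predict well when, as the characterization requires,
# many directions in parameter space are unimportant for prediction.
```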
Theory of classification: a survey of some recent advances
The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these important…
Ranking and empirical minimization of U-statistics
This paper formulates the ranking problem in a rigorous statistical framework, establishes a tail inequality for degenerate U-processes, and applies it to show that fast rates of convergence may be achieved under specific noise assumptions, just as in classification.
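Concretely, the empirical ranking risk minimized in this framework is a U-statistic of order two: an average over all pairs of observations. A minimal sketch, where the scores are assumed to come from whatever ranking rule is being evaluated:

```python
import itertools

def empirical_ranking_risk(scores, labels):
    """Order-two U-statistic: the fraction of discordant pairs,
    i.e. pairs ranked in the opposite order of their labels."""
    n = len(scores)
    discordant = sum(
        (labels[i] - labels[j]) * (scores[i] - scores[j]) < 0
        for i, j in itertools.combinations(range(n), 2)
    )
    return 2.0 * discordant / (n * (n - 1))
```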
- G. Lugosi
This text attempts to summarize some of the basic tools used in establishing concentration inequalities, which are at the heart of the mathematical analysis of various problems in machine learning and have made it possible to derive new efficient algorithms.
Bandits With Heavy Tail
- Sébastien Bubeck, N. Cesa-Bianchi, G. Lugosi
- Computer Science, Mathematics · IEEE Transactions on Information Theory
- 8 September 2012
This paper examines the bandit problem under the weaker assumption that the reward distributions have finite moments of order 1 + ε, and derives matching lower bounds showing that the best achievable regret deteriorates when ε < 1.
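The approach is to replace the empirical mean inside an upper-confidence-bound index with a robust mean estimator. A minimal sketch of one such estimator, median-of-means, with the number of blocks an illustrative parameter:

```python
import numpy as np

def median_of_means(samples, num_blocks=8):
    """Split the samples into blocks, average each block, and return
    the median of the block means. Unlike the plain empirical mean,
    this gives exponential concentration even when the distribution
    has only low-order finite moments."""
    blocks = np.array_split(np.asarray(samples, dtype=float), num_blocks)
    return float(np.median([block.mean() for block in blocks]))
```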
Model Selection and Error Estimation
A tight relationship between error estimation and data-based complexity penalization is pointed out: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate.
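A minimal sketch of the conversion described above, with the per-model error-estimate penalties assumed to be supplied by whatever estimator is available:

```python
def select_model(models, empirical_errors, error_estimate_penalties):
    """Penalized model selection: pick the model minimizing empirical
    error plus a data-based penalty derived from an error estimate."""
    scores = [err + pen for err, pen in zip(empirical_errors, error_estimate_penalties)]
    best = min(range(len(models)), key=scores.__getitem__)
    return models[best]
```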