Corpus ID: 52008637

Adaptive Sampling for Convex Regression

Max Simchowitz, Kevin G. Jamieson, Jordan W. Suchow, Thomas L. Griffiths
In this paper, we introduce the first principled adaptive-sampling procedure for learning a convex function in the $L_\infty$ norm, a problem that arises often in the behavioral and social sciences. We present a function-specific measure of complexity and use it to prove that, for each convex function $f_{\star}$, our algorithm nearly attains the information-theoretically optimal, function-specific error rate. We also corroborate our theoretical contributions with numerical experiments, finding… 
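To illustrate why function-specific, adaptive placement of samples can beat uniform sampling in the $L_\infty$ norm, the following sketch (not the paper's algorithm; the test function, grid sizes, and curvature-adapted spacing are our own illustrative choices) compares the sup-norm error of piecewise-linear interpolation of a convex function on a uniform grid versus a grid concentrated where the curvature is large:

```python
# Illustrative sketch (not the paper's procedure): approximate a convex
# f_star by the piecewise-linear interpolant of its values at n knots,
# and compare the L_inf error of a uniform grid vs a curvature-adapted grid.
import math

def f_star(x):
    # a convex test function whose curvature grows sharply toward x = 1
    return math.exp(3 * x)

def linf_error(knots):
    """Sup-norm gap between f_star and its piecewise-linear interpolant."""
    err = 0.0
    for a, b in zip(knots, knots[1:]):
        fa, fb = f_star(a), f_star(b)
        for t in range(1, 50):  # probe each interval on a fine sub-grid
            x = a + (b - a) * t / 50
            interp = fa + (fb - fa) * (x - a) / (b - a)
            err = max(err, abs(interp - f_star(x)))
    return err

n = 17
uniform = [i / (n - 1) for i in range(n)]
# knots equally spaced in exp(3x/2), i.e. spacing proportional to
# (f'')^{-1/2} -- denser where f_star curves more sharply
adapted = [(2 / 3) * math.log(1 + (math.exp(1.5) - 1) * i / (n - 1))
           for i in range(n)]
print(linf_error(uniform), linf_error(adapted))
```

With the same budget of 17 knots, the curvature-adapted grid attains a noticeably smaller sup-norm error than the uniform one, which is the kind of function-specific gain an adaptive sampler can exploit.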


Efficient Minimax Optimal Estimators For Multivariate Convex Regression
This work is the first to show the existence of efficient minimax optimal estimators for non-Donsker classes whose corresponding Least Squares Estimators are provably minimax sub-optimal; a result of independent interest.
Problem Dependent View on Structured Thresholding Bandit Problems
This work investigates the problem dependent regime in the stochastic Thresholding Bandit problem under several shape constraints and provides upper and lower bounds for the probability of error in both the concave and monotone settings, as well as associated algorithms.
The Influence of Shape Constraints on the Thresholding Bandit Problem
These rates demonstrate that the dependence on $K$ of the minimax regret varies significantly depending on the shape constraint, which highlights the fact that the shape constraints modify fundamentally the nature of the TBP.


An Improved Global Risk Bound in Concave Regression
It is shown that indeed the logarithmic term is unnecessary and it is proved a risk bound which scales like n-4/5 up to constant factors and also extends to the case of model misspecification, where the true function may not be concave.
Global risk bounds and adaptation in univariate convex regression
We consider the problem of nonparametric estimation of a convex regression function $\phi_0$. We study the risk of the least squares estimator (LSE) under the natural squared error loss. We show
Sharp oracle inequalities for Least Squares estimators in shape restricted regression
The performance of Least Squares (LS) estimators is studied in isotonic, unimodal and convex regression. Our results have the form of sharp oracle inequalities that account for the model
Local Minimax Complexity of Stochastic Convex Optimization
This work shows how the computational modulus of continuity can be explicitly calculated in concrete cases, and relates to the curvature of the function at the optimum, and proves a superefficiency result that demonstrates it is a meaningful benchmark, acting as a computational analogue of the Fisher information in statistical estimation.
Faster Rates in Regression via Active Learning
A practical algorithm capable of exploiting the extra flexibility of the active setting and provably improving upon the classical passive techniques is described.
Consistency of Concave Regression with an Application to Current-Status Data
We consider the problem of nonparametric estimation of a concave regression function F. We show that the supremum distance between the least squares estimator and F on a compact interval is
On the Complexity of Best-Arm Identification in Multi-Armed Bandit Models
This work introduces generic notions of complexity for the two dominant frameworks considered in the literature: fixed-budget and fixed-confidence settings, and provides the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m ≥ 1 under general assumptions.
Optimal Confidence Bands for Shape-Restricted Curves
Let Y be a stochastic process on $[0,1]$ satisfying $dY(t) = n^{1/2} f(t)\,dt + dW(t)$, where $n \geq 1$ is a given scale parameter ('sample size'), W is standard Brownian motion and f is an unknown function.
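The white-noise model above can be simulated directly by discretizing the increments; a minimal sketch (the signal f, the grid resolution, and the scale parameter are arbitrary choices for illustration, not from the paper):

```python
# Hedged sketch: Euler discretization of dY(t) = n^{1/2} f(t) dt + dW(t)
# on [0, 1], where W is standard Brownian motion.
import math
import random

random.seed(0)
m = 1000                       # grid resolution on [0, 1]
n = 100                        # scale parameter ("sample size")
dt = 1.0 / m
f = lambda t: t * t            # an arbitrary convex signal

# increments dY_k = n^{1/2} f(t_k) dt + sqrt(dt) * N(0, 1)
dY = [math.sqrt(n) * f(k * dt) * dt + math.sqrt(dt) * random.gauss(0, 1)
      for k in range(m)]

# summing the increments and rescaling recovers the integral of f,
# up to O(n^{-1/2}) Gaussian noise: here the integral of t^2 is 1/3
total = sum(dY)
print(total / math.sqrt(n))
```

The rescaled sum concentrates around $\int_0^1 f(t)\,dt = 1/3$ with a Gaussian fluctuation of standard deviation $n^{-1/2}$, which is why $n$ plays the role of a sample size in this model.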
Adaptive confidence intervals for regression functions under shape constraints
Adaptive confidence intervals for regression functions are constructed under shape constraints of monotonicity and convexity. A natural benchmark is established for the minimum expected length of
Concentration Inequalities - A Nonasymptotic Theory of Independence
Deep connections with isoperimetric problems are revealed whilst special attention is paid to applications to the supremum of empirical processes.