Corpus ID: 88522363

Multivariate convex regression: global risk bounds and adaptation

@article{Han2016MultivariateCR,
  title={Multivariate convex regression: global risk bounds and adaptation},
  author={Qiyang Han and Jon A. Wellner},
  journal={arXiv: Statistics Theory},
  year={2016}
}
We study the problem of estimating a multivariate convex function defined on a convex body in a regression setting with random design. We are interested in optimal rates of convergence under a squared global continuous $\ell_2$ loss in the multivariate setting $(d\geq 2)$. One crucial fact is that the minimax risks depend heavily on the shape of the support of the regression function. It is shown that the global minimax risk is on the order of $n^{-2/(d+1)}$ when the support is sufficiently smooth…
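At the design points, the least squares estimator studied in this line of work is the solution of a finite-dimensional quadratic program: one fits values and subgradients subject to the pairwise convexity constraints. The snippet below is a minimal sketch of that formulation using cvxpy; the simulated data, the chosen dimensions, and all variable names are illustrative assumptions and not taken from the paper.

```python
# Minimal sketch (not from the paper): the convex least squares estimator,
# evaluated at the design points, written as a quadratic program.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 50, 2
X = rng.uniform(-1.0, 1.0, size=(n, d))                   # random design on a convex body
y = np.sum(X**2, axis=1) + 0.1 * rng.standard_normal(n)   # convex truth plus noise

g = cp.Variable(n)        # fitted values g_i = phi_hat(X_i)
xi = cp.Variable((n, d))  # subgradients of the fit at each design point

# Convexity constraints: g_j >= g_i + <xi_i, X_j - X_i> for all pairs (i, j).
constraints = [
    g[j] >= g[i] + xi[i] @ (X[j] - X[i])
    for i in range(n) for j in range(n) if i != j
]
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - g)), constraints)
prob.solve()
print("in-sample squared error:", prob.value / n)
```

Note that the number of constraints grows like $n^2$, which is one reason the large-scale work cited below develops specialized algorithms rather than handing the full QP to a generic solver.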


Convex Regression in Multidimensions: Suboptimality of Least Squares Estimators
The least squares estimator (LSE) is shown to be suboptimal in squared error loss in the usual nonparametric regression model with Gaussian errors for $d \geq 5$ for each of the following families of…
Optimality of Maximum Likelihood for Log-Concave Density Estimation and Bounded Convex Regression
It is proved that estimating a log-concave density, even a uniform distribution on a convex set, up to a fixed accuracy requires a number of samples at least exponential in the dimension.
Multivariate Convex Regression at Scale
This framework can solve instances of the convex regression problem with $n=10^5$ and $d=10$ (a QP with 10 billion variables) within minutes, and offers significant computational gains compared to current algorithms.
Adaptation in log-concave density estimation
The log-concave maximum likelihood estimator of a density on the real line based on a sample of size $n$ is known to attain the minimax optimal rate of convergence of $O(n^{-4/5})$ with respect to…
Max-affine regression with universal parameter estimation for small-ball designs
This paper shows that alternating minimization (AM) is significantly more robust than the setting of [1]: it converges locally under small-ball design assumptions, and even when the underlying parameters are chosen with knowledge of the realized covariates.
On Degrees of Freedom of Projection Estimators With Applications to Multivariate Nonparametric Regression
In this article, we consider the nonparametric regression problem with multivariate predictors. We provide a characterization of the degrees of freedom and divergence for estimators of the…
Multivariate Distributionally Robust Convex Regression under Absolute Error Loss
A novel non-parametric multidimensional convex regression estimator designed to be robust to adversarial perturbations in the empirical measure is proposed, matching the bounds of alternative estimators based on square-loss minimization.
Adaptation in multivariate log-concave density estimation
We study the adaptation properties of the multivariate log-concave maximum likelihood estimator over two subclasses of log-concave densities. The first consists of densities with polyhedral support…
Convex Regression: Theory, Practice, and Applications
A new excess risk upper bound is developed for the general empirical risk minimization setting without any shape constraints, and provides a probabilistic guarantee for cases with unbounded hypothesis classes, targets, and noise models.
Max-Affine Regression: Provable, Tractable, and Near-Optimal Statistical Estimation
This work analyzes a natural alternating minimization (AM) algorithm for the non-convex least squares objective and shows that the AM algorithm, when initialized suitably, converges with high probability and at a geometric rate to a small ball around the optimal coefficients.
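The two max-affine entries above both revolve around an alternating minimization scheme: assign each observation to the affine piece attaining the maximum at that point, then refit each piece by ordinary least squares. The sketch below illustrates that generic scheme only; the random initialization, the piece count k, and the stopping rule are ad hoc assumptions for illustration and do not reproduce the carefully initialized algorithm analyzed in those papers.

```python
# Schematic alternating minimization (AM) for max-affine regression,
# f(x) = max_k (a_k . x + b_k).  A generic sketch, not the exact algorithm
# from the papers above; initialization and stopping rule are ad hoc.
import numpy as np

def am_max_affine(X, y, k=3, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A = rng.standard_normal((k, d))        # slopes a_k (random init)
    b = np.zeros(k)                        # intercepts b_k
    Xa = np.hstack([X, np.ones((n, 1))])   # design augmented with intercept column
    for _ in range(n_iter):
        # Step 1: assign each point to the affine piece attaining the max.
        labels = np.argmax(X @ A.T + b, axis=1)
        # Step 2: refit each piece by ordinary least squares on its points.
        for j in range(k):
            idx = labels == j
            if idx.sum() > d:              # skip nearly empty pieces
                coef, *_ = np.linalg.lstsq(Xa[idx], y[idx], rcond=None)
                A[j], b[j] = coef[:d], coef[d]
    return A, b

# Toy usage on data drawn from a max-affine (hence convex) truth plus noise.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.max(X @ np.array([[1., 0.], [0., 1.], [-1., -1.]]).T, axis=1)
y = y + 0.05 * rng.standard_normal(200)
A_hat, b_hat = am_max_affine(X, y)
```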

References

Showing 1-10 of 62 references
Global risk bounds and adaptation in univariate convex regression
We consider the problem of nonparametric estimation of a convex regression function $\phi_0$. We study the risk of the least squares estimator (LSE) under the natural squared error loss. We show…
Global rates of convergence in log-concave density estimation
The estimation of a log-concave density on $\mathbb{R}^d$ represents a central problem in the area of nonparametric inference under shape constraints. In this paper, we study the performance of…
An Improved Global Risk Bound in Concave Regression
A new risk bound is presented for the problem of convex/concave function estimation, using the least squares estimator. The best known risk bound, as had appeared in \citet{GSvex}, scaled like…
Sharp oracle inequalities for Least Squares estimators in shape restricted regression
The performance of Least Squares (LS) estimators is studied in isotonic, unimodal and convex regression. Our results have the form of sharp oracle inequalities that account for the model…
On risk bounds in isotonic and other shape restricted regression problems
We consider the problem of estimating an unknown $\theta\in {\mathbb{R}}^n$ from noisy observations under the constraint that $\theta$ belongs to certain convex polyhedral cones in ${\mathbb{R}}^n$.
Model selection for regression on a random design
We consider the problem of estimating an unknown regression function when the design is random with values in …. Our estimation procedure is based on model selection and does not rely on any prior…
A new perspective on least squares under convex constraint
Consider the problem of estimating the mean of a Gaussian random vector when the mean vector is assumed to be in a given convex set. The most natural solution is to take the Euclidean projection of…
Entropy of Convex Functions on ℝ^d
The results imply that the universal lower bound $\epsilon^{-d/2}$ is also an upper bound for all $d$-polytopes, and the universal upper bound of [Formula: see text] is attained by the closed unit ball. (The entropy-to-rate calculation these bounds feed into is sketched after this reference list.)
Consistency of Multidimensional Convex Regression
This paper studies a least-squares estimator that is computable as the solution of a quadratic program and establishes that it converges almost surely to the "true" function as $n \to \infty$ under modest technical assumptions.
Risk bounds for model selection via penalization
Performance bounds for criteria for model selection are developed using recent theory for sieves. The model selection criteria are based on an empirical loss or contrast function with an…
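As flagged in the entropy entry above, the bridge from such metric entropy bounds to the convergence rates quoted in the abstract is the standard entropy-to-minimax-rate calculation. The display below sketches that calculation under the usual uniform boundedness assumptions; the exponent $\alpha$ and the balancing heuristic are generic textbook ingredients, not statements lifted from the paper.

```latex
% Sketch only: the standard entropy-to-rate calculation (Yang--Barron style)
% for a uniformly bounded class; not a statement from the paper.
% If the convex class $\mathcal{C}(\Omega)$ on the support $\Omega$ satisfies
%   $\log N(\varepsilon, \mathcal{C}(\Omega), L_2(\Omega)) \asymp \varepsilon^{-\alpha}$,
% the minimax squared-$L_2$ risk $r_n^2$ balances $\log N(r_n) \asymp n r_n^2$, giving
\[
  r_n^2 \asymp n^{-2/(2+\alpha)} .
\]
% With $\alpha = d/2$ (the polytope bound quoted above) this is $n^{-4/(d+4)}$;
% with $\alpha = d-1$ it reproduces the $n^{-2/(d+1)}$ rate quoted in the abstract.
```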