Isotonic regression in general dimensions

@article{Han2019IsotonicRI,
  title={Isotonic regression in general dimensions},
  author={Qiyang Han and Tengyao Wang and Sabyasachi Chatterjee and Richard J. Samworth},
  journal={The Annals of Statistics},
  year={2019}
}
We study the least squares regression function estimator over the class of real-valued functions on $[0,1]^d$ that are increasing in each coordinate. For uniformly bounded signals and with a fixed, cubic lattice design, we establish that the estimator achieves the minimax rate of order $n^{-\min\{2/(d+2),1/d\}}$ in the empirical $L_2$ loss, up to poly-logarithmic factors. Further, we prove a sharp oracle inequality, which reveals in particular that when the true regression function is piecewise constant on $k$ hyperrectangles, the least squares estimator enjoys a faster, adaptive rate of convergence of order $(k/n)^{\min\{1,2/d\}}$, again up to poly-logarithmic factors.
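
On the lattice, the least squares estimator is simply the Euclidean projection of the observation array onto the cone of arrays that are nondecreasing in each coordinate. As an illustration only, and not the authors' computational method, the following Python sketch computes this projection on a 2-d lattice via Dykstra's alternating projections, with the pool-adjacent-violators algorithm (PAVA) handling the univariate projections; the function names are ours.

import numpy as np

def pava(y):
    # Pool adjacent violators: L2 projection of y onto {v : v_1 <= ... <= v_n}.
    sums, counts = [], []
    for yi in y:
        sums.append(float(yi)); counts.append(1)
        # merge adjacent blocks while their means violate monotonicity
        while len(sums) > 1 and sums[-2] * counts[-1] > sums[-1] * counts[-2]:
            sums[-2] += sums.pop(); counts[-2] += counts.pop()
    out = np.empty(len(y))
    i = 0
    for s, c in zip(sums, counts):
        out[i:i + c] = s / c
        i += c
    return out

def isotonic_lse_2d(Y, n_iter=200):
    # Dykstra's algorithm: alternate L2 projections onto the row-monotone and
    # column-monotone cones; their intersection is the bimonotone cone, so the
    # iterates converge to the least squares estimator on the 2-d lattice.
    X = np.asarray(Y, dtype=float)
    P, Q = np.zeros_like(X), np.zeros_like(X)  # Dykstra correction terms
    for _ in range(n_iter):
        Z = X + P
        X = np.apply_along_axis(pava, 1, Z)  # enforce monotonicity along rows
        P = Z - X
        Z = X + Q
        X = np.apply_along_axis(pava, 0, Z)  # enforce monotonicity along columns
        Q = Z - X
    return X

# Example: denoise a monotone surface observed on a 20 x 20 lattice.
rng = np.random.default_rng(0)
truth = np.add.outer(np.arange(20), np.arange(20)) / 40.0
fit = isotonic_lse_2d(truth + 0.3 * rng.standard_normal((20, 20)))

In one dimension the loop reduces to a single PAVA pass; for $d > 2$ the same scheme cycles over one axis-monotone cone per coordinate direction.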

Citations

Isotonic regression in multi-dimensional spaces and graphs
In this paper we study minimax and adaptation rates in general isotonic regression. For uniform deterministic and random designs in $[0,1]^d$ with $d\ge 2$ and $N(0,1)$ noise, the minimax rate for …
Convex Regression in Multidimensions: Suboptimality of Least Squares Estimators
The least squares estimator (LSE) is shown to be suboptimal in squared error loss in the usual nonparametric regression model with Gaussian errors for $d \geq 5$ for each of the following families of …
Limit distribution theory for block estimators in multiple isotonic regression
We study limit distributions for the tuning-free max-min block estimator originally proposed in [FLN17] in the problem of multiple isotonic regression, under both fixed lattice design and random …
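
For intuition, the max-min block estimator takes, at each design point $x$, the maximum over lower corners $u \le x$ of the minimum over upper corners $v \ge x$ of the average of the observations over the block $[u,v]$. A brute-force 2-d Python sketch of that formula (based on the estimator's standard definition, not code from any of these papers):

import numpy as np

def max_min_block_2d(Y):
    # Max-min block estimator on an n1 x n2 lattice (brute force):
    # fhat(i, j) = max over a <= (i, j) of min over b >= (i, j)
    # of the mean of Y over the rectangle [a, b].
    n1, n2 = Y.shape
    S = np.zeros((n1 + 1, n2 + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(Y, axis=0), axis=1)  # 2-d prefix sums

    def block_mean(a1, a2, b1, b2):  # inclusive corners, a <= b coordinatewise
        total = S[b1 + 1, b2 + 1] - S[a1, b2 + 1] - S[b1 + 1, a2] + S[a1, a2]
        return total / ((b1 - a1 + 1) * (b2 - a2 + 1))

    out = np.empty((n1, n2))
    for i in range(n1):
        for j in range(n2):
            out[i, j] = max(
                min(block_mean(a1, a2, b1, b2)
                    for b1 in range(i, n1) for b2 in range(j, n2))
                for a1 in range(i + 1) for a2 in range(j + 1)
            )
    return out
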
Multivariate extensions of isotonic regression and total variation denoising via entire monotonicity and Hardy–Krause variation
We consider the problem of nonparametric regression when the covariate is $d$-dimensional, where $d \geq 1$. In this paper we introduce and study two nonparametric least squares estimators (LSEs) in …
Adaptive confidence sets in shape restricted regression
We construct adaptive confidence sets in isotonic and convex regression. In univariate isotonic regression, if the true parameter is piecewise constant with $k$ pieces, then the Least-Squares …
High-dimensional nonparametric density estimation via symmetry and shape constraints
We tackle the problem of high-dimensional nonparametric density estimation by taking the class of log-concave densities on $\mathbb{R}^p$ and incorporating within it symmetry assumptions, which …
Adaptation in multivariate log-concave density estimation
We study the adaptation properties of the multivariate log-concave maximum likelihood estimator over two subclasses of log-concave densities. The first consists of densities with polyhedral support …
On Suboptimality of Least Squares with Application to Estimation of Convex Bodies
It is established that Least Squares is minimax suboptimal: it achieves a rate of $\tilde{\Theta}_d(n^{-2/(d-1)})$, whereas the minimax rate is $\Theta_d(n^{-4/(d+3)})$.

References

Showing 1–10 of 66 references
On risk bounds in isotonic and other shape restricted regression problems
We consider the problem of estimating an unknown $\theta\in \mathbb{R}^n$ from noisy observations under the constraint that $\theta$ belongs to certain convex polyhedral cones in $\mathbb{R}^n$. …
Sharp oracle inequalities for Least Squares estimators in shape restricted regression
The performance of Least Squares (LS) estimators is studied in isotonic, unimodal and convex regression. Our results have the form of sharp oracle inequalities that account for the model …
Global risk bounds and adaptation in univariate convex regression
We consider the problem of nonparametric estimation of a convex regression function $\phi_0$. We study the risk of the least squares estimator (LSE) under the natural squared error loss. We show …
On matrix estimation under monotonicity constraints
We consider the problem of estimating an unknown $n_1 \times n_2$ matrix $\theta^*$ from noisy observations under the constraint that $\theta^*$ is nondecreasing in both rows and columns. …
Global rates of convergence in log-concave density estimation
The estimation of a log-concave density on $\mathbb{R}^d$ represents a central problem in the area of nonparametric inference under shape constraints. In this paper, we study the performance of …
A sharp multiplier inequality with applications to heavy-tailed regression problems
The size of the multiplier empirical process $\|\sum_{i=1}^n \xi_i f(X_i)\|_{\mathcal{F}}$ is determined jointly by the growth rate of the corresponding empirical process and the size of the maxima of the multipliers. The new inequality sheds light on the long-standing open question …
Convergence rates of least squares regression estimators with heavy-tailed errors
We study the performance of the Least Squares Estimator (LSE) in a general nonparametric regression model, when the errors are independent of the covariates but may only have a $p$-th moment ($p \geq 1$). …
On the $\mathbb{L}_p$-error of monotonicity constrained estimators
We aim at estimating a function $\lambda:[0,1]\to \mathbb{R}$, subject to the constraint that it is decreasing (or increasing). We provide a unified approach for studying the $\mathbb{L}_p$-loss of …
Adaptation in log-concave density estimation
The log-concave maximum likelihood estimator of a density on the real line based on a sample of size $n$ is known to attain the minimax optimal rate of convergence of $O(n^{-4/5})$ with respect to …
Contraction and uniform convergence of isotonic regression
We consider the problem of isotonic regression, where the underlying signal $x$ is assumed to satisfy a monotonicity constraint, that is, $x$ lies in the cone $\{ x\in\mathbb{R}^n : x_1 \leq \dots \leq x_n \}$. …
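
The $L_2$ projection onto this cone is computed exactly by the pool-adjacent-violators algorithm; a quick sanity check using scikit-learn's off-the-shelf implementation (illustrative only, not code from the paper):

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.arange(200)
y = np.linspace(0.0, 1.0, 200) + 0.3 * rng.standard_normal(200)  # monotone signal plus noise
fit = IsotonicRegression().fit_transform(x, y)  # L2 projection onto the monotone cone
assert np.all(np.diff(fit) >= -1e-12)  # fitted values are nondecreasing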