Oracle inequalities for sign constrained generalized linear models

@article{Koike2019OracleIF,
  title={Oracle inequalities for sign constrained generalized linear models},
  author={Yuta Koike and Yuta Tanoue},
  journal={Econometrics and Statistics},
  year={2019}
}


Bayesian inference for generalized linear model with linear inequality constraints

High-dimensional sign-constrained feature selection and grouping

In this paper, we propose a non-negative feature selection/feature grouping (nnFSG) method for general sign-constrained high-dimensional regression problems that allows regression coefficients to be …
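The snippet above is cut off, but the basic idea of sign-constrained estimation is easy to sketch. Below is a minimal illustration, not the nnFSG method itself, of least squares with per-coefficient sign constraints, solved with scipy.optimize.lsq_linear; the data, dimensions, and sign pattern are invented for the example.

# Minimal sketch of sign-constrained least squares: each coefficient is
# restricted to be non-negative or non-positive via per-coefficient bounds.
# Data and the assumed sign pattern are illustrative only.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 1.5, 0.0, 0.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Assumed sign pattern: first four coefficients non-negative, the rest non-positive.
signs = np.array([+1, +1, +1, +1, -1, -1, -1, -1, -1, -1])
lower = np.where(signs > 0, 0.0, -np.inf)
upper = np.where(signs > 0, np.inf, 0.0)

res = lsq_linear(X, y, bounds=(lower, upper))
print(np.round(res.x, 3))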

References

Showing 1-10 of 31 references

Sign-constrained least squares estimation for high-dimensional regression

Network tomography is shown to be an application where the necessary conditions for the success of non-negative least squares are naturally fulfilled, and empirical results confirm the effectiveness of the sign constraint for sparse recovery.

Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization

It is argued further that in specific cases NNLS may have a better $\ell_{\infty}$-rate in estimation, and hence also advantages with respect to support recovery when combined with thresholding; from a practical point of view, NNLS does not depend on a regularization parameter and is hence easier to use.
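As a rough illustration of the point above, the following sketch fits non-negative least squares with scipy.optimize.nnls, which needs no regularization parameter, and then applies hard thresholding for support recovery; the simulated data and the threshold value are arbitrary choices, not taken from the paper.

# Minimal sketch: NNLS followed by hard thresholding for support recovery.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n, p, s = 200, 50, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0                        # sparse, non-negative signal
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_nnls, _ = nnls(X, y)                  # no regularization parameter needed
support = np.flatnonzero(beta_nnls > 0.2)  # hard threshold (illustrative value)
print(support)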

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
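The nonconcave penalty studied by Fan and Li is the SCAD penalty. The sketch below evaluates the SCAD penalty function itself, with the value a = 3.7 the authors suggest; it is only an illustration of the penalty, not of the full penalized-likelihood fitting algorithm.

# Sketch of the SCAD penalty function from the nonconcave penalized
# likelihood framework; lam and a are tuning parameters.
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """Elementwise SCAD penalty p_lam(|theta|)."""
    t = np.abs(theta)
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    pen = np.where(small, lam * t, 0.0)
    pen = np.where(mid, (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)), pen)
    pen = np.where(t > a * lam, lam**2 * (a + 1) / 2, pen)
    return pen

print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))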

Oracle Inequalities for Convex Loss Functions with Nonlinear Targets

This article considers penalized empirical loss minimization of convex loss functions with unknown target functions. Using the elastic net penalty, of which the Least Absolute Shrinkage and Selection Operator (Lasso) is a special case, oracle inequalities are established for the resulting estimator.
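As a quick illustration of elastic net penalized loss minimization, the following sketch fits an elastic net on simulated data with scikit-learn; the mixing parameter and penalty strength are arbitrary illustrative values (l1_ratio=1.0 would recover the Lasso).

# Minimal sketch of elastic net penalized least squares with scikit-learn.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [1.0, -2.0, 0.5]
y = X @ beta_true + 0.1 * rng.standard_normal(100)

model = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y)
print(np.flatnonzero(model.coef_))         # indices of selected coefficients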

Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming

A pivotal method is proposed for estimating high-dimensional sparse linear regression models, where the overall number of regressors p is large, possibly much larger than n, but only s regressors are significant; it achieves near-oracle performance, attaining the convergence rate $\sigma\{(s/n)\log p\}^{1/2}$ in the prediction norm.
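The square-root lasso minimizes $\|y - X\beta\|_2/\sqrt{n} + \lambda\|\beta\|_1$, so the tuning parameter can be chosen without knowing the noise level $\sigma$. The sketch below sets this problem up with cvxpy, used here as an assumed off-the-shelf convex solver; the data and the particular choice of $\lambda$ are illustrative, not the exact constants recommended in the paper.

# Sketch of the square-root lasso objective solved as a convex program.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)

b = cp.Variable(p)
lam = 1.1 * np.sqrt(2 * np.log(p) / n)     # pivotal-style choice, independent of sigma
objective = cp.norm(y - X @ b, 2) / np.sqrt(n) + lam * cp.norm(b, 1)
cp.Problem(cp.Minimize(objective)).solve()
print(np.flatnonzero(np.abs(b.value) > 1e-6))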

On the conditions used to prove oracle results for the Lasso

Oracle inequalities and variable selection properties for the Lasso in linear models have been established under a variety of different assumptions on the design matrix. We show in this paper how the different conditions and concepts relate to each other.

Quasi-Likelihood and/or Robust Estimation in High Dimensions

An extension of the oracle results to the case of quasi-likelihood loss is presented; bounds for the prediction error and the $\ell_1$-error are proved, and it is shown that under an irrepresentable condition, the $\ell_1$-penalized quasi-likelihood estimator has no false positives.
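For a concrete instance of an $\ell_1$-penalized (quasi-)likelihood estimator, the sketch below fits an $\ell_1$-penalized logistic regression with scikit-learn; the simulated data and the penalty strength C are illustrative assumptions.

# Minimal sketch of an l1-penalized likelihood fit for a logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -1.0, 2.0]
prob = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = rng.binomial(1, prob)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print(np.flatnonzero(model.coef_.ravel()))   # selected (non-zero) coefficients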

On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations

It is shown that for matrices A with a row span intersecting the positive orthant, if the underdetermined system admits a sufficiently sparse non-negative solution, that solution is necessarily unique; the bound on the required sparsity depends on a coherence property of the matrix A.

AIC for the Lasso in generalized linear models

A criterion is derived from the original definition of the AIC, that is, as an asymptotically unbiased estimator of the Kullback-Leibler divergence; it reduces to the known criterion in the Gaussian regression setting of the Lasso and can be regarded as its generalization.
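In the Gaussian regression setting, an AIC for the Lasso can use the number of nonzero coefficients as the degrees of freedom. The sketch below selects the Lasso penalty by such an AIC-type score; it assumes a known noise variance sigma2, drops additive constants, and is only an illustration of that special case.

# Sketch of AIC-based tuning for the Lasso in the Gaussian case.
import numpy as np
from sklearn.linear_model import Lasso

def lasso_aic(X, y, alphas, sigma2):
    scores = []
    for a in alphas:
        fit = Lasso(alpha=a).fit(X, y)
        rss = np.sum((y - fit.predict(X)) ** 2)
        df = np.count_nonzero(fit.coef_)      # degrees of freedom = nonzero count
        scores.append(rss / sigma2 + 2 * df)  # AIC up to additive constants
    return alphas[int(np.argmin(scores))]

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [1.0, -1.5, 2.0]
y = X @ beta + rng.standard_normal(100)
print(lasso_aic(X, y, alphas=np.array([0.01, 0.05, 0.1, 0.5]), sigma2=1.0))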

The Adaptive Lasso and Its Oracle Properties

A new version of the lasso is proposed, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the $\ell_1$ penalty, and the non-negative garrote is shown to be consistent for variable selection.
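A minimal sketch of the adaptive lasso idea follows: an initial estimate supplies data-dependent weights for the $\ell_1$ penalty, implemented here by rescaling the design columns and running an ordinary Lasso. The initial OLS fit, the weight exponent gamma = 1, and the penalty level are illustrative choices, not prescriptions from the paper.

# Sketch of the adaptive lasso via column rescaling and an ordinary Lasso.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(6)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

beta_init = LinearRegression().fit(X, y).coef_   # initial estimator
w = 1.0 / (np.abs(beta_init) + 1e-8)             # adaptive weights (gamma = 1)
X_scaled = X / w                                 # column j scaled by 1/w_j
fit = Lasso(alpha=0.1).fit(X_scaled, y)
beta_adaptive = fit.coef_ / w                    # map back to the original scale
print(np.flatnonzero(np.abs(beta_adaptive) > 1e-6))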