Corpus ID: 170079209

Essential regression

@inproceedings{Bing2019EssentialR,
  title={Essential regression},
  author={Xin Bing and Florentina Bunea and Marten H. Wegkamp and Seth Strimas-Mackey},
  year={2019}
}
Essential Regression is a new type of latent factor regression model, where unobserved factors Z ∈ R^K influence linearly both the response Y ∈ R and the covariates X ∈ R^p with K ≪ p. Its novelty consists in the conditions that give Z interpretable meaning and render the regression coefficients β ∈ R^K relating Y to Z – along with other important parameters of the model – identifiable. It provides tools for high dimensional regression modelling that are especially powerful when the relationship…
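
Concretely, the abstract describes a pair of linear equations; the display below is a sketch in the notation of the structured factor model entry further down (X = AZ + E), with noise terms ε and E that the truncated abstract does not spell out:

$$ Y = Z^\top \beta + \varepsilon, \qquad X = A Z + E, \qquad Z \in \R^K, \ K \ll p, $$

so that the regression of interest is of Y on the unobserved, low-dimensional Z rather than on the observed X directly.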


Inference in latent factor regression with clusterable features

Inferential tools for β are developed in a class of factor regression models in which the observed features are signed mixtures of the latent factors, and the proposed estimator $\hat{\beta}$ is minimax-rate adaptive, which enables the determination of the top latent, antibody-centric mechanisms associated with the vaccine response.

Interpolating Predictors in High-Dimensional Factor Regression

The minimum-norm interpolating predictor analyzed under the factor regression model, despite being model-agnostic and devoid of tuning parameters, can have similar risk to predictors based on principal components regression and ridge regression, and can improve over LASSO based predictors, in the high-dimensional regime.
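
As an illustration of the minimum-norm interpolating predictor under a factor regression model, here is a minimal, self-contained Python sketch; the dimensions and noise levels are illustrative assumptions, not values from the paper:

import numpy as np

rng = np.random.default_rng(0)
n, p, K = 50, 200, 3                        # p > n: the interpolating regime

# Simulate a factor regression model: X = Z A^T + E, y = Z beta + eps
Z = rng.normal(size=(n, K))                 # latent factors
A = rng.normal(size=(p, K))                 # factor loadings
beta = rng.normal(size=K)                   # coefficients relating y to Z
X = Z @ A.T + 0.1 * rng.normal(size=(n, p))
y = Z @ beta + 0.1 * rng.normal(size=n)

# Minimum l2-norm interpolator: theta = X^+ y, model-agnostic, no tuning parameters
theta = np.linalg.pinv(X) @ y
assert np.allclose(X @ theta, y)            # interpolates the training data exactly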

Inference in Interpretable Latent Factor Regression Models

Regression models, in which the observed features $X \in \R^p$ and the response $Y \in \R$ depend, jointly, on a lower dimensional, unobserved, latent vector $Z \in \R^K$, with $K \ll n$, are considered.

Adaptive Estimation of Multivariate Regression with Hidden Variables.

The identifiability proof is constructive and leads to a novel, computationally efficient estimation algorithm, called HIVE, which is further extended to the setting with heteroscedastic errors.

Nonsparse Learning with Latent Variables

A new nonsparse learning methodology with latent variables is proposed for high-dimensional data analysis.

Confidence Intervals for Diffusion Index Forecasts and Inference for Factor-Augmented Regressions

We consider the situation when there is a large number of series, N, each with T observations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is to first estimate common factors from the panel of data by the method of principal components, and then to augment an otherwise standard regression with the estimated factors.

Adaptive estimation in structured factor models with applications to overlapping clustering

This work introduces a novel estimation method, called LOVE, of the entries and structure of a loading matrix A in a sparse latent factor model X = AZ + E, for an observable random vector X ∈ R^p.

Regression Shrinkage and Selection via the Lasso

A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
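
In symbols, the constrained criterion described above is

$$ \hat{\beta} \;=\; \arg\min_{\beta} \sum_{i=1}^{n} \big( y_i - x_i^\top \beta \big)^2 \quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \le t, $$

where t ≥ 0 is the constant bounding the sum of the absolute values of the coefficients.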

Confidence intervals for low dimensional parameters in high dimensional linear models

The proposed method turns the regression data into an approximate Gaussian sequence of point estimators of individual regression coefficients, which can be used to select variables after proper thresholding; the accuracy of the coverage probability and other desirable properties of the proposed confidence intervals are demonstrated.

Linear and conic programming estimators in high dimensional errors‐in‐variables models

It is shown that the procedure introduced by Rosenbaum and Tsybakov, which is almost optimal in a minimax sense, can be efficiently computed by a single linear programming problem despite non‐convexities.

Model selection for regression on a random design

We consider the problem of estimating an unknown regression function when the design is random with values in R^k. Our estimation procedure is based on model selection and does not rely on any prior information about the regression function.

Confidence intervals and hypothesis testing for high-dimensional regression

This work considers the high-dimensional linear regression problem and proposes an efficient algorithm for constructing confidence intervals and p-values, based on constructing a 'de-biased' version of regularized M-estimators.
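
With the lasso as the regularized M-estimator, the de-biasing step typically takes the following form; this display is a standard formulation of the idea rather than notation quoted from the paper, with $\hat{\Theta}$ an estimate of the precision matrix $\Sigma^{-1}$:

$$ \hat{\beta}^{d} \;=\; \hat{\beta} \;+\; \tfrac{1}{n}\, \hat{\Theta} X^\top \big( y - X \hat{\beta} \big), $$

whose coordinates are approximately Gaussian and yield the confidence intervals and p-values.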

HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

The sparse covariance is estimated using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable, and the impact of high dimensionality on the covariance matrix estimation based on the factor structure is studied.

Sufficient Forecasting Using Factor Models

Sufficient forecasting improves upon linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables.

Large covariance estimation by thresholding principal orthogonal complements

It is shown that the effect of estimating the unknown factors vanishes as the dimensionality increases, and the principal orthogonal complement thresholding method ‘POET’ is introduced to explore such an approximate factor structure with sparsity.
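
A minimal Python sketch of the POET idea as summarized above: estimate the low-rank factor part from the top-K principal components of the sample covariance, then threshold the principal orthogonal complement. The constant soft-threshold tau is a simplification; the paper's thresholds are adaptive and entry-dependent.

import numpy as np

def poet_cov(X, K, tau):
    """Top-K principal components plus a soft-thresholded
    principal orthogonal complement (constant threshold tau)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                           # sample covariance
    vals, vecs = np.linalg.eigh(S)              # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]      # reorder to descending
    low_rank = (vecs[:, :K] * vals[:K]) @ vecs[:, :K].T   # factor part
    R = S - low_rank                            # principal orthogonal complement
    R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)  # soft-threshold entries
    np.fill_diagonal(R_thr, np.diag(R))         # leave the variances unthresholded
    return low_rank + R_thr

# Hypothetical usage on simulated data:
Sigma_hat = poet_cov(np.random.default_rng(1).normal(size=(200, 50)), K=3, tau=0.05)
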
...