Corpus ID: 233739736

Nonparametric Trace Regression in High Dimensions via Sign Series Representation

Chanwoo Lee, Lexin Li, Hao Helen Zhang, Miaoyan Wang
Learning from matrix-valued data has recently surged in a range of scientific and business applications. Trace regression is a widely used method for modeling the effects of matrix predictors and has shown great success in matrix learning. However, nearly all existing trace regression solutions rely on two assumptions: (i) a known functional form of the conditional mean, and (ii) a global low-rank structure across the entire range of the regression function, both of which may be violated in practice. In this… 
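For orientation, here is a minimal numerical sketch of the classical linear trace regression model that this work generalizes: y_i = ⟨X_i, B⟩ + noise with a low-rank coefficient matrix B. This is illustrative only (not the paper's nonparametric method); the estimator shown is plain projected gradient descent with rank truncation, and all dimensions and constants are invented for the example.

```python
import numpy as np

# Illustrative sketch: linear trace regression y_i = <X_i, B> + noise with a
# rank-r coefficient matrix B, fitted by projected gradient descent where the
# projection step truncates to the best rank-r approximation via SVD.
rng = np.random.default_rng(0)
n, d, r = 400, 10, 2

B = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))  # rank-r truth
X = rng.standard_normal((n, d, d))                             # matrix predictors
y = np.einsum('nij,ij->n', X, B) + 0.1 * rng.standard_normal(n)

def project_rank(M, rank):
    """Best rank-`rank` approximation of M via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

B_hat = np.zeros((d, d))
for _ in range(300):
    resid = np.einsum('nij,ij->n', X, B_hat) - y    # fitted minus observed
    grad = np.einsum('n,nij->ij', resid, X) / n     # gradient of 0.5 * MSE
    B_hat = project_rank(B_hat - 0.5 * grad, r)     # low-rank projection

rel_err = np.linalg.norm(B_hat - B) / np.linalg.norm(B)
```

With i.i.d. Gaussian predictors the design is well conditioned, so this simple iteration recovers B up to the noise level; the point of the paper is precisely that such a global low-rank, known-link formulation can fail, motivating the sign-series approach.
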


Beyond the Signs: Nonparametric Tensor Completion via Sign Series
This work provably reduces the tensor estimation problem to a series of structured classification tasks, and develops a learning-reduction machinery that empowers existing low-rank tensor algorithms for the more challenging high-rank estimation problem.
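The basic identity behind sign-series representations can be checked numerically: any value f in [-1, 1] satisfies f = (1/2) ∫_{-1}^{1} sign(f − t) dt, so a bounded regression function is recoverable from a series of sign functions, each of which can be estimated as a classification problem. A quick check of the identity (the test value and grid size are arbitrary):

```python
import numpy as np

# Numerical check of the sign-series identity: for any f in [-1, 1],
#   f = (1/2) * integral_{-1}^{1} sign(f - t) dt.
# The Riemann sum over the length-2 interval times 1/2 is just the grid mean.
f = 0.37                               # any target value in [-1, 1]
t = np.linspace(-1.0, 1.0, 200001)     # fine grid of sign-series levels
approx = np.mean(np.sign(f - t))
err = abs(approx - f)
```
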


Regularized matrix regression
  • Hua Zhou, Lexin Li
  • Computer Science
    Journal of the Royal Statistical Society. Series B, Statistical methodology
  • 2014
A class of regularized matrix regression methods based on spectral regularization is proposed, and a degrees‐of‐freedom formula is derived to facilitate model selection along the regularization path.

Generalized high-dimensional trace regression via nuclear norm regularization

Matrix Completion Under Monotonic Single Index Models
This paper proposes a matrix completion method that alternates between low-rank matrix estimation and monotone function estimation to recover the missing entries, and demonstrates the competitiveness of the proposed approach.
On Low-rank Trace Regression under General Sampling Distribution
A unifying technique is presented for analyzing these problems via both estimators, leading to short proofs of the existing results as well as new ones, and similar error bounds are proved when the regularization parameter is chosen via K-fold cross-validation.
Statistical Learning with Sparsity: The Lasso and Generalizations
Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
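As a concrete reminder of the book's central estimator, here is a minimal lasso sketch solved by proximal gradient descent (ISTA) with soft-thresholding; the data, seed, and tuning values are illustrative, not taken from the book.

```python
import numpy as np

# Illustrative lasso sketch: solve
#   min_b  (1 / 2n) * ||y - X b||^2 + lam * ||b||_1
# by proximal gradient descent (ISTA), whose prox step is soft-thresholding.
rng = np.random.default_rng(1)
n, p, k = 100, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0                                  # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

lam = 0.1
L = np.linalg.norm(X, 2) ** 2 / n               # Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ b - y) / n                # gradient of the smooth part
    z = b - grad / L
    b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

support = np.flatnonzero(np.abs(b) > 1e-3)      # recovered sparsity pattern
```

The soft-threshold step is what induces exact zeros, i.e., the sparsity the book's title refers to; the true nonzero coordinates survive (shrunk toward zero by roughly lam) while most spurious ones are set exactly to zero.
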
Tensor denoising and completion based on ordinal observations
A multi-linear cumulative link model is proposed, a rank-constrained M-estimator is developed, theoretical accuracy guarantees are obtained, and the mean squared error bound enjoys a faster convergence rate than previous results.
On Learning High Dimensional Structured Single Index Models
The proposed computationally efficient algorithms for SIM inference in high dimensions with structural constraints enjoy superior predictive performance compared to generalized linear models, and achieve results comparable to or better than single-layer feedforward neural networks at significantly lower computational cost.
Structured Matrix Completion with Applications to Genomic Data Integration
This work proposes a new framework of structured matrix completion (SMC) to treat structured missingness by design, aiming at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix is observed.
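The row-and-column observation pattern admits a simple closed-form recovery in the idealized case. A sketch assuming an exactly low-rank, noiseless matrix (the paper treats the harder approximately low-rank, noisy setting): when the first rows and columns are fully observed, the missing block satisfies M22 = M21 · M11⁺ · M12.

```python
import numpy as np

# Idealized sketch of recovery under structured missingness: for an exactly
# rank-r matrix M whose first r1 rows and c1 columns are observed (r1, c1 >= r),
# the unobserved block satisfies  M22 = M21 @ pinv(M11) @ M12.
rng = np.random.default_rng(2)
d, r = 30, 3
M = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))  # rank-r matrix

r1 = c1 = 10                                   # observed leading rows/columns
M11, M12 = M[:r1, :c1], M[:r1, c1:]
M21 = M[r1:, :c1]
M22_hat = M21 @ np.linalg.pinv(M11) @ M12      # recover the missing block

err = np.linalg.norm(M22_hat - M[r1:, c1:])
```

This identity holds whenever M11 has the same rank as M; the noiseless error is at machine precision, and the paper's contribution is making this kind of recovery stable under noise and approximate low-rankness.
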
Provable Convex Co-clustering of Tensors
A provable convex formulation of tensor co-clustering is developed and a non-asymptotic error bound is established for the CoCo estimator, which reveals a surprising “blessing of dimensionality” phenomenon that does not exist in vector or matrix-variate cluster analysis.
Optimal Estimation and Completion of Matrices with Biclustering Structures
This paper develops a unified theory for the estimation and completion of matrices with biclustering structures, where the data is a partially observed, noise-contaminated matrix with a certain biclustering structure, and shows that a constrained least squares estimator achieves minimax rate-optimal performance in several of the most important scenarios.