# Multivariate Regression with Grossly Corrupted Observations: A Robust Approach and its Applications

@article{Zhang2017MultivariateRW, title={Multivariate Regression with Grossly Corrupted Observations: A Robust Approach and its Applications}, author={Xiaowei Zhang and Chi Xu and Yu Zhang and Tingshao Zhu and Li Cheng}, journal={ArXiv}, year={2017}, volume={abs/1701.02892} }

This paper studies the problem of multivariate linear regression where a portion of the observations is grossly corrupted or missing, and the magnitudes and locations of such occurrences are unknown a priori. To deal with this problem, we propose a new approach that explicitly considers the error source as well as its sparse nature. An interesting property of our approach lies in its ability to allow individual regression output elements, or tasks, to possess their unique noise levels…
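The model sketched in the abstract, regression with an explicit, sparse gross-error term, can be illustrated by alternating minimization: a least-squares update of the coefficient matrix followed by soft-thresholding of the residuals to estimate the error matrix. This is a minimal sketch of the general idea, not the paper's actual algorithm; the function name and the objective min 0.5‖Y − XW − E‖²_F + λ‖E‖₁ are our assumptions.

```python
import numpy as np

def robust_mvreg(X, Y, lam=0.5, iters=100):
    """Alternating minimization for min_{W,E} 0.5||Y - XW - E||_F^2 + lam*||E||_1.

    Illustrative sketch: W-step is ordinary least squares on the
    error-corrected responses; E-step soft-thresholds the residuals,
    so only gross corruptions survive into E."""
    n, q = Y.shape
    E = np.zeros((n, q))
    X_pinv = np.linalg.pinv(X)                # cached pseudoinverse for W-step
    for _ in range(iters):
        W = X_pinv @ (Y - E)                  # least-squares coefficient update
        R = Y - X @ W                         # residuals absorb gross errors
        E = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # soft-threshold
    return W, E
```

On synthetic data with a few large corruptions, the thresholded error matrix absorbs the corruptions while the coefficient estimate stays close to the truth.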


## 6 Citations

### A General Family of Trimmed Estimators for Robust High-dimensional Data Analysis

- Computer Science, Mathematics
- 2016

This work focuses on trimmed versions of structurally regularized M-estimators in the high-dimensional setting, including the popular Least Trimmed Squares estimator, as well as analogous estimators for generalized linear models and graphical models, using possibly non-convex loss functions.

### Hand pose estimation through semi-supervised and weakly-supervised learning

- Computer Science, Comput. Vis. Image Underst.
- 2017

### DeepPrior++: Improving Fast and Accurate 3D Hand Pose Estimation

- Computer Science, 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
- 2017

With simple improvements: adding ResNet layers, data augmentation, and better initial hand localization, DeepPrior achieves better or similar performance than more sophisticated recent methods on the three main benchmarks (NYU, ICVL, MSRA) while keeping the simplicity of the original method.

### A CNN model for real time hand pose estimation

- Computer Science, J. Vis. Commun. Image Represent.
- 2021

### A block symmetric Gauss–Seidel decomposition theorem for convex composite quadratic programming and its applications

- Mathematics, Computer Science, Math. Program.
- 2019

For a symmetric positive semidefinite linear system of equations $\mathcal{Q}x = b$, where $x = (x_1, \ldots, x_s)$ is partitioned into s…

### A Survey on 3D Hand Skeleton and Pose Estimation by Convolutional Neural Network

- Computer Science
- 2020

## References

Showing 1–10 of 46 references

### Robust Multivariate Regression with Grossly Corrupted Observations and Its Application to Personality Prediction

- Computer Science, ACML
- 2015

This work considers the problem of multivariate linear regression with a small fraction of the responses being missing and grossly corrupted, and proposes a new algorithm that is theoretically shown to always converge to the optimal solution of its induced non-smooth optimization problem.

### Outlier-Robust PCA: The High-Dimensional Case

- Computer Science, IEEE Transactions on Information Theory
- 2013

This work proposes a high-dimensional robust principal component analysis algorithm that is efficient, robust to contaminated points, and easily kernelizable, and achieves maximal robustness.

### Robust Regression via Hard Thresholding

- Computer Science, NIPS
- 2015

A simple hard-thresholding algorithm called TORRENT is studied which, under mild conditions on X, can recover w* exactly even if b corrupts the response variables in an adversarial manner, i.e. both the support and entries of b are selected adversarially after observing X and w*.
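The hard-thresholding idea summarized above, alternately fitting on the samples with smallest residuals and re-selecting that active set, can be sketched as below. This is a simplified illustration in the spirit of TORRENT, assuming the number of corruptions k is known; the function and variable names are our own, not from the paper.

```python
import numpy as np

def torrent_ht(X, y, k, iters=20):
    """Hard-thresholding robust regression sketch.

    Repeatedly fits least squares on an active set, then rebuilds the
    active set as the n-k samples with smallest absolute residuals,
    discarding the k samples most likely to be corrupted."""
    n, _ = X.shape
    S = np.arange(n)                          # start with all samples active
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[S], y[S], rcond=None)  # fit on active set
        r = np.abs(y - X @ w)                 # residuals on all samples
        S = np.argsort(r)[: n - k]            # drop the k largest residuals
    return w
```

In a noiseless experiment where k responses are shifted by a large constant, the active set quickly excludes the corrupted samples and the clean least-squares fit recovers w* exactly, mirroring the exact-recovery claim above.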

### Multivariate Regression with Calibration

- Computer Science, Mathematics, NIPS
- 2014

An efficient smoothed proximal gradient algorithm is proposed, with a worst-case iteration complexity of O(1/ε), where ε is a pre-specified numerical accuracy; it is also proved that CMR achieves the optimal rate of convergence in parameter estimation.

### Smoothing proximal gradient method for general structured sparse regression

- Computer Science, The Annals of Applied Statistics
- 2012

This paper proposes a general optimization approach, the smoothing proximal gradient method, which can solve structured sparse regression problems with any smooth convex loss under a wide spectrum of structured sparsity-inducing penalties.

### Estimation of high-dimensional low-rank matrices

- Mathematics, Computer Science
- 2010

This work investigates penalized least squares estimators with a Schatten-p quasi-norm penalty term and derives bounds for the kth entropy numbers of the quasi-convex Schatten class embeddings $S_p^M \hookrightarrow S_2^M$, $p < 1$, which are of independent interest.

### Structured Sparsity via Alternating Direction Methods

- Computer Science, J. Mach. Learn. Res.
- 2012

This paper proposes a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved, and develops new algorithms using an alternating partial-linearization/splitting technique.
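The alternating-direction idea summarized above can be illustrated on the simplest sparsity-regularized instance, the lasso. The following is a generic scaled-ADMM sketch, not one of the paper's algorithms; the function name, step size rho, and regularization weight are our assumptions.

```python
import numpy as np

def admm_lasso(X, y, lam=0.1, rho=1.0, iters=300):
    """ADMM sketch for the lasso: min_w 0.5||Xw - y||^2 + lam*||z||_1, w = z.

    Alternates a quadratic w-update (with a cached matrix inverse),
    a soft-thresholding z-update, and a scaled dual ascent on u."""
    _, p = X.shape
    z = np.zeros(p)
    u = np.zeros(p)
    M = np.linalg.inv(X.T @ X + rho * np.eye(p))   # cached for the w-step
    Xty = X.T @ y
    for _ in range(iters):
        w = M @ (Xty + rho * (z - u))              # quadratic subproblem
        z = np.sign(w + u) * np.maximum(np.abs(w + u) - lam / rho, 0.0)
        u = u + w - z                              # dual update
    return z
```

The splitting into a smooth subproblem and a separable proximal subproblem is exactly what makes the alternating-direction framework attractive for the structured penalties discussed above: each block update stays cheap and closed-form.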

### Dense error correction via l1-minimization

- Computer Science, ICASSP
- 2009

It is proved that for highly correlated dictionaries A, any non-negative, sufficiently sparse signal x can be recovered by solving an ℓ1-minimization problem: min ‖x‖₁ + ‖e‖₁ subject to y = Ax + e, which suggests that accurate and efficient recovery of sparse signals is possible even with nearly 100% of the observations corrupted.
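The ℓ1 problem quoted above can be posed as a linear program by splitting each variable into nonnegative parts. The sketch below uses scipy.optimize.linprog; the helper name is our own, and the test uses a random Gaussian A rather than the highly correlated dictionaries the paper analyzes.

```python
import numpy as np
from scipy.optimize import linprog

def l1_error_correction(A, y):
    """Solve min ||x||_1 + ||e||_1  s.t.  y = A x + e  as a linear program.

    Standard reformulation: x = x+ - x-, e = e+ - e- with all four
    parts nonnegative, so the objective becomes a plain sum."""
    m, n = A.shape
    c = np.ones(2 * n + 2 * m)                        # sum of all split parts
    A_eq = np.hstack([A, -A, np.eye(m), -np.eye(m)])  # y = Ax + e as equality
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    x = z[:n] - z[n:2 * n]
    e = z[2 * n:2 * n + m] - z[2 * n + m:]
    return x, e
```

With a sparse signal and a few grossly corrupted measurements, the LP recovers both the signal and the error pattern, in line with the recovery claim above.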

### Convex multi-task feature learning

- Computer Science, Machine Learning
- 2007

It is proved that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution.

### Oracle Inequalities and Optimal Inference under Group Sparsity

- Computer Science, Mathematics
- 2010

The Group Lasso can achieve an improvement in the prediction and estimation properties as compared to the Lasso, and it is proved that the rate of convergence of the upper bounds is optimal in a minimax sense.