Corpus ID: 236318325

Robust Estimation of High-Dimensional Vector Autoregressive Models

@inproceedings{Wang2021RobustEO,
  title={Robust Estimation of High-Dimensional Vector Autoregressive Models},
  author={Di Wang and Ruey S. Tsay},
  year={2021}
}
High-dimensional time series data appear in many scientific areas in the current data-rich environment. Analysis of such data poses new challenges to data analysts because of not only the complicated dynamic dependence between the series, but also the existence of aberrant observations, such as missing values, contaminated observations, and heavy-tailed distributions. For high-dimensional vector autoregressive (VAR) models, we introduce a unified estimation procedure that is robust to model…
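The abstract concerns robust estimation for VAR models. As background, a minimal generic sketch (not the paper's robust procedure; the transition matrix, noise scale, and sample size below are illustrative assumptions) of the baseline problem: simulate a bivariate VAR(1) process x_t = A x_{t-1} + eps_t and recover A by ordinary least squares, the non-robust estimator that such procedures aim to robustify.

```python
import random

def simulate_var1(A, T, noise=0.05, seed=0):
    """Generate T observations from x_t = A x_{t-1} + eps_t with Gaussian noise."""
    rng = random.Random(seed)
    d = len(A)
    x = [0.0] * d
    series = []
    for _ in range(T):
        x = [sum(A[i][j] * x[j] for j in range(d)) + rng.gauss(0.0, noise)
             for i in range(d)]
        series.append(x)
    return series

def ols_var1(series):
    """Least-squares estimate of A (d = 2 here, so the 2x2 Gram matrix
    is inverted in closed form)."""
    d = len(series[0])
    X, Y = series[:-1], series[1:]   # regressors x_{t-1}, responses x_t
    G = [[sum(x[i] * x[j] for x in X) for j in range(d)] for i in range(d)]
    C = [[sum(X[t][i] * Y[t][k] for t in range(len(X))) for k in range(d)]
         for i in range(d)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Gi = [[G[1][1] / det, -G[0][1] / det],
          [-G[1][0] / det, G[0][0] / det]]
    # Row k of A solves the regression of x_t[k] on x_{t-1}.
    return [[sum(Gi[i][j] * C[j][k] for j in range(d)) for i in range(d)]
            for k in range(d)]

A = [[0.5, 0.1], [0.0, 0.3]]
A_hat = ols_var1(simulate_var1(A, T=2000))
```

Under heavy tails or contaminated observations, this squared-loss fit degrades badly, which is the failure mode robust VAR estimators address.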


References

Showing 1–10 of 63 references
Regularized estimation in sparse high-dimensional time series models
Many scientific and economic problems involve the analysis of high-dimensional time series datasets. However, theoretical studies in high-dimensional statistics to date rely primarily on the…
Robust Estimation of Transition Matrices in High Dimensional Heavy-tailed Vector Autoregressive Processes
A unified framework for modeling and estimating heavy-tailed VAR processes is developed; the robust transition-matrix estimator induces sign-consistent estimators of Granger causality in the elliptical VAR process.
Low Rank and Structured Modeling of High-Dimensional Vector Autoregressions
This work introduces a novel approach to estimating low-rank and structured-sparse high-dimensional VAR models using a regularized framework that combines nuclear-norm and lasso penalties, and establishes nonasymptotic probabilistic upper bounds on the estimation error rates of the low-rank and structured-sparse components.
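Hedged illustration, not code from the referenced work: lasso and nuclear-norm penalties are both commonly handled through proximal (shrinkage) steps. The lasso's proximal operator is elementwise soft-thresholding; the nuclear-norm prox applies the same shrinkage to singular values.

```python
def soft_threshold(x, lam):
    # prox of lam * |x|: shrink toward zero; values in [-lam, lam] become 0
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Small entries are zeroed out, producing sparsity; large ones are shrunk.
shrunk = [soft_threshold(v, 1.0) for v in [3.0, -0.4, -2.5, 0.9]]
```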
A direct estimation of high dimensional stationary vector autoregressions
An alternative way of estimating the VAR model is proposed that exploits the temporal dependence structure and formulates the estimation problem as a linear program; the method shows empirical advantages over lasso-type estimators in parameter estimation and forecasting.
Estimation of high dimensional mean regression in the absence of symmetry and light tail assumptions.
The results reveal that in the ultra-high-dimensional setting, where the dimensionality can grow exponentially with the sample size, the RA-lasso estimator produces a consistent estimator at the same rate as the optimal rate under the light-tail situation.
High-dimensional Linear Regression for Dependent Data with Applications to Nowcasting
In recent years, extensive research has focused on the ℓ1-penalized least squares (lasso) estimators of high-dimensional linear regression when the number of covariates p is considerably larger than…
Robust Estimation of the Mean and Covariance Matrix for High Dimensional Time Series
High-dimensional non-Gaussian time series data are increasingly encountered in a wide range of applications. Conventional methods are inadequate for estimating mean vectors and second-order…
High-Dimensional Low-Rank Tensor Autoregressive Time Series Modeling
Modern technological advances have enabled an unprecedented amount of structured data with complex temporal dependence, underscoring the need for new methods to efficiently model and forecast…
Finite-time analysis of vector autoregressive models under linear restrictions
This paper develops a unified finite-time theory for the ordinary least squares estimation of possibly unstable and even slightly explosive vector autoregressive models under linear restrictions…
Statistical consistency and asymptotic normality for high-dimensional robust M-estimators
This work establishes a form of local statistical consistency for penalized regression estimators under fairly mild conditions on the error distribution; the analysis of the local curvature of the loss function has useful consequences for optimization when the robust regression function and/or regularizer is nonconvex and the objective function possesses stationary points outside the local region.