Variable selection in sparse GLARMA models

Marina Gomtsyan, Céline Lévy-Leduc, Sarah Ouadah, Laure Sansonnet and Thomas Blein. Pages 755–784.
In this paper, we propose a novel and efficient two-stage variable selection approach for sparse GLARMA models, which are pervasive for modelling discrete-valued time series. Our approach iteratively combines the estimation of the autoregressive moving average (ARMA) coefficients of GLARMA models with regularized methods designed for performing variable selection on the regression coefficients of generalized linear models (GLMs). We first establish the consistency of the ARMA part…

Variable selection in sparse multivariate GLARMA models: Application to germination control by environment

This work proposes a novel and efficient iterative two-stage variable selection approach for multivariate sparse GLARMA models, which can be used for modelling multivariate discrete-valued time series and is shown to outperform competing methods in recovering the null and non-null coefficients.

Log-linear Poisson autoregression

Markov regression models for time series: a quasi-likelihood approach.

A quasi-likelihood (QL) approach to regression analysis with time series data is discussed; analogously to QL for independent observations, the large-sample properties of the regression coefficients depend only on correct specification of the first conditional moment.

Statistical Learning with Sparsity: The Lasso and Generalizations

Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.

Stability selection

It is proved for the randomized lasso that stability selection will be variable selection consistent even if the necessary conditions for consistency of the original lasso method are violated.
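The core idea of stability selection — refit a sparse estimator on many random subsamples and keep only variables whose selection frequency exceeds a threshold — can be sketched as follows. This is an illustrative Python sketch on simulated data using scikit-learn's `Lasso`, not the authors' code; the subsample size (n/2), penalty level, and 0.9 threshold are assumptions chosen for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]          # only the first 3 coefficients are non-null
y = X @ beta + rng.normal(size=n)

B = 100                               # number of random subsamples
freq = np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, size=n // 2, replace=False)   # subsample of size n/2
    fit = Lasso(alpha=0.1).fit(X[idx], y[idx])
    freq += (fit.coef_ != 0)          # record which variables were selected
freq /= B

# Stable set: variables selected in at least 90% of the subsamples.
selected = np.where(freq >= 0.9)[0]
```

With strong signals, the three truly non-null variables are selected in essentially every subsample, while noise variables enter only sporadically and fall below the threshold.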

Efficient order selection algorithms for integer‐valued ARMA processes

A very efficient reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is constructed for moving between INARMA processes of different orders and an alternative in the form of the EM algorithm is given for determining the order of an integer‐valued autoregressive (INAR) process.

MCMC for Integer‐Valued ARMA processes

An efficient Markov chain Monte Carlo algorithm for a wide class of integer‐valued autoregressive moving‐average (INARMA) processes is outlined and its inferential and predictive capabilities are assessed.

Poisson Autoregression

In this article we consider geometric ergodicity and likelihood-based inference for linear and nonlinear Poisson autoregression. In the linear case, the conditional mean is linked linearly to its …
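In the linear Poisson autoregression of this kind, the conditional mean follows a recursion such as lambda_t = d + a·lambda_{t-1} + b·y_{t-1}, with a + b < 1 as the usual stationarity condition. The sketch below simulates such a process; the parameter values are assumptions chosen for illustration, not values from the article.

```python
import numpy as np

def simulate_poisson_ar(T, d=0.5, a=0.3, b=0.4, seed=0):
    """Simulate a linear Poisson autoregression (INGARCH(1,1)-type recursion)."""
    rng = np.random.default_rng(seed)
    lam = np.empty(T)
    y = np.empty(T, dtype=int)
    lam[0] = d / (1 - a - b)          # start at the stationary mean
    y[0] = rng.poisson(lam[0])
    for t in range(1, T):
        lam[t] = d + a * lam[t - 1] + b * y[t - 1]   # mean feeds on past mean and past count
        y[t] = rng.poisson(lam[t])
    return y, lam

y, lam = simulate_poisson_ar(10_000)
# For a + b < 1, the sample mean converges to d / (1 - a - b).
```

The stationary mean here is 0.5 / (1 − 0.7) ≈ 1.67, which the long-run sample mean should approach.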

The glarma Package for Observation-Driven Time Series Regression of Counts

We review the theory and application of generalized linear autoregressive moving average observation-driven models for time series of counts with explanatory variables and describe the estimation of …
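glarma is an R package; to make the model class concrete, here is a hedged Python sketch of simulating a Poisson GLARMA(0,1)-type process, in which the linear predictor W_t = x_t'β + θ·e_{t-1} is driven by the lagged Pearson residual e_t = (Y_t − μ_t)/√μ_t with μ_t = exp(W_t). The covariate design and parameter values are illustrative assumptions, not the package's defaults.

```python
import numpy as np

def simulate_glarma(T, beta=(0.5, 0.8), theta=0.3, seed=0):
    """Simulate a Poisson GLARMA(0,1)-type count series with one covariate."""
    rng = np.random.default_rng(seed)
    x = np.column_stack([np.ones(T), rng.normal(size=T)])  # intercept + covariate
    y = np.empty(T, dtype=int)
    e_prev = 0.0                                            # lagged Pearson residual
    for t in range(T):
        w = x[t] @ np.asarray(beta) + theta * e_prev        # linear predictor
        mu = np.exp(w)                                      # log link
        y[t] = rng.poisson(mu)
        e_prev = (y[t] - mu) / np.sqrt(mu)                  # residual feeds back
    return y, x

y, x = simulate_glarma(500)
```

The residual feedback term is what makes the model "observation-driven": past counts perturb the linear predictor through e_{t-1} rather than through a latent process.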


Simple models are presented for use in the modeling and generation of sequences of dependent discrete random variables. The models are essentially Markov chains, but are structurally …

Regularization Paths for Generalized Linear Models via Coordinate Descent.

In comparative timings, the new algorithms are considerably faster than competing methods, can handle large problems, and deal efficiently with sparse features.
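The regularization-path idea — compute lasso solutions over a decreasing grid of penalties via coordinate descent — can be illustrated briefly. glmnet itself is an R/Fortran package; scikit-learn's `lasso_path` (also coordinate descent) is used here as a stand-in, on assumed simulated data.

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
y = X[:, 0] * 3.0 + rng.normal(size=100)   # only feature 0 carries signal

# Coefficients along a grid of 50 penalty values, from strongest to weakest.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
# coefs has shape (n_features, n_alphas): one coefficient vector per alpha.
# At the largest alpha every coefficient is zero; as alpha shrinks,
# the signal feature enters and its coefficient approaches 3.0.
```

Warm-starting each penalty value from the previous solution is what makes computing the whole path nearly as cheap as a single fit.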