Asymptotic properties of the maximum likelihood estimation in misspecified Hidden Markov models

  • Randal Douc, Éric Moulines
    arXiv: Statistics Theory
Let $(Y_k)_{k\in \mathbb{Z}}$ be a stationary sequence on a probability space $(\Omega,\mathcal{A},\mathbb{P})$ taking values in a standard Borel space $\mathsf{Y}$. Consider the associated maximum likelihood estimator with respect to a parametrized family of hidden Markov models such that the law of the observations $(Y_k)_{k\in \mathbb{Z}}$ is not assumed to be described by any of the hidden Markov models of this family. In this paper we investigate the consistency of this estimator in such… 
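The entries on this page all revolve around the (normalized) log-likelihood of a hidden Markov model. As a point of reference, here is a minimal sketch of how that log-likelihood is computed for a finite-state HMM via the scaled forward recursion; the function name, variable names, and toy dimensions are illustrative and not taken from any of the papers listed.

```python
import math

def hmm_log_likelihood(A, pi, B):
    """Log-likelihood of an observation sequence under a finite-state HMM,
    computed with the scaled forward recursion.

    A  : m x m transition matrix, A[i][j] = P(X_{k+1} = j | X_k = i)
    pi : length-m initial distribution of the hidden chain
    B  : n x m emission likelihoods, B[k][i] = p(y_k | X_k = i)

    Returns log p(y_0, ..., y_{n-1}) as the sum of the log normalizing
    constants log p(y_k | y_0, ..., y_{k-1}).
    """
    m = len(pi)
    log_lik = 0.0
    # initial step: alpha_i = pi_i * p(y_0 | X_0 = i)
    alpha = [pi[i] * B[0][i] for i in range(m)]
    for k in range(len(B)):
        if k > 0:
            # propagate through the transition kernel, then weight by emissions
            alpha = [B[k][j] * sum(alpha[i] * A[i][j] for i in range(m))
                     for j in range(m)]
        c = sum(alpha)                  # normalizing constant p(y_k | y_{0:k-1})
        log_lik += math.log(c)
        alpha = [a / c for a in alpha]  # renormalize to keep the recursion stable
    return log_lik
```

The per-step renormalization is what makes the normalized log-likelihood $n^{-1} \log p(y_0, \dots, y_{n-1})$ numerically accessible for long sequences; its limit as $n \to \infty$ is the object studied in the papers above.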
The maximizing set of the asymptotic normalized log-likelihood for partially observed Markov chains
This paper deals with a parametrized family of partially observed bivariate Markov chains. We establish that, under very mild assumptions, the limit of the normalized log-likelihood function is…
Nonasymptotic control of the MLE for misspecified nonparametric hidden Markov models
  • Luc Lehéricy
  • Mathematics, Computer Science
    Electronic Journal of Statistics
  • 2021
A finite-sample bound on the resulting error is proved, and this bound is shown to be optimal in the minimax sense, up to logarithmic factors, when the model is well specified.
Posterior consistency for partially observed Markov models
Asymptotic analysis of model selection criteria for general hidden Markov models
Maximum likelihood estimation in partially observed Markov models with applications to time series of counts
This thesis aims to establish the equivalence-class consistency for time series models that belong to the class of partially observed Markov models (PMMs) such as HMMs and observation-driven models (ODMs).
The Viterbi process, decay-convexity and parallelized maximum a-posteriori estimation
Bounds on the distance to the Viterbi process show that approximate estimation via parallelization can indeed be accurate and scalable to high-dimensional problems, because the rate of convergence to the Viterbi process does not necessarily depend on $d$.
Bayesian model comparison and asymptotics for state-space models
This thesis studies the implementation and properties of a novel criterion for model comparison, with a keen interest in the task of selecting Bayesian state-space models. This criterion, based on…
Long-term stability of sequential Monte Carlo methods under verifiable conditions
This paper discusses particle filtering in general hidden Markov models (HMMs) and presents novel theoretical results on the long-term stability of bootstrap-type particle filters. More specifically,…
Maximum Likelihood Estimation in Markov Regime-Switching Models with Covariate-Dependent Transition Probabilities
This paper considers maximum likelihood (ML) estimation in a large class of models with hidden Markov regimes. We investigate consistency of the ML estimator and local asymptotic normality for the…
Bayesian Model Comparison with the Hyvärinen Score: Computation and Consistency
A method to consistently estimate the Hyvärinen score, a difference of out-of-sample predictive scores under the logarithmic scoring rule, is proposed for parametric models using sequential Monte Carlo methods, and it is shown that this score can be estimated for models with tractable likelihoods as well as for nonlinear non-Gaussian state-space models with intractable likelihoods.


It is proved that the maximum likelihood estimator (MLE) of the parameter is strongly consistent under a rather minimal set of assumptions, which could form a foundation for the investigation of MLE consistency in more general dependent and non-Markovian time series.
Efficient likelihood estimation in state space models
Motivated by studying asymptotic properties of the maximum likelihood estimator (MLE) in stochastic volatility (SV) models, in this paper we investigate likelihood estimation in state space models.
Asymptotical statistics of misspecified hidden Markov models
  • L. Mevel, L. Finesso
  • Mathematics, Computer Science
    IEEE Transactions on Automatic Control
  • 2004
The main asymptotic results are derived: almost sure consistency of the maximum likelihood estimator, asymptotic normality of the estimation error, and the exact rates of almost sure convergence.
Exponential Forgetting and Geometric Ergodicity in Hidden Markov Models
It is proved that the prediction filter, and its gradient with respect to some parameter in the model, almost surely forget their initial condition exponentially fast, and that the extended Markov chain is geometrically ergodic and has a unique invariant probability distribution.
The behavior of maximum likelihood estimates under nonstandard conditions
This paper proves consistency and asymptotic normality of maximum likelihood (ML) estimators under weaker conditions than usual. In particular, (i) it is not assumed that the true distribution…
Asymptotic properties of the maximum likelihood estimator in autoregressive models with Markov regime
An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the…
Maximum-likelihood estimation for hidden Markov models
Leroux's method for general hidden Markov models
Discrete time nonlinear filters with informative observations are stable
The nonlinear filter associated with the discrete time signal-observation model $(X_k,Y_k)$ is known to forget its initial condition as $k\to\infty$ regardless of the observation structure when the…
Inference in hidden Markov models
This book is a comprehensive treatment of inference for hidden Markov models, including both algorithms and statistical theory, and builds on recent developments to present a self-contained view.