Loss of memory of hidden Markov models and Lyapunov exponents.

@article{Collet2014LossOM,
  title={Loss of memory of hidden Markov models and Lyapunov exponents.},
  author={P. Collet and Florencia G. Leonardi},
  journal={Annals of Applied Probability},
  year={2014},
  volume={24},
  pages={422-446}
}
In this paper we prove that the asymptotic rate of exponential loss of memory of a finite state hidden Markov model is bounded above by the difference of the first two Lyapunov exponents of a certain product of matrices. We also show that this bound is in fact realized, namely for almost all realizations of the observed process we can find symbols where the asymptotic exponential rate of loss of memory attains the difference of the first two Lyapunov exponents. These results are derived in… 
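Since the bound in question is the gap between the first two Lyapunov exponents of a matrix product, a small numerical illustration may help. Below is a minimal Python sketch (the matrices and sizes are illustrative assumptions, not taken from the paper) estimating the first two Lyapunov exponents of a random matrix product with the standard QR re-orthogonalization scheme:

import numpy as np

# Estimate Lyapunov exponents of a product of random matrices via the
# standard QR re-orthogonalization scheme. The 3x3 matrices below are
# arbitrary stand-ins; in the HMM setting they would be the
# observation-dependent matrices whose product drives the filter.
rng = np.random.default_rng(0)

def sample_matrix():
    return rng.uniform(0.1, 1.0, size=(3, 3))

n_steps = 20000
Q = np.eye(3)
log_sums = np.zeros(3)
for _ in range(n_steps):
    Q, R = np.linalg.qr(sample_matrix() @ Q)
    log_sums += np.log(np.abs(np.diag(R)))

lam = log_sums / n_steps        # lambda_1 >= lambda_2 >= lambda_3
print("lambda_1 - lambda_2 ~", lam[0] - lam[1])

The gap lam[0] - lam[1] is the quantity that, by the paper's result, controls the asymptotic exponential rate of loss of memory.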

Bounds on Convergence of Entropy Rate Approximations in Hidden Markov Processes

There is no general closed-form expression for the entropy rate of a hidden Markov process. However, the finite-length block estimates h(t) often converge to the true entropy rate h quite rapidly. We …
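For context, in standard notation (a reminder, not a quotation from this paper) the block estimates are conditional entropies:

h(t) = H(X_t \mid X_{t-1}, \dots, X_1)
     = H(X_1, \dots, X_t) - H(X_1, \dots, X_{t-1}),
\qquad
h = \lim_{t \to \infty} h(t).

The question addressed by this line of work is how fast h(t) - h decays with t.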

Estimate exponential memory decay in Hidden Markov Model and its applications

TLDR
The inherent memory decay in hidden Markov models is utilized so that the forward and backward recursions can be carried out on subsequences, enabling efficient inference over long sequences of observations.
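A minimal Python sketch of the idea on a toy two-state HMM (the model, parameters, and window length are illustrative assumptions, not the paper's algorithm): because the filter forgets its initialization exponentially fast, running the forward recursion over only the last L observations, started from a uniform guess, already approximates the full-history filter.

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition probabilities
B = np.array([[0.7, 0.3], [0.4, 0.6]])   # emission probabilities B[state, symbol]

def forward_filter(obs, init):
    # Normalized forward recursion: p_t(j) is proportional to
    # sum_i p_{t-1}(i) A[i, j] B[j, y_t]; both filters below use it.
    p = init.copy()
    for y in obs:
        p = (p @ A) * B[:, y]
        p /= p.sum()
    return p

# Simulate a long observation sequence.
T, L = 5000, 30
state = 0
obs = []
for _ in range(T):
    obs.append(rng.choice(2, p=B[state]))
    state = rng.choice(2, p=A[state]))

full = forward_filter(obs, np.array([0.5, 0.5]))
windowed = forward_filter(obs[-L:], np.array([0.5, 0.5]))
print("max filter discrepancy with window L=30:", np.abs(full - windowed).max())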

Exponential Bounds for Convergence of Entropy Rate Approximations and Rate of Memory Loss in Hidden Markov Models Satisfying a Path-Mergeability Condition

TLDR
It is shown that for a finite HMM with path-mergeable states the block estimates of the entropy rate converge exponentially fast, and also that the initial state is almost surely forgotten at an exponential rate.

Nonparametric statistical inference for the context tree of a stationary ergodic process

TLDR
It is proved that one-sided inference is possible in this general setting, and a consistent estimator is constructed that is a lower bound for the context tree of the process, with an explicit formula for the coverage probability.

Autonomous choices among deterministic evolution-laws as source of uncertainty

VARIATIONAL BAYESIAN ANALYSIS OF NONHOMOGENEOUS HIDDEN MARKOV MODELS WITH LONG AND ULTRA-LONG SEQUENCES

TLDR
A variational Bayes (VB) method is developed for NHMMs, which utilizes a structured variational family of Gaussian distributions with factorized covariance matrices to approximate target posteriors, combining a forward-backward algorithm and stochastic gradient ascent in estimation.
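As background on the generic ingredients named here (not the paper's actual model or code), a minimal sketch of stochastic gradient ascent on an ELBO with a factorized Gaussian variational family, using the reparameterization trick on a stand-in target:

import numpy as np

rng = np.random.default_rng(2)

def grad_log_p(z):
    # Gradient of a stand-in log target density (standard normal),
    # playing the role of the intractable posterior.
    return -z

m = np.ones(2)                 # variational means
log_s = np.zeros(2)            # log std devs (factorized covariance)
lr = 0.01
for step in range(5000):
    eps = rng.standard_normal(2)
    s = np.exp(log_s)
    z = m + s * eps            # reparameterization trick
    g = grad_log_p(z)
    m += lr * g                          # pathwise ELBO gradient in m
    log_s += lr * (g * s * eps + 1.0)    # +1 from the Gaussian entropy term
print("fitted mean:", m, "fitted std:", np.exp(log_s))   # ~0 and ~1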

Stochastic Dynamics: Markov Chains, Random Transformations and Applications

TLDR
The theory of noise-induced synchronization is introduced together with a more intuitive version of the multiplicative ergodic theory, and then is applied to hidden Markov models for developing an efficient algorithm of parameter inference.

Predictive Power of Markovian Models: Evidence from U.S. Recession Forecasting

This paper brings new evidence on predicting U.S. recessions with Markovian models. The Markovian models, including the Hidden Markov and Markov models, incorporate the temporal …

References

Showing 1-10 of 26 references

On the entropy of a hidden Markov process

TLDR
This paper presents the probability of a sequence under the model as a product of random matrices, and shows that the entropy rate sought is the top Lyapunov exponent of the product, which explains the difficulty of its explicit computation.
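A small numeric illustration of that representation, in standard HMM notation (the two-state model below is an assumption for the sketch, not taken from the paper): writing p(y_1..y_n) as an initial row vector multiplied by the random matrices A @ diag(B[:, y_t]), the quantity -(1/n) log p(y_1..y_n) estimates the entropy rate, i.e. the negative of the top Lyapunov exponent of the product.

import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transitions
B = np.array([[0.7, 0.3], [0.4, 0.6]])   # emissions B[state, symbol]
pi = np.array([2/3, 1/3])                # stationary distribution of A

T = 100000
state = rng.choice(2, p=pi)
v, loglik = pi.copy(), 0.0
for _ in range(T):
    y = rng.choice(2, p=B[state])
    state = rng.choice(2, p=A[state])
    v = (v @ A) * B[:, y]    # one factor; pi @ A == pi, so exact from step 1
    c = v.sum()              # running normalization avoids underflow
    loglik += np.log(c)
    v /= c
print("entropy rate estimate (nats/symbol):", -loglik / T)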

Forgetting the initial distribution for Hidden Markov Models

Forgetting of the initial condition for the filter in general state-space hidden Markov chain: a coupling approach

We give simple conditions that ensure exponential forgetting of the initial conditions of the filter for general state-space hidden Markov chains. The proofs are based on a coupling argument applied …
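One classical form of such a condition and conclusion, stated here for orientation rather than quoted from the paper: if the transition kernel Q satisfies a two-sided mixing bound with respect to some probability measure \nu, then filters started from different initial laws merge geometrically in total variation,

\varepsilon\,\nu(\cdot) \;\le\; Q(x,\cdot) \;\le\; \varepsilon^{-1}\,\nu(\cdot)
\quad\Longrightarrow\quad
\bigl\|\pi_n^{\mu} - \pi_n^{\mu'}\bigr\|_{\mathrm{TV}}
\;\le\; (1-\varepsilon^{2})^{n}\,
\bigl\|\pi_0^{\mu} - \pi_0^{\mu'}\bigr\|_{\mathrm{TV}},

for all x and some \varepsilon \in (0,1]; the exact constants and rate depend on the assumptions used.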

Random perturbations of stochastic processes with unbounded variable length memory

TLDR
In the case of stochastic chains with unbounded but otherwise finite variable-length memory, it is shown that it is possible to recover the context tree of the original chain, using a suitable version of the algorithm Context, provided that the noise is small enough.
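For background, a minimal Python sketch of the pruning statistic at the heart of Context-style estimators (generic textbook form with an illustrative threshold; the paper analyzes a perturbed-data variant): a context w is refined into its one-symbol extensions only when the empirical next-symbol laws differ enough.

import math
from collections import Counter, defaultdict

def build_counts(seq, max_depth):
    # counts[w][a] = number of times symbol a followed context w
    # (contexts are suffixes of the past, most recent symbol last).
    counts = defaultdict(Counter)
    for t in range(max_depth, len(seq)):
        for d in range(max_depth + 1):
            counts[seq[t - d:t]][seq[t]] += 1
    return counts

def split_gain(w, counts, alphabet):
    # Log-likelihood gain of refining w into its extensions b + w.
    n_w = sum(counts[w].values())
    gain = 0.0
    for b in alphabet:
        bw = b + w
        if bw in counts:
            n_bw = sum(counts[bw].values())
            for a, n_bwa in counts[bw].items():
                gain += n_bwa * math.log((n_bwa / n_bw) / (counts[w][a] / n_w))
    return gain

seq = "0110100110010110" * 50             # illustrative binary data
counts = build_counts(seq, max_depth=3)
# Keep the refinement when the gain exceeds a threshold such as K * log(n).
print(split_gain("0", counts, "01"), "vs", 4 * math.log(len(seq)))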

Inference in hidden Markov models

TLDR
This book is a comprehensive treatment of inference for hidden Markov models, including both algorithms and statistical theory, and builds on recent developments to present a self-contained view.

ON THE PRESERVATION OF GIBBSIANNESS UNDER SYMBOL AMALGAMATION

Starting from the full shift on a finite alphabet A, mingling some symbols of A, we obtain a new full shift on a smaller alphabet B. This amalgamation defines a factor map from (A^ℕ, T_A) to (B^ℕ, …

ERGODIC THEOREMS

Every one of the important strong limit theorems that we have seen thus far – the strong law of large numbers, the martingale convergence theorem, and the ergodic theorem – has relied in a crucial …

Probabilistic functions of finite-state Markov chains.

  • T. Petrie
  • Mathematics
    Proceedings of the National Academy of Sciences of the United States of America
  • 1967
These papers* are statistically motivated; the content is mathematical. The motivation is this: Given is an s × s stochastic matrix A = ((a_ij)) and an s × r stochastic matrix B = ((b_jk)), where A …
