Detection and Evaluation of Clusters within Sequential Data

@article{Werde2022DetectionAE,
  title={Detection and Evaluation of Clusters within Sequential Data},
  author={Alexander Van Werde and Albert Senen-Cerda and Gianluca Kosmella and Jaron Sanders},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.01679}
}
Motivated by theoretical advancements in dimensionality reduction techniques, we use a recent model, called Block Markov Chains, to conduct a practical study of clustering in real-world sequential data. Clustering algorithms for Block Markov Chains possess theoretical optimality guarantees and can be deployed in sparse data regimes. Despite these favorable theoretical properties, a thorough evaluation of these algorithms in realistic settings has been lacking. We address this issue and…
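The abstract names the Block Markov Chain model and spectral-type clustering algorithms for it without spelling out the mechanics. As a rough illustration only, the sketch below simulates a Block Markov Chain (states partitioned into clusters, with transition probabilities depending only on cluster membership) and recovers the hidden clusters by a rank-K SVD of the empirical transition frequency matrix followed by k-means. The cluster sizes, the cluster-level transition matrix, the trajectory length, and the use of scikit-learn's KMeans are assumptions made for this sketch, not details taken from the paper.

# Minimal sketch (not the paper's implementation): simulate a Block Markov Chain
# and recover the hidden clusters from a single trajectory via a spectral method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Assumed toy parameters (illustrative only) ---
K = 3                                   # number of clusters
sizes = [40, 30, 30]                    # states per cluster; n = 100 states total
n = sum(sizes)
labels_true = np.repeat(np.arange(K), sizes)

# Cluster-level transition matrix p (rows sum to 1).
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])

# State-level transition matrix: from a state in cluster k, jump to cluster l
# with probability p[k, l], then land on a uniformly random state inside cluster l.
P = np.zeros((n, n))
for i in range(n):
    for l in range(K):
        members = np.where(labels_true == l)[0]
        P[i, members] = p[labels_true[i], l] / len(members)

# Simulate one trajectory of length T.
T = 100_000
x = np.empty(T, dtype=int)
x[0] = 0
for t in range(1, T):
    x[t] = rng.choice(n, p=P[x[t - 1]])

# Empirical transition frequency matrix N (counts of observed jumps).
N = np.zeros((n, n))
np.add.at(N, (x[:-1], x[1:]), 1)

# Spectral step: rank-K SVD of the normalized count matrix, then k-means on the
# left singular vectors to group states with similar transition behaviour.
U, s, _ = np.linalg.svd(N / T, full_matrices=False)
embedding = U[:, :K] * s[:K]
labels_est = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(embedding)

# Clusters are recovered up to a permutation of the labels.
print("estimated cluster sizes:", np.bincount(labels_est))

The SVD-plus-k-means step above stands in for the clustering algorithms with optimality guarantees mentioned in the abstract; the algorithms actually evaluated in the paper may differ in their normalization and refinement steps.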
