Few-shot time series segmentation using prototype-defined infinite hidden Markov models
@article{Qarout2021FewshotTS,
  title={Few-shot time series segmentation using prototype-defined infinite hidden Markov models},
  author={Yazan Qarout and Yordan P. Raykov and Max A. Little},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.03885}
}
We propose a robust framework for interpretable, few-shot analysis of non-stationary sequential data based on flexible graphical models to express the structured distribution of sequential events, using prototype radial basis function (RBF) neural network emissions. A motivational link is demonstrated between prototypical neural network architectures for few-shot learning and the proposed RBF network infinite hidden Markov model (RBF-iHMM). We show that RBF networks can be efficiently specified…
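As a rough illustration of what prototype RBF emissions could look like, the sketch below scores an observation against one prototype per hidden state. This is a hedged assumption for intuition only, not the paper's actual RBF-iHMM emission model; `rbf_emission_logits`, `gamma`, and the one-prototype-per-state layout are all hypothetical.

```python
import numpy as np

def rbf_emission_logits(x, prototypes, gamma=1.0):
    """Gaussian-RBF responses of an observation to per-state prototypes.

    x          : (d,) observation vector
    prototypes : (K, d) one prototype per hidden state (illustrative layout)
    gamma      : RBF bandwidth parameter
    Returns a (K,) vector of unnormalised log-emission scores, one per state.
    """
    sq_dists = np.sum((prototypes - x) ** 2, axis=1)  # ||x - c_k||^2 for each state k
    return -gamma * sq_dists                          # log of exp(-gamma * ||x - c_k||^2)
```

Under this sketch, the hidden state whose prototype lies closest to the observation receives the highest emission score, which is the sense in which the emission model is "prototype-defined".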
References
Showing 1-10 of 39 references
Stationary Activations for Uncertainty Calibration in Deep Learning
- Computer Science, NeurIPS
- 2020
A new family of non-linear neural network activation functions is introduced that mimics the properties induced by the widely used Matérn family of kernels in Gaussian process (GP) models, and it is demonstrated that the local stationarity property, together with limited mean-square differentiability, yields both good performance and well-calibrated uncertainty in Bayesian deep learning tasks.
Learning higher-order sequential structure with cloned HMMs
- Computer Science, ArXiv
- 2019
The experiments show that CHMMs can outperform n-grams, sequence memoizers, and LSTMs on character-level language modelling tasks, and that they are a viable alternative to these methods in tasks that require variable-order sequence modelling and the handling of uncertainty.
Neural Autoregressive Distribution Estimation
- Computer Science, J. Mach. Learn. Res.
- 2016
We present Neural Autoregressive Distribution Estimation (NADE) models, which are neural network architectures applied to the problem of unsupervised distribution and density estimation. They…
Composing graphical models with neural networks for structured representations and fast inference
- Computer Science, NIPS
- 2016
A general modelling and inference framework is proposed that composes probabilistic graphical models with deep learning methods and combines their respective strengths, yielding a scalable algorithm that leverages stochastic variational inference, natural gradients, graphical-model message passing, and the reparameterization trick.
NICE: Non-linear Independent Components Estimation
- Computer Science, Mathematics, ICLR
- 2015
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is…
Nonlinear time series modelling with the radial basis function-based state-dependent autoregressive model
- Computer Science, Mathematics, Int. J. Syst. Sci.
- 1999
It is shown that the RBF-AR model can not only effectively reconstruct the dynamics of a given nonlinear time series, but also fit complex time series much better than direct RBF neural network modelling.
Spectral Representations for Convolutional Neural Networks
- Computer Science, NIPS
- 2015
This work proposes spectral pooling, which performs dimensionality reduction by truncating the representation in the frequency domain, and demonstrates the effectiveness of complex-coefficient spectral parameterization of convolutional filters.
The sequence memoizer
- Computer Science, Commun. ACM
- 2011
The sequence memoizer is a new hierarchical Bayesian model for discrete sequence data that captures long range dependencies and power-law characteristics, while remaining computationally attractive.
Prototypical Networks for Few-shot Learning
- Computer Science, NIPS
- 2017
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
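The core idea of Prototypical Networks, representing each class by the mean of its embedded support examples and classifying queries by nearest prototype, can be sketched as follows. Function names are illustrative, and raw feature space stands in for a learned embedding network.

```python
import numpy as np

def prototypes_from_support(support_x, support_y, n_classes):
    """Class prototype = mean of that class's support embeddings.

    support_x : (n, d) support-set features (a learned embedding in the paper)
    support_y : (n,) integer class labels in [0, n_classes)
    Returns an (n_classes, d) array of per-class prototypes.
    """
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, prototypes):
    """Assign each query to its nearest prototype in squared Euclidean distance."""
    d = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (m, K)
    return d.argmin(axis=1)
```

This nearest-prototype rule is the few-shot classifier that the abstract above links to the RBF network emissions of the RBF-iHMM, since both score inputs by distance to class or state prototypes.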
Nonparametric Bayesian Learning of Switching Linear Dynamical Systems
- Computer Science, NIPS
- 2008
This work develops a sampling algorithm that combines a truncated approximation to the Dirichlet process with efficient joint sampling of the mode and state sequences, allowing inference of an unknown number of persistent, smooth dynamical modes.