# A Novel Variational Family for Hidden Nonlinear Markov Models

```bibtex
@article{Hernandez2018ANV,
  title   = {A Novel Variational Family for Hidden Nonlinear Markov Models},
  author  = {Daniel Hernandez and Antonio Khalil Moretti and Ziqiang Wei and Shreya Saxena and John P. Cunningham and Liam Paninski},
  journal = {ArXiv},
  year    = {2018},
  volume  = {abs/1811.02459}
}
```

Latent variable models have been widely applied for the analysis and visualization of large datasets. In the case of sequential data, closed-form inference is possible when the transition and observation functions are linear. However, approximate inference techniques are usually necessary when dealing with nonlinear dynamics and observation functions. Here, we propose a novel variational inference framework for the explicit modeling of time series, Variational Inference for Nonlinear Dynamics…
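The abstract's contrast between the linear and nonlinear cases can be made concrete: when the transition and observation functions are linear with Gaussian noise, the filtering posterior is available in closed form via the Kalman filter. Below is a minimal toy sketch of that linear-Gaussian case (all parameter values are illustrative assumptions, and this is not the paper's VIND method):

```python
import numpy as np

def kalman_filter(xs, A, C, Q, R, mu0, P0):
    """Closed-form filtering for z_t = A z_{t-1} + w_t, x_t = C z_t + v_t."""
    mu, P = mu0, P0
    means, covs = [], []
    for x in xs:
        # Predict: propagate the previous posterior through the linear dynamics.
        mu_pred = A @ mu
        P_pred = A @ P @ A.T + Q
        # Update: condition the Gaussian prediction on the new observation.
        S = C @ P_pred @ C.T + R              # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
        mu = mu_pred + K @ (x - C @ mu_pred)
        P = (np.eye(len(mu)) - K @ C) @ P_pred
        means.append(mu)
        covs.append(P)
    return np.array(means), np.array(covs)

# Toy 1-D linear-Gaussian system with hypothetical parameters.
rng = np.random.default_rng(0)
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
T = 50
z = np.zeros(T)
x = np.zeros(T)
for t in range(1, T):
    z[t] = 0.9 * z[t - 1] + rng.normal(scale=np.sqrt(0.1))
    x[t] = z[t] + rng.normal(scale=np.sqrt(0.5))
means, covs = kalman_filter(x.reshape(-1, 1), A, C, Q, R, np.zeros(1), np.eye(1))
```

The filtered posterior variance settles below the observation noise, reflecting information accumulated over time. Once the transition or observation function is nonlinear, no such closed form exists, which is what motivates the approximate inference techniques surveyed in the citations below.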

## 14 Citations

### Variational Objectives for Markovian Dynamics with Backward Simulation

- Computer Science, ECAI
- 2020

Particle Smoothing Variational Objectives (SVO) is introduced, a novel backward simulation technique and variational objective constructed from a smoothed approximate posterior that consistently outperforms filtered objectives when given fewer Monte Carlo samples.

### Ensemble Kalman Variational Objectives: Nonlinear Latent Trajectory Inference with A Hybrid of Variational Inference and Ensemble Kalman Filter

- Computer Science, ArXiv
- 2020

It is demonstrated that EnKOs outperform SMC-based methods in predictive ability on three benchmark nonlinear dynamical systems tasks and can identify the latent dynamics given fewer particles because of their rich particle diversity.

### Filtering Normalizing Flows

- Computer Science
- 2019

Two different methods based on normalizing flows for posterior inference in latent nonlinear dynamical systems are introduced, and gradient-based amortized posterior inference approaches using the auto-encoding variational Bayes framework are presented.

### A large-scale neural network training framework for generalized estimation of single-trial population dynamics

- Computer Science, bioRxiv
- 2021

AutoLFADS is demonstrated, an automated model-tuning framework that can characterize dynamics using only neural data, without the need for supervisory information; this enables out-of-the-box inference of dynamics in diverse brain areas and behaviors.

### Deep inference of latent dynamics with spatio-temporal super-resolution using selective backpropagation through time

- Computer Science, NeurIPS
- 2021

It is demonstrated that it is possible to obtain spatio-temporal super-resolution in neuronal time series by exploiting relationships among neurons, embedded in latent low-dimensional population dynamics.

### Particle Smoothing Variational Objectives

- Computer Science, ArXiv
- 2019

SVO is introduced, a novel backward simulation technique and smoothed approximate posterior defined through a subsampling process that augments the support of the proposal and boosts particle diversity; SVO consistently outperforms filtered objectives when given fewer Monte Carlo samples on three nonlinear systems of increasing complexity.

### Graph Gamma Process Linear Dynamical Systems

- Computer Science, AISTATS
- 2021

On both synthetic and real-world time series, the proposed nonparametric Bayesian dynamic models consistently exhibit good predictive performance in comparison to a variety of baseline models, revealing interpretable latent state transition patterns and decomposing the time series into distinctly behaved sub-sequences.

### Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity in C. elegans

- Computer Science, bioRxiv
- 2019

This work develops state space models that decompose neural time-series into segments with simple, linear dynamics and incorporates these models into a hierarchical framework that combines partial recordings from many worms to learn shared structure, while still allowing for individual variability.

### Inference of Multiplicative Factors Underlying Neural Variability in Calcium Imaging Data

- Biology, Neural Computation
- 2022

A flexible modeling framework is developed that identifies low-dimensional latent factors in calcium imaging data with distinct additive and multiplicative modulatory effects that govern trial-to-trial variation in evoked responses and applies it to experimental data from the zebrafish optic tectum.

### Graph Gamma Process Generalized Linear Dynamical Systems

- Computer Science, ArXiv
- 2020

The proposed nonparametric Bayesian dynamic models, which are initialized at random, consistently exhibit good predictive performance in comparison to a variety of baseline models, revealing interpretable latent state transition patterns and decomposing the time series into distinctly behaved sub-sequences.

## References

Showing 1–10 of 35 references.

### Black Box Variational Inference for State Space Models

- Computer Science
- 2016

A structured Gaussian variational approximate posterior is proposed that carries the same intuition as the standard Kalman filter-smoother but permits us to use the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models.
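The structured Gaussian posterior described above can be sketched as a Gaussian over the whole latent trajectory whose precision matrix is tridiagonal in time, mirroring the Markov correlation structure of a Kalman filter-smoother posterior. The toy parameterization below is a hypothetical illustration of that idea, not the cited paper's exact construction:

```python
import numpy as np

# Toy tridiagonal precision over T timesteps; the -0.9 off-diagonals
# couple neighboring timesteps only, giving the Markov temporal
# structure a Kalman filter-smoother posterior exhibits.
T = 100
Lam = 2.0 * np.eye(T) - 0.9 * np.eye(T, k=1) - 0.9 * np.eye(T, k=-1)

# Reparameterized sampling: factor Lam = L L^T, then solve L^T z = eps,
# so that z ~ N(0, Lam^{-1}). With a banded solver this scales linearly
# in T; dense NumPy calls are used here only for brevity.
L = np.linalg.cholesky(Lam)
eps = np.random.default_rng(0).standard_normal(T)
z = np.linalg.solve(L.T, eps)
```

Because the Cholesky factor of a banded precision is itself banded, sampling and evaluating log-densities stay cheap in the sequence length, which is what makes this posterior family practical for long time series.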

### Structured Inference Networks for Nonlinear State Space Models

- Computer Science, AAAI
- 2017

A unified algorithm is introduced to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks.

### Linear dynamical neural population models through nonlinear embeddings

- Computer Science, NIPS
- 2016

A body of recent work in modeling neural activity focuses on recovering low-dimensional latent features that capture the statistical structure of large-scale neural populations. Most such approaches…

### An Introduction to Variational Methods for Graphical Models

- Computer Science, Machine Learning
- 2004

This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields), and describes a general framework for generating variational transformations based on convex duality.

### Bayesian Learning and Inference in Recurrent Switching Linear Dynamical Systems

- Computer Science, AISTATS
- 2017

This work develops a model class and Bayesian inference algorithms that not only discover these dynamical units but also, by learning how transition probabilities depend on observations or continuous latent states, explain their switching behavior.

### Deep Kalman Filters

- Computer Science, ArXiv
- 2015

A unified algorithm is introduced to efficiently learn a broad spectrum of Kalman filters; the work also investigates the efficacy of temporal generative models for counterfactual inference and introduces the "Healing MNIST" dataset, in which long-term structure, noise, and actions are applied to sequences of digits.

### Stochastic Backpropagation and Approximate Inference in Deep Generative Models

- Computer Science, ICML
- 2014

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…

### Composing graphical models with neural networks for structured representations and fast inference

- Computer Science, NIPS
- 2016

A general modeling and inference framework that composes probabilistic graphical models with deep learning methods and combines their respective strengths is proposed, giving a scalable algorithm that leverages stochastic variational inference, natural gradients, graphical model message passing, and the reparameterization trick.

### Nonparametric Bayesian sparse graph linear dynamical systems

- Computer Science, AISTATS
- 2018

A nonparametric Bayesian sparse graph linear dynamical system (SGLDS) is proposed to model sequentially observed multivariate data. SGLDS uses the Bernoulli-Poisson link together with a gamma process…

### A Recurrent Latent Variable Model for Sequential Data

- Computer Science, NIPS
- 2015

It is argued that through the use of high-level latent random variables, the variational RNN (VRNN) can model the kind of variability observed in highly structured sequential data such as natural speech.