# Deep Latent Dirichlet Allocation with Topic-Layer-Adaptive Stochastic Gradient Riemannian MCMC

```bibtex
@inproceedings{Cong2017DeepLD,
  title     = {Deep Latent Dirichlet Allocation with Topic-Layer-Adaptive Stochastic Gradient Riemannian MCMC},
  author    = {Yulai Cong and Bo Chen and Hongwei Liu and Mingyuan Zhou},
  booktitle = {ICML},
  year      = {2017}
}
```

It is challenging to develop stochastic gradient based scalable inference for deep discrete latent variable models (LVMs), due to the difficulties in not only computing the gradients, but also adapting the step sizes to different latent factors and hidden layers. For the Poisson gamma belief network (PGBN), a recently proposed deep discrete LVM, we derive an alternative representation that is referred to as deep latent Dirichlet allocation (DLDA). Exploiting data augmentation and…
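To make the model structure concrete, here is a minimal numpy sketch of a PGBN/DLDA-style generative draw: Dirichlet-distributed (simplex-constrained) topic matrices at every layer, gamma-distributed hidden units propagated downward, and a Poisson observation layer. The layer sizes are hypothetical and the gamma scale parameters are fixed to 1 for simplicity, whereas the actual model places priors on them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: vocabulary V, three hidden layers of topics.
V, K = 100, [32, 16, 8]
dims = [V] + K

# Simplex-constrained topic matrices Phi_l: each column is a Dirichlet draw.
Phi = [rng.dirichlet(np.ones(dims[l]), size=dims[l + 1]).T for l in range(3)]

# Gamma-distributed top-layer hidden units, propagated down layer by layer:
# theta_l ~ Gamma(Phi_{l+1} theta_{l+1}, 1), then x ~ Poisson(Phi_1 theta_1).
theta = rng.gamma(rng.gamma(1.0, 1.0, size=K[2]), 1.0)
for l in (2, 1):
    theta = rng.gamma(Phi[l] @ theta, 1.0)
x = rng.poisson(Phi[0] @ theta)  # observed word counts for one document
```

Each `Phi[l]` has columns on the probability simplex, which is exactly the constraint that makes step-size adaptation in stochastic gradient MCMC nontrivial.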


## 48 Citations

### Deep Autoencoding Topic Model With Scalable Hybrid Bayesian Inference

- Computer Science
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2021

This paper proposes a topic-layer-adaptive stochastic gradient Riemannian MCMC that jointly learns simplex-constrained global parameters across all layers and topics with topic- and layer-specific learning rates, as well as a supervised DATM that enhances the discriminative power of its latent representations.

### Sawtooth Factorial Topic Embeddings Guided Gamma Belief Network

- Computer Science
- ICML
- 2021

Sawtooth factorial topic embedding guided GBN is proposed: a deep generative model of documents that captures the dependencies and semantic similarities between topics in the embedding space, and that outperforms other neural topic models at extracting deeper interpretable topics and deriving better document representations.

### Max-Margin Deep Diverse Latent Dirichlet Allocation With Continual Learning

- Computer Science
- IEEE Transactions on Cybernetics
- 2022

This article proposes deep diverse latent Dirichlet allocation (DDLDA), a deep hierarchical topic model that yields more meaningful semantic topics with fewer common and meaningless words by introducing shared topics, and develops a variational inference network for DDLDA.

### Decoupling Sparsity and Smoothness in the Dirichlet Variational Autoencoder Topic Model

- Computer Science
- J. Mach. Learn. Res.
- 2019

This work rewrites the Dirichlet parameter vector as a product of a sparse binary vector and a smoothness vector, leading to a model that achieves both competitive topic coherence and a high log-likelihood.

### WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling

- Computer Science
- ICLR
- 2018

To train an inference network jointly with a deep generative topic model, making it both scalable to big corpora and fast in out-of-sample prediction, we develop Weibull hybrid autoencoding inference…
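WHAI's choice of the Weibull distribution is motivated by its closed-form inverse CDF, z = λ(−ln(1 − u))^{1/k} with u ~ Uniform(0, 1), which gives a reparameterized sample that is differentiable in its parameters. A minimal numpy sketch with illustrative parameter values (not the paper's settings):

```python
import numpy as np
from math import gamma as gamma_fn

rng = np.random.default_rng(0)

def weibull_reparam(k, lam, u):
    """Weibull(k, lam) sample via the inverse CDF; differentiable in (k, lam),
    so gradients can pass through the sampling step."""
    return lam * (-np.log1p(-u)) ** (1.0 / k)  # -log1p(-u) = -ln(1 - u)

k, lam = 2.0, 1.0
u = rng.uniform(size=200_000)
z = weibull_reparam(k, lam, u)

# The sample mean should match the analytic mean lam * Gamma(1 + 1/k).
analytic_mean = lam * gamma_fn(1.0 + 1.0 / k)
```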

### Deep Relational Topic Modeling via Graph Poisson Gamma Belief Network

- Computer Science
- NeurIPS
- 2020

A novel hierarchical RTM named graph Poisson gamma belief network (GPGBN) is developed, and two Weibull-distribution-based variational graph auto-encoders are introduced for efficient model inference and effective network information aggregation.

### Neural Variational Sparse Topic Model for Sparse Explainable Text Representation

- Computer Science
- Inf. Process. Manag.
- 2021

### Convolutional Poisson Gamma Belief Network

- Computer Science
- ICML
- 2019

Experimental results demonstrate that CPGBN can extract high-quality text latent representations that capture the word order information, and hence can be leveraged as a building block to enrich a wide variety of existing latent variable models that ignore word order.

### Multimodal Weibull Variational Autoencoder for Jointly Modeling Image-Text Data

- Computer Science
- IEEE Transactions on Cybernetics
- 2022

A novel multimodal Poisson gamma belief network (mPGBN) is developed that tightly couples the observations of different modalities by imposing sparse connections between their modality-specific hidden layers, resulting in a novel multimodal Weibull variational autoencoder (MWVAE) that is fast in out-of-sample prediction and can handle large-scale multimodal datasets.

### Dirichlet belief networks for topic structure learning

- Computer Science
- NeurIPS
- 2018

A new multi-layer generative process on the word distributions of topics, where each layer consists of a set of topics and each topic is drawn from a mixture of the topics in the layer above, enabling the discovery of interpretable topic hierarchies.

## References

Showing 1–10 of 51 references

### Scalable Deep Poisson Factor Analysis for Topic Modeling

- Computer Science
- ICML
- 2015

A new framework for topic modeling is developed based on deep graphical models, where interactions between topics are inferred through deep latent binary hierarchies; scalable inference algorithms are derived by applying the Bayesian conditional density filtering algorithm.

### Deep Poisson Factor Modeling

- Computer Science
- NIPS
- 2015

A new deep architecture for topic modeling based on Poisson factor analysis (PFA) modules, with efficient inference derived via MCMC and stochastic variational methods that scales with the number of non-zeros in the data and binary units, yielding significant efficiency relative to models based on logistic links.

### Neural Variational Inference and Learning in Belief Networks

- Computer Science
- ICML
- 2014

This work proposes a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior and shows that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.

### Augmentable Gamma Belief Networks

- Computer Science
- J. Mach. Learn. Res.
- 2016

An augmentable gamma belief network (GBN) that factorizes each of its hidden layers into the product of a sparse connection weight matrix and the nonnegative real hidden units of the next layer, in order to infer multilayer deep representations of high-dimensional discrete and nonnegative real vectors.

### Auto-Encoding Variational Bayes

- Computer Science
- ICLR
- 2014

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
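The key device in this paper is the reparameterization trick: writing z = μ + σε with ε ~ N(0, 1) turns an expectation over z into one over ε, so gradients flow through the sampling step. A toy check on the assumed objective E[z²], whose exact gradient in μ is 2μ:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.5

# Reparameterize: z = mu + sigma * eps with eps ~ N(0, 1).
eps = rng.normal(size=200_000)
z = mu + sigma * eps

# Monte Carlo estimate of d/dmu E[z^2]: since dz/dmu = 1, the per-sample
# gradient is d(z^2)/dmu = 2 z, and averaging gives the gradient estimate.
grad_mu = (2.0 * z).mean()

# Exact value: d/dmu (mu^2 + sigma^2) = 2 mu = 3.0.
```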

### Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks

- Computer Science
- AAAI
- 2016

This work proposes combining adaptive preconditioners with stochastic gradient Langevin dynamics, establishes theoretical properties on asymptotic convergence and predictive risk, and presents empirical results for logistic regression, feedforward neural networks, and convolutional neural networks demonstrating that the preconditioned SGLD method gives state-of-the-art performance.
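A minimal sketch of the idea: an RMSprop-style diagonal preconditioner scales both the gradient term and the injected Gaussian noise of a Langevin update. The target here is a toy 1-D standard normal, and the step size and decay rate are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def psgld_step(theta, grad, V, step=0.1, alpha=0.99, eps=1e-5):
    """One preconditioned SGLD step: the RMSprop-style preconditioner G
    rescales both the half-step gradient term and the noise variance."""
    V = alpha * V + (1 - alpha) * grad**2      # running second moment
    G = 1.0 / (np.sqrt(V) + eps)               # diagonal preconditioner
    noise = rng.normal(0.0, np.sqrt(step * G))
    return theta + 0.5 * step * G * grad + noise, V

# Sample from N(0, 1), whose log-density gradient is simply -theta.
theta, V, samples = 0.0, 1.0, []
for t in range(30_000):
    theta, V = psgld_step(theta, -theta, V)
    if t >= 5_000:                             # discard burn-in
        samples.append(theta)
samples = np.asarray(samples)
```

After burn-in, the chain's sample mean and variance should roughly match the N(0, 1) target, up to the discretization bias of the constant step size.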

### Online Learning for Latent Dirichlet Allocation

- Computer Science
- NIPS
- 2010

An online variational Bayes (VB) algorithm for latent Dirichlet allocation (LDA) is developed based on online stochastic optimization with a natural gradient step, and is shown to converge to a local optimum of the VB objective function.
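This online VB scheme is what scikit-learn's `LatentDirichletAllocation` exposes via `learning_method='online'` and `partial_fit`; a toy minibatch run on synthetic counts (random data, purely illustrative):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 50))  # toy document-term count matrix

lda = LatentDirichletAllocation(n_components=5, learning_method="online",
                                random_state=0)
for batch in np.array_split(X, 4):    # stream the corpus in minibatches
    lda.partial_fit(batch)            # one online VB update per batch

doc_topics = lda.transform(X[:3])     # normalized document-topic proportions
```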

### Neural Variational Inference for Text Processing

- Computer Science
- ICML
- 2016

This paper introduces a generic variational inference framework for generative and conditional models of text, and constructs an inference network conditioned on the discrete text input to provide the variational distribution.

### Learning Sigmoid Belief Networks via Monte Carlo Expectation Maximization

- Computer Science
- AISTATS
- 2016

This work proposes using an online Monte Carlo expectation-maximization (MCEM) algorithm to learn the maximum a posteriori (MAP) estimator of the generative model or to optimize the variational lower bound of a recognition network.

### Pachinko allocation: DAG-structured mixture models of topic correlations

- Computer Science
- ICML
- 2006

Improved performance of PAM is shown in document classification, likelihood of held-out data, the ability to support finer-grained topics, and topical keyword coherence.