Corpus ID: 235125525

Variational Gaussian Topic Model with Invertible Neural Projections

@article{Wang2021VariationalGT,
  title={Variational Gaussian Topic Model with Invertible Neural Projections},
  author={Rui Wang and Deyu Zhou and Yuxuan Xiong and Haiping Huang},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.10095}
}
  • Rui Wang, Deyu Zhou, Yuxuan Xiong, Haiping Huang
  • Published 21 May 2021
  • Computer Science
  • ArXiv
Neural topic models have triggered a surge of interest in extracting topics from text automatically, since they avoid the sophisticated derivations required by conventional topic models. However, few neural topic models incorporate the word relatedness information captured in word embeddings into the modeling process. To address this issue, we propose a novel topic modeling approach, called the Variational Gaussian Topic Model (VaGTM). Based on the variational auto-encoder, the proposed VaGTM models each…
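The abstract above is truncated, but the core mechanism it describes can be sketched. Below is a minimal, hypothetical illustration (not the authors' released code) of a VAE-based topic model in which each topic is a multivariate Gaussian over pretrained word embeddings, so the topic-word distribution comes from evaluating every vocabulary embedding under that Gaussian; all class and variable names (GaussianTopicVAE, rho, etc.) are our own placeholders.

```python
# Hypothetical sketch of a Gaussian-topic VAE; the actual VaGTM objective and
# priors are described in the paper and will differ in detail.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianTopicVAE(nn.Module):
    def __init__(self, vocab_size, n_topics, embed_dim, word_embeddings, hidden=256):
        super().__init__()
        # Pretrained word embeddings (vocab_size x embed_dim), kept fixed here.
        self.register_buffer("rho", word_embeddings)
        # Per-topic Gaussian parameters in embedding space.
        self.topic_mu = nn.Parameter(torch.randn(n_topics, embed_dim))
        self.topic_logvar = nn.Parameter(torch.zeros(n_topics, embed_dim))
        # Encoder: bag-of-words -> variational parameters of topic proportions.
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Softplus())
        self.mu_theta = nn.Linear(hidden, n_topics)
        self.logvar_theta = nn.Linear(hidden, n_topics)

    def topic_word(self):
        # log N(rho_v | mu_k, diag(exp(logvar_k))) for every word v and topic k,
        # normalized over the vocabulary to give per-topic word distributions.
        var = self.topic_logvar.exp()                               # (K, D)
        diff = self.rho.unsqueeze(0) - self.topic_mu.unsqueeze(1)   # (K, V, D)
        log_dens = -0.5 * ((diff ** 2) / var.unsqueeze(1)
                           + self.topic_logvar.unsqueeze(1)).sum(-1)
        return F.log_softmax(log_dens, dim=-1)                      # (K, V)

    def forward(self, bow):
        h = self.encoder(bow)
        mu, logvar = self.mu_theta(h), self.logvar_theta(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()        # reparameterization
        theta = F.softmax(z, dim=-1)                                # topic proportions
        log_beta = self.topic_word()                                # (K, V)
        recon = torch.log(theta @ log_beta.exp() + 1e-10)           # mixture over topics
        nll = -(bow * recon).sum(-1)
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1)
        return (nll + kl).mean()
```

Usage would look like `model = GaussianTopicVAE(V, K, D, embeddings)` followed by `loss = model(bow_batch)`; it is only a toy stand-in for the model the abstract describes.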


References

Showing 1-10 of 34 references
Topic Modeling in Embedding Spaces
TLDR: The embedded topic model (ETM) is developed, a generative model of documents that marries traditional topic models with word embeddings and outperforms existing document models, such as latent Dirichlet allocation, in terms of both topic quality and predictive performance.
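For context, the central quantity in ETM is a topic-word distribution obtained by a softmax over inner products between topic embeddings and word embeddings; a tiny sketch with our own placeholder shapes:

```python
import torch
import torch.nn.functional as F

V, K, D = 5000, 50, 300                    # vocab size, topics, embedding dim (arbitrary)
rho = torch.randn(V, D)                    # word embeddings
alpha = torch.randn(K, D)                  # topic embeddings
beta = F.softmax(alpha @ rho.t(), dim=-1)  # (K, V) topic-word distributions
```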
ATM: Adversarial-neural Topic Model
TLDR: The proposed Adversarial-neural Topic Model (ATM) models topics with a Dirichlet prior and employs a generator network to capture the semantic patterns among latent topics; experiments show that ATM generates more coherent topics, outperforming a number of competitive baselines.
Gaussian LDA for Topic Models with Word Embeddings
TLDR: The categorical topic-word distributions of LDA are replaced with multivariate Gaussian distributions on the embedding space, which encourages the model to group words that are a priori known to be semantically related into topics.
Neural Variational Inference for Text Processing
TLDR: This paper introduces a generic variational inference framework for generative and conditional models of text, and constructs an inference network conditioned on the discrete text input to provide the variational distribution.
Autoencoding Variational Inference For Topic Models
TLDR: This work presents, to the authors' knowledge, the first effective AEVB-based inference method for latent Dirichlet allocation (LDA), called Autoencoded Variational Inference For Topic Model (AVITM).
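The part of AVITM most often reused elsewhere is its Laplace (logistic-normal) approximation of the Dirichlet prior in the softmax basis, which makes the reparameterization trick applicable; a small sketch of that approximation with variable names of our choosing (treat the exact constants as stated in the AVITM paper):

```python
import numpy as np

def dirichlet_to_logistic_normal(alpha):
    """Laplace approximation of Dirichlet(alpha) in the softmax basis.
    Returns the mean and diagonal variance of the approximating Gaussian."""
    alpha = np.asarray(alpha, dtype=float)
    K = alpha.size
    mu = np.log(alpha) - np.log(alpha).mean()
    var = (1.0 / alpha) * (1.0 - 2.0 / K) + (1.0 / K ** 2) * (1.0 / alpha).sum()
    return mu, var

# Example: a symmetric Dirichlet prior over 50 topics.
mu, var = dirichlet_to_logistic_normal(np.full(50, 0.02))
```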
Rethinking LDA: Why Priors Matter
TLDR: The prior structure advocated substantially increases the robustness of topic models to variations in the number of topics and to the highly skewed word frequency distributions common in natural language.
Reading Tea Leaves: How Humans Interpret Topic Models
TLDR: New quantitative methods for measuring semantic meaning in inferred topics are presented, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood.
Unsupervised Learning of Syntactic Structure with Invertible Neural Projections
TLDR: A novel generative model is proposed that jointly learns discrete syntactic structure and continuous word representations in an unsupervised fashion by cascading an invertible neural network with a structured generative prior, an approach that remains tractable so long as the prior is well-behaved.
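An "invertible neural projection" in this line of work is typically realized with volume-tracking invertible networks such as coupling layers; the sketch below is a generic RealNVP-style coupling layer, not the specific architecture used in either paper:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """A single affine coupling layer: exactly invertible, with a cheap
    log-determinant, so a density defined on one side can be projected
    through the network and still be evaluated in closed form."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x):                     # x -> y, plus log|det J|
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                     # keep scales well-behaved
        y2 = x2 * s.exp() + t
        return torch.cat([x1, y2], dim=-1), s.sum(-1)

    def inverse(self, y):                     # exact inverse of forward
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (y2 - t) * (-s).exp()
        return torch.cat([y1, x2], dim=-1)
```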
Latent Dirichlet Allocation
We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and…
Neural Models for Documents with Metadata
TLDR: A general neural framework is proposed, based on topic models, to enable flexible incorporation of metadata and allow for rapid exploration of alternative models, and achieves strong performance with a manageable tradeoff between perplexity, coherence, and sparsity.