Corpus ID: 239998465

TopicNet: Semantic Graph-Guided Topic Discovery

@article{Duan2021TopicNetSG,
  title={TopicNet: Semantic Graph-Guided Topic Discovery},
  author={Zhibin Duan and Yishi Xu and Bo Chen and Dongsheng Wang and Chaojie Wang and Mingyuan Zhou},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14286}
}
Existing deep hierarchical topic models are able to extract semantically meaningful topics from a text corpus in an unsupervised manner and automatically organize them into a topic hierarchy. However, it is unclear how to incorporate prior beliefs, such as a knowledge graph, to guide the learning of the topic hierarchy. To address this issue, we introduce TopicNet, a deep hierarchical topic model that can inject prior structural knowledge as an inductive bias to influence the learning. TopicNet…
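
To make the graph-guidance idea concrete, here is a minimal, hypothetical sketch (not the paper's model or code): topic-word distributions are decoded from inner products between topic and word embeddings, and a guidance term pulls each topic embedding toward the embedding of the graph concept it is matched to. All names and the cosine-based guidance term are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

vocab_size, embed_dim, n_topics = 5000, 100, 32

# Learnable word and topic embeddings.
word_emb = torch.randn(vocab_size, embed_dim, requires_grad=True)
topic_emb = torch.randn(n_topics, embed_dim, requires_grad=True)
# Hypothetical stand-in for pretrained embeddings of the graph concepts
# matched to this layer's topics (e.g., from a knowledge graph); held fixed.
concept_emb = torch.randn(n_topics, embed_dim)

# Each topic's word distribution is decoded from embedding inner products.
beta = F.softmax(topic_emb @ word_emb.T, dim=-1)  # (n_topics, vocab_size)

# Guidance term: pull each topic embedding toward its matched concept's
# embedding, injecting the semantic graph as an inductive bias; this would
# be added to the model's usual reconstruction objective.
guidance = (1.0 - F.cosine_similarity(topic_emb, concept_emb, dim=-1)).mean()
guidance.backward()  # gradients flow into the topic embeddings
```
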
1 Citation

Alignment Attention by Matching Key and Query Distributions
TLDR
Alignment attention is introduced, which explicitly encourages self-attention to match the distributions of the keys and queries within each head and can be optimized as an unsupervised regularizer within the existing attention framework.
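
As an illustration only of matching key and query distributions within a head, the sketch below penalizes their mismatch with a squared maximum mean discrepancy; this is a generic stand-in, not the objective the paper actually optimizes, and all names are hypothetical.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    # Squared maximum mean discrepancy with an RBF kernel.
    def gram(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()

# Queries and keys of one attention head: (seq_len, head_dim).
q = torch.randn(16, 64)
k = torch.randn(16, 64)

# Unsupervised regularizer added to the task loss: penalize mismatch
# between the query and key distributions within the head.
reg = mmd_rbf(q, k)
```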

References

Showing 1-10 of 45 references
Hierarchical Topic Mining via Joint Spherical Tree and Text Embedding
TLDR
This work proposes a new task, Hierarchical Topic Mining, which takes a category tree described by category names only and aims to mine a set of representative terms for each category from a text corpus, helping a user comprehend the topics they are interested in.
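
A rough sketch of the retrieval step such a system needs: with category names and words embedded on the same unit sphere, representative terms for a category can be ranked by directional similarity. This is an illustrative assumption, not the paper's joint tree-and-text embedding procedure, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = [f"word_{i}" for i in range(500)]          # hypothetical vocabulary
word_emb = rng.standard_normal((500, 100))
word_emb /= np.linalg.norm(word_emb, axis=1, keepdims=True)  # unit sphere

def mine_terms(category_vec, k=10):
    # On the unit sphere, cosine similarity is just an inner product;
    # return the k vocabulary terms most aligned with the category.
    scores = word_emb @ category_vec
    return [vocab[i] for i in np.argsort(scores)[::-1][:k]]

category_vec = word_emb[42]  # stand-in for an embedded category name
print(mine_terms(category_vec))
```
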
Dirichlet belief networks for topic structure learning
TLDR
A new multi-layer generative process is proposed for the word distributions of topics, in which each layer consists of a set of topics and each topic is drawn from a mixture of the topics in the layer above, enabling the discovery of interpretable topic hierarchies.
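
A toy sketch of the generative step described above, under assumed shapes and a hypothetical concentration hyperparameter: a lower-layer topic is drawn from a Dirichlet centered on a mixture of the layer-above topics.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_topics_above = 1000, 8

# Word distributions of the topics in the layer above: (8, 1000).
phi_above = rng.dirichlet(np.ones(vocab_size), size=n_topics_above)

# Mixture weights over the layer-above topics for one lower-layer topic.
w = rng.dirichlet(np.ones(n_topics_above))
concentration = 50.0  # hypothetical concentration hyperparameter

# Draw the lower-layer topic from a Dirichlet centered on the mixture
# of the topics in the layer above.
topic = rng.dirichlet(concentration * (w @ phi_above))
```
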
Topic Modeling in Embedding Spaces
TLDR
The embedded topic model (ETM) is developed, a generative model of documents that marries traditional topic models with word embeddings and outperforms existing document models, such as latent Dirichlet allocation, in terms of both topic quality and predictive performance.
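
The ETM's core construction can be written down directly: topic k's distribution over words is a softmax, over the vocabulary, of inner products between the word embeddings and the topic's embedding. A self-contained sketch with random parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim, n_topics = 2000, 300, 50

rho = rng.standard_normal((vocab_size, embed_dim))   # word embeddings
alpha = rng.standard_normal((n_topics, embed_dim))   # topic embeddings

def softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Topic k's word distribution: softmax over the vocabulary of the inner
# products between the word embeddings and the topic embedding.
beta = softmax(alpha @ rho.T)                        # (n_topics, vocab_size)
assert np.allclose(beta.sum(axis=1), 1.0)
```
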
Sawtooth Factorial Topic Embeddings Guided Gamma Belief Network
TLDR
Sawtooth factorial topic embeddings guided gamma belief network is proposed, a deep generative model of documents that captures the dependencies and semantic similarities between the topics in the embedding space and outperforms other neural topic models at extracting deeper interpretable topics and deriving better document representations.
Deep Relational Topic Modeling via Graph Poisson Gamma Belief Network
TLDR
A novel hierarchical relational topic model (RTM), the graph Poisson gamma belief network (GPGBN), is developed, and two Weibull-distribution-based variational graph auto-encoders are introduced for efficient model inference and effective network information aggregation.
Scalable Deep Poisson Factor Analysis for Topic Modeling
TLDR
A new framework for topic modeling is developed, based on deep graphical models, in which interactions between topics are inferred through deep latent binary hierarchies; scalable inference algorithms are derived by applying the Bayesian conditional density filtering algorithm.
Deep Autoencoding Topic Model With Scalable Hybrid Bayesian Inference
TLDR
A topic-layer-adaptive stochastic gradient Riemannian MCMC is proposed that jointly learns simplex-constrained global parameters across all layers and topics, with topic- and layer-specific learning rates, along with a supervised DATM that enhances the discriminative power of its latent representations.
Topic Discovery for Short Texts Using Word Embeddings
TLDR
A novel topic model for short-text corpora using word embeddings is proposed, which extracts more coherent topics from short texts than the baseline methods and learns a better topic representation for each short document.
Convolutional Poisson Gamma Belief Network
TLDR
Experimental results demonstrate that CPGBN can extract high-quality latent text representations that capture word order information, and hence can be leveraged as a building block to enrich a wide variety of existing latent variable models that ignore word order.
Reading Tea Leaves: How Humans Interpret Topic Models
TLDR
New quantitative methods for measuring semantic meaning in inferred topics are presented, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood.
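
One of those methods is the word-intrusion task: a subject sees a topic's top words plus an intruder that is improbable under that topic but probable under another, and model precision is the fraction of intruders correctly identified. A toy sketch with synthetic topics (helper names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = np.array([f"word_{i}" for i in range(1000)])
beta = rng.dirichlet(np.ones(1000), size=10)  # 10 synthetic topics

def intrusion_instance(topic_id, n_top=5):
    # Top words of the topic under evaluation.
    top = np.argsort(beta[topic_id])[::-1][:n_top]
    # Intruder: probable under some other topic, improbable under this one.
    other = (topic_id + 1) % beta.shape[0]
    candidates = np.argsort(beta[other])[::-1][:50]
    intruder = next(w for w in candidates
                    if beta[topic_id, w] < np.median(beta[topic_id]))
    shown = rng.permutation(np.append(top, intruder))
    return vocab[shown], vocab[intruder]

shown, answer = intrusion_instance(0)
# Model precision: fraction of instances where a human picks `answer`.
```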