• Corpus ID: 53670013

Concept-Oriented Deep Learning: Generative Concept Representations

@article{Chang2018ConceptOrientedDL,
  title={Concept-Oriented Deep Learning: Generative Concept Representations},
  author={Daniel T. Chang},
  journal={ArXiv},
  year={2018},
  volume={abs/1811.06622}
}
  • Daniel T. Chang
  • Published 15 November 2018
  • Mathematics, Computer Science
  • ArXiv
Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts… 
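The abstract points to variational autoencoders as one mechanism for learning generative concept representations. As a rough illustration of that half (all names and layer sizes below are my own, not from the paper), a minimal VAE sketch in PyTorch, where the latent vector z plays the role of a generative concept representation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptVAE(nn.Module):
    """Minimal VAE sketch; the latent z stands in for a
    generative concept representation (illustrative only)."""
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
    # Closed-form KL( q(z|x) || N(0, I) ) for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

The "represent uncertainty" advantage shows up directly: the encoder outputs a distribution (mu, logvar) over concept vectors rather than a single point.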
Latent Variable Modeling for Generative Concept Representations and Deep Generative Models
TLDR
This work investigates and discusses latent variable modeling, including latent variable models, latent representations and latent spaces, with particular attention to hierarchical latent representations, latent space vectors and geometry, and their use in variational autoencoders and generative adversarial networks.
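Latent space geometry is commonly probed by interpolation. A small sketch, reusing the hypothetical ConceptVAE above: decode points on a straight line between two latent means and check that nearby latent points decode to similar outputs.

```python
import torch

# Assumes the hypothetical ConceptVAE sketch above; x_a, x_b: shape (1, x_dim).
def interpolate(model, x_a, x_b, steps=8):
    """Decode evenly spaced points on the segment between two latent means."""
    model.eval()
    with torch.no_grad():
        mu_a = model.mu(model.enc(x_a))
        mu_b = model.mu(model.enc(x_b))
        ts = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
        zs = (1 - ts) * mu_a + ts * mu_b   # linear path in latent space
        return model.dec(zs)               # (steps, x_dim) decoded outputs
```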
Probabilistic Generative Deep Learning for Molecular Design
TLDR
This work discusses the major components of probabilistic generative deep learning for molecular design, which include molecular structure, molecular representations, deep generative models, molecular latent representations and latent space, molecular structure-property and structure-activity relationships, molecular similarity and molecular design.
Tiered Latent Representations and Latent Spaces for Molecular Graphs
TLDR
This work proposes an architecture for learning tiered latent representations and latent spaces for molecular graphs, consisting of the atom (node) tier, the group tier and the molecule (graph) tier, as a simple way to explicitly represent and utilize groups.
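The three tiers can be pictured as successive pooling steps. In the hypothetical sketch below, atom embeddings are mean-pooled into group embeddings via a group-assignment index, and group embeddings are pooled again into a molecule embedding; the names and the mean-pooling choice are assumptions, not the paper's exact architecture.

```python
import torch

def mean_pool(x, index, num_segments):
    """Average the rows of x that share the same segment index."""
    sums = torch.zeros(num_segments, x.size(1)).index_add_(0, index, x)
    counts = torch.zeros(num_segments).index_add_(0, index, torch.ones(len(index)))
    return sums / counts.clamp(min=1).unsqueeze(1)

# Atom (node) tier: one embedding per atom.
atom_emb = torch.randn(6, 8)                       # 6 atoms, 8-dim embeddings
group_of_atom = torch.tensor([0, 0, 1, 1, 1, 2])   # e.g. rings / functional groups
group_emb = mean_pool(atom_emb, group_of_atom, 3)  # group tier
mol_emb = group_emb.mean(dim=0, keepdim=True)      # molecule (graph) tier
```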
Probabilistic Deep Learning with Probabilistic Neural Networks and Deep Probabilistic Models
TLDR
Probabilistic deep learning takes two forms, probabilistic neural networks and deep probabilistic models; this work discusses major examples of each approach, including Bayesian neural networks and mixture density networks, and TensorFlow Probability, a library for probabilistic modeling and inference that supports both.
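As a small hedged example of the deep-probabilistic-model side, this sketch builds a toy mixture density network with TensorFlow Probability; the layer sizes, component count, and variable names are my own, not from the paper.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A toy mixture density head: the network predicts the parameters of a
# 3-component Gaussian mixture over a scalar target.
num_components = 3
net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3 * num_components),   # logits, means, raw scales
])

def mixture(params):
    logits, means, raw_scales = tf.split(params, 3, axis=-1)
    return tfd.MixtureSameFamily(
        mixture_distribution=tfd.Categorical(logits=logits),
        components_distribution=tfd.Normal(loc=means,
                                           scale=tf.nn.softplus(raw_scales)))

x = tf.random.normal([16, 4])
y = tf.random.normal([16])
dist = mixture(net(x))
nll = -tf.reduce_mean(dist.log_prob(y))   # train by minimizing negative log-likelihood
```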
Tiered Graph Autoencoders with PyTorch Geometric for Molecular Graphs
TLDR
This paper discusses adapting tiered graph autoencoders for use with PyTorch Geometric, for both the deterministic tiered graph autoencoder model and the probabilistic tiered variational graph autoencoder model.
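For orientation, a minimal (non-tiered) variational graph autoencoder in PyTorch Geometric looks roughly like this; the tiered models extend this pattern, and the two-layer GCN encoder here is my own simplification.

```python
import torch
from torch_geometric.nn import GCNConv, VGAE

class Encoder(torch.nn.Module):
    """Two-layer GCN encoder producing mu and log-std for q(Z | X, A)."""
    def __init__(self, in_ch, hid_ch, lat_ch):
        super().__init__()
        self.conv = GCNConv(in_ch, hid_ch)
        self.conv_mu = GCNConv(hid_ch, lat_ch)
        self.conv_logstd = GCNConv(hid_ch, lat_ch)

    def forward(self, x, edge_index):
        h = self.conv(x, edge_index).relu()
        return self.conv_mu(h, edge_index), self.conv_logstd(h, edge_index)

model = VGAE(Encoder(in_ch=16, hid_ch=32, lat_ch=8))
x = torch.randn(10, 16)                            # 10 nodes, 16 features each
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])  # toy directed edges
z = model.encode(x, edge_index)                    # sampled latent node embeddings
loss = model.recon_loss(z, edge_index) + model.kl_loss()
```

VGAE's default decoder scores edges by inner products of latent node embeddings, which is what recon_loss evaluates here.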
On the Generation of Novel Ligands for SARS-CoV-2 Protease and ACE2 Receptor via Constrained Graph Variational Autoencoders
TLDR
This research focuses on the generation of novel candidate inhibitors via constrained graph variational autoencoders and the calculation of their Tanimoto similarities against existing drugs, with a view to repurposing those drugs and evaluating the novel ligands as possible SARS-CoV-2 main protease inhibitors and ACE2 receptor blockers.
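The Tanimoto comparison step can be reproduced with RDKit; the SMILES strings below are arbitrary placeholders, not ligands from the study.

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

# Arbitrary example molecules (placeholders, not ligands from the paper).
mol_a = Chem.MolFromSmiles("CCO")       # ethanol
mol_b = Chem.MolFromSmiles("CC(=O)O")   # acetic acid

# Morgan (ECFP-like) bit fingerprints, radius 2.
fp_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, 2, nBits=2048)
fp_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, 2, nBits=2048)

# Tanimoto similarity: |A & B| / |A | B|, in [0, 1].
print(DataStructs.TanimotoSimilarity(fp_a, fp_b))
```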

References

SHOWING 1-10 OF 31 REFERENCES
Concept-Oriented Deep Learning
TLDR
The proposed concept-oriented deep learning (CODL) addresses some of the major limitations of deep learning: interpretability, transferability, contextual adaptation, and the requirement for large amounts of labeled training data.
Grammar Variational Autoencoder
TLDR
Surprisingly, it is shown that not only does the model more often generate valid outputs, it also learns a more coherent latent space in which nearby points decode to similar discrete outputs.
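The validity guarantee comes from decoding grammar productions rather than raw characters: at each step the decoder's logits are masked so that only productions whose left-hand side matches the nonterminal on top of a parse stack can be chosen. A toy sketch of that masking loop, with an illustrative grammar of my own:

```python
import numpy as np

# Toy CFG (illustrative): "S" and "T" are nonterminals.
PRODUCTIONS = [
    ("S", ["S", "+", "T"]),
    ("S", ["T"]),
    ("T", ["x"]),
    ("T", ["y"]),
]
NONTERMINALS = {"S", "T"}

def masked_decode(logits_seq):
    """Consume one logit vector per expansion; productions whose LHS does
    not match the top-of-stack nonterminal are masked out, so every
    decoded string is syntactically valid by construction."""
    stack, out, t = ["S"], [], 0
    while stack:
        sym = stack.pop()
        if sym not in NONTERMINALS:
            out.append(sym)                  # terminal: emit as-is
            continue
        if t >= len(logits_seq):
            break                            # ran out of decoding steps
        mask = np.array([lhs == sym for lhs, _ in PRODUCTIONS])
        logits = np.where(mask, logits_seq[t], -np.inf)
        _, rhs = PRODUCTIONS[int(np.argmax(logits))]   # greedy choice
        stack.extend(reversed(rhs))          # expand leftmost symbol first
        t += 1
    return "".join(out)

print(masked_decode(np.random.randn(10, len(PRODUCTIONS))))
```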
Variational Graph Auto-Encoders
TLDR
The variational graph auto-encoder (VGAE) is introduced, a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) that can naturally incorporate node features, which significantly improves predictive performance on a number of benchmark datasets.
On Unifying Deep Generative Models
TLDR
It is shown that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of the classic wake-sleep algorithm, respectively.
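The "opposite directions" point is easy to see numerically: KL divergence is asymmetric, so which argument holds the model distribution changes what the optimization penalizes. Using the closed form for two univariate Gaussians:

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL( N(m1, s1^2) || N(m2, s2^2) ), closed form."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# Asymmetry: the two directions penalize the same mismatch differently.
print(kl_gauss(0.0, 1.0, 1.0, 2.0))   # KL(p || q) ~ 0.443
print(kl_gauss(1.0, 2.0, 0.0, 1.0))   # KL(q || p) ~ 1.307
```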
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
TLDR
Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing the policy gradient update.
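The policy-gradient workaround can be sketched in a few lines: treat the generator's token choices as actions and weight their log-probabilities by a reward from the discriminator (stubbed as a constant below), so no gradient needs to flow through the discrete samples. Shapes and the reward value are illustrative.

```python
import torch

# Hypothetical generator output: per-step token logits for one sequence.
logits = torch.randn(5, 20, requires_grad=True)   # 5 steps, 20-token vocabulary
dist = torch.distributions.Categorical(logits=logits)
tokens = dist.sample()                            # discrete, non-differentiable

reward = torch.tensor(0.7)   # stub for a discriminator score D(sequence)

# REINFORCE: maximize E[reward * log p(tokens)]; gradients flow through
# log_prob, bypassing the non-differentiable sampling step.
loss = -(reward * dist.log_prob(tokens)).sum()
loss.backward()
```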
Tutorial on Variational Autoencoders
TLDR
This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior.
A Review of Learning with Deep Generative Models from perspective of graphical modeling
TLDR
A review of learning with deep generative models (DGMs), a highly active area in machine learning and, more generally, artificial intelligence, with emphasis on reviewing, differentiating and connecting different learning algorithms.
GraphGAN: Graph Representation Learning with Generative Adversarial Nets
TLDR
GraphGAN is proposed, an innovative graph representation learning framework unifying above two classes of methods, in which the generative model and discriminative model play a game-theoretical minimax game.
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
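The key device that makes this work is the reparameterization trick: sampling z ~ N(mu, sigma^2) is rewritten as z = mu + sigma * eps with eps ~ N(0, 1), so gradients with respect to mu and sigma pass through a deterministic function. In isolation:

```python
import torch

mu = torch.tensor([0.5], requires_grad=True)
log_sigma = torch.tensor([-1.0], requires_grad=True)

eps = torch.randn(1)                  # noise is sampled outside the graph
z = mu + torch.exp(log_sigma) * eps   # deterministic in (mu, log_sigma)

z.sum().backward()
print(mu.grad, log_sigma.grad)        # well-defined gradients: 1 and sigma * eps
```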