# Concept-Oriented Deep Learning: Generative Concept Representations

    @article{Chang2018ConceptOrientedDL,
      title   = {Concept-Oriented Deep Learning: Generative Concept Representations},
      author  = {Daniel T. Chang},
      journal = {ArXiv},
      year    = {2018},
      volume  = {abs/1811.06622}
    }

Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are well suited to unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, on which generative concept representations are based, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts…
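The variational autoencoders mentioned in the abstract learn a generative latent representation by encoding an input into the parameters of a Gaussian posterior, sampling via the reparameterization trick, and decoding back to data space. A minimal NumPy sketch of that machinery is below; the dimensions and the random "trained" weights are illustrative assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
x_dim, h_dim, z_dim = 8, 16, 2

# Random weights stand in for a learned encoder/decoder.
W_enc = rng.normal(scale=0.1, size=(x_dim, h_dim))
W_mu = rng.normal(scale=0.1, size=(h_dim, z_dim))
W_logvar = rng.normal(scale=0.1, size=(h_dim, z_dim))
W_dec = rng.normal(scale=0.1, size=(z_dim, x_dim))

def encode(x):
    """Map an input to the parameters of q(z|x), a diagonal Gaussian."""
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar  # mean, log-variance

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, so gradients could flow through mu and sigma."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent code back to data space (Bernoulli means here)."""
    return 1.0 / (1.0 + np.exp(-(z @ W_dec)))

def elbo(x):
    """Evidence lower bound: reconstruction term minus KL(q(z|x) || N(0, I))."""
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    recon = np.sum(x * np.log(x_hat + 1e-9) + (1 - x) * np.log(1 - x_hat + 1e-9))
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return recon - kl

x = rng.integers(0, 2, size=x_dim).astype(float)
print(elbo(x))
```

Training would maximize this ELBO over a dataset; the uncertainty-representation advantage the abstract cites comes from the latent code being a distribution q(z|x) rather than a point.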


## 6 Citations

Latent Variable Modeling for Generative Concept Representations and Deep Generative Models

- Computer Science, Mathematics · ArXiv
- 2018

This work investigates and discusses latent variable modeling, including latent variable models, latent representations, and latent spaces, with particular attention to hierarchical latent representations, latent space vectors and geometry, and their use in variational autoencoders and generative adversarial networks.

Probabilistic Generative Deep Learning for Molecular Design

- Computer Science · ArXiv
- 2019

This work discusses the major components of probabilistic generative deep learning for molecular design, which include molecular structure, molecular representations, deep generative models, molecular latent representations and latent space, molecular structure-property and structure-activity relationships, molecular similarity and molecular design.

Tiered Latent Representations and Latent Spaces for Molecular Graphs

- Computer Science · ArXiv
- 2019

This work proposes an architecture for learning tiered latent representations and latent spaces for molecular graphs as a simple way to explicitly represent and utilize groups, which consist of the atom (node) tier, the group tier and the molecule (graph) tier.

Probabilistic Deep Learning with Probabilistic Neural Networks and Deep Probabilistic Models

- Computer Science, Mathematics · ArXiv
- 2021

TensorFlow Probability is a library for probabilistic modeling and inference that can be used for both approaches to probabilistic deep learning; the work discusses major examples of each approach, including Bayesian neural networks and mixture density networks.

Tiered Graph Autoencoders with PyTorch Geometric for Molecular Graphs

- Computer Science, Mathematics · ArXiv
- 2019

This paper discusses adapting tiered graph autoencoders for use with PyTorch Geometric, for both the deterministic tiered graph autoencoder model and the probabilistic tiered variational graph autoencoder model.

On the Generation of Novel Ligands for SARS-CoV-2 Protease and ACE2 Receptor via Constrained Graph Variational Autoencoders

- Medicine
- 2020

This research focuses on the generation of novel candidate inhibitors via constrained graph variational autoencoders and the calculation of their Tanimoto similarities against existing drugs, repurposing those existing drugs and considering the novel ligands as possible SARS-CoV-2 main protease inhibitors and ACE2 receptor blockers.

## References

Showing 1-10 of 31 references

Concept-Oriented Deep Learning

- Computer Science · ArXiv
- 2018

The proposed concept-oriented deep learning (CODL) addresses some of the major limitations of deep learning: interpretability, transferability, contextual adaptation, and requirement for lots of labeled training data.

Grammar Variational Autoencoder

- Computer Science, Mathematics · ICML
- 2017

Surprisingly, it is shown that not only does the model more often generate valid outputs, it also learns a more coherent latent space in which nearby points decode to similar discrete outputs.

Variational Graph Auto-Encoders

- Mathematics, Computer Science · ArXiv
- 2016

The variational graph auto-encoder (VGAE) is introduced, a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) that can naturally incorporate node features, which significantly improves predictive performance on a number of benchmark datasets.

On Unifying Deep Generative Models

- Computer Science, Mathematics · ICLR
- 2018

It is shown that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of classic wake-sleep algorithm, respectively.

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

- Computer Science, Mathematics · AAAI
- 2017

Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update.

Tutorial on Variational Autoencoders

- Computer Science, Mathematics · ArXiv
- 2016

This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior.

A Review of Learning with Deep Generative Models from perspective of graphical modeling

- Computer Science, Mathematics · ArXiv
- 2018

A review of learning with deep generative models (DGMs), a highly active area in machine learning and, more generally, artificial intelligence, with emphasis on reviewing, differentiating, and connecting different learning algorithms.

GraphGAN: Graph Representation Learning with Generative Adversarial Nets

- Computer Science, Mathematics · AAAI
- 2018

GraphGAN is proposed, an innovative graph representation learning framework unifying the above two classes of methods, in which the generative model and the discriminative model play a game-theoretical minimax game.

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

- Computer Science, Mathematics · ICML
- 2014

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…

Auto-Encoding Variational Bayes

- Mathematics, Computer Science · ICLR
- 2014

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.