Corpus ID: 225062193

Generative Neurosymbolic Machines

@article{Jiang2020GenerativeNM,
  title={Generative Neurosymbolic Machines},
  author={Jindong Jiang and Sungjin Ahn},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.12152}
}
Reconciling symbolic and distributed representations is a crucial challenge that could potentially resolve the limitations of current deep learning. Remarkable advances in this direction have been achieved recently via generative object-centric representation models. While these models learn, in an unsupervised way, a recognition model that infers object-centric symbolic representations such as bounding boxes from raw images, no such model provides another important ability of a generative model, i.e. …
On the Binding Problem in Artificial Neural Networks
TLDR: This paper proposes a unifying framework that revolves around forming meaningful entities from unstructured sensory inputs (segregation), maintaining this separation of information at a representational level (representation), and using these entities to construct new inferences, predictions, and behaviors (composition).
Efficient Iterative Amortized Inference for Learning Symmetric and Disentangled Multi-Object Representations
TLDR: This work introduces EfficientMORL, an efficient framework for the unsupervised learning of object-centric representations that demonstrates strong object decomposition and disentanglement on the standard multi-object benchmark while achieving nearly an order of magnitude faster training and test-time inference than the previous state-of-the-art model.
Generative Scene Graph Networks
TLDR: Generative Scene Graph Networks are proposed, the first deep generative model that learns to discover the primitive parts and infer the part-whole relationship jointly from multi-object scenes, without supervision and in an end-to-end trainable way.
Illiterate DALL·E Learns to Compose
  • Gautam Singh, Fei Deng, Sungjin Ahn
  • Computer Science
  • 2021
Although DALL·E has shown an impressive ability of composition-based systematic generalization in image generation, it requires a dataset of text-image pairs, and the compositionality is provided by …
Neuro-Symbolic Artificial Intelligence: Current Trends
TLDR: A structured overview of current trends in Neuro-Symbolic Artificial Intelligence is provided, by means of categorizing recent publications from key conferences, to serve as a convenient starting point for research on the general topic.
SLASH: Embracing Probabilistic Circuits into Neural Answer Set Programming
TLDR: This work introduces SLASH, a novel deep probabilistic programming language (DPPL) that consists of Neural-Probabilistic Predicates (NPPs) and logical programs, which are united via answer set programming to elegantly integrate the symbolic and neural components in a unified framework.
Structured World Belief for Reinforcement Learning in POMDP
TLDR: This paper proposes Structured World Belief, a model for learning and inference of object-centric belief states inferred by Sequential Monte Carlo (SMC), and shows the efficacy of structured world belief in improving the performance of reinforcement learning, planning, and supervised reasoning.
Benchmarking Unsupervised Object Representations for Video Sequences
TLDR: A benchmark to compare the perceptual abilities of four object-centric approaches suggests that the architectures with unconstrained latent representations learn more powerful representations, in terms of object detection, segmentation, and tracking, than the spatial-transformer-based architectures.

References

Showing 1-10 of 48 references
GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations
Generative latent-variable models are emerging as promising tools in robotics and reinforcement learning. Yet, even though tasks in these domains typically involve distinct objects, most …
Importance Weighted Autoencoders
TLDR: The importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE but which uses a strictly tighter log-likelihood lower bound derived from importance weighting, shows empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
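The tighter bound this summary refers to can be sketched, in notation assumed from the IWAE paper, as the k-sample importance-weighted lower bound, which reduces to the standard ELBO at k = 1 and is nondecreasing in k:

```latex
\mathcal{L}_k(x) \;=\; \mathbb{E}_{z_1,\dots,z_k \sim q_\phi(z\mid x)}
\left[\log \frac{1}{k} \sum_{i=1}^{k} \frac{p_\theta(x, z_i)}{q_\phi(z_i \mid x)}\right],
\qquad
\mathcal{L}_1 \le \mathcal{L}_k \le \mathcal{L}_{k+1} \le \log p_\theta(x).
```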
Multi-Object Representation Learning with Iterative Variational Inference
TLDR: This work argues for the importance of learning to segment and represent objects jointly, and demonstrates that, starting from the simple assumption that a scene is composed of multiple entities, it is possible to learn to segment images into interpretable objects with disentangled representations.
Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs
TLDR: This work presents a simple neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations: it improves disentangling, reconstruction accuracy, and generalization to held-out regions in data space, and it is complementary to state-of-the-art disentanglement techniques, improving their performance when incorporated.
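The core operation of the Spatial Broadcast Decoder is simple enough to sketch: tile the latent vector across a spatial grid and append fixed coordinate channels before applying a convolutional decoder. A minimal NumPy sketch of that broadcast step (function name and shapes are illustrative, not from the paper's code):

```python
import numpy as np

def spatial_broadcast(z, height, width):
    """Tile a latent vector z across an H x W grid and append fixed
    x/y coordinate channels, as in the Spatial Broadcast Decoder."""
    # (latent_dim,) -> (height, width, latent_dim)
    broadcast = np.tile(z, (height, width, 1))
    # Coordinate channels spanning [-1, 1]
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    coords = np.stack([xs, ys], axis=-1)            # (height, width, 2)
    return np.concatenate([broadcast, coords], -1)  # (height, width, latent_dim + 2)

grid = spatial_broadcast(np.ones(8), 64, 64)
print(grid.shape)  # (64, 64, 10)
```

The appended grid would then be fed to a stride-1 convolutional stack; the fixed coordinates give the decoder an explicit notion of position, which is what the summary credits for the improved disentangling.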
NVAE: A Deep Hierarchical Variational Autoencoder
TLDR: NVAE is the first successful VAE applied to natural images as large as 256×256 pixels; it achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ datasets, and provides a strong baseline on FFHQ.
Attend, Infer, Repeat: Fast Scene Understanding with Generative Models
We present a framework for efficient inference in structured image models that explicitly reason about objects. We achieve this by performing probabilistic inference using a recurrent neural network …
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial …
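The constrained framework in the beta-VAE title is commonly written as the standard VAE objective with a weighted KL term; a sketch in the usual notation (symbols assumed from the VAE literature), where setting the hyperparameter β > 1 pressures the posterior toward the factorised prior and encourages disentanglement:

```latex
\mathcal{L}(\theta, \phi; x) \;=\;
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
\;-\; \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\Vert\, p(z)\right).
```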
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
TLDR: This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12,000 models covering the most prominent methods and evaluation metrics on seven different data sets.
Entity Abstraction in Visual Model-Based Reinforcement Learning
TLDR: Object-centric perception, prediction, and planning (OP3) is presented: the first fully probabilistic entity-centric dynamic latent variable framework for model-based reinforcement learning that acquires entity representations from raw visual observations without supervision and uses them to predict and plan.
MONet: Unsupervised Scene Decomposition and Representation
TLDR: The Multi-Object Network (MONet) is developed, which is capable of learning to decompose and represent challenging 3D scenes into semantically meaningful components, such as objects and background elements.