
Unified Classification and Generation Networks for Co-Creative Systems

@inproceedings{Singh2017UnifiedCA,
  title={Unified Classification and Generation Networks for Co-Creative Systems},
  author={Kunwar Yashraj Singh and N. Davis and Chih-Pin Hsiao and Ricardo Macias and B. Lin},
  booktitle={ICCC},
  year={2017}
}
This paper reports on a new deep machine learning architecture to classify and generate input for co-creative systems. Our approach combines the generative strengths of Variational Autoencoders with the image sharpness typically associated with Generative Adversarial Networks, yielding a generative deep learning architecture for training co-creative agents that we call the Auxiliary Classifier Variational Autoencoder (AC-VAE). We report the experimental results of our network’s…
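The abstract does not spell out the AC-VAE internals, but the name suggests a standard VAE whose latent code also feeds an auxiliary classification head, in the spirit of AC-GAN. The sketch below is a minimal reading under that assumption; the framework (PyTorch), the 28x28 input size, the layer widths, and the loss weights beta and gamma are illustrative choices, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ACVAE(nn.Module):
    def __init__(self, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 400), nn.ReLU())
        self.fc_mu = nn.Linear(400, latent_dim)       # mean of q(z|x)
        self.fc_logvar = nn.Linear(400, latent_dim)   # log-variance of q(z|x)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 400), nn.ReLU(),
                                     nn.Linear(400, 784), nn.Sigmoid())
        self.classifier = nn.Linear(latent_dim, num_classes)  # auxiliary head

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), self.classifier(z), mu, logvar

def acvae_loss(x, y, recon, logits, mu, logvar, beta=1.0, gamma=1.0):
    # Standard VAE terms (reconstruction + KL) plus an auxiliary cross-entropy term.
    recon_loss = F.binary_cross_entropy(recon, x.view(-1, 784), reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    cls_loss = F.cross_entropy(logits, y, reduction="sum")
    return recon_loss + beta * kl + gamma * cls_loss

In this reading, the shared latent space is what lets a single network both classify a partial sketch and generate continuations of it, which matches the classify-and-generate role the abstract describes; the published AC-VAE may attach the classifier elsewhere or weight the terms differently.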
Citations

Evaluating Creativity in Computational Co-Creative Systems
It is concluded that existing co-creative systems tend to focus on evaluating the user experience, and that adopting evaluation methods from autonomous creative systems may lead to co-creative systems that are self-aware and intentional.
Embodiment and Computational Creativity
A systematic review and a prescriptive analysis of publications at the International Conference on Computational Creativity show opportunities and challenges in embracing embodiment in CC as a reference for research, and put forward important directions to further the embodied CC research programme.
Intention-Aware Human-Robot Collaborative Design
Robots are unique potential partners for human designers when thought of as physical and social embodiments of computational agents. In this work, we propose the efficacy of robotic collaborative …
StrokeCoder: Path-Based Image Generation from Single Examples using Transformers
This paper demonstrates how a Transformer Neural Network can be used to learn a generative model from a single path-based example image, and how the model can be used to generate a large set of deviated images that still represent the original image's style and concept.

References

Showing 1–10 of 33 references
Improved Techniques for Training GANs
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. It presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
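The denoising criterion can be stated compactly: corrupt each input x into x̃ through a stochastic corruption process, then train the encoder f and decoder g to reconstruct the clean input (notation approximate):

\tilde{x} \sim q_{\mathcal{D}}(\tilde{x}\mid x), \qquad \min_{\theta,\theta'} \; \mathbb{E}\!\left[\, L\!\left(x,\; g_{\theta'}\!\big(f_{\theta}(\tilde{x})\big)\right) \right]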
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
An Architecture for Deep, Hierarchical Generative Models
We present an architecture which lets us train deep, directed generative models with many layers of latent variables. We include deterministic paths between all latent variables and the generated …
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
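The adversarial process amounts to a two-player minimax game over a value function,

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

where D outputs the probability that its input is real and G maps noise z to samples.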
Human-level concept learning through probabilistic program induction
A computational model is described that learns concepts in a similar fashion to humans, does so better than current deep learning algorithms, and can generate new letters of the alphabet that look “right” as judged by Turing-like tests comparing the model's output with what real humans produce.
Image Style Transfer Using Convolutional Neural Networks
A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images and provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.
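For context, the style representation in that work is built from Gram matrices of CNN feature maps, and the synthesized image is optimized to minimize a weighted sum of a content term and a style term (notation simplified):

G^{l}_{ij} = \sum_{k} F^{l}_{ik} F^{l}_{jk}, \qquad \mathcal{L}_{\mathrm{total}} = \alpha\, \mathcal{L}_{\mathrm{content}} + \beta\, \mathcal{L}_{\mathrm{style}}

where F^l are the feature maps of layer l and the ratio of α to β trades off content fidelity against style.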
Ladder Variational Autoencoders
A new inference model is proposed, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network.
How to Train Deep Variational Autoencoders and Probabilistic Ladder Networks
This work proposes three advances in training algorithms for variational autoencoders, for the first time making it possible to train deep models of up to five stochastic layers, using a structure similar to the Ladder network as the inference model, and shows state-of-the-art log-likelihood results for generative modeling on several benchmark datasets.
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
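The key reformulation is that a stack of layers learns a residual function rather than the desired mapping directly, so a building block computes

y = F(x, \{W_i\}) + x

where the identity shortcut adds the input back to the learned residual F.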