Corpus ID: 31613184

EmotiGAN: Emoji Art using Generative Adversarial Networks

Marcel Puyat
We investigate a Generative Adversarial Network (GAN) approach to generating emojis from text. We focus on two interesting research areas related to GANs: training stability and mode collapse. In doing so, we explore a novel Conditional GAN training approach in which the generator is trained on different images that share the same conditional label, in order to produce a greater variety of images per label.
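As a rough sketch of the conditioning described above (names, dimensions, and the label value are illustrative, not taken from the paper), a conditional generator input can be built by concatenating a noise vector with a label encoding; sampling several noise vectors for a single label is what pairs one label with varied latent codes and target images:

```python
import numpy as np

def one_hot(label: int, num_classes: int) -> np.ndarray:
    """Encode a conditional label as a one-hot vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def generator_input(z: np.ndarray, label: int, num_classes: int) -> np.ndarray:
    """Conditional GAN generator input: noise z concatenated with the label."""
    return np.concatenate([z, one_hot(label, num_classes)])

rng = np.random.default_rng(0)
z_dim, num_classes = 100, 10
label = 3  # e.g. one emoji category (illustrative)

# Different noise vectors, same conditional label: the generator repeatedly
# sees this label paired with varied latent codes (and, per the approach
# above, varied target images), which pushes it toward diverse outputs.
batch = np.stack([generator_input(rng.standard_normal(z_dim), label, num_classes)
                  for _ in range(4)])
```

The label block is identical across the batch while the noise block varies, which is exactly the setup the abstract describes.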

Citations

Generating Emoji with Conditional Variational Autoencoders and Word Embedding
The aim of the present study is to generate an emoji based on input text automatically to facilitate easier communication and eliminate the process of designing new emoji.
How Shell and Horn make a Unicorn: Experimenting with Visual Blending in Emoji
The experimental results show that the Visual Blending-based system is able to produce new emoji that represent the concepts introduced by the user, and according to the participants, the blends are not only visually appealing but also unexpected.
Emojinating: Representing Concepts Using Emoji
The emoji system does not currently cover all possible concepts. In this paper, we present the platform Emojinating, which has the purpose of fostering creativity and aiding in ideation processes.
Assessing Usefulness of a Visual Blending System: "Pictionary Has Used Image-making New Meaning Logic for Decades. We Don't Need a Computational Platform to Explore the Blending Phenomena", Do We?
This work addresses the topic of visual blending using different points of view and conducts two user studies to assess the usefulness of a visual blending system.


References

Conditional Generative Adversarial Nets
The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data we wish to condition on, y, to both the generator and the discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
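The construction described there amounts to appending the conditioning information y to the input of both networks. A minimal sketch, with MNIST-style shapes chosen purely for illustration:

```python
import numpy as np

def one_hot(label: int, num_classes: int) -> np.ndarray:
    """Encode a class label as a one-hot vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

# Illustrative shapes: 28x28 images flattened, 10 digit classes.
num_classes, img_dim, z_dim = 10, 28 * 28, 100
y = one_hot(7, num_classes)  # the conditioning label y

rng = np.random.default_rng(1)
g_in = np.concatenate([rng.standard_normal(z_dim), y])  # generator sees (z, y)
x = rng.standard_normal(img_dim)                        # an image, real or generated
d_in = np.concatenate([x, y])                           # discriminator sees (x, y)
```

Both networks thus learn distributions conditioned on the same y, which is what lets the model produce digits of a requested class.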
Improved Techniques for Training GANs
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
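One of the training-stability techniques from that work, feature matching, can be sketched in a few lines (the discriminator's feature extractor is assumed; batch and feature sizes are illustrative):

```python
import numpy as np

def feature_matching_loss(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Feature matching (Salimans et al.): instead of directly maximizing the
    discriminator's output, train the generator to match the mean activations
    of an intermediate discriminator layer on real vs. generated batches."""
    diff = real_feats.mean(axis=0) - fake_feats.mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(0)
real = rng.standard_normal((64, 128))  # intermediate features of a real batch
fake = rng.standard_normal((64, 128))  # intermediate features of a generated batch
loss = feature_matching_loss(real, fake)
```

Matching batch statistics rather than fooling the discriminator on each sample gives the generator a smoother training signal.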
StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks
This paper proposes Stacked Generative Adversarial Networks (StackGAN) to generate 256×256 photo-realistic images conditioned on text descriptions and introduces a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold.
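The Conditioning Augmentation idea can be sketched as sampling the conditioning vector from a Gaussian whose mean and log-variance are computed from the text embedding; the random linear projections below are stand-ins for learned layers, and the dimensions are illustrative:

```python
import numpy as np

def conditioning_augmentation(text_emb, w_mu, w_logvar, rng):
    """StackGAN-style: sample c ~ N(mu(e), diag(sigma(e)^2)) instead of using
    the text embedding e directly, smoothing the conditioning manifold."""
    mu = w_mu @ text_emb
    logvar = w_logvar @ text_emb
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

rng = np.random.default_rng(0)
emb_dim, c_dim = 256, 128
e = rng.standard_normal(emb_dim)                     # text embedding (illustrative)
w_mu = rng.standard_normal((c_dim, emb_dim)) * 0.01      # stand-in for a learned layer
w_logvar = rng.standard_normal((c_dim, emb_dim)) * 0.01  # stand-in for a learned layer
c = conditioning_augmentation(e, w_mu, w_logvar, rng)
```

Because each text embedding now maps to a distribution of conditioning vectors rather than a single point, nearby points on the conditioning manifold see training signal too.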
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
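The adversarial process described there is the two-player minimax game over the value function

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

where D is trained to distinguish real samples from generated ones and G is trained to fool D.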
Efficient Estimation of Word Representations in Vector Space
Two novel model architectures for computing continuous vector representations of words from very large data sets are proposed and it is shown that these vectors provide state-of-the-art performance on the authors' test set for measuring syntactic and semantic word similarities.
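Similarity between such word vectors is conventionally measured with cosine similarity; a minimal sketch with made-up 3-dimensional vectors (real word2vec embeddings are learned from large corpora and are typically 100–300 dimensional):

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity, the standard similarity measure for word embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy "embeddings", invented for illustration only.
vecs = {
    "happy": np.array([0.9, 0.1, 0.0]),
    "joyful": np.array([0.8, 0.2, 0.1]),
    "sad": np.array([-0.7, 0.1, 0.2]),
}
# Semantically close words should score higher than distant ones.
assert cosine(vecs["happy"], vecs["joyful"]) > cosine(vecs["happy"], vecs["sad"])
```

In EmotiGAN's setting, such embeddings give the conditional label a continuous representation of the input text rather than a discrete class index.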