Publications
Character-level Convolutional Networks for Text Classification
TLDR
This article constructs several large-scale datasets to show that character-level convolutional networks can achieve state-of-the-art or competitive results in text classification.
Energy-based Generative Adversarial Network
We introduce the "Energy-based Generative Adversarial Network" model (EBGAN), which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions.
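The energy-function view described above can be illustrated with a minimal sketch of EBGAN-style hinge losses. This is an illustrative toy, not the paper's implementation; the function name, the plain-float energies, and the margin value are all assumptions for exposition.

```python
def ebgan_losses(d_real, d_fake, margin=10.0):
    """EBGAN-style hinge losses over per-sample energies (plain floats).

    d_real: energies the discriminator assigns to real samples
    d_fake: energies the discriminator assigns to generated samples
    """
    mean = lambda xs: sum(xs) / len(xs)
    # Real samples should receive low energy; fake-sample energy is
    # pushed up toward at least `margin` via a hinge term.
    loss_d = mean(d_real) + mean([max(margin - e, 0.0) for e in d_fake])
    # The generator is trained to make its samples receive low energy.
    loss_g = mean(d_fake)
    return loss_d, loss_g
```

In the paper the discriminator's energy is realized as an autoencoder reconstruction error; here it is abstracted to arbitrary per-sample scores.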
Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs
TLDR
Deep Graph Library (DGL) enables arbitrary message handling and mutation operators, flexible propagation rules, and is framework agnostic so as to leverage high-performance tensor, autograd operations, and other feature extraction modules already available in existing frameworks.
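The message-passing abstraction that DGL generalizes can be sketched in plain Python. This is an illustrative toy of the message/reduce pattern, not DGL's actual API; all names here are assumptions.

```python
def propagate(edges, features, message, reduce):
    """One round of message passing on a directed graph.

    edges:    list of (src, dst) pairs
    features: dict mapping node id -> feature value
    message:  builds a message from the source node's feature
    reduce:   aggregates the list of messages arriving at a node
    """
    inbox = {}
    for src, dst in edges:
        inbox.setdefault(dst, []).append(message(features[src]))
    # Nodes with no incoming messages keep their current feature.
    return {n: reduce(inbox[n]) if n in inbox else f
            for n, f in features.items()}

# Toy usage: pass features along a 3-node chain 0 -> 1 -> 2.
out = propagate([(0, 1), (1, 2)], {0: 1.0, 1: 2.0, 2: 3.0},
                message=lambda x: x, reduce=sum)
```

DGL's contribution, per the summary above, is making the `message` and `reduce` steps arbitrary user-defined operators while delegating the tensor work to an existing framework's kernels and autograd.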
Adversarially Regularized Autoencoders
TLDR
This work proposes a flexible method for training deep latent variable models of discrete structures based on the recently proposed Wasserstein autoencoder (WAE), and shows that the latent representation can be trained to perform unaligned textual style transfer, giving improvements in both automatic and human evaluation compared to existing methods.
Disentangling factors of variation in deep representation using adversarial training
TLDR
A conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes that are capable of generalizing to unseen classes and intra-class variabilities.
Stacked What-Where Auto-encoders
We present a novel architecture, the "stacked what-where auto-encoders" (SWWAE), which integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised and unsupervised learning.
Adversarially Regularized Autoencoders for Generating Discrete Structures
TLDR
This work considers a simple approach for handling these two challenges jointly, employing a discrete structure autoencoder with a code space regularized by generative adversarial training, and demonstrates empirically how key properties of the data are captured in the model's latent space.
PIANOTREE VAE: Structured Representation Learning for Polyphonic Music
TLDR
The experiments demonstrate the validity of PianoTree VAE via semantically meaningful latent codes for polyphonic segments, better reconstruction alongside a well-structured latent-space geometry, and the model's benefits to a variety of downstream music generation tasks.
GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations
TLDR
This work explores the possibility of learning generic latent relational graphs that capture dependencies between pairs of data units from large-scale unlabeled data and transferring the graphs to downstream tasks, and shows that the learned graphs are generic enough to be transferred to embeddings other than those on which the graphs were trained.