Corpus ID: 167217319

Unsupervised Controllable Text Generation with Global Variation Discovery and Disentanglement

@article{Xu2019UnsupervisedCT,
  title={Unsupervised Controllable Text Generation with Global Variation Discovery and Disentanglement},
  author={Peng Xu and Yanshuai Cao and J. C. K. Cheung},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11975}
}
Existing controllable text generation systems rely on annotated attributes, which greatly limits their capabilities and applications. [...]
Key Method
We do so by decomposing the latent space of the VAE into two parts: one incorporates structural constraints to capture dominant global variations implicitly present in the data, e.g., sentiment or topic; the other is unstructured and is used for the reconstruction of the source sentences.
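To make this decomposition concrete, the sketch below shows a text VAE whose latent vector is split into a small structured part meant to capture a global factor such as sentiment or topic and a larger unstructured part used for reconstructing the source sentence. This is an illustrative PyTorch sketch under those assumptions, not the authors' released model; all module names and dimensions are hypothetical, and the additional structural constraints that are the point of the paper are omitted.

```python
# Illustrative sketch only: a text VAE with its latent code split into a
# structured part (global variation, e.g., sentiment/topic) and an
# unstructured part (sentence-specific details). Not the paper's implementation.
import torch
import torch.nn as nn

class SplitLatentVAE(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512,
                 structured_dim=8, unstructured_dim=56):
        super().__init__()
        latent_dim = structured_dim + unstructured_dim
        self.structured_dim = structured_dim
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)                          # (B, T, E)
        _, h = self.encoder(x)                          # h: (1, B, H)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        z_structured = z[:, :self.structured_dim]       # global variation
        z_unstructured = z[:, self.structured_dim:]     # reconstruction details
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)
        dec_out, _ = self.decoder(x, h0)                # teacher-forced reconstruction
        return self.out(dec_out), mu, logvar, z_structured, z_unstructured
```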
Stylized Text Generation: Approaches and Applications
TLDR
This tutorial delves deeply into machine learning methods, including embedding learning techniques to represent style, adversarial learning, and reinforcement learning with cycle consistency to match content while distinguishing different styles.
Formality Style Transfer with Shared Latent Space
TLDR
This paper presents a new approach, Sequence-to-Sequence with Shared Latent Space (S2S-SLS), for formality style transfer, where two auxiliary losses are proposed and joint training of bi-directional transfer and auto-encoding is adopted.
Enhancing Controllability of Text Generation
There are many models used to generate text, conditioned on some context. However, those approaches do not provide the ability to control various aspects of the generated text, such as style and tone.
Style Example-Guided Text Generation using Generative Adversarial Transformers
TLDR
This work introduces a language generative model framework for generating a styled paragraph based on a context sentence and a style reference example and proposes a novel objective function to train the framework.
Harnessing Pre-Trained Neural Networks with Rules for Formality Style Transfer
TLDR
This work studies how to harness rules into a state-of-the-art neural network that is typically pretrained on massive corpora and achieves a new state of the art on benchmark datasets.
Do Sequence-to-sequence VAEs Learn Global Features of Sentences?
TLDR
This work proposes variants based on bag-of-words assumptions and language model pretraining that learn latents that are more global: they are more predictive of topic or sentiment labels, and their reconstructions are more faithful to the labels of the original documents.
Exploring Controllable Text Generation Techniques
TLDR
This work provides a new schema of the pipeline of the generation process by classifying it into five modules, and presents an overview of different techniques used to perform the modulation of these modules.

References

Showing 1-10 of 20 references
Toward Controlled Generation of Text
TLDR
A new neural generative model is proposed which combines variational auto-encoders and holistic attribute discriminators for effective imposition of semantic structures in generic generation and manipulation of text.
Improved Variational Autoencoders for Text Modeling using Dilated Convolutions
TLDR
It is shown that with the right decoder, VAE can outperform LSTM language models, and perplexity gains are demonstrated on two datasets, representing the first positive experimental result on the use of VAE for generative modeling of text.
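As a rough illustration of the decoder family this entry refers to, the sketch below stacks causal 1-D convolutions with increasing dilation; limiting the decoder's receptive field in this way is what forces more information through the latent code. This is a hedged PyTorch sketch with made-up sizes and with the latent conditioning omitted, not the paper's implementation.

```python
# Illustrative dilated-convolution text decoder: causal 1-D convolutions with
# growing dilation give a controlled receptive field. Sizes are made up, and
# conditioning on the VAE's latent code is omitted for brevity.
import torch
import torch.nn as nn

class DilatedConvDecoder(nn.Module):
    def __init__(self, vocab_size=10000, channels=256, kernel_size=3, dilations=(1, 2, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, channels)
        self.layers = nn.ModuleList()
        self.pads = []
        for d in dilations:
            # left-pad so each convolution stays causal (no peeking at future tokens)
            self.pads.append((kernel_size - 1) * d)
            self.layers.append(nn.Conv1d(channels, channels, kernel_size, dilation=d))
        self.out = nn.Linear(channels, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens).transpose(1, 2)          # (B, C, T)
        for pad, conv in zip(self.pads, self.layers):
            x = torch.relu(conv(nn.functional.pad(x, (pad, 0))))
        return self.out(x.transpose(1, 2))              # (B, T, vocab) next-token logits
```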
Unsupervised Text Style Transfer using Language Models as Discriminators
TLDR
This paper proposes a new technique that uses a target domain language model as the discriminator, providing richer and more stable token-level feedback during the learning process, and shows that this approach leads to improved performance on three tasks: word substitution decipherment, sentiment modification, and related language translation.
Language Models are Unsupervised Multitask Learners
TLDR
It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence.
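For context, the constrained objective behind beta-VAE is the standard VAE loss with the KL term reweighted by a factor beta > 1, which pressures the model toward a factorised latent code. Below is a minimal illustrative PyTorch version; the tensor shapes and the value of beta are assumptions, not taken from the paper.

```python
# Compact sketch of a beta-VAE objective: reconstruction loss plus a
# beta-weighted KL term for a diagonal-Gaussian posterior. Shapes illustrative.
import torch
import torch.nn.functional as F

def beta_vae_loss(recon_logits, targets, mu, logvar, beta=4.0):
    # reconstruction term: token-level cross-entropy summed over the batch
    recon = F.cross_entropy(recon_logits.reshape(-1, recon_logits.size(-1)),
                            targets.reshape(-1), reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```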
Generating Sentences from a Continuous Space
TLDR
This work introduces and studies an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences, allowing it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features.
Topic Compositional Neural Language Model
TLDR
The TCNLM learns the global semantic coherence of a document via a neural topic model, and the probability of each learned latent topic is used to build a Mixture-of-Experts language model, where each expert is a recurrent neural network that accounts for learning the local structure of a word sequence.
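The mixture-of-experts structure described in this entry can be sketched as follows: a stand-in topic encoder turns a document's bag-of-words into topic probabilities, and each topic's recurrent expert contributes next-token logits weighted by that probability. This is an illustration under those assumptions, not the TCNLM implementation; all names and sizes are hypothetical.

```python
# Rough sketch of a topic-weighted mixture-of-experts language model.
# The topic encoder here is a placeholder for a neural topic model.
import torch
import torch.nn as nn

class TopicMoELanguageModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=256, num_topics=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.topic_encoder = nn.Linear(vocab_size, num_topics)   # stand-in topic model
        self.experts = nn.ModuleList(
            [nn.GRU(embed_dim, hidden_dim, batch_first=True) for _ in range(num_topics)])
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, doc_bow):
        # doc_bow: (B, vocab) float bag-of-words counts for the whole document
        topic_probs = torch.softmax(self.topic_encoder(doc_bow), dim=-1)  # (B, K)
        x = self.embed(tokens)
        logits = 0
        for k, expert in enumerate(self.experts):
            h, _ = expert(x)                                      # (B, T, H)
            w = topic_probs[:, k].view(-1, 1, 1)                  # expert weight per document
            logits = logits + w * self.out(h)
        return logits                                             # (B, T, vocab)
```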
Adversarially Regularized Autoencoders
TLDR
This work proposes a flexible method for training deep latent variable models of discrete structures based on the recently proposed Wasserstein autoencoder (WAE), and shows that the latent representation can be trained to perform unaligned textual style transfer, giving improvements in both automatic and human evaluation compared to existing methods.
Deep Contextualized Word Representations
TLDR
A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to mix different types of semi-supervision signals.
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
TLDR
Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.