SentiGAN: Generating Sentimental Texts via Mixture Adversarial Networks

@inproceedings{Wang2018SentiGANGS,
  title={SentiGAN: Generating Sentimental Texts via Mixture Adversarial Networks},
  author={Ke Wang and Xiaojun Wan},
  booktitle={IJCAI},
  year={2018}
}
Generating texts with different sentiment labels has received increasing attention in the area of natural language generation. […] Key Method: In our framework, multiple generators are trained simultaneously, aiming to generate texts of different sentiment labels without supervision. We propose a penalty-based objective in the generators to force each of them to generate diversified examples of a specific sentiment label.
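A minimal, illustrative sketch of this mixture setup is given below; it is not the authors' code and omits the paper's exact penalty-based objective, Monte Carlo rollouts, and discriminator training (a cross-entropy update over K real sentiment classes plus a "generated" class). It assumes K toy GRU generators, one (K+1)-class discriminator, and a REINFORCE-style surrogate loss in which the penalty for generator i grows as the discriminator assigns less probability to sentiment class i.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, K, MAX_LEN = 5000, 32, 64, 2, 20   # toy sizes; K sentiment classes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def sample(self, batch):
        # Autoregressively sample tokens, keeping log-probs for the REINFORCE surrogate.
        tok = torch.zeros(batch, 1, dtype=torch.long)    # assume token id 0 is <bos>
        h, seq, logps = None, [], []
        for _ in range(MAX_LEN):
            o, h = self.rnn(self.emb(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(o[:, -1]))
            nxt = dist.sample()                          # (batch,)
            logps.append(dist.log_prob(nxt))
            tok = nxt.unsqueeze(1)
            seq.append(tok)
        return torch.cat(seq, dim=1), torch.stack(logps, dim=1)

class Discriminator(nn.Module):
    # (K+1)-way classifier: K sentiment classes plus one "generated" class.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.cls = nn.Linear(HID, K + 1)

    def forward(self, seq):
        _, h = self.rnn(self.emb(seq))
        return self.cls(h.squeeze(0))                    # logits over the K+1 classes

gens = [Generator() for _ in range(K)]
disc = Discriminator()
opts = [torch.optim.Adam(g.parameters(), lr=1e-3) for g in gens]

def generator_step(i, batch=16):
    # Penalty-based update for generator i: the less probability the discriminator
    # assigns to sentiment class i, the larger the penalty on the sampled sequence.
    seq, logps = gens[i].sample(batch)
    with torch.no_grad():
        p_i = F.softmax(disc(seq), dim=-1)[:, i]         # P(class i | generated text)
        penalty = 1.0 - p_i                              # illustrative penalty signal
    loss = (penalty.unsqueeze(1) * logps).sum(dim=1).mean()
    opts[i].zero_grad()
    loss.backward()
    opts[i].step()

for i in range(K):                                       # one update per generator
    generator_step(i)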

Citations

Automatic generation of sentimental texts via mixture adversarial networks
CatVRNN: Generating Category Texts via Multi-task Learning
Short Text Generation Based on Adversarial Graph Attention Networks
TLDR
A novel framework model, SGATGAN, is proposed, which applies a GAT (Graph Attention Network) as the discriminator to establish connections between texts of the same type and provides a method for generating multiple types of texts with a single generator.
Text Generation from Triple via Generative Adversarial Nets
TLDR
This paper proposes a new sequence-to-sequence model trained via GAN (Generative Adversarial Networks) rather than MLE (Maximum Likelihood Estimation) to avoid exposure bias, and the experimental results show that the model achieves the best performance.
Controlled Text Generation with Adversarial Learning
TLDR
A novel network, the Controlled TExt generation Relational Memory GAN (CTERM-GAN), is proposed that uses an external input to influence the coherence of sentence generation, retaining or improving the syntactic quality of the generated sentences while significantly improving their semantic coherence with the given input.
PS-GAN: Feature augmented text generation in Telugu
TLDR
A novel model called the POS-SentiGAN (PS-GAN) is proposed, in which Part-Of-Speech tags and sentiment features aid the generation of better sentences, and the performance of the proposed models is shown on three datasets, namely Pravar, Telugu Wikipedia, and Telugu News.
Emotional Dialogue Generation with Generative Adversarial Networks
  • Yun Li, Bin Wu
  • Computer Science
    2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC)
  • 2020
TLDR
A dialogue generation framework, EDGAN, is proposed; it has multiple generators and one multi-class discriminator and can generate more high-quality emotional responses than previous baselines.
Pun-GAN: Generative Adversarial Network for Pun Generation
TLDR
Experiments show that the proposed Pun-GAN can generate sentences that are more ambiguous and diverse in both automatic and human evaluation.
CatGAN: Category-aware Generative Adversarial Networks with Hierarchical Evolutionary Learning for Category Text Generation
TLDR
A category-aware GAN (CatGAN) is proposed, which consists of an efficient category-aware model for category text generation and a hierarchical evolutionary learning algorithm for training the model.

References

SHOWING 1-10 OF 36 REFERENCES
Adversarial Ranking for Language Generation
TLDR
This paper proposes a novel generative adversarial network, RankGAN, for generating high-quality language descriptions by viewing a set of data samples collectively and evaluating their quality through relative ranking scores, which enables better assessment and in turn helps to learn a better generator.
Long Text Generation via Adversarial Training with Leaked Information
TLDR
The discriminative net is allowed to leak its own high-level extracted features to the generative net to further guide generation, and without any supervision, LeakGAN is able to implicitly learn sentence structures solely through the interaction between its Manager and Worker modules.
Towards Automatic Generation of Product Reviews from Aspect-Sentiment Scores
TLDR
A deep neural network model is introduced to generate long Chinese reviews from aspect-sentiment scores representing users' opinions, and a hierarchical structure with aligned attention in the Long Short-Term Memory (LSTM) decoder is proposed.
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
TLDR
Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update.
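In rough, REINFORCE form, the policy-gradient update SeqGAN performs (and that penalty-based variants such as SentiGAN adapt) can be sketched as

\nabla_\theta J(\theta) \;\approx\; \mathbb{E}_{Y_{1:T} \sim G_\theta}\!\left[\, R_D(Y_{1:T}) \, \nabla_\theta \log G_\theta(Y_{1:T}) \,\right],

where R_D(Y_{1:T}) is the discriminator-derived reward for a completed sequence, estimated at intermediate steps via Monte Carlo rollouts; the exact per-timestep action-value formulation is given in the SeqGAN paper.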
Conditional Generative Adversarial Nets
TLDR
The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the conditioning data, y, to both the generator and the discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
MojiTalk: Generating Emotional Responses at Scale
TLDR
This paper collects a large corpus of Twitter conversations that include emojis in the response and investigates several conditional variational autoencoders trained on these conversations, which allow us to use emojis to control the emotion of the generated text.
Generative Concatenative Nets Jointly Learn to Write and Classify Reviews
TLDR
A character-level Recurrent Neural Network (RNN) is presented that generates personalized product reviews and convincingly learns the styles and opinions of nearly 1,000 distinct authors, using a large corpus of reviews from BeerAdvocate.com.
Learning to Generate Product Reviews from Attributes
TLDR
An attention-enhanced attribute-to-sequence model is presented that generates product reviews from given attribute information, such as user, product, and rating, using an attention mechanism to jointly generate reviews and align words with the input attributes.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Professor Forcing: A New Algorithm for Training Recurrent Networks
TLDR
The Professor Forcing algorithm, which uses adversarial domain adaptation to encourage the dynamics of the recurrent network to be the same when training the network and when sampling from the network over multiple time steps, is introduced.