Corpus ID: 236428917

Towards Controlled and Diverse Generation of Article Comments

@article{Zhang2021TowardsCA,
  title={Towards Controlled and Diverse Generation of Article Comments},
  author={Linhao Zhang and Houfeng Wang},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11781}
}
Much research in recent years has focused on automatic article commenting. However, few previous studies address the controllable generation of comments. Moreover, existing models tend to generate dull and commonplace comments, which further limits their practical application. In this paper, we take a first step towards controllable comment generation by building a system that can explicitly control the emotion of the generated comments. To achieve this, we associate each kind of emotion…
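The abstract is cut off before the mechanism is spelled out; as a purely illustrative sketch, one common way to realize such emotion control is to associate each emotion with a learned embedding that conditions the decoder. The class, label set, and sizes below are hypothetical and not taken from the paper:

import torch
import torch.nn as nn

class EmotionConditionedDecoder(nn.Module):
    """Toy GRU decoder conditioned on an emotion label (illustrative only)."""
    def __init__(self, vocab_size, hidden_size, num_emotions):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden_size)
        self.emotion_emb = nn.Embedding(num_emotions, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, prev_tokens, emotion_id, hidden):
        # Add the emotion embedding to every decoder input step so that
        # decoding is steered toward the requested emotion.
        x = self.token_emb(prev_tokens) + self.emotion_emb(emotion_id).unsqueeze(1)
        output, hidden = self.rnn(x, hidden)
        return self.out(output), hidden

# One decoding step for a hypothetical emotion id (e.g. "angry" = 2).
decoder = EmotionConditionedDecoder(vocab_size=10000, hidden_size=256, num_emotions=6)
logits, h = decoder(torch.tensor([[1]]), torch.tensor([2]), torch.zeros(1, 1, 256))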


References

SHOWING 1-10 OF 26 REFERENCES
Learning Comment Generation by Leveraging User-generated Data
TLDR: The results show that the proposed generative model significantly outperforms strong baselines such as Seq2Seq with attention and information retrieval models, by around 27 and 30 BLEU-1 points respectively.
Automatic Generation of News Comments Based on Gated Attention Neural Networks
TLDR: This paper introduces a gated attention mechanism to use news context adaptively and selectively when generating news comments, and applies generative adversarial nets to further improve the GANN model.
Unsupervised Machine Commenting with Neural Variational Topic Model
TLDR: This work proposes a novel unsupervised approach to training an automatic article commenting model, relying on nothing but unpaired articles and comments, and shows that the proposed topic-based approach significantly outperforms previous lexicon-based models.
Coherent Comments Generation for Chinese Articles with a Graph-to-Sequence Model
TLDR: A graph-to-sequence model that represents the input news as a topic interaction graph better captures the internal structure of the article and the connections between topics, making it better able to generate coherent and informative comments.
Get To The Point: Summarization with Pointer-Generator Networks
TLDR: A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways: a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information while retaining the ability to produce novel words through the generator, and a coverage mechanism that keeps track of what has been summarized, discouraging repetition.
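The copy mechanism summarized above amounts to mixing two distributions: generating from the vocabulary with probability p_gen and copying source tokens via the attention weights otherwise. A minimal sketch (names and shapes are illustrative, not from the paper's code):

import torch

def final_distribution(p_gen, vocab_dist, attn_weights, src_ids):
    # p_gen: (batch, 1) generation probability; vocab_dist: (batch, vocab_size);
    # attn_weights: (batch, src_len); src_ids: (batch, src_len) source token ids.
    gen_part = p_gen * vocab_dist
    copy_part = (1.0 - p_gen) * attn_weights
    # Scatter the copy probabilities onto the corresponding vocabulary entries.
    return gen_part.scatter_add(1, src_ids, copy_part)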
Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting
TLDR: An accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively to generate a concise overall summary is proposed, achieving a new state of the art on all metrics on the CNN/Daily Mail dataset as well as significantly higher abstractiveness scores.
Automatic Dialogue Generation with Expressed Emotions
TLDR: This research addresses the problem of forcing dialogue generation to express a desired emotion, and presents three models that either concatenate the desired emotion with the source input during learning or inject the emotion into the decoder.
Generating High-Quality and Informative Conversation Responses with Sequence-to-Sequence Models
TLDR: This work focuses on the single-turn setting, introduces a stochastic beam-search algorithm with segment-by-segment reranking that injects diversity earlier in the generation process, and proposes a practical approach, called the glimpse model, for scaling to large datasets.
Toward Controlled Generation of Text
TLDR: A new neural generative model is proposed that combines variational auto-encoders and holistic attribute discriminators for the effective imposition of semantic structures in generic generation and manipulation of text.
A Deep Reinforced Model for Abstractive Summarization
TLDR: A neural network model with a novel intra-attention that attends over the input and the continuously generated output separately, combined with a new training method that mixes standard supervised word prediction and reinforcement learning (RL), produces higher-quality summaries.
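The mixed training objective summarized above is, in essence, a weighted combination of a maximum-likelihood term and a self-critical policy-gradient term. A hedged sketch with placeholder names and an unspecified mixing weight, not the authors' implementation:

def mixed_loss(ml_loss, sampled_reward, greedy_reward, sampled_log_prob, gamma):
    # Policy-gradient term: reward of a sampled summary relative to the
    # greedy (baseline) summary, weighted by its log-probability.
    rl_loss = -(sampled_reward - greedy_reward) * sampled_log_prob
    # Blend the RL and maximum-likelihood objectives with mixing weight gamma.
    return gamma * rl_loss + (1.0 - gamma) * ml_loss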