A Hierarchical Neural Autoencoder for Paragraphs and Documents

Abstract

Natural language generation of coherent long texts, such as paragraphs or longer documents, is a challenging problem for recurrent network models. In this paper, we explore an important step toward this generation task: training an LSTM (Long Short-Term Memory) autoencoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraphs using standard metrics such as ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserves syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization.
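The following is a minimal PyTorch sketch of the hierarchical architecture the abstract describes: a word-level LSTM encodes each sentence into an embedding, a sentence-level LSTM composes those embeddings into a single paragraph embedding, and a mirrored pair of LSTMs decodes the paragraph embedding back into words. All module names, dimensions, and the teacher-forced decoding scheme are illustrative assumptions, not the authors' released implementation.

# Sketch of a hierarchical LSTM autoencoder (illustrative, not the paper's code).
import torch
import torch.nn as nn

class HierarchicalAutoencoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Word-level encoder: words of a sentence -> sentence embedding.
        self.word_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Sentence-level encoder: sentence embeddings -> paragraph embedding.
        self.sent_enc = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Sentence-level decoder: paragraph embedding -> per-sentence states.
        self.sent_dec = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Word-level decoder: sentence state -> word logits.
        self.word_dec = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, paragraph):
        # paragraph: (num_sents, sent_len) tensor of word ids for one paragraph.
        # Encode each sentence; the final hidden state is its embedding.
        _, (sent_embs, _) = self.word_enc(self.embed(paragraph))
        sent_embs = sent_embs.squeeze(0)                      # (num_sents, hidden)
        # Encode the sentence sequence into one paragraph embedding.
        _, (para_emb, _) = self.sent_enc(sent_embs.unsqueeze(0))
        # Decode the paragraph embedding into one state per target sentence
        # (teacher-forced here with the gold sentence embeddings as inputs).
        dec_sents, _ = self.sent_dec(
            sent_embs.unsqueeze(0), (para_emb, torch.zeros_like(para_emb)))
        dec_sents = dec_sents.squeeze(0)                      # (num_sents, hidden)
        # Decode each sentence state into word logits; BOS shifting is
        # omitted for brevity in this sketch.
        h0 = dec_sents.unsqueeze(0)
        words, _ = self.word_dec(self.embed(paragraph),
                                 (h0, torch.zeros_like(h0)))
        return self.out(words)               # (num_sents, sent_len, vocab_size)

model = HierarchicalAutoencoder(vocab_size=10000)
para = torch.randint(0, 10000, (3, 12))      # 3 sentences, 12 tokens each
logits = model(para)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 10000), para.reshape(-1))
loss.backward()

Training minimizes the cross-entropy between the reconstructed and original words; at test time the decoder would generate sentence by sentence from the paragraph embedding rather than being teacher-forced at both levels as in this sketch.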



Cite this paper

@inproceedings{Li2015AHN,
  title     = {A Hierarchical Neural Autoencoder for Paragraphs and Documents},
  author    = {Jiwei Li and Minh-Thang Luong and Daniel Jurafsky},
  booktitle = {ACL},
  year      = {2015}
}