Distraction-based neural networks for modeling documents

@inproceedings{Chen2016DistractionbasedNN,
  title={Distraction-based neural networks for modeling documents},
  author={Qian Chen and Xiaodan Zhu and Zhen-Hua Ling and Si Wei and Hui Jiang},
  booktitle={IJCAI 2016},
  year={2016}
}
Distributed representations learned with neural networks have recently been shown to be effective in modeling natural language at fine granularities such as words, phrases, and even sentences. Whether and how such an approach can be extended to help model larger spans of text, e.g., documents, is an intriguing question that merits further investigation. This paper aims to enhance neural network models for that purpose. A typical problem of document-level modeling is automatic summarization…
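The abstract is cut off before it describes the model, but the title names the core idea: a distraction mechanism that steers attention away from content the model has already attended to, so that a document-level summary covers more of the input. The sketch below is only an illustration of that idea under stated assumptions; the penalty weight lam, the function names, and the exact form of the history penalty are assumptions for illustration, not the paper's published formulation.

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    def distraction_attention(scores_per_step, lam=1.0):
        # Illustrative distraction over attention history (an assumed
        # formulation, not the paper's exact model): at each decoding
        # step, raw attention scores are penalized by the attention mass
        # already placed on each source position, diverting focus toward
        # not-yet-covered content.
        history = np.zeros_like(scores_per_step[0])
        weights_per_step = []
        for scores in scores_per_step:
            weights = softmax(scores - lam * history)  # penalize covered positions
            history += weights                         # accumulate attention history
            weights_per_step.append(weights)
        return weights_per_step

    # Toy usage: three decoding steps over five source positions.
    rng = np.random.default_rng(0)
    steps = [rng.normal(size=5) for _ in range(3)]
    for t, w in enumerate(distraction_attention(steps)):
        print(f"step {t}: {np.round(w, 3)}")

With identical scores at every step, plain attention would keep returning the same weights; the history term makes later steps shift weight onto positions that have so far received little attention.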

Citations

Publications citing this paper (showing 1 of 22):

Aspect and Sentiment Aware Abstractive Review Summarization. In COLING, 2018.

References

Publications referenced by this paper.
Showing 6 of 48 references:

Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, Hang Li. Coverage-based neural machine translation. CoRR, abs/1601.04811, 2016. Highly influential.

Ilya Sutskever, Oriol Vinyals, Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112, 2014. Highly influential.

Matthew D. Zeiler. ADADELTA: An adaptive learning rate method. CoRR, abs/1212.5701, 2012. Highly influential.

Tian Wang, Kyunghyun Cho. Larger-context language modelling. CoRR, abs/1511.03729, 2015.

Xiaodan Zhu, Parinaz Sobhani, Hongyu Guo. Long short-term memory over tree structures. CoRR, abs/1503.04881, 2015.

Konstantin Lopyrev. Generating news headlines with recurrent neural networks. CoRR, abs/1512.01712, 2015.
