Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution

@inproceedings{Liu2017GeneratingAE,
  title={Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution},
  author={Ting Liu and Yiming Cui and Qingyu Yin and Weinan Zhang and Shijin Wang and Guoping Hu},
  booktitle={ACL},
  year={2017}
}
Most existing approaches for zero pronoun resolution rely heavily on annotated data, which is typically released by shared task organizers. The scarcity of annotated data has therefore become a major obstacle to progress on the zero pronoun resolution task, and manually labeling additional data for better performance is expensive. To alleviate this problem, in this paper we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero…


Key Quantitative Results

  • Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvement of 3.1% F-score on the OntoNotes 5.0 data.

Citations

Publications citing this paper.
Showing 1 of 12 citations.

Image-Text Surgery: Efficient Concept Learning in Image Captioning by Generating Pseudopairs

  • IEEE Transactions on Neural Networks and Learning Systems
  • 2018