Corpus ID: 4842909

QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension

@article{Yu2018QANetCL,
  title={QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension},
  author={Adams Wei Yu and David Dohan and Minh-Thang Luong and R. Zhao and Kai Chen and Mohammad Norouzi and Quoc V. Le},
  journal={ArXiv},
  year={2018},
  volume={abs/1804.09541}
}
Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. [...] The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves an 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.
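To make the "local convolution plus global self-attention" idea concrete, below is a minimal sketch (not the authors' code) of a QANet-style encoder block in PyTorch: stacked depthwise-separable convolutions model local structure, a multi-head self-attention layer models global interactions, and a position-wise feed-forward layer follows, each sub-layer wrapped in layer normalization and a residual connection. The hyperparameters (d_model=128, kernel_size=7, num_convs=4, num_heads=8) and the omission of positional encodings are assumptions made for brevity.

# Minimal QANet-style encoder block: convolutions for local features,
# self-attention for global features. A sketch under assumed hyperparameters,
# not a reproduction of the published implementation.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    def __init__(self, d_model: int, kernel_size: int = 7):
        super().__init__()
        # Depthwise conv (one filter per channel) followed by a pointwise 1x1 conv.
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Conv1d(d_model, d_model, kernel_size=1)

    def forward(self, x):                 # x: (batch, seq_len, d_model)
        x = x.transpose(1, 2)             # Conv1d expects (batch, d_model, seq_len)
        x = self.pointwise(self.depthwise(x))
        return x.transpose(1, 2)


class QANetEncoderBlock(nn.Module):
    def __init__(self, d_model: int = 128, num_convs: int = 4,
                 kernel_size: int = 7, num_heads: int = 8):
        super().__init__()
        self.conv_norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(num_convs))
        self.convs = nn.ModuleList(DepthwiseSeparableConv(d_model, kernel_size)
                                   for _ in range(num_convs))
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.ffn_norm = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model * 4), nn.ReLU(),
                                 nn.Linear(d_model * 4, d_model))

    def forward(self, x):                 # x: (batch, seq_len, d_model)
        # Local modeling: stacked convolutions with residual connections.
        for norm, conv in zip(self.conv_norms, self.convs):
            x = x + conv(norm(x))
        # Global modeling: self-attention over the whole sequence.
        y = self.attn_norm(x)
        x = x + self.attn(y, y, y, need_weights=False)[0]
        # Position-wise feed-forward sub-layer.
        x = x + self.ffn(self.ffn_norm(x))
        return x


if __name__ == "__main__":
    block = QANetEncoderBlock()
    out = block(torch.randn(2, 50, 128))  # (batch=2, seq_len=50, d_model=128)
    print(out.shape)                      # torch.Size([2, 50, 128])

Because the block contains no recurrence, all positions are processed in parallel, which is the source of the training and inference speed-up the abstract describes.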
602 Citations
Transformer with CSAN for Question Answering: SQuAD 2.0 (2019)
Beyond RNNs: Positional Self-Attention with Co-Attention for Video Question Answering
Positioning Self in SQuAD
Question Answering with Self-Attention and Residuals
QANet: Convolutions and Attention are All you Need
R-Trans: RNN Transformer Network for Chinese Machine Reading Comprehension
...

References

Attention is All you Need
Effective Approaches to Attention-based Neural Machine Translation
Gated Self-Matching Networks for Reading Comprehension and Question Answering
Globally Normalized Reader
Learning to Skim Text
...