Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension
@article{Chen2022ExploringAE,
  title={Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension},
  author={Nuo Chen and Chenyu You},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.08750}
}
Recently, attention-enhanced multi-layer encoders, such as the Transformer, have been extensively studied in Machine Reading Comprehension (MRC). To predict the answer, it is common practice to employ a predictor that draws information only from the final encoder layer, which generates coarse-grained representations of the source sequences, i.e., the passage and question. Analysis shows that the representation of the source sequence evolves from fine-grained to coarse-grained as the encoding layer…
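The idea of exploiting representations beyond the final encoder layer can be illustrated with a minimal sketch. This is an assumed, simplified scheme (not the paper's exact architecture): instead of passing only the last layer to the answer predictor, all layers' outputs are fused with learned softmax weights, so both fine-grained (lower-layer) and coarse-grained (higher-layer) information reach the predictor.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_granularity_pool(layer_outputs, layer_logits):
    """Fuse per-layer encoder outputs into one representation.

    layer_outputs: list of L arrays, each (seq_len, hidden)
    layer_logits:  (L,) learnable scores, one per encoder layer
    """
    weights = softmax(layer_logits)                # (L,) mixing weights
    stacked = np.stack(layer_outputs)              # (L, seq_len, hidden)
    # Weighted sum over the layer axis -> (seq_len, hidden)
    return np.tensordot(weights, stacked, axes=1)

# Toy example: 4 encoder layers, sequence length 5, hidden size 8.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(5, 8)) for _ in range(4)]
logits = np.zeros(4)  # uniform weighting at initialization
fused = multi_granularity_pool(layers, logits)
print(fused.shape)  # (5, 8)
```

With zero-initialized logits the fusion reduces to a plain average of the layers; training the logits lets the predictor shift weight toward whichever granularity is most useful.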
References
Showing 1-10 of 60 references
Adaptive Bi-Directional Attention: Exploring Multi-Granularity Representations for Machine Reading Comprehension
- Computer Science · ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2021
A novel approach called Adaptive Bidirectional Attention is proposed, which adaptively feeds source representations from different granularity levels to the predictor; its results surpass the previous state-of-the-art model by 2.5% EM and 2.3% F1.
SG-Net: Syntax-Guided Machine Reading Comprehension
- Computer Science · AAAI
- 2020
This work uses syntax to guide the text modeling by incorporating explicit syntactic constraints into attention mechanism for better linguistically motivated word representations and shows that the proposed SG-Net design helps achieve substantial performance improvement over strong baselines.
R-Trans: RNN Transformer Network for Chinese Machine Reading Comprehension
- Computer Science · IEEE Access
- 2019
The results show that the proposed model notably outperforms the baseline and other prevalent MRC models, establishing a new state-of-the-art record on the Les MMRC dataset.
Neural Machine Reading Comprehension: Methods and Trends
- Computer Science · Applied Sciences
- 2019
Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension
- Computer Science · ACL
- 2020
A novel multi-grained machine reading comprehension framework that models documents at different levels of their natural hierarchy of granularity: documents, paragraphs, sentences, and tokens. It significantly outperforms previous systems on both long- and short-answer criteria.
FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension
- Computer Science · ICLR
- 2018
This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of "history of word" to…
Bidirectional Attention Flow for Machine Comprehension
- Computer Science · ICLR
- 2017
The BIDAF network is introduced, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
Dual Multi-head Co-attention for Multi-choice Reading Comprehension
- Computer Science · ArXiv
- 2020
This work proposes a novel back-to-basics solution that straightforwardly models the MRC relationship as an attention mechanism inside the network; the proposed DUal Multi-head Co-Attention (DUMA) is shown to be simple but effective and capable of generally improving pre-trained language models.
Attention is All you Need
- Computer Science · NIPS
- 2017
A new simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, applied successfully to English constituency parsing with both large and limited training data.
Gated Self-Matching Networks for Reading Comprehension and Question Answering
- Computer Science · ACL
- 2017
Gated self-matching networks for reading comprehension-style question answering, which aim to answer questions from a given passage, are presented; the model held first place on the SQuAD leaderboard for both single and ensemble models.