BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
BERT is a new language representation model designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; it can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Natural Questions: A Benchmark for Question Answering Research
The Natural Questions corpus, a question answering dataset, is presented; the work introduces robust metrics for evaluating question answering systems, demonstrates high human upper bounds on these metrics, and establishes baseline results using competitive methods drawn from the related literature.
Latent Retrieval for Weakly Supervised Open Domain Question Answering
It is shown for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs without any IR system, outperforming BM25 by up to 19 points in exact match.
Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base
This work proposes a novel semantic parsing framework for question answering using a knowledge base; by leveraging the knowledge base at an early stage to prune the search space, the framework simplifies the semantic matching problem.
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
- Christopher Clark, Kenton Lee, Ming-Wei Chang, T. Kwiatkowski, Michael Collins, Kristina Toutanova
- Computer Science, NAACL
- 1 May 2019
It is found that transferring from entailment data is more effective than transferring from paraphrase or extractive QA data, and that this transfer, surprisingly, remains very beneficial even when starting from massive pre-trained language models such as BERT.
REALM: Retrieval-Augmented Language Model Pre-Training
- Kelvin Guu, Kenton Lee, Z. Tung, Panupong Pasupat, Ming-Wei Chang
- Computer Science, ArXiv
- 10 February 2020
The effectiveness of Retrieval-Augmented Language Model pre-training (REALM) is demonstrated by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA), where it outperforms all previous methods by a significant margin while also providing qualitative benefits such as interpretability and modularity.
A Knowledge-Grounded Neural Conversation Model
A novel, fully data-driven, knowledge-grounded neural conversation model aimed at producing more contentful responses. It generalizes the widely used Sequence-to-Sequence (seq2seq) approach by conditioning responses on both conversation history and external “facts,” allowing the model to be versatile and applicable in an open-domain setting.
The Value of Semantic Parse Labeling for Knowledge Base Question Answering
- Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, Jina Suh
- Computer Science, ACL
- 7 August 2016
The value of collecting semantic parse labels for knowledge base question answering is demonstrated, and the largest semantic-parse-labeled dataset to date is created and shared to advance research in question answering.
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
It is shown that pre-training remains important in the context of smaller architectures, and that fine-tuning pre-trained compact models can be competitive with more elaborate methods proposed in concurrent work.
Question Answering Using Enhanced Lexical Semantic Models
This work focuses on improving performance using models of lexical semantic resources and shows that these systems can be consistently and significantly improved with rich lexical semantic information, regardless of the choice of learning algorithm.