Corpus ID: 6291204

Machine Reading at the University of Washington

@inproceedings{Poon2010MachineRA,
  title={Machine Reading at the University of Washington},
  author={Hoifung Poon and Janara Christensen and Pedro M. Domingos and Oren Etzioni and R. Hoffmann and Chlo{\'e} Kiddon and Thomas Lin and Xiao Ling and Mausam and Alan Ritter and Stefan Schoenmackers and S. Soderland and Daniel S. Weld and Fei Wu and Congle Zhang},
  booktitle={HLT-NAACL 2010},
  year={2010}
}
Machine reading is a long-standing goal of AI and NLP. In recent years, tremendous progress has been made in developing machine learning approaches for many of its subtasks, such as parsing, information extraction, and question answering. However, existing end-to-end solutions typically require a substantial amount of human effort (e.g., labeled data and/or manual engineering) and are not well poised for Web-scale knowledge acquisition. In this paper, we propose a unifying approach for machine…
Enhance Machine Reading Comprehension on Multiple Sentence Questions with Gated and Dense Coreference Information
This paper proposes a deep learning model that incorporates coreference information to improve prediction performance, especially on multiple-sentence questions, and a bi-directional answering technique that helps the model avoid the local maxima of the single-directional answering method used in traditional models.
Probing Prior Knowledge Needed in Challenging Chinese Machine Reading Comprehension
Experimental results demonstrate that linguistic and general world knowledge may help improve the performance of the baseline reader in both general and domain-specific tasks.
Machine reading: from Wikipedia to the Web
The experimental results show that these automatically learned systems can render much of Wikipedia into high-quality semantic data, which provides a solid base for bootstrapping toward the general Web.
Teaching Machines to Read and Comprehend
A new methodology is defined that resolves this bottleneck and provides large-scale supervised reading comprehension data, enabling the development of a class of attention-based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.
Understanding Multi-Perspective Context Matching for Machine Comprehension
Question-answer prediction has been one of the most desired tasks for a machine since the beginning of artificial intelligence. We study a recent model for reading comprehension called…
Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension
This paper presents the first free-form multiple-choice Chinese machine reading comprehension dataset (C3), containing 13,369 documents and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations, and presents a comprehensive analysis of the prior knowledge needed for these real-world problems.
Neural Machine Reading Comprehension: Methods and Trends
A thorough review of this research field, covering different aspects including typical MRC tasks (their definitions, differences, and representative datasets) and new trends: emerging focuses in neural MRC as well as the corresponding challenges.
IIT-KGP at COIN 2019: Using pre-trained Language Models for modeling Machine Comprehension
Experimental results show that the model gives substantial improvements over the baseline and other systems incorporating knowledge bases, achieving second place on the final test-set leaderboard with an accuracy of 90.5%.
Dynamic Entity Representation with Max-pooling Improves Machine Reading
A novel neural network model for machine reading, DER Network, explicitly implements a reader that builds dynamic meaning representations for entities by gathering and accumulating information around them as it reads a document; the work finds that max-pooling is well suited for modeling this accumulation of information on entities.
Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors
This work uses the MC-dataset generation technique to build a dataset of around 2 million examples, for which the high ceiling of human performance (around 91% accuracy) is empirically determined, along with the performance of a variety of computer models.

References

Showing 1–10 of 50 references
Strategies for lifelong knowledge extraction from the web
Alice, a lifelong learning agent whose goal is to automatically discover a collection of concepts, facts, and generalizations describing a particular topic of interest directly from a large volume of Web text, is introduced.
Coarse-to-Fine Natural Language Processing
Slav Petrov. Theory and Applications of Natural Language Processing, 2009.
This dissertation describes several coarse-to-fine systems that learn increasingly refined models, capturing phone-internal structure as well as context-dependent variations in an automatic way, while streamlining the learning procedure.
Learning 5000 Relational Extractors
LUCHS is presented, a self-supervised, relation-specific IE system that learns 5025 relations (more than an order of magnitude more than any previous approach) with an average F1 score of 61%.
Unsupervised Ontology Induction from Text
OntoUSP builds on the USP unsupervised semantic parser by jointly forming ISA and IS-PART hierarchies of lambda-form clusters; it improves on the recall of USP by 47% and greatly outperforms previous state-of-the-art approaches.
Information extraction from Wikipedia: moving down the long tail
Three novel techniques for increasing recall from Wikipedia's long tail of sparse classes are presented: shrinkage over an automatically learned subsumption taxonomy, a retraining technique for improving the training data, and supplementing results by extracting from the broader Web.
Extracting Semantic Networks from Text Via Relational Clustering
This paper uses the TextRunner system to extract tuples from text, then induces general concepts and relations from them by jointly clustering the objects and relational strings in the tuples using Markov logic.
Building large knowledge bases by mass collaboration
This paper uses first-order probabilistic reasoning techniques to combine potentially inconsistent knowledge sources of varying quality, and machine-learning techniques to estimate the quality of knowledge.
Semantic Role Labeling for Open Information Extraction
This work investigates the use of semantic features (semantic roles) for the task of Open IE and finds that SRL-IE is robust to noisy, heterogeneous Web data and outperforms TextRunner on extraction quality.
Joint Unsupervised Coreference Resolution with Markov Logic
This paper presents the first unsupervised approach that is competitive with supervised ones, made possible by performing joint inference across mentions (in contrast to the pairwise classification typically used in supervised methods) and by using Markov logic as a representation language.
Scaling Textual Inference to the Web
The Holmes system utilizes textual inference over tuples extracted from text to scale textual inference to a corpus of 117 million Web pages; its runtime is linear in the size of the input corpus.