Publications
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
TLDR: HotpotQA is shown to be challenging for the latest QA systems, and its supporting facts enable models to improve performance and make explainable predictions.
Graph Convolution over Pruned Dependency Trees Improves Relation Extraction
TLDR: Proposes an extension of graph convolutional networks tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel, together with a novel pruning strategy that keeps only the words immediately around the shortest path between the two entities among which a relation might hold.
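The path-centric pruning described in this TLDR is easy to sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the dependency tree is given as (head, dependent) index pairs and uses networkx to keep tokens within distance k of the shortest path between the two entity tokens; all variable names are hypothetical.

```python
# Minimal sketch of path-centric dependency tree pruning.
# Assumes a dependency tree given as (head, dependent) index pairs;
# names are illustrative, not from the paper's code.
import networkx as nx

def prune_tree(edges, subj_idx, obj_idx, k=1):
    """Keep tokens within distance k of the shortest dependency
    path between the subject and object tokens."""
    graph = nx.Graph(edges)  # treat the tree as undirected
    path = nx.shortest_path(graph, source=subj_idx, target=obj_idx)
    keep = set(path)
    for node in path:
        # add every token at most k hops away from the path
        nearby = nx.single_source_shortest_path_length(graph, node, cutoff=k)
        keep.update(nearby)
    return keep

# Toy UD-style tree for "He was born in Hawaii",
# with subject "He" (0) and object "Hawaii" (4)
edges = [(2, 0), (2, 1), (2, 4), (4, 3)]
print(sorted(prune_tree(edges, subj_idx=0, obj_idx=4, k=1)))
```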
Stanford's Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task
TLDR: Describes the neural dependency parser submitted by Stanford to the CoNLL 2017 Shared Task on parsing Universal Dependencies, a system that ranked first on all five relevant metrics.
Stanza: A Python Natural Language Processing Toolkit for Many Human Languages
TLDR: Introduces Stanza, an open-source Python natural language processing toolkit supporting 66 human languages, which features a language-agnostic, fully neural pipeline for text analysis, including tokenization, multi-word token expansion, lemmatization, part-of-speech and morphological feature tagging, dependency parsing, and named entity recognition.
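Since Stanza is a pip-installable library, a minimal usage example helps make the pipeline concrete. The calls below follow Stanza's documented API; the language code and sample sentence are arbitrary choices for illustration.

```python
# Minimal Stanza usage: download an English model and run the full pipeline.
import stanza

stanza.download("en")        # fetch pretrained English models (one-time)
nlp = stanza.Pipeline("en")  # tokenize, tag, lemmatize, parse, NER
doc = nlp("Stanford is located in California.")

for sentence in doc.sentences:
    for word in sentence.words:
        # each word carries its lemma, POS tag, head index, and relation
        print(word.text, word.lemma, word.upos, word.head, word.deprel)
    for entity in sentence.ents:
        print(entity.text, entity.type)
```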
Universal Dependency Parsing from Scratch
TLDR: Introduces a complete neural pipeline system that takes raw text as input and performs all tasks required by the shared task, from tokenization and sentence segmentation to POS tagging and dependency parsing.
Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context
TLDR: Investigates the role of context in an LSTM language model through ablation studies, analyzing the increase in perplexity when prior context words are shuffled, replaced, or dropped, to better understand how neural LMs use their context.
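The ablation methodology can be sketched briefly: perturb the distant part of the context and compare the model's perplexity on the same target words. The helper below is a hypothetical illustration, assuming a `log_prob(context, target)` scoring function stands in for the language model; it covers the shuffle and drop perturbations (a replace variant would additionally need a vocabulary to sample from).

```python
# Sketch of a context-ablation study: perturb all but the nearest
# `keep_nearby` context tokens, then measure the change in perplexity.
# `log_prob` is a hypothetical stand-in for the LM's per-token score.
import math
import random

def perplexity(log_prob, context, targets):
    total = sum(log_prob(context + targets[:i], t)
                for i, t in enumerate(targets))
    return math.exp(-total / len(targets))

def ablate(context, keep_nearby, mode="shuffle", rng=random.Random(0)):
    far, near = context[:-keep_nearby], context[-keep_nearby:]
    if mode == "shuffle":               # scramble distant word order
        far = rng.sample(far, len(far))
    elif mode == "drop":                # remove distant context entirely
        far = []
    return far + near
```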
Answering Complex Open-domain Questions Through Iterative Query Generation
TLDR: Presents GoldEn (Gold Entity) Retriever, which iterates between reading context and retrieving more supporting documents to answer open-domain multi-hop questions, and demonstrates that it outperforms the best previously published model despite not using pretrained language models such as BERT.
Building DNN acoustic models for large vocabulary speech recognition
TLDR: Presents an empirical investigation of which aspects of DNN acoustic model design are most important for speech recognition system performance, suggesting that a relatively simple DNN architecture and optimization technique produce strong results.
Arc-swift: A Novel Transition System for Dependency Parsing
TLDR: Proposes a novel transition system, arc-swift, that enables direct attachments between tokens farther apart with a single transition, allowing the parser to leverage lexical information more directly in transition decisions.
Modeling response properties of V2 neurons using a hierarchical K-means model
TLDR: Proposes a hierarchical model based on a simple algorithm, K-means, which can be realized by competitive Hebbian learning; the model exhibits some response properties of V2 neurons and is more biologically feasible and computationally efficient than the sparse DBN.