We propose a grammar induction technique for AMR semantic parsing. While previous grammar induction techniques were designed to re-learn a new parser for each target application, the recently annotated AMR Bank provides a unique opportunity to induce a single model for understanding broad-coverage newswire text and supporting a wide range of applications. We …
The reading comprehension task, which asks questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently …
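As a rough illustration of the candidate-selection formulation contrasted above (not the span-extraction setting of Rajpurkar et al., and not any particular paper's model), a toy scorer might rank a fixed candidate set by lexical overlap with the question:

# Toy answer selection over a pre-defined candidate set. The overlap-based
# scorer and the example strings are purely illustrative.

def score(question, candidate):
    """Score a candidate by lexical overlap with the question."""
    q_tokens = set(question.lower().split())
    c_tokens = set(candidate.lower().split())
    return len(q_tokens & c_tokens) / max(len(c_tokens), 1)

def select_answer(question, candidates):
    """Return the highest-scoring candidate from a fixed candidate set."""
    return max(candidates, key=lambda c: score(question, c))

question = "when was the spacecraft launched"
candidates = [
    "the spacecraft was launched in february 2013",
    "a moderate resolution imager",
    "ninety days",
]
print(select_answer(question, candidates))  # prints the first candidate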
We demonstrate that a state-of-the-art parser can be built using only a lexical tagging model and a deterministic grammar, with no explicit model of bi-lexical dependencies. Instead, all dependencies are implicitly encoded in an LSTM supertagger that assigns CCG lexical categories. The parser significantly outperforms all previously published CCG results, …
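For concreteness, a minimal sketch of an LSTM supertagger that assigns one CCG lexical category per token is shown below; the layer sizes, vocabulary size, and category count are illustrative placeholders, and the published model's architecture and training regime are not reproduced.

# Minimal BiLSTM supertagger sketch: one CCG category prediction per token.
import torch
import torch.nn as nn

class Supertagger(nn.Module):
    def __init__(self, vocab_size, num_categories, emb_dim=100, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_categories)

    def forward(self, token_ids):
        # token_ids: (batch, sentence_length)
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)  # (batch, sentence_length, num_categories)

# Usage: predict a category index for each token of a 7-word sentence.
model = Supertagger(vocab_size=10000, num_categories=425)
logits = model(torch.randint(0, 10000, (1, 7)))
print(logits.argmax(dim=-1))  # one category id per token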
We present an approach for learning context-dependent semantic parsers to identify and interpret time expressions. We use a Combinatory Categorial Grammar to construct compositional meaning representations, while considering contextual cues, such as the document creation time and the tense of the governing verb, to compute the final time values. Experiments …
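As a toy illustration of context dependence, the same expression resolves to different values depending on the document creation time (DCT); the sketch below handles only one hard-coded pattern and is not the paper's CCG-based compositional model.

# Resolving "last Friday" relative to the document creation time (DCT).
from datetime import date, timedelta

def resolve_last_friday(dct):
    """Return the date of the Friday strictly before (or a week before) dct."""
    days_back = (dct.weekday() - 4) % 7 or 7  # Friday has weekday index 4
    return dct - timedelta(days=days_back)

print(resolve_last_friday(date(2015, 6, 10)))  # DCT is a Wednesday -> 2015-06-05
print(resolve_last_friday(date(2015, 6, 13)))  # DCT is a Saturday  -> 2015-06-12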
The Landsat 8 spacecraft was launched on 11 February 2013 carrying the Operational Land Imager (OLI) payload for moderate resolution imaging in the visible, near infrared (NIR), and short-wave infrared (SWIR) spectral bands. During the 90-day commissioning period following launch, several on-orbit geometric calibration activities were performed to refine …
We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our …
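To illustrate what constrained decoding means here, the sketch below runs a Viterbi search over per-token label scores while forbidding invalid BIO transitions (an I- tag not preceded by a matching B-/I- tag); the scores are placeholders standing in for BiLSTM outputs, and the paper's full constraint set is not reproduced.

# Constrained Viterbi decoding over BIO-style SRL tags.

def valid(prev, cur):
    """Allow I-X only after B-X or I-X."""
    if cur.startswith("I-"):
        role = cur[2:]
        return prev in (f"B-{role}", f"I-{role}")
    return True

def constrained_viterbi(scores, labels):
    """scores: one dict of label -> score per token. Returns the best valid path."""
    # best[l] = (total score, path) of the best valid prefix ending in label l
    best = {l: (scores[0][l], [l]) for l in labels if valid("O", l)}
    for step in scores[1:]:
        new_best = {}
        for cur in labels:
            options = [(s + step[cur], path + [cur])
                       for prev, (s, path) in best.items() if valid(prev, cur)]
            if options:
                new_best[cur] = max(options)
        best = new_best
    return max(best.values())[1]

labels = ["O", "B-ARG0", "I-ARG0", "B-V"]
scores = [  # one dict of label scores per token
    {"O": 0.1, "B-ARG0": 0.8, "I-ARG0": 0.9, "B-V": 0.0},
    {"O": 0.2, "B-ARG0": 0.1, "I-ARG0": 0.7, "B-V": 0.3},
    {"O": 0.1, "B-ARG0": 0.1, "I-ARG0": 0.1, "B-V": 0.9},
]
# The raw argmax at token 0 would be a bare (invalid) I-ARG0; the constrained
# decoder instead returns ['B-ARG0', 'I-ARG0', 'B-V'].
print(constrained_viterbi(scores, labels))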
We introduce the first global recursive neural parsing model with optimality guarantees during decoding. To support global features, we give up dynamic programming and instead search directly in the space of all possible subtrees. Although this space is exponentially large in the sentence length, we show it is possible to learn an efficient A* parser. We …
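The optimality guarantee rests on the standard A* property: with an admissible heuristic, the first goal state popped from the agenda is optimal. The generic sketch below illustrates that property on a toy graph with a trivially admissible (zero) heuristic; it is not the paper's subtree-scoring parser.

# Generic A* search with an agenda ordered by cost-so-far + heuristic.
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Return (cost, path) of a cheapest path from start to goal."""
    agenda = [(heuristic(start), 0, start, [start])]  # (priority, cost, state, path)
    best_cost = {start: 0}
    while agenda:
        _, cost, state, path = heapq.heappop(agenda)
        if state == goal:
            return cost, path  # first goal popped is optimal (admissible heuristic)
        for nxt, step in neighbors(state):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(agenda, (new_cost + heuristic(nxt), new_cost, nxt, path + [nxt]))
    return None

# Toy weighted graph; the zero heuristic never overestimates remaining cost.
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)], "c": [("d", 1)], "d": []}
print(a_star("a", "d", lambda s: graph[s], lambda s: 0))  # (3, ['a', 'b', 'c', 'd'])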
This paper discusses the pre-launch spectral characterization of the Operational Land Imager (OLI) at the component, assembly, and instrument levels and relates results of those measurements to artifacts observed in the on-orbit imagery. It concludes that the types of artifacts observed and their magnitudes are consistent with the results of the pre-launch …
Events are communicated in natural language with varying degrees of certainty. For example, if you are “hoping for a raise,” it may be somewhat less likely than if you are “expecting” one. To study these distinctions, we present scalable, high-quality annotation schemes for event detection and fine-grained factuality assessment. We find that non-experts, …
The Landsat Data Continuity Mission (LDCM) is being developed by NASA and USGS and is currently planned for launch in January 2013 [1]. Once on-orbit and checked out, it will be operated by USGS and officially named Landsat-8. Two sensors will be on LDCM: the Operational Land Imager (OLI), which has been built and delivered by Ball Aerospace & …