Statistical Script Learning with Multi-Argument Events
TLDR
Experiments on a large corpus, using the task of inferring held-out events (the "narrative cloze" evaluation), demonstrate that modeling multi-argument events improves predictive accuracy.
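The narrative cloze evaluation is easy to state concretely: remove one event from a document's event chain and measure how highly the model ranks it among all candidate events. The sketch below is a generic illustration of the protocol, not the paper's code; `chain`, `vocabulary`, and `score_event` are hypothetical stand-ins for the model under evaluation.

```python
# Minimal sketch of a narrative cloze evaluation (illustrative, not the
# paper's implementation). `score_event` is any trained script model's
# scoring function: score_event(context, candidate) -> float, higher better.

def narrative_cloze_rank(chain, held_out_idx, vocabulary, score_event):
    """Return the rank of the held-out event among all candidates (1 = best)."""
    context = chain[:held_out_idx] + chain[held_out_idx + 1:]
    gold = chain[held_out_idx]
    scores = {cand: score_event(context, cand) for cand in vocabulary}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked.index(gold) + 1

# A script model is then summarized by its average rank (or recall@k)
# over many held-out events drawn from a test corpus.
```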
Learning Statistical Scripts with LSTM Recurrent Neural Networks
TLDR
A Recurrent Neural Network model for statistical script learning is described, using Long Short-Term Memory, an architecture which has been demonstrated to work well on a range of Artificial Intelligence tasks.
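For concreteness, here is a minimal sketch of this kind of model, assuming events are mapped to integer ids and the network predicts the next event at each position. Layer names and sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LSTMScriptModel(nn.Module):
    """Illustrative LSTM script model: embed a sequence of event ids and
    emit next-event logits at every position (sizes are assumptions)."""

    def __init__(self, n_events, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(n_events, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_events)

    def forward(self, event_ids):          # event_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(event_ids))
        return self.out(h)                 # (batch, seq_len, n_events)
```

Training would proceed with cross-entropy between each position's logits and the following event in the sequence.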
Using Sentence-Level LSTM Language Models for Script Inference
TLDR
This work compares systems that operate on structured verb-argument events produced by an NLP pipeline with recent Recurrent Neural Net models that directly operate on raw tokens to predict sentences, finding the latter to be roughly comparable to the former in terms of predicting missing events in documents.
Statistical Script Learning with Recurrent Neural Networks
TLDR
It is demonstrated that incorporating multiple arguments into events, yielding a more complex event representation than is used in previous work, helps to improve a co-occurrence-based script system’s predictive power.
Better Conditional Density Estimation for Neural Networks
TLDR
Two novel approaches to conditional density estimation (CDE) are presented: Multiscale Nets (MSNs) and CDE Trend Filtering, which are compared against plain multinomial classifier networks and mixture density networks (MDNs) on a simulated dataset and three real-world datasets.
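As a rough illustration of the mixture density network baseline mentioned above, the head below maps features to a K-component one-dimensional Gaussian mixture and computes the standard mixture negative log-likelihood. All names and sizes here are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    """Sketch of a mixture density network head: predict mixture weights,
    means, and scales of a K-component Gaussian mixture over a scalar y."""

    def __init__(self, in_dim, n_components=5):
        super().__init__()
        self.logits = nn.Linear(in_dim, n_components)      # mixture weights
        self.means = nn.Linear(in_dim, n_components)
        self.log_sigmas = nn.Linear(in_dim, n_components)  # log std devs

    def nll(self, h, y):
        """Negative log-likelihood of targets y under the predicted mixture."""
        log_pi = torch.log_softmax(self.logits(h), dim=-1)
        comp = torch.distributions.Normal(self.means(h), self.log_sigmas(h).exp())
        log_prob = comp.log_prob(y.unsqueeze(-1))          # per-component density
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()
```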
Identifying Phrasal Verbs Using Many Bilingual Corpora
TLDR
The experimental evaluation demonstrates that combining statistical evidence from many parallel corpora using a novel ranking-oriented boosting algorithm produces a comprehensive set of English phrasal verbs, achieving performance comparable to a human-curated set.
Benchmarking Hierarchical Script Knowledge
TLDR
KidsCook, a parallel script corpus, is introduced, along with a cloze task that matches video captions with missing procedural details; state-of-the-art models struggle at this task, which requires inducing functional commonsense knowledge not explicitly stated in text.
Relational Theories with Null Values and Non-Herbrand Stable Models
TLDR
This paper develops a general method for calculating stable models under the domain closure assumption but without the unique name assumption, and shows that any such theory can be turned into an equivalent logic program, so that models of the theory can be generated using computational methods of answer set programming.
Does BERT Pretrained on Clinical Notes Reveal Sensitive Data?
TLDR
This work designs a battery of approaches intended to recover Personal Health Information (PHI) from a trained BERT, and finds that simple probing methods are not able to meaningfully extract sensitive information from a BERT model trained over the MIMIC-III corpus of EHRs.
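A simple probe in this spirit can be written with the HuggingFace fill-mask pipeline: mask a name slot in a clinical-style prompt and inspect whether the top predictions leak anything specific. The checkpoint below is a public placeholder; the paper itself probes BERT pretrained on MIMIC-III notes, which is not assumed here.

```python
# Minimal fill-in-the-blank probe (illustrative; not the paper's attack code).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")  # placeholder model
prompt = "Mr. [MASK] was admitted with a diagnosis of pneumonia."
for pred in fill(prompt, top_k=5):
    print(f"{pred['token_str']:>12}  p={pred['score']:.3f}")
```

A model that has memorized its pretraining notes would assign unusually high probability to real patient names in slots like this; generic, high-frequency names suggest no meaningful leakage.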
Recurrent Neural Nets and Long Short-Term Memory
Scripts encode knowledge of prototypical sequences of events. We describe a Recurrent Neural Network model for statistical script learning using Long Short-Term Memory, an architecture which has been demonstrated to work well on a range of Artificial Intelligence tasks.