• Publications
Attention Modeling for Targeted Sentiment
TLDR
Results show that by using attention to model the contribution of each word in a sentence with respect to the target, this model gives significantly improved results on two standard benchmarks.
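As an illustration only (a minimal sketch with assumed layer choices and dimensions, not the paper's exact architecture), target-conditioned attention can be realized by scoring each word's encoder state against a pooled target representation and classifying from the attention-weighted sum:

```python
# Illustrative sketch of target-conditioned attention for targeted sentiment.
# All module names and sizes are assumptions, not the published model.
import torch
import torch.nn as nn

class TargetAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.score = nn.Linear(4 * hidden_dim, 1)   # [word state; target state] -> scalar
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, sentence_ids, target_ids):
        # Encode the sentence and the target phrase with the same BiLSTM.
        sent, _ = self.encoder(self.embed(sentence_ids))       # (B, T, 2H)
        tgt, _ = self.encoder(self.embed(target_ids))           # (B, Tt, 2H)
        tgt_vec = tgt.mean(dim=1, keepdim=True)                 # (B, 1, 2H) pooled target
        # Score each word against the target, normalise into attention weights.
        pair = torch.cat([sent, tgt_vec.expand_as(sent)], dim=-1)
        alpha = torch.softmax(self.score(pair).squeeze(-1), dim=-1)   # (B, T)
        # The attention-weighted sentence representation drives the sentiment decision.
        context = (alpha.unsqueeze(-1) * sent).sum(dim=1)       # (B, 2H)
        return self.out(context)
```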
In-Order Transition-based Constituent Parsing
TLDR
A novel parsing system based on in-order traversal over syntactic trees is proposed, with a set of transition actions designed to find a compromise between bottom-up constituent information and top-down lookahead information.
Shift-Reduce Constituent Parsing with Neural Lookahead Features
TLDR
A bidirectional LSTM model is built that leverages full-sentence information to predict the hierarchy of constituents each word starts and ends, giving the highest reported accuracies for fully supervised parsing.
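A rough sketch of the lookahead idea (interface and hyperparameters are assumed, not the published system): a bidirectional LSTM reads the full sentence and, for each word, scores the constituent labels that word could start and end:

```python
# Illustrative sketch of a BiLSTM "lookahead" tagger; names and sizes are assumptions.
import torch.nn as nn

class LookaheadTagger(nn.Module):
    def __init__(self, vocab_size, num_labels, embed_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.start_head = nn.Linear(2 * hidden_dim, num_labels)  # constituents the word starts
        self.end_head = nn.Linear(2 * hidden_dim, num_labels)    # constituents the word ends

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))     # (B, T, 2H), full-sentence context
        return self.start_head(h), self.end_head(h)   # per-word label scores for both views
```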
Evaluating Models’ Local Decision Boundaries via Contrast Sets
TLDR
A more rigorous annotation paradigm for NLP is proposed that helps to close systematic gaps in the test data; it is recommended that dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets.
Discourse Representation Structure Parsing
TLDR
An open-domain neural semantic parser is presented that generates formal meaning representations in the style of Discourse Representation Theory (DRT), together with a structure-aware model that decomposes the decoding process into three stages.
Learning Domain Representation for Multi-Domain Sentiment Classification
TLDR
A descriptor vector is learned to represent each domain and is used to map adversarially trained, domain-general Bi-LSTM input representations into domain-specific representations, significantly outperforming existing methods on multi-domain sentiment analysis.
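A minimal sketch under simplifying assumptions (names and dimensions are illustrative, not the paper's model): a learned per-domain descriptor conditions a projection of shared domain-general features into domain-specific features before classification:

```python
# Illustrative sketch of domain-descriptor conditioning; all details are assumptions.
import torch
import torch.nn as nn

class DomainConditionedClassifier(nn.Module):
    def __init__(self, feat_dim, num_domains, desc_dim=32, num_classes=2):
        super().__init__()
        self.descriptors = nn.Embedding(num_domains, desc_dim)    # one learned descriptor per domain
        self.project = nn.Linear(feat_dim + desc_dim, feat_dim)   # shared -> domain-specific mapping
        self.out = nn.Linear(feat_dim, num_classes)

    def forward(self, shared_feats, domain_ids):
        # shared_feats: (B, feat_dim) pooled domain-general Bi-LSTM features
        d = self.descriptors(domain_ids)                           # (B, desc_dim)
        specific = torch.tanh(self.project(torch.cat([shared_feats, d], dim=-1)))
        return self.out(specific)
```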
Discourse Representation Parsing for Sentences and Documents
TLDR
A neural model equipped with a supervised hierarchical attention mechanism and a linguistically motivated copy strategy is presented that outperforms competitive baselines by a wide margin, providing a general framework for parsing discourse structures of arbitrary length and granularity.
Discourse Representation Structure Parsing with Recurrent Neural Networks and the Transformer Model
TLDR
The systems developed for Discourse Representation Structure (DRS) parsing as part of the IWCS-2019 Shared Task on DRS Parsing are described; the models are implemented with OpenNMT-py, an open-source neural machine translation system built on PyTorch.
Evaluating NLP Models via Contrast Sets
TLDR
A new annotation paradigm for NLP is proposed that helps to close systematic gaps in the test data; it is recommended that, after a dataset is constructed, the dataset authors manually perturb the test instances in small but meaningful ways that change the gold label, creating contrast sets.
Encoder-Decoder Shift-Reduce Syntactic Parsing
TLDR
This work empirically investigates the effectiveness of applying the encoder-decoder network to transition-based parsing, giving results comparable to the stack LSTM parser for dependency parsing and significantly better results than that parser for constituent parsing, which uses bracketed tree formats.