Attention Modeling for Targeted Sentiment
- Yue Zhang, Jiangming Liu
- Computer Science · Conference of the European Chapter of the…
- 1 April 2017
Results show that by using attention to model the contribution of each word in a sentence with respect to the target, this model gives significantly improved results on two standard benchmarks.
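The weighting mechanism this summary describes can be sketched as follows. This is not the paper's exact model (which scores words with learned recurrent states); it is a minimal dot-product-attention sketch in pure Python, and all names and the toy vectors are illustrative:

```python
import math

def target_attention(word_vecs, target_vec):
    """Weight each word vector by its dot-product relevance to the target.

    word_vecs: one vector (list of floats) per word in the sentence.
    target_vec: a vector representing the opinion target.
    Returns the attention-weighted sentence vector and the weights.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    scores = [dot(w, target_vec) for w in word_vecs]
    m = max(scores)                                # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(target_vec)
    sent = [sum(w * vec[i] for w, vec in zip(weights, word_vecs))
            for i in range(dim)]                   # weighted sum of word vectors
    return sent, weights

# Toy 2-dimensional word vectors; the second word aligns most with the target.
words = [[0.1, 0.3], [0.9, -0.2], [0.4, 0.5]]
target = [1.0, 0.0]
sent, attn = target_attention(words, target)
```

The softmax weights sum to one, so the sentence vector stays on the same scale as the word vectors regardless of sentence length.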
Evaluating Models’ Local Decision Boundaries via Contrast Sets
- Matt Gardner, Yoav Artzi, Ben Zhou
- Computer Science · Findings
- 6 April 2020
Proposes a more rigorous annotation paradigm for NLP that helps close systematic gaps in test data, recommending that dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets.
In-Order Transition-based Constituent Parsing
- Jiangming Liu, Yue Zhang
- Computer Science · Transactions of the Association for Computational Linguistics
- 17 July 2017
Proposes a novel parsing system based on in-order traversal over syntactic trees, with a set of transition actions designed as a compromise between bottom-up constituent information and top-down lookahead information.
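The in-order traversal behind this system can be illustrated with a toy oracle that reads off the transition sequence from a gold tree: shift a word, project the constituent label once its first child is complete, then reduce when the constituent closes. A minimal sketch (the action names and tuple tree encoding are illustrative, not the paper's implementation):

```python
def in_order_oracle(tree):
    """Derive an in-order transition sequence from a constituent tree.

    tree is a nested tuple (label, child1, child2, ...); leaves are words.
    Actions: SHIFT a word, PJ project a nonterminal after its first child
    is complete, REDUCE close the current constituent.
    """
    actions = []

    def walk(node):
        if isinstance(node, str):           # leaf: shift the word
            actions.append(("SHIFT", node))
            return
        label, first, *rest = node
        walk(first)                          # finish the first child...
        actions.append(("PJ", label))        # ...then project the label
        for child in rest:                   # remaining children follow
            walk(child)
        actions.append(("REDUCE",))          # close the constituent
    walk(tree)
    return actions

tree = ("S", ("NP", "John"), ("VP", "loves", ("NP", "Mary")))
seq = in_order_oracle(tree)
```

For this tree the oracle interleaves shifts and projections (`SHIFT John`, `PJ NP`, `REDUCE`, `PJ S`, `SHIFT loves`, …), which is exactly the "compromise" the summary mentions: each label is predicted after some bottom-up evidence (the first child) but before the rest of the constituent is read.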
Shift-Reduce Constituent Parsing with Neural Lookahead Features
- Jiangming Liu, Yue Zhang
- Computer Science · Transactions of the Association for Computational Linguistics
- 2 December 2016
A bidirectional LSTM model is built that leverages full-sentence information to predict, for each word, the hierarchy of constituents it starts and ends, giving the highest reported accuracies for fully supervised parsing.
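The supervision target described here, which constituents each word starts and ends, can be extracted from a bracketed tree. A small sketch of that extraction (the tuple tree encoding and function names are illustrative; the paper's model predicts these labels with a bidirectional LSTM rather than reading them from gold trees at test time):

```python
def spans(tree, start=0):
    """Collect (label, begin, end) spans from a nested-tuple tree; leaves are words."""
    if isinstance(tree, str):                # a leaf covers one position
        return start + 1, []
    label, *children = tree
    pos, collected = start, []
    for child in children:
        pos, sub = spans(child, pos)
        collected.extend(sub)
    collected.append((label, start, pos))
    return pos, collected

def lookahead_labels(tree, n_words):
    """For each word, the constituents it starts and the ones it ends."""
    _, sp = spans(tree)
    starts = [[] for _ in range(n_words)]
    ends = [[] for _ in range(n_words)]
    for label, b, e in sp:
        starts[b].append(label)              # word b opens this constituent
        ends[e - 1].append(label)            # word e-1 closes it
    return starts, ends

tree = ("S", ("NP", "John"), ("VP", "loves", ("NP", "Mary")))
starts, ends = lookahead_labels(tree, 3)
```

Here "John" starts both an NP and the S, while "Mary" ends the NP, VP, and S, which is the per-word hierarchy the lookahead features encode.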
Discourse Representation Structure Parsing
- Jiangming Liu, Shay B. Cohen, Mirella Lapata
- Computer Science · Annual Meeting of the Association for…
- 1 July 2018
An open-domain neural semantic parser that generates formal meaning representations in the style of Discourse Representation Theory (DRT), with a structure-aware model that decomposes the decoding process into three stages.
Discourse Representation Parsing for Sentences and Documents
- Jiangming Liu, Shay B. Cohen, Mirella Lapata
- Computer Science · Annual Meeting of the Association for…
- 1 July 2019
A neural model equipped with a supervised hierarchical attention mechanism and a linguistically motivated copy strategy is presented; it outperforms competitive baselines by a wide margin and offers a general framework for parsing discourse structures of arbitrary length and granularity.
Evaluating NLP Models via Contrast Sets
- Matt Gardner, Yoav Artzi, Ben Zhou
- Computer Science · ArXiv
- 6 April 2020
A new annotation paradigm for NLP is proposed that helps to close systematic gaps in the test data, and it is recommended that after a dataset is constructed, the dataset authors manually perturb the test instances in small but meaningful ways that change the gold label, creating contrast sets.
Learning Domain Representation for Multi-Domain Sentiment Classification
- Qi Liu, Yue Zhang, Jiangming Liu
- Computer Science · North American Chapter of the Association for…
- 1 June 2018
A descriptor vector is learned to represent each domain and is used to map adversarially trained, domain-general Bi-LSTM input representations into domain-specific representations; the approach significantly outperforms existing methods on multi-domain sentiment analysis.
Discourse Representation Structure Parsing with Recurrent Neural Networks and the Transformer Model
- Jiangming Liu, Shay B. Cohen, Mirella Lapata
- Computer Science · Proceedings of the IWCS Shared Task on Semantic…
- 23 May 2019
Describes the systems developed for Discourse Representation Structure (DRS) parsing as part of the IWCS-2019 Shared Task on DRS Parsing; the models are implemented with OpenNMT-py, an open-source neural machine translation system built on PyTorch.
Encoder-Decoder Shift-Reduce Syntactic Parsing
- Jiangming Liu, Yue Zhang
- Computer Science · International Workshop/Conference on Parsing…
- 24 June 2017
This work empirically investigates the effectiveness of applying the encoder-decoder network to transition-based parsing, giving results comparable to the stack-LSTM parser for dependency parsing and significantly better results than that parser for constituent parsing, which uses bracketed tree formats.
...