Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations

@article{Kiperwasser2016SimpleAA,
  title={Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations},
  author={E. Kiperwasser and Yoav Goldberg},
  journal={Transactions of the Association for Computational Linguistics},
  year={2016},
  volume={4},
  pages={313-327}
}
We present a simple and effective scheme for dependency parsing which is based on bidirectional LSTMs (BiLSTMs). The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on…
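As a concrete sketch of the scheme, the following minimal PyTorch illustration (the class name, dimensions, and hyperparameters are assumptions for exposition, not the authors' implementation) runs a BiLSTM over the sentence, concatenates the contextual vectors at a few parser-state positions, and scores the possible parser decisions with an MLP, so the parsing loss trains the BiLSTM end to end:

    import torch
    import torch.nn as nn

    class BiLSTMParserFeatures(nn.Module):
        # Jointly trained BiLSTM feature extractor plus MLP decision scorer.
        def __init__(self, vocab_size, emb_dim=100, lstm_dim=125,
                     n_positions=4, n_decisions=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.bilstm = nn.LSTM(emb_dim, lstm_dim, num_layers=2,
                                  bidirectional=True, batch_first=True)
            # MLP over the concatenation of n_positions contextual vectors,
            # e.g. the top three stack items and the first buffer item.
            self.mlp = nn.Sequential(
                nn.Linear(n_positions * 2 * lstm_dim, 100),
                nn.Tanh(),
                nn.Linear(100, n_decisions))

        def forward(self, word_ids, positions):
            # word_ids: (1, n) token ids; positions: list of token indices
            # that define the current parser state.
            ctx, _ = self.bilstm(self.embed(word_ids))   # (1, n, 2*lstm_dim)
            feats = ctx[0, positions].reshape(1, -1)
            return self.mlp(feats)                       # decision scores

Because the loss on these scores backpropagates into the BiLSTM, the contextual vectors become parsing-specific feature extractors rather than fixed encodings.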

Incremental Parsing with Minimal Features Using Bi-Directional LSTM

TLDR
This work uses bi-directional LSTM sentence representations to model a parser state with only three sentence positions, which automatically identifies important aspects of the entire sentence, and achieves state-of-the-art results among greedy dependency parsers for English.

A Fast and Lightweight System for Multilingual Dependency Parsing

TLDR
This work presents a transition-based projective parser with a bidirectional LSTM (BiLSTM) feature extractor and a multi-layer perceptron (MLP) classifier that is fast, lightweight and effective on big treebanks.

A Simple and Effective Dependency Parser for Telugu

TLDR
A simple and effective dependency parser for Telugu, a morphologically rich, free-word-order language, using contextual vector representations from Telugu Wikipedia data and a BERT model to train the parser.

A Simple LSTM model for Transition-based Dependency Parsing

TLDR
A new initialization method is proposed that uses the pre-trained weights from a feed-forward neural network to initialize the LSTM-based model, and it is shown that applying dropout to the input layer has a positive effect on performance.

Using BiLSTM in Dependency Parsing for Vietnamese

TLDR
The use of bidirectional long short-term memory network models for both transition-based and graph-based dependency parsing of Vietnamese is investigated, and their contribution to building a Vietnamese dependency treebank whose tagset conforms to the Universal Dependencies schema is reported.

Effective Representation for Easy-First Dependency Parsing

TLDR
This work introduces a bottom-up subtree encoding method based on the child-sum tree-LSTM and shows that this effective subtree encoder promotes the parsing process, enabling a greedy-search easy-first parser to achieve promising results on benchmark treebanks compared to state-of-the-art baselines.
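For reference, the child-sum tree-LSTM underlying this encoder composes a node representation from the node's input vector and the summed hidden states of its children, with one forget gate per child (the standard formulation of Tai et al., 2015). A minimal PyTorch cell, with illustrative names and dimensions:

    import torch
    import torch.nn as nn

    class ChildSumTreeLSTMCell(nn.Module):
        # One bottom-up composition step of a child-sum tree-LSTM.
        def __init__(self, in_dim, mem_dim):
            super().__init__()
            self.iou_x = nn.Linear(in_dim, 3 * mem_dim)
            self.iou_h = nn.Linear(mem_dim, 3 * mem_dim, bias=False)
            self.f_x = nn.Linear(in_dim, mem_dim)
            self.f_h = nn.Linear(mem_dim, mem_dim, bias=False)

        def forward(self, x, child_h, child_c):
            # x: (in_dim,); child_h, child_c: (n_children, mem_dim).
            h_sum = child_h.sum(dim=0)
            i, o, u = torch.chunk(self.iou_x(x) + self.iou_h(h_sum), 3)
            i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
            # A separate forget gate per child lets the cell down-weight
            # individual subtrees independently.
            f = torch.sigmoid(self.f_x(x) + self.f_h(child_h))
            c = i * u + (f * child_c).sum(dim=0)
            return o * torch.tanh(c), c   # node hidden state, cell state

Applied bottom-up over a dependency tree, the cell yields one vector per subtree that can be fed to the parser as a feature.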

A Novel Neural Network Model for Joint POS Tagging and Graph-based Dependency Parsing

TLDR
A novel neural network model that learns POS tagging and graph-based dependency parsing jointly and outperforms the state-of-the-art neural network-based Stack-propagation model for joint POS tagging and transition-based dependency parsing, resulting in a new state-of-the-art model.

Recursive LSTM Tree Representation for Arc-Standard Transition-Based Dependency Parsing

  • Mohab Elkaref, Bernd Bohnet
  • Computer Science
    Proceedings of the Third Workshop on Universal Dependencies (UDW, SyntaxFest 2019)
  • 2019
TLDR
The dense vectors produced by Recursive LSTM Trees replace the need for structural features, serving as feature vectors for a greedy arc-standard transition-based dependency parser while also incorporating useful information from the BiLSTM contextualized representation.

Dependency Parsing as Head Selection

TLDR
The model, which is called DENSE (as shorthand for Dependency Neural Selection), produces a distribution over possible heads for each word using features obtained from a bidirectional recurrent neural network.
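Head selection reduces parsing to per-word classification: for each word, every other position (plus ROOT) is scored as a potential head and the scores are normalized. A minimal PyTorch sketch (the additive scoring form and all names below are illustrative assumptions, not necessarily DENSE's exact parameterization):

    import torch
    import torch.nn as nn

    class HeadSelector(nn.Module):
        # Scores every candidate head j for every dependent i from BiLSTM states.
        def __init__(self, lstm_dim=125, hidden=100):
            super().__init__()
            self.head = nn.Linear(2 * lstm_dim, hidden, bias=False)
            self.dep = nn.Linear(2 * lstm_dim, hidden, bias=False)
            self.v = nn.Linear(hidden, 1, bias=False)

        def forward(self, ctx):
            # ctx: (n, 2*lstm_dim) BiLSTM vectors; index 0 plays the ROOT.
            # score[i, j] = v . tanh(head(h_j) + dep(h_i))
            scores = self.v(torch.tanh(self.head(ctx).unsqueeze(0)
                                       + self.dep(ctx).unsqueeze(1)))
            # Log-distribution over candidate heads, one row per word.
            return torch.log_softmax(scores.squeeze(-1), dim=1)

Note that taking the argmax head for each word independently does not by itself guarantee a well-formed tree, so greedy predictions may need post-processing into a valid tree.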

LD-Parser: Leaf Detection Based Dependency Parsing Using BiLSTM and Attention Mechanism

TLDR
A new method named Leaf Detection based Dependency Parsing (LD-Parser) is presented: a bottom-up framework that detects the leaf nodes of the dependency tree, with an attention mechanism that forms an attention-weighted sum over a node's children as an extra feature.
...

References

SHOWING 1-10 OF 67 REFERENCES

Incremental Parsing with Minimal Features Using Bi-Directional LSTM

TLDR
This work uses bi-directional LSTM sentence representations to model a parser state with only three sentence positions, which automatically identifies important aspects of the entire sentence, and achieves state-of-the-art results among greedy dependency parsers for English.

Easy-First Dependency Parsing with Hierarchical Tree LSTMs

TLDR
A compositional vector representation of parse trees that relies on a recursive combination of recurrent neural network encoders is suggested, achieving very strong accuracies for English and Chinese without relying on external word embeddings.

A Re-ranking Model for Dependency Parser with Recursive Convolutional Neural Network

TLDR
This work proposes a recursive convolutional neural network (RCNN) architecture that captures syntactic and compositional-semantic representations of phrases and words in a dependency tree, improving state-of-the-art dependency parsing on both English and Chinese datasets.

An Effective Neural Network Model for Graph-based Dependency Parsing

TLDR
This paper proposes a general and effective neural network model for graph-based dependency parsing that can automatically learn high-order feature combinations using only atomic features, by exploiting a novel activation function, tanh-cube.
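The tanh-cube activation mentioned here is commonly written as f(x) = tanh(x^3 + x); the cubic term lets a single layer model multiplicative, three-way feature interactions that would otherwise require hand-crafted feature combinations. A one-line sketch, treating this exact form as an assumption:

    import torch

    def tanh_cube(x: torch.Tensor) -> torch.Tensor:
        # The cubic term captures high-order feature combinations directly.
        return torch.tanh(x ** 3 + x)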

Transition-based Dependency Parsing with Rich Non-local Features

TLDR
This paper shows that the accuracy of transition-based dependency parsers can be improved by considering even richer feature sets than those employed in previous systems, improving accuracy in the standard Penn Treebank setup and rivaling the best results overall.

A Fast and Accurate Dependency Parser using Neural Networks

TLDR
This work proposes a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser that can work very fast, while achieving about a 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets.

Simple Semi-supervised Dependency Parsing

TLDR
This work focuses on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus, and shows that the cluster-based features yield substantial gains in performance across a wide range of conditions.

Grammar as a Foreign Language

TLDR
The domain-agnostic, attention-enhanced sequence-to-sequence model achieves state-of-the-art results on the most widely used syntactic constituency parsing dataset when trained on a large synthetic corpus that was annotated using existing parsers.

A Tale of Two Parsers: Investigating and Combining Graph-based and Transition-based Dependency Parsing

TLDR
A beam-search-based parser that combines both graph-based and transition-based parsing into a single system for training and decoding is proposed, showing that it outperforms both the pure graph-based and the pure transition-based parsers.

Dependency Parsing

TLDR
This book surveys the three major classes of parsing models that are in current use: transition-based, graph-based, and grammar-based models, and gives a thorough introduction to the methods that are most widely used today.
...