Improving Tree-LSTM with Tree Attention

@article{Ahmed2019ImprovingTW,
  title={Improving Tree-LSTM with Tree Attention},
  author={Mahtab Ahmed and Muhammad Rifayat Samee and Robert E. Mercer},
  journal={2019 IEEE 13th International Conference on Semantic Computing (ICSC)},
  year={2019},
  pages={247-254}
}
In Natural Language Processing (NLP), we often need to extract information from tree topology. [...] Key Result: We evaluated our models on a semantic relatedness task and achieved notable results compared to Tree-LSTM-based methods with no attention as well as other neural and non-neural methods, and good results compared to Tree-LSTM-based methods with attention.
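The snippet above does not spell out how the tree attention is computed, so the following is only a rough NumPy sketch of one common way to attend over a node's child hidden states before composing them; the parameter names (W_a, v_a) and the scoring form are assumptions, not the paper's exact formulation.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_over_children(child_h, W_a, v_a):
    # child_h : (k, d) hidden states of a node's k children
    # W_a     : (d, d) attention projection (hypothetical parameter)
    # v_a     : (d,)   attention scoring vector (hypothetical parameter)
    scores = np.tanh(child_h @ W_a) @ v_a   # one score per child
    alpha = softmax(scores)                 # attention weights over the children
    return alpha @ child_h                  # weighted sum replaces the plain child sum

# tiny usage example with random parameters
d, k = 4, 3
rng = np.random.default_rng(0)
h_tilde = attend_over_children(rng.normal(size=(k, d)),
                               rng.normal(size=(d, d)),
                               rng.normal(size=(d,)))
print(h_tilde.shape)  # (4,)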
The sentiment analysis model with multi-head self-attention and Tree-LSTM
  • Lei Li, Yijian Pei, Chenyang Jin
  • Engineering
  • International Workshop on Pattern Recognition
  • 2021
In natural language processing tasks, we need to extract information from the tree topology. Sentence structure can be captured by a dependency tree or a constituency tree structure [...]
Neural Machine Translation with Attention Based on a New Syntactic Branch Distance
TLDR
An attention mechanism based on a new syntactic branch distance is proposed, which simultaneously pays attention to words with similar linear index distances and to syntax-related words, and which outperforms a recent baseline method on the English-to-German translation task.
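The TLDR above does not give the scoring function, so the sketch below is only a hypothetical illustration of the general idea: biasing attention weights by a mixture of linear index distance and a tree-based (branch) distance. The function name, the linear mixture, and the parameter lam are assumptions.

import numpy as np

def biased_attention(scores, linear_dist, branch_dist, lam=0.5):
    # scores      : (n,) raw alignment scores for the n source words
    # linear_dist : (n,) absolute linear index distances to the current position
    # branch_dist : (n,) distances between words in the syntax tree (assumed given)
    # lam         : hypothetical mixing weight between the two distances
    combined = lam * linear_dist + (1.0 - lam) * branch_dist
    biased = scores - combined          # closer words, in either sense, score higher
    e = np.exp(biased - biased.max())
    return e / e.sum()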
SA-NLI: A Supervised Attention based framework for Natural Language Inference
TLDR
A Supervised Attention based Natural Language Inference (SA-NLI) framework is proposed, which is trained to focus on syntactically related tokens, while the inter-attention module is constrained to capture alignment between sentences.
Structured Context and High-Coverage Grammar for Conversational Question Answering over Knowledge Graphs
TLDR
A new Logical Form (LF) grammar is introduced that can model a wide range of queries on the graph while remaining sufficiently simple to generate supervision data efficiently, and this Transformer-based model takes a JSON-like structure as input, allowing it to easily incorporate both Knowledge Graph and conversational contexts.
Make It Directly: Event Extraction Based on Tree-LSTM and Bi-GRU
TLDR
A novel event extraction model is proposed, which is built upon a Tree-LSTM network and a Bi-GRU network, carries syntactically related information, and achieves competitive results compared to previous works.
Structured Prediction in NLP - A survey
TLDR
A brief overview of major techniques in structured prediction and its applications in NLP domains such as parsing, sequence labeling, text generation, and sequence-to-sequence tasks is provided.
Learning Clause Representation from Dependency-Anchor Graph for Connective Prediction
TLDR
A novel clause embedding method is proposed that applies graph learning to a data structure the authors refer to as a dependency-anchor graph; it demonstrates a significant improvement over tree-based models, confirming the importance of emphasizing the subject and verb phrase.
Contrasting distinct structured views to learn sentence embeddings
TLDR
A self-supervised method is proposed that builds sentence embeddings from the combination of diverse explicit syntactic structures of a sentence, along with an original contrastive multi-view framework that induces an explicit interaction between models during the training phase.
Corpus-Based Paraphrase Detection Experiments and Review
TLDR
A performance overview of various types of corpus-based models, especially deep learning (DL) models, on the task of paraphrase detection shows that DL models are very competitive with traditional state-of-the-art approaches and have potential that should be developed further.
Investigating Relational Recurrent Neural Networks with Variable Length Memory Pointer
TLDR
A novel Relational Memory Core (RMC) is encoded as the cell state inside an LSTM cell using the standard multi-head self-attention mechanism with a variable-length memory pointer; the resulting model is called \(\text{LSTM}_{\textit{RMC}}\).

References

Showing 1-10 of 29 references
An Attention-Based Syntax-Tree and Tree-LSTM Model for Sentence Summarization
TLDR
This work proposes an attention-based Tree-LSTM model for sentence summarization, which utilizes an attention-based syntactic structure as auxiliary information and achieves state-of-the-art results on the DUC-2004 shared task.
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
TLDR
The Tree-LSTM is introduced, a generalization of LSTMs to tree-structured network topologies, which outperforms all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences and sentiment classification.
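For readers unfamiliar with the Child-Sum Tree-LSTM, the following NumPy sketch reproduces its node update (sum the children's hidden states, then gate with a separate forget gate per child); the parameter-dictionary layout and names are illustrative conventions rather than the paper's notation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def child_sum_tree_lstm(x, child_h, child_c, P):
    # x                : (m,) input at this node
    # child_h, child_c : (k, d) hidden and cell states of the k children
    # P                : dict of parameters W_* (d, m), U_* (d, d), b_* (d,)
    h_sum = child_h.sum(axis=0)                                  # summed child hidden states
    i = sigmoid(P['W_i'] @ x + P['U_i'] @ h_sum + P['b_i'])      # input gate
    o = sigmoid(P['W_o'] @ x + P['U_o'] @ h_sum + P['b_o'])      # output gate
    u = np.tanh(P['W_u'] @ x + P['U_u'] @ h_sum + P['b_u'])      # candidate update
    # one forget gate per child, computed from that child's own hidden state
    f = sigmoid(child_h @ P['U_f'].T + P['W_f'] @ x + P['b_f'])  # (k, d)
    c = i * u + (f * child_c).sum(axis=0)                        # new cell state
    h = o * np.tanh(c)                                           # new hidden state
    return h, c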
Modelling Sentence Pairs with Tree-structured Attentive Encoder
TLDR
An attentive encoder is proposed that combines tree-structured recursive neural networks and sequential recurrent neural networks for modelling sentence pairs; it uses the representation of one sentence, generated via an RNN, to guide the structural encoding of the other sentence on its dependency parse tree.
Sentence-State LSTM for Text Representation
TLDR
This work investigates an alternative LSTM structure for encoding text, which consists of a parallel state for each word, and shows that the proposed model has strong representation power, giving highly competitive performances compared to stacked BiLSTM models with similar parameter numbers.
Enhanced LSTM for Natural Language Inference
TLDR
A new state-of-the-art result is presented, achieving an accuracy of 88.6% on the Stanford Natural Language Inference dataset, and it is demonstrated that carefully designing sequential inference models based on chain LSTMs can outperform all previous models.
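As a minimal sketch of the enhanced local inference idea used in this line of work: soft-align the two encoded sentences and concatenate each token with its aligned summary, their difference, and their element-wise product. The function below assumes plain NumPy arrays and omits the surrounding BiLSTM encoders and the pooling/classification layers.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def enhanced_local_inference(a, b):
    # a : (la, d) encoded premise tokens (e.g. BiLSTM outputs)
    # b : (lb, d) encoded hypothesis tokens
    e = a @ b.T                       # (la, lb) soft alignment scores
    a_tilde = softmax(e, axis=1) @ b  # hypothesis summary for each premise token
    # enhanced premise representation [a; a~; a - a~; a * a~], shape (la, 4d)
    return np.concatenate([a, a_tilde, a - a_tilde, a * a_tilde], axis=1)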
A Decomposable Attention Model for Natural Language Inference
We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable.
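A schematic of the attend / compare / aggregate decomposition described above, with the feed-forward networks passed in as opaque callables (F, G, H); the alignment and pooling steps follow the standard formulation, but the helper names and the omission of optional intra-sentence attention are simplifications.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decomposable_attention(a, b, F, G, H):
    # a, b    : (la, d), (lb, d) embedded sentences
    # F, G, H : callables standing in for the feed-forward networks
    # Attend: soft-align each token of one sentence to the other
    e = F(a) @ F(b).T                 # (la, lb) alignment scores
    beta = softmax(e, axis=1) @ b     # aligned b for each token of a
    alpha = softmax(e, axis=0).T @ a  # aligned a for each token of b
    # Compare: process each token together with its aligned counterpart
    v1 = G(np.concatenate([a, beta], axis=1))
    v2 = G(np.concatenate([b, alpha], axis=1))
    # Aggregate: sum over tokens and classify
    return H(np.concatenate([v1.sum(axis=0), v2.sum(axis=0)]))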
Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval
  • H. Palangi, L. Deng, +5 authors R. Ward
  • Computer Science
  • IEEE/ACM Transactions on Audio, Speech, and Language Processing
  • 2016
TLDR
A model that addresses sentence embedding, a hot topic in current natural language processing research, is developed using recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) cells, and is shown to significantly outperform several existing state-of-the-art methods.
Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks
TLDR
This work proposes a model for comparing sentences that uses a multiplicity of perspectives; it first models each sentence using a convolutional neural network that extracts features at multiple levels of granularity and uses multiple types of pooling.
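As an illustration of "multiple types of pooling" over feature maps at several granularities, the following toy helper concatenates max, min, and mean pooling; it is a sketch of the general idea, not the paper's exact per-filter-width pooling scheme.

import numpy as np

def multi_pool(feature_maps):
    # feature_maps : list of (length_i, d_i) arrays, one per filter width / granularity
    pooled = []
    for fm in feature_maps:
        pooled += [fm.max(axis=0), fm.min(axis=0), fm.mean(axis=0)]
    return np.concatenate(pooled)   # fixed-size sentence representation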
Effective Approaches to Attention-based Neural Machine Translation
TLDR
A global approach which always attends to all source words and a local one that only looks at a subset of source words at a time are examined, demonstrating the effectiveness of both approaches on the WMT translation tasks between English and German in both directions.
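A minimal sketch of the global variant with the simple dot score: attend over every encoder state, form a context vector, and combine it with the decoder state. W_c is a hypothetical output projection; the local variant would restrict source_h to a window around a predicted position instead of using all source positions.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(h_t, source_h, W_c):
    # h_t      : (d,)   current decoder hidden state
    # source_h : (n, d) all encoder hidden states (global: attend to every source word)
    # W_c      : (d, 2d) output projection (hypothetical parameter)
    scores = source_h @ h_t              # dot score for every source position
    a_t = softmax(scores)                # alignment weights
    c_t = a_t @ source_h                 # context vector
    return np.tanh(W_c @ np.concatenate([c_t, h_t]))  # attentional hidden state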
A Fast and Accurate Dependency Parser using Neural Networks
TLDR
This work proposes a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser that can work very fast, while achieving about a 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets.
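The greedy, transition-based loop this parser relies on can be sketched as below; the classifier (in the paper, a small feed-forward network over embedded stack/buffer features) is passed in as an opaque callable, and the arc-standard transition names and preconditions here are a simplification.

def greedy_parse(words, classify):
    # words    : list of tokens for one sentence (indexed 1..n; 0 is the ROOT)
    # classify : callable mapping a (stack, buffer, arcs) configuration to one of
    #            'SHIFT', 'LEFT-ARC', 'RIGHT-ARC'
    stack, buffer, arcs = [0], list(range(1, len(words) + 1)), []
    while buffer or len(stack) > 1:
        action = classify(stack, buffer, arcs)
        if action == 'SHIFT' and buffer:
            stack.append(buffer.pop(0))
        elif action == 'LEFT-ARC' and len(stack) > 1:
            dep = stack.pop(-2)              # second-from-top depends on top
            arcs.append((stack[-1], dep))
        elif action == 'RIGHT-ARC' and len(stack) > 1:
            dep = stack.pop()                # top depends on second-from-top
            arcs.append((stack[-1], dep))
        else:
            break                            # guard against an invalid prediction
    return arcs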