“Killing Me” Is Not a Spoiler: Spoiler Detection Model using Graph Neural Networks with Dependency Relation-Aware Attention Mechanism

Buru Chang, Inggeol Lee, Hyunjae Kim, Jaewoo Kang
Several machine learning-based spoiler detection models have recently been proposed to protect users from spoilers on review websites. Although dependency relations between context words are important for detecting spoilers, existing attention-based spoiler detection models make little use of these relations. To address this problem, we propose a new spoiler detection model, SDGNN, based on syntax-aware graph neural networks. In the experiments on two real-world…
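The abstract's core idea — message passing over a sentence's dependency tree, with attention scores biased by the dependency relation on each edge — can be sketched in a few lines of numpy. This is a minimal illustrative toy, not the authors' SDGNN implementation: all names, dimensions, and the scalar relation bias are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_gcn_layer(H, A, rel_bias, W):
    """One message-passing step over a dependency tree.

    H:        (n, d) word representations
    A:        (n, n) adjacency matrix of the dependency tree (self-loops included)
    rel_bias: (n, n) scalar bias per edge, derived from its dependency relation
              label (a simplification of a learned relation embedding)
    W:        (d, d) learned projection
    """
    # Content-based attention score plus a relation-dependent bias per edge.
    scores = (H @ H.T) / np.sqrt(H.shape[1]) + rel_bias
    # Mask out non-edges so words attend only along dependency arcs.
    scores = np.where(A > 0, scores, -1e9)
    att = softmax(scores, axis=-1)
    # Aggregate neighbor representations and project.
    return np.tanh(att @ H @ W)

# Toy sentence of 4 tokens with a chain-shaped dependency tree.
n, d = 4, 8
H = rng.standard_normal((n, d))
A = np.eye(n)
for i, j in [(0, 1), (1, 2), (2, 3)]:  # toy head-dependent edges
    A[i, j] = A[j, i] = 1
rel_bias = rng.standard_normal((n, n)) * 0.1  # stand-in for relation embeddings
W = rng.standard_normal((d, d)) * 0.1

H_out = relation_aware_gcn_layer(H, A, rel_bias, W)
print(H_out.shape)  # one layer preserves the (n, d) shape
```

The masking step is what makes the layer syntax-aware: unlike plain self-attention over all token pairs, information flows only along dependency arcs, and the per-edge bias lets different relation types (e.g. a figurative object like "killing me") contribute differently.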

A Deep Neural Spoiler Detection Model Using a Genre-Aware Attention Mechanism
A new deep neural spoiler detection model is proposed that uses a genre-aware attention mechanism, exploiting genre information to detect spoilers, which vary by genre.
Graph Convolution over Pruned Dependency Trees Improves Relation Extraction
An extension of graph convolutional networks tailored for relation extraction is proposed, which pools information over arbitrary dependency structures efficiently in parallel; a novel pruning strategy is applied to the input trees, keeping only the words immediately around the shortest path between the two entities among which a relation might hold.
Spoiler detection in TV program tweets
Graph Convolutional Networks With Argument-Aware Pooling for Event Detection
This work investigates a convolutional neural network over dependency trees for event detection and proposes a novel pooling method that relies on entity mentions to aggregate the convolution vectors.
Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling
A version of graph convolutional networks (GCNs), a recent class of neural networks operating on graphs, is proposed that is suited to modeling syntactic dependency graphs; the authors observe that GCN layers are complementary to LSTM ones.
Convolutional Neural Networks for Sentence Classification
The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, including sentiment analysis and question classification, and allow the use of both task-specific and static vectors.
GloVe: Global Vectors for Word Representation
A new global log-bilinear regression model is proposed that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
Hierarchical Attention Networks for Document Classification
Experiments conducted on six large-scale text classification tasks demonstrate that the proposed architecture outperforms previous methods by a substantial margin.
The Stanford CoreNLP Natural Language Processing Toolkit
The design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis, is described; its wide adoption is attributed to a simple, approachable design, straightforward interfaces, the inclusion of robust, high-quality analysis components, and freedom from a large amount of associated baggage.
Long Short-Term Memory
A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1,000 discrete time steps by enforcing constant error flow through constant error carousels within special units.