Publications
Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems
TLDR
This work investigates the impact of visual adversarial attacks on current NLP systems across character-, word-, and sentence-level tasks, showing that both neural and non-neural models are, in contrast to humans, extremely sensitive to such attacks, suffering performance decreases of up to 82%.
Generating Coherent Summaries of Scientific Articles Using Coherence Patterns
TLDR
This work introduces a graph-based approach for summarizing scientific articles using coherence patterns in a corpus of abstracts, and proposes a method to combine coherence, importance, and non-redundancy to generate the summary.
A Neural Local Coherence Model for Text Quality Assessment
We propose a local coherence model that captures the flow of what semantically connects adjacent sentences in a text. We represent the semantics of a sentence by a vector and capture its state at …
Graph-based Coherence Modeling For Assessing Readability
TLDR
Novel graph-based coherence features based on frequent subgraphs are introduced; they outperform Pitler and Nenkova (2008) on the readability ranking task by more than 5% accuracy, thus establishing a new state of the art on this dataset.
Lexical Coherence Graph Modeling Using Word Embeddings
TLDR
The lexical coherence graph (LCG), a new graph-based model to represent lexical relations among sentences, is introduced, and Kneser-Ney smoothing is adapted to smooth subgraphs’ frequencies, which improves performance.
Dialogue Coherence Assessment Without Explicit Dialogue Act Labels
TLDR
This work uses dialogue act prediction as an auxiliary task in a multi-task learning scenario to obtain informative utterance representations for coherence assessment, and alleviates the need for explicit dialogue act labels during evaluation.
Reward Learning for Efficient Reinforcement Learning in Extractive Document Summarisation
TLDR
RELIS is proposed, a novel RL paradigm that learns a reward function with Learning-to-Rank (L2R) algorithms at training time and uses this reward function to train an input-specific RL policy at test time. It is proved that RELIS is guaranteed to generate near-optimal summaries given appropriate L2R and RL algorithms.
Normalized Entity Graph for Computing Local Coherence
TLDR
A computationally efficient normalization method is proposed and evaluated on three tasks: sentence ordering, summary coherence rating, and readability assessment; normalization improves the results in all three tasks.
Using a Graph-based Coherence Model in Document-Level Machine Translation
TLDR
The graph-based coherence model proposed by Mesgar and Strube (2016) is integrated with Docent, a document-level machine translation system; results show that it slightly improves translation quality in terms of the average Meteor score.
A Neural Model for Dialogue Coherence Assessment
TLDR
This paper proposes a novel dialogue coherence model trained in a hierarchical multi-task learning scenario, where coherence assessment is the primary, high-level task and dialogue act prediction is the auxiliary, low-level task.