Publications
Does It Make Sense? And Why? A Pilot Study for Sense Making and Explanation
tl;dr
In this paper, we release a benchmark to directly test whether a system can differentiate natural language statements that make sense from those that do not.
Design Challenges and Misconceptions in Neural Sequence Labeling
tl;dr
We investigate the design challenges of constructing effective and efficient neural sequence labeling systems by reproducing neural models that cover most state-of-the-art structures and conducting a systematic model comparison on three benchmarks (i.e., NER, chunking, and POS tagging).
Subword Encoding in Lattice LSTM for Chinese Word Segmentation
tl;dr
We investigate a lattice LSTM network for Chinese word segmentation (CWS) that utilizes word or subword information.
SemEval-2020 Task 4: Commonsense Validation and Explanation
tl;dr
We present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and can provide the reasons.
Who Blames Whom in a Crisis? Detecting Blame Ties from News Articles Using Neural Networks
tl;dr
Blame games tend to follow major disruptions, be they financial crises, natural disasters, or terrorist attacks.