Publications
Social Bias Frames: Reasoning about Social and Power Implications of Language
TLDR
It is found that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias, they are not effective at spelling out more detailed explanations in terms of Social Bias Frames.
A Stacking Gated Neural Architecture for Implicit Discourse Relation Classification
TLDR
A stacking neural network model is proposed for the classification problem, in which a convolutional neural network is used for sentence modeling and a collaborative gated neural network (CGNN) for feature transformation.
Automatic Article Commenting: the Task and Dataset
TLDR
A large-scale Chinese dataset with millions of real comments and a human-annotated subset characterizing the comments’ varying quality is introduced, and automatic metrics that generalize a broad set of popular reference-based metrics and exhibit greatly improved correlations with human evaluations are developed.
Adversarial Connective-exploiting Networks for Implicit Discourse Relation Classification
TLDR
This work develops an adversarial model to enable an adaptive imitation scheme through competition between the implicit network and a rival feature discriminator, and achieves state-of-the-art performance on the PDTB benchmark.
Counterfactual Story Reasoning and Generation
TLDR
This paper proposes Counterfactual Story Rewriting: given an original story and an intervening counterfactual event, the task is to minimally revise the story to make it compatible with the given counterfactual event.
Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading
TLDR
A new end-to-end approach to contentful neural conversation that jointly models response generation and on-demand machine reading is presented, allowing for more focused integration of external knowledge than has been possible in prior approaches.
Implicit Discourse Relation Recognition with Context-aware Character-enhanced Embeddings
TLDR
This paper proposes a neural model utilizing context-aware character-enhanced embeddings to alleviate the drawbacks of the current word-level representation, and obtains state-of-the-art results.
Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning
TLDR
This paper proposes DeLorean, a new unsupervised decoding algorithm that can flexibly incorporate both the past and future contexts using only off-the-shelf, left-to-right language models and no supervision.
Probabilistic Graph-based Dependency Parsing with Convolutional Neural Network
TLDR
This paper presents neural probabilistic parsing models that explore up to third-order graph-based parsing with maximum likelihood training criteria; evaluated on the English and Chinese Penn Treebanks, the models obtain competitive accuracies.
TIMEDIAL: Temporal Commonsense Reasoning in Dialog
TLDR
This paper presents the first study to investigate pre-trained LMs for their temporal reasoning capabilities in dialogs by introducing a new task and a crowd-sourced English challenge set, TimeDial, and reveals that the models fail to reason about dialog context correctly; instead, they rely on shallow cues based on existing temporal patterns in context.