Author pages are created from data sourced from our academic publisher partnerships and public sources.
CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning
CommonGen is a constrained text generation task, paired with a benchmark dataset, that explicitly tests machines' capacity for generative commonsense reasoning; the work also demonstrates that the learned generative commonsense reasoning capability can be transferred to improve downstream tasks such as CommonsenseQA by generating additional context.
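To make the task format concrete, here is a minimal sketch in the spirit of CommonGen: given a set of everyday concepts, produce a coherent sentence using all of them. The concept set and sentence below are illustrative examples, not necessarily verbatim dataset entries, and the coverage check is a naive assumption for demonstration.

```python
# Sketch of the CommonGen task format: a concept set as input, a
# sentence covering every concept as output.
concepts = {"dog", "frisbee", "catch", "throw"}
generated = "A dog catches a frisbee thrown by its owner."

def covers_all(sentence, concept_set):
    # Naive coverage check: every concept appears as a substring,
    # which tolerates simple inflections like "catches" and "thrown".
    s = sentence.lower()
    return all(c in s for c in concept_set)

print(covers_all(generated, concepts))  # True for this pair
```

The difficulty of the task lies not in coverage itself but in producing a sentence that is commonsensically plausible, which is why surface checks like this one are only a starting point for evaluation.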
Examining Gender Bias in Languages with Grammatical Gender
Experiments on modified Word Embedding Association Test, word similarity, word translation, and word pair translation tasks show that the proposed approaches can effectively reduce the gender bias while preserving the utility of the original embeddings.
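The paper's test builds on the standard WEAT effect size (Caliskan et al.); a minimal sketch of that underlying metric follows. The function names and toy two-dimensional vectors are illustrative assumptions, and the paper's modified WEAT for grammatically gendered languages differs in its word sets.

```python
import numpy as np

def cos(u, v):
    # cosine similarity between two embedding vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # WEAT effect size: difference of mean associations of the two
    # target sets, normalized by the pooled standard deviation
    s = [assoc(w, A, B) for w in X + Y]
    return (np.mean(s[:len(X)]) - np.mean(s[len(X):])) / np.std(s, ddof=1)
```

An effect size near zero indicates little differential association between the target sets; debiasing approaches like those in the paper aim to shrink its magnitude while keeping the embeddings useful.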
Can BERT Reason? Logically Equivalent Probes for Evaluating the Inference Capabilities of Language Models
- Pei Zhou, Rahul Khanna, Bill Yuchen Lin, Daniel Ho, Xiang Ren, J. Pujara
- Computer Science, ArXiv
- 2 May 2020
It is found that despite the recent success of large PTLMs on commonsense benchmarks, their performance on these probes is no better than random guessing (even with fine-tuning) and is heavily dependent on statistical biases; this poor overall performance prevents a meaningful study of robustness.
CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning
This work presents CommonGen: a challenging dataset for testing generative commonsense reasoning with a constrained text generation task, and provides high-quality rationales behind the reasoning process for the development and test sets from the human annotators.
Incorporating Commonsense Knowledge Graph in Pretrained Models for Social Commonsense Tasks
- Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, Dilek Z. Hakkani-Tür
- Computer Science, DEELIO
- 1 November 2020
This work proposes two approaches to implicitly and explicitly infuse external commonsense knowledge graphs (KGs) into pretrained language models, and demonstrates that these methods perform well on SocialIQA, a social commonsense reasoning task, in both limited and full training data regimes.
Retrofitting Contextualized Word Embeddings with Paraphrases
This work proposes a post-processing approach that retrofits contextualized word embeddings with paraphrases, seeking to minimize the variance of a word's representations across paraphrased contexts; it significantly improves ELMo on various sentence classification and inference tasks.
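The variance-minimization idea can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: it learns a linear map that pulls a word's contextual embeddings across paraphrased sentences together, with an identity-anchoring penalty (an assumption added here) so the original space is roughly preserved. All names and hyperparameters are illustrative.

```python
import numpy as np

def paraphrase_variance(M, contexts):
    # contexts: list of (n_paraphrases, dim) arrays, each holding one
    # word's contextual embeddings across paraphrased sentences
    total = 0.0
    for E in contexts:
        Z = E @ M.T                       # transformed embeddings
        total += ((Z - Z.mean(axis=0)) ** 2).sum()
    return total

def retrofit(contexts, dim, lr=0.01, steps=200, anchor=0.1):
    # Gradient descent on the variance objective plus an L2 penalty
    # keeping M near the identity, so overall utility is preserved.
    M = np.eye(dim)
    for _ in range(steps):
        grad = np.zeros_like(M)
        for E in contexts:
            Z = E @ M.T
            D = Z - Z.mean(axis=0)        # deviations from the mean
            grad += 2 * D.T @ E           # d/dM of sum ||M(e_i - mean)||^2
        grad = grad / len(contexts) + 2 * anchor * (M - np.eye(dim))
        M -= lr * grad
    return M
```

After fitting, `paraphrase_variance` under the learned map should be lower than under the identity, i.e. representations of the same word in paraphrased contexts have been drawn closer together.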
Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources
- Ninareh Mehrabi, Pei Zhou, Fred Morstatter, J. Pujara, Xiang Ren, A. Galstyan
- Computer Science, EMNLP
- 21 March 2021
This work establishes the presence of bias in both CSKBs in the form of two types of representational harms, overgeneralization of polarized perceptions and representation disparity across demographic groups, and proposes a filtering-based approach to mitigate these harms.
Multi-graph Affinity Embeddings for Multilingual Knowledge Graphs
This paper proposes an improved model that learns a generalized affine-map-based alignment for knowledge alignment tasks, effectively addressing the limitations of existing approaches, especially the incoherence of embedding spaces across different languages.
In-depth analysis reveals complex molecular aetiology in a cohort of idiopathic cerebral palsy
Defective TYW1, a tRNA hypermodification enzyme, caused primary microcephaly and deficits in motor function and cognition by hindering neuronal proliferation and migration; a dichotomous classification system based on the expression patterns of these genes and the associated cognitive impairments is proposed.
RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms
- Pei Zhou, Rahul Khanna, Bill Yuchen Lin, Daniel Ho, J. Pujara, Xiang Ren
- Computer Science, EMNLP
- 2 May 2020
A new challenge, RICA: Robust Inference using Commonsense Axioms, evaluates robust commonsense inference under textual perturbations, and shows that PTLMs perform no better than random guessing in the zero-shot setting, are heavily impacted by statistical biases, and are not robust to perturbation attacks.