Publications
Exploring BERT’s sensitivity to lexical cues using tests from semantic priming
TLDR
A case study analyzing the pre-trained BERT model with tests informed by semantic priming finds that BERT, too, shows "priming", predicting a word with greater probability when the context includes a related word rather than an unrelated one.
Do language models learn typicality judgments from text?
TLDR
Two tests for LMs are proposed, showing modest but not completely absent correspondence between LMs and humans, suggesting that text-based exposure alone is insufficient to acquire typicality knowledge.
Authorship Analysis of Online Predatory Conversations using Character Level Convolution Neural Networks
TLDR
This work presents an authorship attribution model that trains on a corpus of online conversations involving predators, and performs subsequent analysis of the message representations to highlight differences between predatory and non-predatory message styles.
Not So Cute but Fuzzy: Estimating Risk of Sexual Predation in Online Conversations
TLDR
A neural network model is developed that takes fuzzy membership functions of each line in a chat as input and predicts the risk of the interaction, tied to stages and themes of the grooming process.
A Sentiment Based Non-Factoid Question-Answering Framework
TLDR
This study proposes an architecture that adds an extended representation of sentiment information to questions and answers, and reports the extent to which prediction of the best answer can be improved by the proposed architecture.
Exploring Lexical Sensitivities in Word Prediction Models: A case study on BERT
TLDR
This thesis relates BERT's sensitivity to lexical cues to predictive contextual constraints and finer-grained lexical relations, establishes the importance of considering the predictive constraint effects of context in studies that behaviorally analyze language processing models, and highlights possible parallels with human processing.
Finding Fuzziness in Neural Network Models of Language Processing
TLDR
This paper tests the extent to which models trained to capture the distributional statistics of language correspond to fuzzy-membership patterns, and finds that the model shows patterns similar to classical fuzzy-set theoretic formulations of linguistic hedges, albeit with a substantial amount of noise.