Understanding and Improving Word Embeddings through a Neuroscientific Lens

@article{Fereidooni2020UnderstandingAI,
  title={Understanding and Improving Word Embeddings through a Neuroscientific Lens},
  author={Sam Fereidooni and Viola Mocz and Dragomir Radev and M. Chun},
  journal={bioRxiv},
  year={2020}
}
Despite the success of models making use of word embeddings on many natural language tasks, these models often perform significantly worse than humans on several natural language understanding tasks. This difference in performance motivates us to ask: (1) whether existing word vector representations have any basis in the brain’s representational structure for individual words, and (2) whether features from the brain can be used to improve word embedding model performance, defined as their…
Discrete representations in neural models of spoken language
TLDR: A systematic analysis of the impact of architectural choices, the learning objective and training dataset, and the evaluation metric on discrete representations in weakly supervised models of spoken language, evaluating the merits of four commonly used metrics and finding that the different evaluation metrics can give inconsistent results.

References

Showing 1-10 of 38 references
Experiential, Distributional and Dependency-based Word Embeddings have Complementary Roles in Decoding Brain Activity
TLDR: It is shown that neural word embedding models exhibit superior performance on the tasks the authors consider, beating the experiential word representation model, which may support the idea that the brain uses different systems for processing different kinds of words.
Inducing brain-relevant bias in natural language processing models
TLDR: It is demonstrated that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning, and that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants.
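As a rough illustration of the fine-tuning recipe summarized above, the sketch below is an assumption about the general setup rather than the paper's released code: it adds a linear readout on top of BERT and trains the whole model to predict voxel activations with a mean-squared-error loss. The voxel count, pooling choice, and learning rate are illustrative.

```python
# Minimal sketch (assumed setup, not the paper's code): fine-tune BERT to
# predict fMRI voxel activations from text via a linear readout.
import torch
from torch import nn
from transformers import BertModel, BertTokenizerFast

N_VOXELS = 5000  # illustrative voxel count

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
readout = nn.Linear(bert.config.hidden_size, N_VOXELS)  # brain-prediction head

optimizer = torch.optim.AdamW(
    list(bert.parameters()) + list(readout.parameters()), lr=2e-5
)
loss_fn = nn.MSELoss()

def training_step(sentences, voxel_targets):
    """One gradient step: predict voxel activity for a batch of sentences."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = bert(**batch).last_hidden_state      # (batch, seq_len, hidden)
    pooled = hidden.mean(dim=1)                   # average over tokens
    pred = readout(pooled)                        # (batch, n_voxels)
    loss = loss_fn(pred, voxel_targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```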
Linking artificial and human neural representations of language
TLDR: The results constrain the space of NLU models that could best account for human neural representations of language, but also suggest limits on the possibility of decoding fine-grained syntactic information from fMRI human neuroimaging.
Enriching Word Vectors with Subword Information
TLDR: A new approach based on the skipgram model in which each word is represented as a bag of character n-grams and the word vector is the sum of these n-gram representations; it achieves state-of-the-art performance on word similarity and analogy tasks.
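A minimal sketch of that subword idea follows. It is simplified (real fastText hashes n-grams into a fixed-size table and learns the vectors with the skipgram objective), and the vector dimensionality and n-gram range are illustrative.

```python
# Minimal sketch of subword embeddings: a word vector is the sum of its
# character n-gram vectors (plus a vector for the word itself).
import numpy as np

DIM = 100
rng = np.random.default_rng(0)
ngram_vectors = {}  # n-gram -> vector; learned by skipgram training in practice

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of the word wrapped in boundary markers, plus the full word."""
    w = f"<{word}>"
    grams = [w[i:i + n] for n in range(n_min, n_max + 1) for i in range(len(w) - n + 1)]
    return grams + [w]

def word_vector(word):
    """Sum the vectors of the word's character n-grams."""
    vec = np.zeros(DIM)
    for g in char_ngrams(word):
        if g not in ngram_vectors:
            ngram_vectors[g] = rng.normal(scale=0.1, size=DIM)  # stand-in for learned vectors
        vec += ngram_vectors[g]
    return vec

print(word_vector("where")[:5])  # n-grams include "<wh", "whe", ..., "ere>", "<where>"
```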
Towards Sentence-Level Brain Decoding with Distributed Representations
TLDR: This work builds decoders to associate brain activities with sentence stimuli via distributed representations, the currently dominant sentence representation approach in natural language processing (NLP), and is the first comprehensive evaluation of distributed sentence representations for brain decoding.
GloVe: Global Vectors for Word Representation
TLDR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
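For reference, the GloVe objective is a weighted least-squares fit of word-vector dot products to log co-occurrence counts. The sketch below evaluates that loss for given parameters; it is a toy illustration, not the released implementation.

```python
# Minimal sketch of the GloVe weighted least-squares objective; notation follows
# the paper: X is the word-word co-occurrence matrix, W/W_ctx the word and
# context vectors, b/b_ctx the corresponding biases.
import numpy as np

def glove_loss(X, W, W_ctx, b, b_ctx, x_max=100.0, alpha=0.75):
    """J = sum_ij f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2 over nonzero X_ij."""
    loss = 0.0
    rows, cols = np.nonzero(X)
    for i, j in zip(rows, cols):
        weight = min((X[i, j] / x_max) ** alpha, 1.0)               # f(X_ij)
        diff = W[i] @ W_ctx[j] + b[i] + b_ctx[j] - np.log(X[i, j])
        loss += weight * diff ** 2
    return loss
```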
Decoding Brain Activity Associated with Literal and Metaphoric Sentence Comprehension Using Distributional Semantic Models
TLDR: The results suggest that compositional models and word embeddings are able to capture differences in the processing of literal and metaphoric sentences, providing support for the idea that the literal meaning is not fully accessible during familiar metaphor comprehension.
Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns
TLDR: This work applies state-of-the-art computational models to decode functional Magnetic Resonance Imaging activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns, and confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.
Toward a universal decoder of linguistic meaning from brain activation
TLDR: It is shown that a decoder trained on neuroimaging data of single concepts sampling the semantic space can robustly decode meanings of semantically diverse new sentences with topics not encountered during training.
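A common realization of such a decoder, and a plausible reading of the setup here (an assumption, not the paper's exact pipeline), is a ridge regression from voxel patterns to word or sentence embeddings, with new brain images decoded by nearest-neighbor matching in embedding space.

```python
# Minimal sketch of an embedding-space decoder (assumed setup): ridge regression
# from voxel patterns to semantic vectors, then nearest-neighbor matching
# against a set of candidate stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

def train_decoder(train_voxels, train_embeddings, alpha=1.0):
    """Fit a linear map from fMRI voxel patterns to word/sentence embeddings."""
    decoder = Ridge(alpha=alpha)
    decoder.fit(train_voxels, train_embeddings)
    return decoder

def decode(decoder, test_voxels, candidate_embeddings, candidate_labels):
    """Predict embeddings for new brain images and pick the closest candidate stimulus."""
    predicted = decoder.predict(test_voxels)                   # (n_items, embed_dim)
    sims = cosine_similarity(predicted, candidate_embeddings)  # (n_items, n_candidates)
    return [candidate_labels[k] for k in sims.argmax(axis=1)]
```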
Decoding of generic mental representations from functional MRI data using word embeddings
TLDR: It is shown that this approach for building forward models of conceptual stimuli, concrete or abstract, and for using these models to carry out decoding of semantic information from new imaging data, generalizes to topics not seen in training and provides a straightforward path to decoding from more complex stimuli such as sentences or paragraphs.
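The forward-model direction can be sketched analogously. The snippet below, again an assumed setup for illustration only, learns an embedding-to-voxel mapping and decodes a new brain image by correlating its observed activity with the activity predicted for each candidate stimulus.

```python
# Minimal sketch of a forward (encoding) model (assumed setup): predict voxel
# activity from word embeddings, then decode new images by correlating observed
# activity with the activity predicted for each candidate stimulus.
import numpy as np
from sklearn.linear_model import Ridge

def fit_forward_model(train_embeddings, train_voxels, alpha=1.0):
    """Learn an embedding -> voxel-activity mapping."""
    model = Ridge(alpha=alpha)
    model.fit(train_embeddings, train_voxels)
    return model

def decode_by_matching(model, observed_voxels, candidate_embeddings, candidate_labels):
    """Rank candidates by correlation between predicted and observed voxel patterns."""
    predicted = model.predict(candidate_embeddings)            # (n_candidates, n_voxels)
    decoded = []
    for obs in observed_voxels:                                # one brain image at a time
        corrs = [np.corrcoef(obs, pred)[0, 1] for pred in predicted]
        decoded.append(candidate_labels[int(np.argmax(corrs))])
    return decoded
```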