Publications
More data trumps smarter algorithms: Comparing pointwise mutual information with latent semantic analysis
TLDR
This work evaluates a simple metric of pointwise mutual information and demonstrates that this metric benefits from training on extremely large amounts of data and correlates more closely with human semantic similarity ratings than do publicly available implementations of several more complex models.
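The metric in question is standard pointwise mutual information, PMI(x, y) = log2(p(x, y) / (p(x) · p(y))), estimated from co-occurrence counts. A minimal sketch, using sentence-level co-occurrence on a toy corpus (the corpus and function names are illustrative, not from the paper):

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(sentences):
    """Estimate PMI(x, y) = log2(p(x, y) / (p(x) * p(y))) for every
    word pair, where probabilities come from sentence-level
    co-occurrence counts."""
    word_counts = Counter()
    pair_counts = Counter()
    for sent in sentences:
        words = set(sent.split())          # each word counted once per sentence
        word_counts.update(words)
        pair_counts.update(combinations(sorted(words), 2))
    n = len(sentences)
    scores = {}
    for (x, y), c in pair_counts.items():
        p_joint = c / n
        p_x, p_y = word_counts[x] / n, word_counts[y] / n
        scores[(x, y)] = math.log2(p_joint / (p_x * p_y))
    return scores

# Toy corpus: "cat" and "chased" co-occur more than chance predicts
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate cheese",
]
scores = pmi_scores(corpus)
```

In practice the paper's point is about scale: the estimator itself stays this simple, and the gains come from counting over very large corpora.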
The semantic richness of abstract concepts
TLDR
It is suggested that rich linguistic contexts (many semantic neighbors) facilitate early activation of abstract concepts, whereas concrete concepts benefit more from rich physical contexts (many associated objects and locations).
The shape of action.
TLDR
A detailed comparison of the joint time courses of these variables showed that looking time and physical change were locally maximal at breakpoints and greater for higher level action units than for lower level units, showing that breakpoints are distinct even out of context.
Graph-Theoretic Properties of Networks Based on Word Association Norms: Implications for Models of Lexical Semantic Memory
TLDR
The results suggest that participants switch between a contextual representation and an associative network when generating free associations, and point to the role that each of these representations may play in lexical semantic memory.
The role of semantic diversity in lexical organization.
TLDR
This work demonstrates the importance of contextual redundancy in lexical access, suggesting that contextual repetitions in language only increase a word's memory strength if the repetitions are accompanied by a modulation in semantic context.
A Comparison of String Similarity Measures for Toponym Matching
TLDR
A task was constructed in which place names had to be matched to variants of those names listed in the GEOnet Names Server; 21 different measures were compared on datasets containing romanized toponyms from 11 different countries, and the best-performing measures varied widely across datasets but were highly consistent within-country and within-language.
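Levenshtein edit distance is a typical baseline among string similarity measures of this kind. A minimal sketch of matching a query toponym against candidate variants (the place-name examples are illustrative, not from the paper's datasets):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance: minimum number of
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def best_match(query, candidates):
    """Return the candidate with the smallest edit distance to query."""
    return min(candidates, key=lambda c: levenshtein(query, c))

# Hypothetical romanized variants of a place name
match = best_match("Kyiv", ["Kiev", "Cairo", "Quito"])
```

The paper's finding that the best measure varies by country and language suggests such a matcher should be tuned, or its measure chosen, per language rather than globally.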
Reproducing affective norms with lexical co-occurrence statistics: Predicting valence, arousal, and dominance
TLDR
This paper investigated whether affective ratings can be predicted from length, contextual diversity, co-occurrences with words of known valence, and orthographic similarity to words of known valence, providing an algorithm for estimating affective ratings for larger and different datasets.
Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation
TLDR
This work performs several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information, increasing the neurological plausibility of random permutations and highlighting their utility in vector space models of semantics.
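The two binding operations being compared can be sketched in a few lines. Note that plain circular convolution is commutative, so order-encoding models apply fixed permutations to the operands first to make the binding direction-sensitive; random-permutation encoding uses the permutations alone, with addition. The vectors and permutation scheme below are a simplified illustration, not the exact formulation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 1024

def cconv(x, y):
    """Circular convolution, computed in the frequency domain."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

# Fixed random permutations marking the "left" and "right" roles
left_perm = rng.permutation(dim)
right_perm = rng.permutation(dim)

def bind_conv(x, y):
    """Direction-sensitive convolution binding: permute operands,
    then circularly convolve."""
    return cconv(x[left_perm], y[right_perm])

def bind_perm(x, y):
    """Random-permutation encoding: shift each word into its
    positional role and superpose by addition."""
    return x[left_perm] + y[right_perm]

# Random environment vectors for two words (illustrative)
dog = rng.standard_normal(dim) / np.sqrt(dim)
bark = rng.standard_normal(dim) / np.sqrt(dim)

ab_conv = bind_conv(dog, bark)
ba_conv = bind_conv(bark, dog)
ab_perm = bind_perm(dog, bark)
ba_perm = bind_perm(bark, dog)
```

Both schemes distinguish "dog bark" from "bark dog"; the permutation scheme does so with only indexing and addition, which is part of the efficiency argument for random permutations.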
Encoding Sequential Information in Vector Space Models of Semantics: Comparing Holographic Reduced Representation and Random Permutation
Encoding information about the order in which words typically appear has been shown to improve the performance of high-dimensional semantic space models. This requires an encoding operation capable …
Toward a scalable holographic word-form representation
TLDR
A representation that assumes that word-internal letter groups are encoded relative to word-terminal letter groups is found to predict qualitative patterns in masked priming, as well as lexical decision and naming latencies, and is integrated with the BEAGLE model of lexical semantics to enable the model to encompass a wider range of verbal tasks.