Corpus ID: 563946

In Defense of Spatial Models of Lexical Semantics

@article{Jones2011InDO,
  title={In Defense of Spatial Models of Lexical Semantics},
  author={Michael N. Jones and T. Gruenenfelder and Gabriel Recchia},
  journal={Cognitive Science},
  year={2011},
  volume={33}
}
Michael N. Jones, Thomas M. Gruenenfelder, & Gabriel Recchia
[jonesmn, tgruenen, grecchia]@indiana.edu
Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, USA

Abstract: Semantic space models of lexical semantics learn vector representations for words by observing statistical redundancies in a text corpus. A word's meaning is represented as a point in a high-dimensional semantic space. However, these spatial…
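For concreteness, a minimal sketch of the general idea behind semantic space models (co-occurrence counts plus cosine similarity) is given below. It is illustrative only: the toy corpus, window size, and use of raw counts are assumptions, not the specific models or corpora evaluated in the paper.

```python
# Minimal sketch of a co-occurrence semantic space (illustrative only; not the
# specific models evaluated in the paper). Each word becomes a row of a
# word-by-word count matrix, and similarity is the cosine between row vectors.
import numpy as np

corpus = [
    "the doctor treated the patient in the hospital",
    "the nurse helped the doctor at the hospital",
    "the pilot flew the plane to the airport",
]
window = 2  # co-occurrence window size (arbitrary for this sketch)

tokens = [sentence.split() for sentence in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

counts = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[index[w], index[sent[j]]] += 1

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Words used in similar contexts end up closer in the space.
print(cosine(counts[index["doctor"]], counts[index["nurse"]]))   # higher
print(cosine(counts[index["doctor"]], counts[index["plane"]]))   # lower
```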
Citations

Modelling the acquisition of natural language categories
TLDR: This thesis identifies two key properties necessary for cognitive plausibility in a model of category acquisition, incrementality and non-parametricity, and constructs a pair of models designed around these constraints, based on a graphical representation of semantics in which a category represents a densely connected subgraph.
Graph-Theoretic Properties of Networks Based on Word Association Norms: Implications for Models of Lexical Semantic Memory
TLDR: The results suggest that participants switch between a contextual representation and an associative network when generating free associations, with implications for the role that each of these representations may play in lexical semantic memory.
A synchronization account of false recognition
TLDR: A computational model is described that is capable of accounting for standard recognition results that are challenging for classic global memory models, and can also explain a wide variety of false recognition effects and make item-specific predictions for critical lures.
Bridging the theoretical gap between semantic representation models without the pressure of a ranking: some lessons learnt from LSA
TLDR: A critical review of latent semantic analysis (LSA) clarifies some of the misunderstandings regarding LSA and other space models, and proposes applying the long experience gained with LSA to other models, especially predictive models such as word2vec.
11 Models of Semantic Memory
Meaning is a fundamental component of nearly all aspects of human cognition, but formal models of semantic memory have classically lagged behind many other areas of cognition. However, computational…
Predicting the Good Guy and the Bad Guy: Attitudes are Encoded in Language Statistics
TLDR: In three studies, negative-valence words were found to be more closely associated in language with individuals commonly considered villains, and positive-valence words with heroes (both fictional and historical), suggesting that attitudes toward persons can be inferred from lexical associations.
Visualizing multiple word similarity measures
TLDR: The “Word-2-Word” visualization environment allows for easy manipulation of graph data to test word similarity measures on their own or in comparisons between multiple similarity metrics, and contains a large library of statistical relationship models.
Distant Concept Connectivity in Network-Based and Spatial Word Representations
TLDR: Response latencies for relatedness judgments for word pairs followed a quadratic relationship with network path lengths and spatial cosines, suggesting that simple association networks can capture distant semantic relationships.
The Oxford Handbook of Computational and Mathematical Psychology
Preface 1. Introduction Jerome R. Busemeyer, Zheng Wang, James T. Townsend, and Ami Eidels Part I. Elementary Cognitive Mechanisms 2. Multidimensional Signal Detection Theory F. Gregory Ashby and…
Attempted prime retrieval is a double-edged sword: Facilitation and disruption in repeated lexical retrieval.
  • A. Kumar, D. Balota
  • Psychology, Medicine
  • Journal of Experimental Psychology: Learning, Memory, and Cognition
  • 2020
TLDR: The findings are consistent with ongoing retrospective processes during target retrieval, which reengage prime retrieval success or failure and consequently produce benefits and costs during repeated retrieval from semantic memory.

References

Showing 1-10 of 23 references
Representing word meaning and order information in a composite holographic lexicon.
TLDR: A computational model that builds a holographic lexicon representing both word meaning and word order from unsupervised experience with natural language demonstrates that a broad range of psychological data can be accounted for directly from the structure of lexical representations learned in this way, without the need for complexity to be built into either the processing mechanisms or the representations.
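The order-encoding step of such holographic models can be sketched with circular convolution over random environment vectors. This shows only the binding/unbinding core; the model's placeholder vector, normalization, and n-gram scheme are omitted.

```python
# Sketch of circular-convolution binding, the operation holographic lexicon
# models use to encode order information. Illustrative only; the full model
# differs in its placeholder vector, normalization, and n-gram scheme.
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # dimensionality of the holographic vectors

def env(rng, d=D):
    """Random Gaussian 'environment' vector with variance 1/d."""
    return rng.normal(0.0, 1.0 / np.sqrt(d), d)

def cconv(a, b):
    """Circular convolution (binding), computed in the frequency domain."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation (approximate unbinding)."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

dog, bites = env(rng), env(rng)
pair = cconv(dog, bites)          # bind "dog" with "bites" to encode the bigram

# Unbinding with one member retrieves a noisy copy of the other.
retrieved = ccorr(dog, pair)
cos = retrieved @ bites / (np.linalg.norm(retrieved) * np.linalg.norm(bites))
print(round(float(cos), 2))       # well above chance: "bites" is recoverable
```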
Topics in semantic representation.
TLDR: This article analyzes the abstract computational problem underlying the extraction and use of gist, formulating this problem as a rational statistical inference that leads to a novel approach to semantic representation in which word meanings are represented in terms of a set of probabilistic topics.
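A minimal sketch of the topic-model idea (word meaning as a distribution over topics) follows; it uses scikit-learn's variational LDA and a made-up four-document corpus rather than the Gibbs-sampling setup of the cited article.

```python
# Minimal sketch of representing word meaning as a distribution over topics.
# Uses scikit-learn's LDA rather than the collapsed Gibbs sampler typically
# used in the topic-model literature; the corpus and settings are illustrative.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the bank approved the loan and the mortgage",
    "the river bank was covered in mud after the flood",
    "interest rates at the bank rose again",
    "fish swam near the muddy bank of the river",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# A word's meaning is a distribution over topics, here approximated by
# column-normalizing the topic-word weights the model has learned.
vocab = list(vectorizer.get_feature_names_out())
word_topic = lda.components_ / lda.components_.sum(axis=0)
for word in ["bank", "river", "loan"]:
    print(word, word_topic[:, vocab.index(word)].round(2))
```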
A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge.
How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A…
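The core LSA computation can be sketched as a truncated SVD of a word-by-document count matrix; the toy matrix below and the omission of log-entropy weighting are simplifications, not the cited setup.

```python
# Sketch of the core LSA computation: a truncated SVD of a word-by-document
# count matrix, so that each word becomes a low-dimensional vector. The cited
# work uses much larger corpora and a log-entropy weighting omitted here.
import numpy as np

# Toy word-by-document count matrix (rows = words, columns = documents).
words = ["doctor", "nurse", "hospital", "plane", "pilot", "airport"]
X = np.array([
    [2, 1, 0, 0],
    [1, 2, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 2, 1],
    [0, 0, 1, 2],
    [0, 0, 1, 1],
], dtype=float)

k = 2  # number of retained dimensions
U, s, Vt = np.linalg.svd(X, full_matrices=False)
word_vectors = U[:, :k] * s[:k]   # word coordinates in the reduced space

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(word_vectors[0], word_vectors[1]))  # doctor vs. nurse: high
print(cosine(word_vectors[0], word_vectors[3]))  # doctor vs. plane: low
```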
Latent structure in measures of associative, semantic, and thematic knowledge
TLDR: In this study, scaling, clustering, and factor-analytic techniques were used to reveal the structure underlying 13 variables, and semantic similarity determined from lexicographic measures is shown to be separable from the associative strength determined from word association norms.
More data trumps smarter algorithms: Comparing pointwise mutual information with latent semantic analysis
TLDR: This work evaluates a simple metric of pointwise mutual information and demonstrates that this metric benefits from training on extremely large amounts of data and correlates more closely with human semantic similarity ratings than do publicly available implementations of several more complex models.
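The PMI computation itself is small enough to show directly; the counts below are made up for illustration and are not from the cited evaluation.

```python
# Sketch of pointwise mutual information between two words, estimated from
# co-occurrence counts: PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) ).
import math

N = 1_000_000          # total number of co-occurrence contexts (assumed)
count_x = 2_000        # contexts containing "doctor" (assumed)
count_y = 1_500        # contexts containing "nurse" (assumed)
count_xy = 300         # contexts containing both (assumed)

p_x, p_y, p_xy = count_x / N, count_y / N, count_xy / N
pmi = math.log2(p_xy / (p_x * p_y))
print(round(pmi, 2))   # positive PMI: the words co-occur more often than chance
```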
The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth
TLDR: A simple model of semantic growth is described, in which each new word or concept is connected to the existing network by differentiating the connectivity pattern of an existing node; the model generates appropriate small-world statistics and power-law connectivity distributions.
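A rough, simplified sketch of the growth idea is given below: each new node attaches by sampling from an existing node's neighborhood, with a degree-biased choice of that node. The exact attachment rule and parameters of the cited model differ; this only illustrates how such growth yields a heavy-tailed degree distribution.

```python
# Simplified illustration of network growth by "differentiating" an existing
# node's connectivity; not the exact algorithm or parameters of the cited paper.
import random
from collections import Counter

random.seed(0)
M = 3            # connections added per new node (illustrative)
N = 2_000        # final network size (illustrative)

# Start from a small fully connected seed network.
neighbors = {i: {j for j in range(M + 1) if j != i} for i in range(M + 1)}

for new in range(M + 1, N):
    # Pick an existing node with probability proportional to its degree.
    nodes = list(neighbors)
    weights = [len(neighbors[n]) for n in nodes]
    target = random.choices(nodes, weights=weights)[0]
    # Connect the new node to M neighbors of the chosen node.
    attach = random.sample(sorted(neighbors[target]), min(M, len(neighbors[target])))
    neighbors[new] = set()
    for n in attach:
        neighbors[new].add(n)
        neighbors[n].add(new)

# A heavy-tailed degree distribution emerges: a few hub words, many rare ones.
degree_counts = Counter(len(v) for v in neighbors.values())
print(sorted(degree_counts.items())[-5:])
```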
The nature and measurement of meaning.
  • C. Osgood
  • Psychology, Medicine
  • Psychological Bulletin
  • 1952
Features of Similarity
The metric and dimensional assumptions that underlie the geometric representation of similarity are questioned on both theoretical and empirical grounds. A new set-theoretical approach to similarity…
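For reference, Tversky's set-theoretic alternative is usually written as the contrast model, where A and B are the feature sets of items a and b, f measures feature salience, and θ, α, β ≥ 0 are weights (notation follows the standard statement):

```latex
% Tversky's contrast model: similarity as a weighted combination of common and
% distinctive features, rather than a distance in a metric space.
s(a, b) = \theta\, f(A \cap B) \;-\; \alpha\, f(A \setminus B) \;-\; \beta\, f(B \setminus A)
```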
Concerning the Applicability of Geometric Models to Similarity Data: The Interrelationship Between Similarity and Spatial Density
In a recent article, Tversky questioned the application of geometric models to similarity data and proposed an alternative set-theoretic approach. He suggested that geometric models are inappropriate…
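Krumhansl's distance-density proposal is usually summarized as augmenting metric distance with the local density δ(·) of points around each stimulus, which lets a spatial model absorb several of the phenomena raised against it (the notation here follows the standard statement, not necessarily the article's):

```latex
% Distance-density hypothesis: perceived dissimilarity is geometric distance
% plus weighted local-density terms for each stimulus.
d'(x, y) = d(x, y) + \alpha\, \delta(x) + \beta\, \delta(y), \qquad \alpha, \beta \ge 0
```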
Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors
  • P. Kanerva
  • Computer Science
  • Cognitive Computation
  • 2009
TLDR: The thesis of the paper is that hyperdimensional representation has much to offer to students of cognitive science, theoretical neuroscience, computer science and engineering, and mathematics.
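The basic hyperdimensional operations can be sketched with random bipolar vectors: elementwise multiplication for binding, addition for bundling, and cosine for comparison. The dimensionality and the role-filler example below are illustrative choices, not taken from the paper.

```python
# Sketch of hyperdimensional computing with random bipolar vectors: bind by
# multiplication, bundle by addition, compare by cosine. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # "hyperdimensional": thousands of dimensions

def hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Roles and fillers are independent random vectors (hypothetical example).
NAME, CITY = hv(), hv()
alice, paris = hv(), hv()

# Bind each role to its filler (multiplication), then bundle into one record.
record = NAME * alice + CITY * paris

# Unbinding the record with a role gives a noisy copy of its filler.
probe = record * NAME
print(round(cosine(probe, alice), 2))  # well above chance: "alice" is recoverable
print(round(cosine(probe, paris), 2))  # near zero: other fillers look like noise
```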