Corpus ID: 3647455

Preserved Structure Across Vector Space Representations

Andrei Amatuni, Estelle He, Elika Bergelson
Certain concepts, words, and images are intuitively more similar than others (dog vs. cat, dog vs. spoon), though quantifying such similarity is notoriously difficult. Indeed, this kind of computation is likely a critical part of learning the category boundaries for words within a given language. Here, we use a set of 27 items (e.g. 'dog') that are highly common in infants' input, and use both image- and word-based algorithms to independently compute similarity among them. We find three key…
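To make the abstract's notion of word-based similarity concrete, here is a minimal sketch of pairwise cosine similarity between embedding vectors. The vectors below are toy 4-dimensional stand-ins invented for illustration; the study itself uses pretrained, much higher-dimensional representations.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy vectors, chosen so that 'dog' and 'cat'
# point in similar directions while 'spoon' does not.
vectors = {
    "dog":   [0.9, 0.8, 0.1, 0.0],
    "cat":   [0.8, 0.9, 0.2, 0.1],
    "spoon": [0.1, 0.0, 0.9, 0.8],
}

sim_dog_cat = cosine_similarity(vectors["dog"], vectors["cat"])
sim_dog_spoon = cosine_similarity(vectors["dog"], vectors["spoon"])
```

Under this measure, items whose vectors point in similar directions (dog, cat) score near 1, while unrelated items (dog, spoon) score near 0, which is the sense in which the paper quantifies intuitive similarity.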


GloVe: Global Vectors for Word Representation
A new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
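For background (drawn from the GloVe paper itself, not this page): the model fits word vectors to the log of corpus co-occurrence counts via a weighted least-squares objective,

```latex
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^{2}
```

where $X_{ij}$ counts how often word $j$ occurs in the context of word $i$, $w_i$ and $\tilde{w}_j$ are word and context vectors with biases $b_i$, $\tilde{b}_j$, and $f$ is a weighting function that down-weights rare and very frequent co-occurrences.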
Representation is representation of similarities.
  • S. Edelman
  • The Behavioral and brain sciences
  • 1998
A unified approach to visual representation is proposed, addressing the need for superordinate and basic-level categorization and for the identification of specific instances of familiar categories.
Features of Similarity
The metric and dimensional assumptions that underlie the geometric representation of similarity are questioned on both theoretical and empirical grounds. A new set-theoretical approach to similarity…
Semantic Networks Generated from Early Linguistic Input
This work extends previous work by creating semantic networks using the SEEDLingS corpus, a newly collected corpus of linguistic input to infants, and using a recently developed LSA-like approach (GloVe vectors), confirms the robustness of certain aspects of network organization, and provides novel evidence in support of preferential acquisition accounts.
Syntactic context and the shape bias in children's and adults' lexical learning
Previous research has shown that young children and adults share a shape bias in learning novel object count nouns: they generalize the label to objects sharing the same shape as a standard…
Toward a universal decoder of linguistic meaning from brain activation
It is shown that a decoder trained on neuroimaging data of single concepts sampling the semantic space can robustly decode meanings of semantically diverse new sentences with topics not encountered during training.
Hard Words
How do children acquire the meaning of words? And why are words such as know harder for learners to acquire than words such as dog or jump? We suggest that the chief limiting factor in acquiring the…
The Emergence of Category Representations During Infancy: Are Separate Perceptual and Conceptual Processes Required?
A long-standing issue in cognitive development concerns the manner in which the earliest, presumably perceptually based, categorical representations of young infants become knowledge-rich…
Categories and induction in young children
The present work addresses how expectations about natural kinds originate by examining how young children, with their usual reliance on perceptual appearances and only rudimentary scientific knowledge, might induce new information within natural kind categories.
How much does a shared name make things similar? Linguistic labels, similarity, and the development of inductive inference.
Overall results support predictions of the model and point to a developmental shift from treating linguistic labels as an attribute contributing to similarity to treating them as markers of a common category, a shift that appears to occur between 8 and 11 years of age.