Corpus ID: 33752769

Outperforming Word2Vec on Analogy Tasks with Random Projections

@article{Demski2014OutperformingWO,
  title={Outperforming Word2Vec on Analogy Tasks with Random Projections},
  author={A. Demski and Volkan Ustun and P. Rosenbloom and C. Kommers},
  journal={ArXiv},
  year={2014},
  volume={abs/1412.6616}
}
We present a distributed vector representation based on a simplification of the BEAGLE system, designed in the context of the Sigma cognitive architecture. Our method does not require gradient-based training of neural networks, matrix decompositions as with LSA, or convolutions as with BEAGLE. All that is involved is a sum of random vectors and their pointwise products. Despite the simplicity of this technique, it gives state-of-the-art results on analogy problems, in most cases better than…
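As a rough illustration of what "a sum of random vectors and their pointwise products" can look like in code, the sketch below builds word memory vectors by summing fixed random environment vectors of nearby words together with pointwise products that bind each word to its context. The toy corpus, window size, dimensionality, and bipolar random vectors are illustrative assumptions, not the authors' exact formulation.

import numpy as np

rng = np.random.default_rng(0)
dim = 2048
corpus = [["the", "king", "rules", "the", "land"],
          ["the", "queen", "rules", "the", "land"]]
vocab = sorted({w for sent in corpus for w in sent})

# Fixed random "environment" vector per word (bipolar, an illustrative choice).
env = {w: rng.choice([-1.0, 1.0], size=dim) for w in vocab}
# Memory vectors are built purely by summation: no gradients, no SVD, no convolution.
mem = {w: np.zeros(dim) for w in vocab}

window = 2
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                c = sent[j]
                mem[w] += env[c]           # sum of random context vectors
                mem[w] += env[w] * env[c]  # pointwise product binds word and context

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

# Words with shared contexts end up with similar memory vectors; the cosine here is
# well above the near-zero similarity of independent random vectors. On a real corpus,
# an analogy a:b :: c:? would be answered by a nearest-neighbor search around
# mem[b] - mem[a] + mem[c].
print(round(cosine(mem["king"], mem["queen"]), 3))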
2 Citations
The Role of Negative Information in Distributional Semantic Learning
TLDR: Assesses the role of negative information in developing a semantic representation, argues that its power does not reflect the use of a prediction mechanism, and shows how negative information can be efficiently integrated into classic count-based semantic models using parameter-free analytical transformations.
Learning Knowledge from User Search
TLDR: Proposes the SCKE framework for extracting new knowledge triples in an online scenario, and shows that new triples can be identified very soon after an event happens, enabling up-to-date knowledge summaries for most user queries.

References

Showing 1-10 of 11 references
Distributed Vector Representations of Words in the Sigma Cognitive Architecture
TLDR: Describes a new algorithm for learning distributed-vector word representations from large, shallow information resources, and how this algorithm can be implemented via small modifications to Sigma.
Efficient Estimation of Word Representations in Vector Space
TLDR: Proposes two model architectures for computing continuous vector representations of words from very large data sets, and shows that these vectors provide state-of-the-art performance on the authors' test set for measuring syntactic and semantic word similarities.
GloVe: Global Vectors for Word Representation
TLDR: Introduces a global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors
TLDR: An extensive evaluation of context-predicting models against classic count-vector-based distributional semantic approaches, across a wide range of lexical semantics tasks and many parameter settings, shows that the buzz around these models is fully justified.
Holographic reduced representations
  • T. Plate
  • IEEE Trans. Neural Networks
  • 1995
TLDR: Describes a method for representing complex compositional structure in distributed representations, using circular convolution to associate items that are represented by vectors.
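As a minimal sketch of the circular-convolution binding that holographic reduced representations rely on, the following toy example binds a role to a filler and recovers the filler with circular correlation plus a clean-up step; the vectors, names, and dimensionality are illustrative assumptions, not taken from the cited paper.

import numpy as np

rng = np.random.default_rng(1)
dim = 2048

def cconv(x, y):
    # Circular convolution via FFT: binds two vectors into one of the same size.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def ccorr(x, y):
    # Circular correlation: the approximate inverse used for unbinding.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)))

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

fillers = {name: rng.standard_normal(dim) / np.sqrt(dim) for name in ["red", "green", "blue"]}
role = rng.standard_normal(dim) / np.sqrt(dim)

trace = cconv(role, fillers["green"])  # associate the role with the filler "green"
probe = ccorr(role, trace)             # unbind: a noisy copy of the filler

# Clean-up memory: the decoded vector is closest to the original filler.
print(max(fillers, key=lambda n: cosine(probe, fillers[n])))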
Representing word meaning and order information in a composite holographic lexicon.
TLDR: Presents a computational model that builds a holographic lexicon representing both word meaning and word order from unsupervised experience with natural language, and demonstrates that a broad range of psychological data can be accounted for directly from the structure of the learned lexical representations, without building extra complexity into either the processing mechanisms or the representations.
An algorithmic theory of learning: Robust concepts and random projection
TLDR: Provides a novel algorithmic analysis via a model of robust concept learning (closely related to "margin classifiers"), and shows that a relatively small number of examples is sufficient to learn rich concept classes.
Random projection in dimensionality reduction: applications to image and text data
TLDR: Shows that projecting the data onto a random lower-dimensional subspace yields results comparable to conventional dimensionality reduction methods such as principal component analysis: the similarity of data vectors is well preserved under random projection.
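A minimal numerical sketch of this preservation property, using an illustrative Gaussian projection matrix and arbitrary sizes rather than the cited paper's setup:

import numpy as np

rng = np.random.default_rng(2)
n, d, k = 100, 5000, 200                      # points, original dim, reduced dim
X = rng.standard_normal((n, d))

R = rng.standard_normal((d, k)) / np.sqrt(k)  # random projection matrix
Y = X @ R                                     # project to k dimensions

def pairwise_cosines(M):
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    return M @ M.T

# Pairwise similarities change only slightly; the distortion shrinks as k grows.
err = np.abs(pairwise_cosines(X) - pairwise_cosines(Y)).mean()
print(f"mean cosine distortion: {err:.3f}")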
Experiments with Random Projection
TLDR: Summarizes a wide variety of experiments on synthetic and real data showing that random projection is a promising dimensionality reduction technique for learning mixtures of Gaussians.
Distributions of angles in random packing on spheres
TLDR: Reveals interesting differences between the two settings and provides a precise characterization of the folklore that "all high-dimensional random vectors are almost always nearly orthogonal to each other".
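A quick numerical illustration of that folklore, with arbitrary dimensions and sample sizes rather than the cited paper's exact setting:

import numpy as np

rng = np.random.default_rng(3)
for dim in (10, 100, 1000, 10000):
    X = rng.standard_normal((500, dim))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    G = X @ X.T                                       # pairwise cosines of unit vectors
    off_diag = np.abs(G[~np.eye(len(G), dtype=bool)])
    # Typical |cosine| shrinks roughly like 1/sqrt(dim): angles concentrate near 90 degrees.
    print(dim, round(float(off_diag.mean()), 4))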