Automatic Selection of Context Configurations for Improved Class-Specific Word Representations

@inproceedings{Vulic2017AutomaticSO,
  title={Automatic Selection of Context Configurations for Improved Class-Specific Word Representations},
  author={Ivan Vuli\'{c} and Roy Schwartz and Ari Rappoport and Roi Reichart and Anna Korhonen},
  booktitle={CoNLL},
  year={2017}
}
This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word…
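The adapted beam search over the space of context configurations (subsets of dependency relations) can be pictured with a short sketch. The Python below is our reconstruction of the general idea, not the authors' code: the relation pool is an arbitrary example, and score_config is a hypothetical stand-in for training a representation model on the selected contexts and scoring it on a development similarity set.

def beam_search_configs(relations, score_config, beam_width=3):
    # Start from single-relation configurations, keep the best `beam_width`.
    beam = sorted((frozenset([r]) for r in relations),
                  key=score_config, reverse=True)[:beam_width]
    best = max(beam, key=score_config)
    improved = True
    while improved:
        improved = False
        # Expand every configuration in the beam by one unused relation.
        candidates = {config | {r} for config in beam
                      for r in relations if r not in config}
        if not candidates:
            break
        beam = sorted(candidates, key=score_config, reverse=True)[:beam_width]
        top = max(beam, key=score_config)
        if score_config(top) > score_config(best):
            best, improved = top, True
    return best

# Toy usage with a hypothetical scorer that likes 'amod' and 'conj' but penalizes size.
relations = ["amod", "nsubj", "dobj", "conj", "compound", "advmod"]
score = lambda cfg: 2 * len(cfg & {"amod", "conj"}) - 0.5 * len(cfg)
print(beam_search_configs(relations, score))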
Citations

Syntactic Interchangeability in Word Embedding Models
TLDR: Investigates the extent to which word embedding models preserve syntactic interchangeability, as reflected by distances between word vectors, and the effect of hyper-parameters, context window size in particular, suggesting a principle for an appropriate selection of the context window size depending on the use case.
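The window-size effect this TLDR refers to is easy to probe: train two otherwise identical word2vec models with a small and a large window and compare nearest neighbours. A minimal sketch using gensim (4.x API assumed; the toy corpus and hyper-parameters are placeholders, so on real data one would expect small windows to favour syntactic interchangeability and large windows topical relatedness):

from gensim.models import Word2Vec

# Toy corpus; in practice use a large tokenized corpus.
sentences = [
    ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"],
    ["a", "slow", "green", "turtle", "crawls", "under", "the", "busy", "bridge"],
] * 100

# Small window: more syntactic/functional neighbours.
syntactic = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=10)
# Large window: more topical/associative neighbours.
topical = Word2Vec(sentences, vector_size=50, window=10, min_count=1, epochs=10)

print(syntactic.wv.most_similar("quick", topn=3))
print(topical.wv.most_similar("quick", topn=3))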
On the evaluation of retrofitting for supervised short-text classification
TLDR: Empirically studies whether retrofitting, a class of techniques used to update word vectors so that they take into account knowledge expressed in knowledge resources, is beneficial for short-text classification, and shows that the retrofitting approach is helpful for some classifier settings.
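For orientation, retrofitting (Faruqui et al., 2015) has a simple iterative closed-form update: each vector is pulled toward its original embedding and toward its neighbours in a lexical resource graph. A minimal numpy sketch, assuming uniform weights alpha = beta = 1 per pair (the original work uses degree-based neighbour weights):

import numpy as np

def retrofit(vectors, lexicon, iterations=10, alpha=1.0, beta=1.0):
    """vectors: dict word -> np.ndarray; lexicon: dict word -> related words.
    Iteratively averages each vector with its original value and its neighbours:
    q_i = (alpha * q_hat_i + beta * sum_j q_j) / (alpha + beta * |N(i)|)."""
    new_vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for word, neighbours in lexicon.items():
            neighbours = [n for n in neighbours if n in new_vecs]
            if word not in vectors or not neighbours:
                continue
            neighbour_sum = beta * sum(new_vecs[n] for n in neighbours)
            new_vecs[word] = (alpha * vectors[word] + neighbour_sum) / \
                             (alpha + beta * len(neighbours))
    return new_vecs

# Toy usage: two synonyms drift toward each other but stay near their originals.
vecs = {"good": np.array([1.0, 0.0]), "great": np.array([0.0, 1.0])}
lex = {"good": ["great"], "great": ["good"]}
print(retrofit(vecs, lex)["good"])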
Multi-SimLex: A Large-Scale Evaluation of Multilingual and Crosslingual Lexical Semantic Similarity
TLDR: Publicly releases the Multi-SimLex data sets, their creation protocol, strong baseline results, and in-depth analyses, which can help guide future developments in multilingual lexical semantics and representation learning; all resources are available via a website intended to encourage community effort in further expanding Multi-SimLex.
A Survey of Cross-lingual Word Embedding Models
TLDR: Provides a comprehensive typology of cross-lingual word embedding models, showing that many of the models presented in the literature optimize for the same objectives, and that seemingly different models are often equivalent modulo optimization strategies, hyper-parameters, and such.
Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
TLDR: Proposes a novel approach to specializing the full distributional vocabulary by combining a standard L2-distance loss with an adversarial loss, and a cross-lingual transfer method for zero-shot specialization that specializes a full target distributional space without any lexical knowledge in the target language and without any bilingual data.
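A rough PyTorch sketch of the kind of combined objective described here; this is our reconstruction of the general idea only, and the mapping network, discriminator architecture, and weighting lambda are assumptions rather than the paper's exact setup:

import torch
import torch.nn as nn

dim, lam = 300, 1.0
# Hypothetical components: a linear map into the specialized space and a
# small discriminator over that space.
mapper = nn.Linear(dim, dim)
discriminator = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()

def mapper_loss(x_distributional, x_specialized):
    """L2 distance to gold specialized vectors plus an adversarial term that
    rewards the mapper for making its output look 'specialized' to the
    discriminator (which is trained in alternation, omitted here)."""
    mapped = mapper(x_distributional)
    l2 = ((mapped - x_specialized) ** 2).sum(dim=1).mean()
    fool = bce(discriminator(mapped), torch.ones(mapped.size(0), 1))
    return l2 + lam * fool

# Toy usage on random batches.
loss = mapper_loss(torch.randn(8, dim), torch.randn(8, dim))
loss.backward()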
BioVerbNet: a large semantic-syntactic classification of verbs in biomedicine
TLDR: Introduces the first large, annotated semantic-syntactic classification of biomedical verbs, providing a detailed account of the annotation process, the key differences in verb behaviour between the general and biomedical domains, and the design choices made to accurately capture the meaning and properties of verbs used in biomedical texts.
Bio-SimVerb and Bio-SimLex: wide-coverage evaluation sets of word similarity in biomedicine
TLDR: Bio-SimVerb and Bio-SimLex enable intrinsic evaluation of word representations and highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs).
Bridging Languages through Images with Deep Partial Canonical Correlation Analysis
TLDR: Introduces a non-linear Deep PCCA (DPCCA) model and develops a new stochastic iterative algorithm for its optimization; the resulting models outperform a large variety of previous methods on multilingual word similarity and cross-lingual image description retrieval.
A neural classification method for supporting the creation of BioVerbNet
TLDR: Applies a state-of-the-art neural representation learning architecture to biomedical verb classification; promising results suggest that the automatic classification released with this article can readily support application tasks in biomedicine.

References

Showing 1-10 of 66 references
Is “Universal Syntax” Universally Useful for Learning Distributed Word Representations?
TLDR: The results suggest that the universal DEPS (UDEPS) contexts are useful for detecting functional similarity among languages, but their advantage over BOW contexts is not as prominent as previously reported for English.
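The dependency-based (DEPS) contexts compared against bag-of-words here can be extracted from a parsed sentence as (word, head/relation) pairs. A sketch using spaCy as a stand-in parser (the model name is an assumption, spaCy's label set only approximates Universal Dependencies, and keeping inverse relations is a common but optional choice):

import spacy

nlp = spacy.load("en_core_web_sm")  # any UD-style dependency parser would do

def dependency_contexts(text):
    """Yield (word, context) pairs, where a context is a neighbouring word
    typed with the dependency relation that links them."""
    for tok in nlp(text):
        if tok.dep_ == "ROOT":
            continue
        yield tok.text, f"{tok.head.text}/{tok.dep_}"      # e.g. ("dog", "barks/nsubj")
        yield tok.head.text, f"{tok.text}/{tok.dep_}-inv"  # inverse direction

print(list(dependency_contexts("The old dog barks loudly")))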
Symmetric Patterns and Coordinations: Fast and Enhanced Representations of Verbs and Adjectives
TLDR: Shows that using symmetric pattern contexts (SPs) improves word2vec verb similarity performance by up to 15% and is also instrumental in adjective similarity prediction, and demonstrates that coordination (Coor) contexts are superior to other dependency contexts, including the set of all dependency contexts, although they are still inferior to SPs.
Symmetric Pattern Based Word Embeddings for Improved Word Similarity Prediction
TLDR: Presents a novel word-level vector representation based on symmetric patterns (SPs) that performs exceptionally well on verbs; a simple combination of the word similarity scores generated by this method and by word2vec yields predictive power superior to that of each individual model.
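Symmetric pattern (SP) contexts, used in both of the works above, can be harvested from raw text with nothing more sophisticated than regular expressions. A minimal sketch; the pattern list here is a tiny illustrative subset, whereas the original work acquires patterns automatically from a corpus:

import re
from collections import Counter

# Illustrative subset of symmetric patterns of the form "X <pattern> Y".
PATTERNS = [r"(\w+) and (\w+)", r"(\w+) or (\w+)", r"(\w+) as well as (\w+)"]

def symmetric_pattern_pairs(corpus_lines):
    """Count co-occurrences of word pairs appearing in symmetric patterns,
    in both directions (the patterns are treated as symmetric)."""
    counts = Counter()
    for line in corpus_lines:
        for pattern in PATTERNS:
            for x, y in re.findall(pattern, line.lower()):
                counts[(x, y)] += 1
                counts[(y, x)] += 1
    return counts

print(symmetric_pattern_pairs(["run and jump quickly", "slow or fast"]))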
Not All Contexts Are Created Equal: Better Word Representations with Variable Attention
We introduce an extension to the bag-of-words model for learning word representations that take into account both syntactic and semantic properties within language. This is done by employing an…
Dependency-Based Construction of Semantic Space Models
TLDR: This article presents a novel framework for constructing semantic spaces that takes syntactic relations into account, and introduces a formalization for this class of models, which allows linguistic knowledge to guide the construction process.
GloVe: Global Vectors for Word Representation
TLDR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
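For reference, GloVe fits word vectors w_i and context vectors \tilde{w}_j to the corpus co-occurrence counts X_{ij} with the weighted least-squares objective

J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2, \qquad f(x) = \begin{cases} (x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\ 1 & \text{otherwise} \end{cases}

where the original paper uses alpha = 3/4 and x_max = 100.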
Word Representations: A Simple and General Method for Semi-Supervised Learning
TLDR: Evaluates Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking, and finds that each of the three word representations improves the accuracy of these baselines.
Strudel: A Corpus-Based Semantic Model Based on Properties and Types
TLDR: This model outperforms comparable algorithms in cognitive tasks pertaining not only to concept-internal structures but also to inter-concept relations (clustering into superordinates), suggesting the empirical validity of the property-based approach.
Judgment Language Matters: Multilingual Vector Space Models for Judgment Language Aware Lexical Semantics
TLDR: Shows that the judgment language in which word pairs are presented to human evaluators, all fluent in that language, has a substantial impact on the similarity scores they produce, and highlights the importance of constructing judgment-language-aware VSMs.
Crosslingual and Multilingual Construction of Syntax-Based Vector Space Models
TLDR: Finds that the models exhibit complementary profiles: crosslingual models yield higher accuracies while monolingual models provide better coverage, and simple multilingual models can successfully combine their strengths.