Corpus ID: 52182104

xSense: Learning Sense-Separated Sparse Representations and Textual Definitions for Explainable Word Sense Networks

@article{Chang2018xSenseLS,
  title={xSense: Learning Sense-Separated Sparse Representations and Textual Definitions for Explainable Word Sense Networks},
  author={Ting-Yun Chang and Ta-Chung Chi and Shang-Chi Tsai and Yun-Nung Chen},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.03348}
}
    Abstract: Despite the success achieved on various natural language processing tasks, word embeddings are difficult to interpret due to their dense vector representations. [...] Specifically, given a context together with a target word, our algorithm first projects the target word embedding to a high-dimensional sparse vector and picks the specific dimensions that can best explain the semantic meaning of the target word via the encoded contextual information, where the sense of the target word can be indirectly…
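    The abstract's key step, projecting a dense embedding to a high-dimensional sparse vector and selecting the dimensions most activated by the context, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual model: the dimensions, the random projection matrix `W`, and the element-wise relevance score are all assumptions for demonstration (in xSense the projection is learned and the selection is part of a trained network).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes (not from the paper): 300-d dense embeddings,
    # 2000-d sparse codes, top-5 dimensions selected.
    D_DENSE, D_SPARSE, K = 300, 2000, 5

    # Stand-in for a learned projection matrix; random here for illustration.
    W = rng.standard_normal((D_SPARSE, D_DENSE)) / np.sqrt(D_DENSE)

    def sparse_code(dense_vec):
        """Project a dense embedding to a non-negative high-dimensional code.

        The ReLU zeroes out roughly half the dimensions, giving a
        sparse-like representation."""
        return np.maximum(W @ dense_vec, 0.0)

    def select_sense_dims(target_vec, context_vec, k=K):
        """Pick the k sparse dimensions of the target word that the context
        activates most strongly (element-wise product as a crude relevance
        score; a trained model would use a learned attention instead)."""
        relevance = sparse_code(target_vec) * sparse_code(context_vec)
        return np.argsort(relevance)[-k:][::-1]  # top-k, highest first

    target = rng.standard_normal(D_DENSE)   # e.g. embedding of "bank"
    context = rng.standard_normal(D_DENSE)  # e.g. averaged context embedding
    dims = select_sense_dims(target, context)
    print(dims)  # indices of the sense-relevant sparse dimensions
    ```

    The selected indices would then feed the definition decoder described in the paper; here they simply identify which sparse dimensions the context makes salient.
    
    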
    Citations

    • What Does This Word Mean? Explaining Contextualized Embeddings with Natural Language Definition
    • Mark my Word: A Sequence-to-Sequence Approach to Definition Modeling
    • Transformation of Dense and Sparse Text Representations
    • Toward Cross-Lingual Definition Generation for Language Learners
    • Improving Interpretability of Word Embeddings by Generating Definition and Usage
