Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences. However, they suffer from two key technical problems that make them slow and unwieldy for large-scale NLP tasks: they usually operate on parsed sentences, and they do not directly support batched computation. We address these issues by …
Distributional word representation methods exploit word co-occurrences to build compact vector encodings of words. While these representations enjoy widespread use in modern natural language processing, it is unclear whether they accurately encode all necessary facets of conceptual meaning. In this paper, we evaluate how well these representations can …
We develop a model for identifying languages and accents in audio recordings. Our Hierarchical-Sequential Nodule Model (HSNM) incorporates both short-distance features (which capture simple linguistic distinctions, e.g. phoneme inventories) and long-distance features (which detect suprasegmental patterns, e.g. tone and prosody), which help a …