Neural Architectures for Named Entity Recognition
- Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer
- 4 March 2016
Paper presented at the 2016 Conference of the North American Chapter of the Association for Computational Linguistics, held in San Diego (CA, USA), June 12–17, 2016.
Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling
A hierarchical LSTM language model that generates sequences of word tokens character by character is augmented with a caching mechanism that learns to reuse previously generated words.
Learning Robust and Multilingual Speech Representations
- Kazuya Kawakami, Luyu Wang, Chris Dyer, P. Blunsom, Aäron van den Oord
- Computer Science, FINDINGS
- 29 January 2020
This paper learns representations from up to 8,000 hours of diverse and noisy speech data and evaluates them by examining their robustness to domain shifts and their ability to improve recognition performance in many languages, finding that the representations confer significant robustness advantages on the resulting recognition systems.
Learning to Discover, Ground and Use Words with Segmental Neural Language Models
Experiments show that the unconditional model learns predictive distributions better than character LSTM models, discovers words competitively with nonparametric Bayesian word segmentation models, and that modeling language conditioned on visual context improves performance on both tasks.
Character Sequence Models for Colorful Words
- Kazuya Kawakami, Chris Dyer, Bryan R. Routledge, Noah A. Smith
- Computer Science, EMNLP
- 28 September 2016
A neural network architecture is presented that predicts a point in color space from the sequence of characters in a color's name; given a name, the colors predicted by the model are preferred by annotators over color names created by humans.
Unsupervised Word Discovery with Segmental Neural Language Models
We propose a segmental neural language model that combines the representational power of neural networks and the structure learning mechanism of Bayesian nonparametrics, and show that it learns to…
Contrastive Predictive Coding of Audio with an Adversary
This work investigates learning general audio representations directly from raw signals using the Contrastive Predictive Coding objective, and extends it with ideas from adversarial machine learning to produce additive perturbations that make learning harder, so that the predictive tasks are not distracted by trivial details.
Learning to Represent Words in Context with Multilingual Supervision
A neural network architecture based on bidirectional LSTMs is presented that computes context-sensitive representations of words in their sentential contexts, obtaining state-of-the-art results.
Unsupervised Learning of Efficient and Robust Speech Representations
Inferring win-lose product network from user behavior
- S. Iitsuka, Kazuya Kawakami, S. Hagiwara, T. Kawakami, Takayuki Hamada, Y. Matsuo
- Computer Science, WI
- 23 August 2017
This paper proposes the win-lose relation, a new product-relation analysis method that retrieves the superiority relation between competing products in terms of product attractiveness, and proposes superiority factor analysis, which mines product reviews for keywords that explain the superiority.