Unsupervised Learning of Spoken Language with Visual Context

@inproceedings{Harwath2016UnsupervisedLO,
  title={Unsupervised Learning of Spoken Language with Visual Context},
  author={David F. Harwath and Antonio Torralba and James R. Glass},
  booktitle={NIPS},
  year={2016}
}

Humans learn to speak before they can read or write, so why can’t computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe the collection of our dataset, comprising over 120,000 spoken audio captions for the Places image dataset, and evaluate our model on an image search and annotation task. We…
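The abstract leaves the model details to the paper body; as a rough illustration, systems of this kind are often built from two encoders, an image branch and an audio (spectrogram) branch, that map into a shared embedding space and are trained so that matched image/caption pairs score higher than mismatched pairs drawn from the same batch. The sketch below assumes such a dual-encoder design with dot-product similarity and a margin ranking loss over in-batch impostors; the module names, layer sizes, and feature dimensions are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Projects precomputed image features into the shared embedding space (assumed design)."""
    def __init__(self, feat_dim=4096, embed_dim=1024):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)

    def forward(self, img_feats):                 # img_feats: (batch, feat_dim)
        return F.normalize(self.proj(img_feats), dim=-1)

class AudioEncoder(nn.Module):
    """1-D convolutional encoder over caption spectrogram frames (assumed design)."""
    def __init__(self, n_mels=40, embed_dim=1024):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, embed_dim, kernel_size=5, padding=2), nn.ReLU(),
        )

    def forward(self, spec):                      # spec: (batch, n_mels, frames)
        h = self.conv(spec)                       # (batch, embed_dim, frames)
        h = h.mean(dim=-1)                        # pool over time
        return F.normalize(h, dim=-1)

def ranking_loss(img_emb, aud_emb, margin=1.0):
    """Margin ranking loss: matched image/caption pairs should out-score
    mismatched pairs sampled from the same batch."""
    sims = img_emb @ aud_emb.t()                  # (batch, batch) similarity matrix
    pos = sims.diag().unsqueeze(1)                # matched-pair similarities
    cost_cap = (margin + sims - pos).clamp(min=0)      # impostor captions per image
    cost_img = (margin + sims - pos.t()).clamp(min=0)  # impostor images per caption
    eye = torch.eye(sims.size(0), dtype=torch.bool)
    return (cost_cap.masked_fill(eye, 0) + cost_img.masked_fill(eye, 0)).mean()

# Example with random stand-ins for image features and log-mel caption spectrograms.
imgs = torch.randn(8, 4096)                       # e.g. CNN image features (hypothetical dimension)
specs = torch.randn(8, 40, 1024)                  # (batch, mel bands, frames)
loss = ranking_loss(ImageEncoder()(imgs), AudioEncoder()(specs))
loss.backward()
```

Under this setup, the image search and annotation evaluations reduce to ranking candidate images or captions by the same dot-product similarity in the shared embedding space.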

Statistics

[Chart: citations per year, 2016–2018]

53 Citations

Semantic Scholar estimates that this publication has 53 citations based on the available data.
